HP-UX System and Network Administration I
Student guide, 1 of 3
H3064S J.00


HP-UX System and Network Administration I
Student guide (1 of 3)

Use of this material to deliver training without prior written permission from HP is prohibited.


Copyright 2010 Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

This is an HP copyrighted work that may not be reproduced without the written permission of HP. You may not use these materials to deliver training to any person outside of your organization without the written permission of HP.

UNIX is a registered trademark of The Open Group. X/Open is a registered trademark, and the X device is a trademark of X/Open Company Ltd. in the UK and other countries.

Export Compliance Agreement

Export Requirements. You may not export or re-export products subject to this agreement in violation of any applicable laws or regulations. Without limiting the generality of the foregoing, products subject to this agreement may not be exported, re-exported, otherwise transferred to or within (or to a national or resident of) countries under U.S. economic embargo and/or sanction, including the following countries: Cuba, Iran, North Korea, Sudan and Syria. This list is subject to change.

In addition, products subject to this agreement may not be exported, re-exported, or otherwise transferred to persons or entities listed on the U.S. Department of Commerce Denied Persons List; U.S. Department of Commerce Entity List (15 CFR 744, Supplement 4); U.S. Treasury Department Designated/Blocked Nationals exclusion list; or U.S. State Department Debarred Parties List; or to parties directly or indirectly involved in the development or production of nuclear, chemical, or biological weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S. Export Administration Regulations (15 CFR 744); or to parties directly or indirectly involved in the financing, commission or support of terrorist activities.

By accepting this agreement you confirm that you are not located in (or a national or resident of) any country under U.S. embargo or sanction; not identified on any U.S. Department of Commerce Denied Persons List, Entity List, U.S. State Department Debarred Parties List or Treasury Department Designated Nationals exclusion list; not directly or indirectly involved in the development or production of nuclear, chemical, or biological weapons, missiles, rocket systems, or unmanned air vehicles as specified in the U.S. Export Administration Regulations (15 CFR 744); and not directly or indirectly involved in the financing, commission or support of terrorist activities.

Printed in the US

HP-UX System and Network Administration I
Student guide (1 of 3)
September 2010

Contents

Module 1 Course Overview
  SLIDE: Course Audience
  SLIDE: Course Agenda
  SLIDE: HP-UX Versions
  SLIDE: HP-UX System Administration Resources

Module 2 Navigating SAM and the SMH
  SLIDE: SAM and SMH Overview
  SLIDE: Launching the SMH TUI
  SLIDE: Launching the SMH GUI via Autostart
  SLIDE: Launching the SMH GUI via Start-on-Boot
  SLIDE: Verifying the SMH Certificate
  SLIDE: Logging into the SMH
  SLIDE: SMH Menus and Tabs
  SLIDE: SMH->Home (1 of 2)
  SLIDE: SMH->Home (2 of 2)
  SLIDE: SMH->Tools (1 of 4)
  SLIDE: SMH->Tools (2 of 4)
  SLIDE: SMH->Tools (3 of 4)
  SLIDE: SMH->Tools (4 of 4)
  SLIDE: SMH->Settings
  SLIDE: SMH->Tasks
  SLIDE: SMH->Logs
  SLIDE: SMH Group Access Control
  SLIDE: SMH Authentication
  SLIDE: SMH and SIM Integration Possibilities
  SLIDE: For Further Study
  LAB: Configuring and Using the System Management Homepage
  LAB SOLUTIONS: Configuring and Using the System Management Homepage

Module 3 Managing Users and Groups
  SLIDE: User and Group Concepts
  SLIDE: What Defines a User Account?
  SLIDE: The /etc/passwd File
  SLIDE: The /etc/shadow File
  SLIDE: The /etc/group File
  SLIDE: Creating User Accounts
  SLIDE: Modifying User Accounts
  SLIDE: Deactivating User Accounts
  SLIDE: Removing User Accounts
  SLIDE: Configuring Password Aging
  SLIDE: Configuring Password Policies
  SLIDE: Managing Groups
  SLIDE: Managing /etc/skel
  LAB: Managing User Accounts
  LAB SOLUTIONS: Managing User Accounts

H3064S I.00   Hewlett-Packard Development Company, L.P.

Module 4 Navigating the HP-UX File System
  SLIDE: Introducing the File System Paradigm
  SLIDE: System Directories
  SLIDE: Application Directories
  SLIDE: Commands to Help You Navigate
  LAB: HP-UX File System Hierarchy
  LAB SOLUTIONS: HP-UX File System Hierarchy

Module 5 Configuring Hardware
  SLIDE: Hardware Components
  SLIDE: CPUs
  SLIDE: Cell Boards, Blades, Crossbars, and Blade Links
  SLIDE: SBAs, LBAs, and PCI Expansion Buses
  SLIDE: iLO / MP Cards
  SLIDE: Core I/O Cards
  SLIDE: Internal Disks, Tapes, and DVDs
  SLIDE: Interface Adapter Cards
  SLIDE: Disk Arrays and LUNs
  SLIDE: SANs and Multipathing
  SLIDE: Partitioning Overview
  SLIDE: npar, vpar, VM, and Secure Resource Partition Overview
  SLIDE: Part 2: System Types
  SLIDE: Integrity Server Overview
  SLIDE: Entry-Class Rackmount Server Overview
  SLIDE: Entry-Class Rackmount Server Example: HP Integrity rx2660 (front)
  SLIDE: Entry-Class Rackmount Server Example: HP Integrity rx2660 (rear)
  SLIDE: Mid-Range Cell-Based Server Overview
  SLIDE: Mid-Range Cell-Based Server Example: HP Integrity rx8640 (front)
  SLIDE: Mid-Range Cell-Based Server Example: HP Integrity rx8640 (rear)
  SLIDE: High-End Cell-Based Server Overview
  SLIDE: High-End Cell-Based Server Example: HP Integrity Superdome (front)
  SLIDE: High-End Cell-Based Server Example: HP Integrity Superdome (rear)
  SLIDE: HP BladeSystem Overview
  SLIDE: HP BladeSystem Enclosure Overview
  SLIDE: HP BladeSystem Enclosure Example: HP BladeSystem c7000 Enclosure
  SLIDE: HP Integrity Blade Server Model Overview
  SLIDE: HP Integrity Server Blade Example: HP Integrity BL890c i
  SLIDE: HP Integrity Superdome 2 Overview
  SLIDE: HP Integrity Superdome 2 Example: HP Integrity Superdome
  SLIDE: Viewing the System Configuration
  SLIDE: Viewing npar, vpar, and VM Hardware
  SLIDE: Part 3: HP-UX Hardware Addressing
  SLIDE: Hardware Addresses
  SLIDE: Legacy vs. Agile View Hardware Addresses
  SLIDE: Legacy HBA Hardware Addresses
  SLIDE: Legacy Parallel SCSI Hardware Addresses
  SLIDE: Legacy FC Hardware Addresses (1 of 2)
  SLIDE: Legacy FC Hardware Addresses (2 of 2)
  SLIDE: Viewing Legacy HP-UX Hardware Addresses
  SLIDE: Agile View HBA Hardware Addresses
  SLIDE: Agile View Parallel SCSI Hardware Addresses
  SLIDE: Agile View FC Lunpath Hardware Addresses (1 of 2)

  SLIDE: Agile View FC Lunpath Hardware Addresses (2 of 2)
  SLIDE: Agile View FC LUN Hardware Path Addresses
  SLIDE: Viewing LUN Hardware Paths via Agile View
  SLIDE: Viewing LUNs and their lunpaths via Agile View
  SLIDE: Viewing HBAs and their lunpaths via Agile View
  SLIDE: Viewing LUN Health via Agile View
  SLIDE: Viewing LUN Attributes via Agile View
  SLIDE: Enabling and Disabling lunpaths via Agile View
  SLIDE: Part 4: Slot Addressing
  SLIDE: Slot Address Overview
  SLIDE: Slot Address Components
  SLIDE: Viewing Slot Addresses
  SLIDE: Part 6: Managing Cards and Devices
  SLIDE: Installing Interface Cards w/out OL* (11i v1, v2, v3)
  SLIDE: Installing Interface Cards with OL* (11i v1)
  SLIDE: Installing Interface Cards with OL* (11i v2, v3)
  SLIDE: Installing New Devices (11i v1, v2, v3)
  LAB: Exploring the System Hardware
  LAB SOLUTIONS: Exploring the System Hardware

Module 6 Configuring Device Files
  SLIDE: Device Special File Overview
  SLIDE: DSF Attributes
  SLIDE: DSF Types: Legacy vs. Persistent
  SLIDE: DSF Directories
  SLIDE: Legacy DSF Names
  SLIDE: Persistent DSF Names
  SLIDE: LUN, Disk, and DVD DSF Names
  SLIDE: Boot Disk DSF Names
  SLIDE: Tape Drive DSF Names
  SLIDE: Tape Autochanger DSF Names
  SLIDE: Terminal, Modem, and Printer DSF Names
  SLIDE: Listing Legacy DSFs
  SLIDE: Listing Persistent DSFs
  SLIDE: Correlating Persistent DSFs with LUNs and lunpaths
  SLIDE: Correlating Persistent DSFs with WWIDs
  SLIDE: Correlating Persistent DSFs with Legacy DSFs
  SLIDE: Decoding Persistent and Legacy DSF Attributes
  SLIDE: Managing Device Files
  SLIDE: Creating DSFs via insf
  SLIDE: Creating DSFs via mksf
  SLIDE: Creating DSFs via mknod
  SLIDE: Removing DSFs via rmsf
  SLIDE: Disabling and Enabling Legacy Mode DSFs
  LAB: Configuring Device Files
  LAB SOLUTIONS: Configuring Device Files

Module 7 Managing Disk Devices
  SLIDE: Disk Partitioning Concepts
  SLIDE: Whole Disk Partitioning Concepts
  SLIDE: Logical Volume Manager Concepts

  SLIDE: LVM Physical Volume Concepts
  SLIDE: LVM Volume Group Concepts
  SLIDE: LVM Logical Volume Concepts
  SLIDE: LVM Extent Concepts
  SLIDE: LVM Extent Size Concepts
  SLIDE: LVM Volume Group Versions and Limits
  SLIDE: LVM DSF Directories
  SLIDE: LVMv1 Volume Group and Logical Volume DSFs
  SLIDE: LVMv2 Volume Group and Logical Volume DSFs
  SLIDE: Creating Physical Volumes
  SLIDE: Creating LVMv1 Volume Groups
  SLIDE: Creating LVMv2 Volume Groups
  SLIDE: Creating Logical Volumes
  SLIDE: Verifying the Configuration
  SLIDE: Disk Space Management Tool Comparison
  LAB: Configuring Disk Devices
  LAB SOLUTIONS: Configuring Disk Devices

Module 8 Managing File Systems
  SLIDE: File System Overview
  SLIDE: File System Types
  SLIDE: Part 1: File System Concepts
  SLIDE: Superblock Concepts
  SLIDE: Inode Concepts
  SLIDE: Directory Concepts
  SLIDE: Block and Extent Concepts
  SLIDE: Hard Link Concepts
  SLIDE: Symbolic Link Concepts
  SLIDE: Intent Log Concepts
  SLIDE: HFS / VxFS Comparison
  SLIDE: Part 2: Creating and Mounting File Systems
  SLIDE: Overview: Creating and Mounting a File System
  SLIDE: Creating a File System
  SLIDE: Mounting a File System
  SLIDE: Unmounting a File System
  SLIDE: Automatically Mounting File Systems
  SLIDE: Mounting CDFS File Systems
  SLIDE: Mounting ISO Files
  SLIDE: Mounting LOFS File Systems
  SLIDE: Mounting MemFS File Systems
  LAB: Creating and Mounting File Systems
  LAB SOLUTIONS: Creating and Mounting File Systems

Module 1 Course Overview

Objectives

Upon completion of this module, you will be able to do the following:

  Describe the target audience for this course.
  List the topics covered in this course.
  List the currently supported HP-UX operating system versions.
  List some common reference sources used by HP-UX system administrators.
  Determine a system's current OS version.

Module 1 Course Overview

1-1. SLIDE: Course Audience

Course Audience

This fast-paced 5-day course is the first of two courses HP offers to prepare new UNIX administrators to successfully manage an HP-UX server or workstation. The course assumes that the student has experience with general UNIX user commands.

Student Notes

This fast-paced 5-day course is the first of two courses HP offers to prepare new UNIX administrators to successfully manage an HP-UX server or workstation. The course assumes that the student has experience with general UNIX user commands.

Module 1 Course Overview

1-2. SLIDE: Course Agenda

Course Agenda

  Course Overview
  Navigating the SMH
  Managing Users and Groups
  Navigating the HP-UX File System
  Accessing the System Console
  Booting PA-RISC Systems
  Booting Integrity Systems
  Configuring the Kernel
  Managing Hardware
  Managing Device Files
  Managing Disk Devices
  Managing File Systems
  Managing Swap Space
  Maintaining Disks and File Systems
  Preparing for Disasters
  Managing Software with SD-UX
  Managing Patches with SD-UX
  Installing the OS with Ignite-UX
  Course Review

Student Notes

HP-UX system administrators often serve a number of roles, from configuring peripherals, to managing user accounts, to installing software and patches. Over the span of five days, this course covers the core skills required by all HP-UX system administrators.

HP recommends that students attend the follow-on to this course, HP-UX System and Network Administration 2 (H3065S), to complete the course sequence for new HP-UX administrators. HP Education also offers courses covering numerous advanced HP-UX system and network administration topics. See our website for more information.

Module 1 Course Overview

1-3. SLIDE: HP-UX Versions

HP-UX Versions

  HP currently supports several HP-UX 11i versions
  Slides and notes in this course cover all three current versions
  Labs will be completed on 11i v3

  Release      Release        Supports PA-RISC        Supports Integrity
  Identifier   Name           Servers  Workstations   Servers  Workstations
  B.11.11      11i v1         yes      yes            no       no
  B.11.23      11i v2 yymm*   yes      no             yes      no
  B.11.31      11i v3 yymm*   yes      no             yes      no

  * Updated 11i v2/v3 media kits continue to be released every ~six months

Student Notes

Since HP-UX 11i was first released for PA-RISC in 2000, HP has released a number of versions of the operating system for the Integrity product line. The table on the slide lists the release identifier (as reported by HP-UX commands), release name (as used in the HP-UX documentation), and supported platforms for each release of HP-UX 11i. HP distributes updated media kits with new patches and minor software updates approximately every six months. The four digits following "11i v2" and "11i v3" indicate each release's release year and month.

Use the uname -r command to determine which HP-UX version your system is currently running:

# uname -r
B.11.31

Module 1 Course Overview

To determine which media kit your system was installed from, use swlist to check the version number on the QPKBASE patch bundle.

# swlist -l bundle QPKBASE
# Initializing...
# Contacting target "rx26u221"...
#
# Target:  myhost:/
#

  QPKBASE   B...a   Base Quality Pack Bundle for HP-UX 11i v3, March 2009

The slides and notes in this course cover all three currently supported versions of the operating system: 11i v1, v2, and v3. The lab exercises require 11i v3.

To determine end of support dates for each current HP-UX version, see HP's support roadmap online.
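The identifier-to-name mapping above is easy to script. A minimal sketch, assuming the standard 11i identifiers; the hardcoded sample string stands in for live output, which on a real system you would capture with release=$(uname -r):

```shell
# Map an HP-UX release identifier (as printed by "uname -r") to its
# release name. The sample string below is illustrative; on a live
# system use: release=$(uname -r)
release="B.11.31"

case "$release" in
    B.11.11) name="11i v1" ;;
    B.11.23) name="11i v2" ;;
    B.11.31) name="11i v3" ;;
    *)       name="unknown" ;;
esac

echo "$release => $name"
```

Running the sketch with the sample string prints the release identifier alongside its name.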

Module 1 Course Overview

1-4. SLIDE: HP-UX System Administration Resources

HP-UX System Administration Resources

In addition to the traditional UNIX man pages, HP provides a number of resources that you can use to learn more about your HP-UX system:

  HP's product website
  HP's IT Resource Center
  HP's documentation website
  HP's software download website
  HP Education Services

Student Notes

Beyond this course, there is a wealth of resources available to assist new HP-UX system administrators.

HP's corporate/product website describes all of HP's current hardware, software, and service offerings.

HP's IT Resource Center provides a wealth of cookbooks, white papers, FAQ lists, patches, user forums, and an online response center that you can use to research HP-UX features and problems. The ITRC user forums are particularly helpful. Portions of the ITRC content are only available to customers with support contracts.

Module 1 Course Overview

HP's documentation website provides an online, searchable library containing all of HP's HP-UX manuals. If your site doesn't have Internet access, the Instant Information DVD included in the HP-UX media kit provides DVD-based access to the same documents. The HP-UX System Administrator's Guide, volumes 1-5, provides particularly useful information for new HP-UX 11i v3 system administrators. The equivalent HP-UX 11i v1 and v2 manual is titled Managing Systems and Workgroups: A Guide for HP-UX System Administrators.

Visit HP's software download website to download and purchase HP-UX software products and updates.

HP Education Services offers a wide variety of courses on HP-UX and other HP products. Visit our website regularly to stay abreast of the latest course offerings.


Module 2 Navigating the SMH

Objectives

Upon completion of this module, you will be able to do the following:

  Describe the purpose and features of SAM and the SMH.
  Launch the SMH GUI and TUI interfaces.
  Enable SMH autostart functionality.
  View hardware status information via the SMH.
  Launch SMH tools.
  Create custom SMH tools.
  Execute SMH tasks.
  View log files via the SMH.
  Configure SMH group access rights.
  Configure SMH authentication.
  Describe SMH/SIM integration possibilities.

Module 2 Navigating the SMH

2-1. SLIDE: SAM and SMH Overview

SAM and SMH Overview

  SAM provides an intuitive, menu-based administration interface in 11i v1 and v2
  SMH provides an intuitive, menu-based administration interface in 11i v3
  Both tools simplify complex administration tasks and minimize errors
  Both tools are sometimes less flexible than the command-line interface

  Feature                                            SAM          SMH
  HP-UX versions supported                           11i v1, v2   11i v1*, v2*, v3
  Intuitive Terminal User Interface (TUI)            Yes          Yes, in 11i v3 only
  Intuitive Graphical User Interface (GUI)           X-based      Web-based
  Configurable to provide access to non-root users   Yes          Yes
  Built-in help facility                             Yes          Yes
  Customizable and extensible                        Yes          Yes
  Uses standard HP-UX commands to perform tasks      No           Yes
  Integrates with HP Systems Insight Manager (SIM)   No           Yes
  Windows, Linux support                             No           Yes

Student Notes

New HP-UX system administrators often find that HP's System Administration Manager (SAM) and System Management Homepage (SMH) interfaces simplify many administration tasks. Both tools provide intuitive, menu-based interfaces for adding users, configuring the kernel, configuring network interface cards, and other common administration tasks. Both also include informative help screens and automatic error checking. Like many menu-based interfaces, though, both SAM and SMH often provide less flexibility than command-line utilities.

The notes below describe the features of both tools. The remainder of this module focuses on the SMH. An appendix at the end of the course discusses SAM in a bit more detail.

Module 2 Navigating the SMH

HP-UX versions supported

SAM is the primary menu-based administration tool for 11i v1 and v2. The SMH is available for these older versions of the operating system, but with limited functionality. SMH replaces SAM entirely in 11i v3. The /usr/sbin/sam command is still available in 11i v3, but launches the SMH rather than SAM. The latest version of the SMH for all versions of HP-UX may be downloaded from HP's software download website.

Intuitive Terminal User Interface (TUI)

SAM provides an intuitive Terminal User Interface (TUI) that may be accessed in any 80x24 terminal or terminal emulator window. The TUI interface relies on standard keyboard keys rather than a mouse to navigate the SAM menus. In 11i v3, the SMH provides a TUI interface, too.

Intuitive Graphical User Interface (GUI)

SAM and the SMH both provide an intuitive graphical user interface. Administrators use a mouse and keyboard to navigate the administration menus. SAM's GUI requires X Windows. The SMH uses a more flexible, SSL-protected, web-based GUI that may be accessed from any Internet Explorer or Firefox web browser. Accessing the system via a web interface provides much greater flexibility for administrators who manage systems remotely.

Configurable to provide access to non-root users

By default, only users with root privileges can access SAM and the SMH. However, administrators can grant full or restricted access to other users and operators who help manage the system, too. This makes it possible to provide root-like privileges without sharing the root password.

Built-in help facility

SAM and the SMH both provide extensive online help.

Customizable and extensible

Administrators can add custom tools to the SAM and SMH interfaces. For instance, an administrator might add a custom tool to launch database daemons directly from the SAM/SMH interface.

Uses standard HP-UX commands to perform tasks

The SMH relies primarily on standard HP-UX commands. Administrators can review commands in the SMH log file and can use those commands in scripts. SAM uses HP-UX commands and backend scripts and executables to complete administration tasks. Administrators can review the commands in the /var/sam/log/samlog file, but many of the commands called from the SAM interface cannot be executed outside of SAM.

Module 2 Navigating the SMH

Integrates with HP Systems Insight Manager (SIM)

HP Systems Insight Manager (SIM) provides an intuitive web interface for managing multiple HP servers, blades, network devices, and storage devices. When SIM reports a problem with a server, a few mouse clicks automatically launch the server's SMH page so the administrator can research the cause of the problem or execute an SMH tool to resolve the issue. SAM is not integrated with SIM.

Windows, Linux support

Though this course focuses on using the SMH to manage HP-UX, the product is also available for customers running Microsoft Windows or Linux on HP ProLiant or Integrity servers. The SMH tools vary somewhat, but the SMH interface, architecture, and look and feel are consistent across platforms and operating systems. SAM is only available on HP-UX.

Module 2 Navigating the SMH

2-2. SLIDE: Launching the SMH TUI

Launching the SMH TUI

  The SMH offers a web interface and, in 11i v3, a TUI interface
  Use smh to launch the TUI interface
  Use the arrow keys and shortcuts listed at the bottom of each screen to navigate the TUI

  # smh
  SMH->Accounts for Users and Groups->Local Users

  Login Name   User ID   Primary Group   Real Name   Last Login
  ================================================================
  user1        301       class           student     NEVER
  user2        302       class           student     Mon Jun 11 12:56:10
  user3        303       class           student     NEVER
  user4        304       class           student     Thu Jun 14 10:23:20

  x-exit smh   ESC-Back   1-Help   m-modify User   ENTER-Details
  /-Search     a-add User   Ctrl o-other Actions

  NOTE: this screenshot has been formatted and truncated to fit the slide

Student Notes

SMH is included on the operating environment DVDs for HP-UX 11i v1 (since September 2005), 11i v2 (since May 2005), and 11i v3 (all media kits). You can also download the product from HP's software download website. Not all SMH features are available on all HP-UX versions. New media kits often introduce new SMH functionality. Use the swlist command to determine your system's SMH version.

# swlist SysMgmtWeb

SMH has several additional dependencies, all of which are included in the 11i v2 and 11i v3 operating environments. On 11i v1, HP also recommends installing the KRNG11i patch bundle for improved security.

The SMH offers a web interface in all HP-UX versions and, in 11i v3, a TUI interface as well. To launch the TUI interface, log into the target system as user root using any 80x24 terminal emulator, and run smh.

Module 2 Navigating the SMH

Use the [Tab] key to jump back and forth between the menu bar and the other regions on the screen, and the arrow keys to scroll up, down, left, and right. Look for keyboard shortcuts at the bottom of the screen.

Module 2 Navigating the SMH

2-3. SLIDE: Launching the SMH GUI via Autostart

Launching the SMH GUI via Autostart

  SMH web access is provided via an Apache web server daemon
  By default, SMH is configured to run in autostart mode
    A lightweight smhstartd daemon starts at boot time
    Users connect to smhstartd via the SMH web address
    smhstartd launches the Apache/SMH daemon when needed
    smhstartd redirects each request via HTTPS to the Apache/SMH daemon
    Apache/SMH terminates after 30 minutes of inactivity

  Enable SMH autostart:
  # smhstartconfig -a on -b off

  Verify SMH autostart:
  # smhstartconfig
  HPSMH 'autostart url' mode...: ON
  HPSMH 'start on boot' mode...: OFF
  Start Tomcat when HPSMH starts...: OFF

  Access the SMH from any web browser:
  # firefox

Student Notes

HP-UX provides the SMH web interface via a dedicated Apache web server daemon. There are two common techniques for launching this daemon. By default, SMH is configured to run in autostart mode, as described below. The next slide describes start-on-boot mode.

During the system boot process, the /sbin/init.d/hpsmh startup script launches a lightweight smhstartd daemon. smhstartd runs continuously until system shutdown, listening for incoming connection requests from clients. Users connect to smhstartd via the SMH web address. When smhstartd receives a connection request, it launches the Apache/SMH daemon via the following command:

/opt/hpws/apache/bin/httpd -k start -DSSL \
    -f /opt/hpsmh/conf/smhpd.conf

Module 2 Navigating the SMH

smhstartd then redirects the client's request to the newly launched, SSL-enabled Apache daemon. smhstartd also launches an /opt/hpsmh/lbin/timeoutmonitor script, which automatically terminates the Apache/SMH daemon after 30 minutes of inactivity. The timeout period is configurable via the TIMEOUT_SMH variable in /opt/hpsmh/conf/timeout.conf.

Autostart is the default SMH configuration mode. If another administrator disabled autostart, re-enable it via the smhstartconfig command. Then execute smhstartconfig again without any options to verify your work.

# smhstartconfig -a on -b off
/etc/rc.config.d/hpsmh has been edited to enable HPSMH to be autostarted.
NOTE: HPSMH 'start on boot' mode is already disabled.

# smhstartconfig
HPSMH 'autostart url' mode...: ON
HPSMH 'start on boot' mode...: OFF
Start Tomcat when HPSMH starts...: OFF

If your organization's security policy prohibits web servers on production servers, you can disable the SMH web interface entirely with the following commands:

# smhstartconfig -a off -b off
/etc/rc.config.d/hpsmh has been edited to disable the autostarting of HPSMH.
NOTE: HPSMH 'start on boot' mode is already disabled.

# smhstartconfig
HPSMH 'autostart url' mode...: OFF
HPSMH 'start on boot' mode...: OFF
Start Tomcat when HPSMH starts...: OFF

Changes made via smhstartconfig simply modify variables in the /etc/rc.config.d/hpsmh file, which is read by the /sbin/init.d/hpsmh startup script during the boot process. This file can also be edited directly with the vi editor. After making changes, be sure to re-run the startup script.

# vi /etc/rc.config.d/hpsmh
# /sbin/init.d/hpsmh start
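Scripts that need to know the current SMH startup mode can parse smhstartconfig-style status output. A hedged sketch; the here-document stands in for live output, which on a real HP-UX system you would capture with status=$(smhstartconfig):

```shell
# Parse smhstartconfig-style status output to determine the active SMH
# startup mode. The here-document is sample output, not from a live run.
status=$(cat <<'EOF'
HPSMH 'autostart url' mode...: ON
HPSMH 'start on boot' mode...: OFF
Start Tomcat when HPSMH starts...: OFF
EOF
)

# Pull the last field (ON/OFF) from each line of interest.
autostart=$(printf '%s\n' "$status" | grep "autostart url" | awk '{print $NF}')
startonboot=$(printf '%s\n' "$status" | grep "start on boot" | awk '{print $NF}')

echo "autostart=$autostart start_on_boot=$startonboot"
```

A monitoring script could use the same two-field check to alert when neither mode is enabled, i.e. when the SMH web interface has been turned off entirely.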

Module 2 Navigating the SMH

2-4. SLIDE: Launching the SMH GUI via Start-on-Boot

Launching the SMH GUI via Start-on-Boot

  Alternatively, configure the Apache/SMH daemon to run perpetually
    Apache/SMH daemon starts at boot time and runs perpetually
    Users connect directly to the Apache/SMH daemon via HTTPS
  Advantage: SMH clients can connect directly via HTTPS, avoiding a redirect
  Disadvantage: Apache runs perpetually on the system

  Enable SMH start-on-boot:
  # smhstartconfig -a off -b on

  Verify SMH start-on-boot:
  # smhstartconfig
  HPSMH 'autostart url' mode...: OFF
  HPSMH 'start on boot' mode...: ON
  Start Tomcat when HPSMH starts...: OFF

  Access the SMH from any web browser:
  # firefox

Student Notes

The previous slide explained how to launch the Apache/SMH daemon on an as-needed basis via SMH autostart. Administrators who wish to connect to the SMH directly via HTTPS may prefer to start the Apache/SMH daemon during the boot process and allow it to run perpetually.

Autostart is the default SMH configuration mode. Use the smhstartconfig command to enable and verify SMH start-on-boot.

# smhstartconfig -a off -b on
/etc/rc.config.d/hpsmh has been edited to disable the autostarting of HPSMH.
/etc/rc.config.d/hpsmh has been edited to enable the 'start on boot' startup mode of HPSMH server.

# smhstartconfig
HPSMH 'autostart url' mode...: OFF
HPSMH 'start on boot' mode...: ON
Start Tomcat when HPSMH starts...: OFF

Module 2 Navigating the SMH

If your organization's security policy prohibits web servers on production servers, you can disable the SMH web interface entirely with the following commands:

# smhstartconfig -a off -b off
/etc/rc.config.d/hpsmh has been edited to disable the autostarting of HPSMH.
NOTE: HPSMH 'start on boot' mode is already disabled.

# smhstartconfig
HPSMH 'autostart url' mode...: OFF
HPSMH 'start on boot' mode...: OFF
Start Tomcat when HPSMH starts...: OFF

Changes made via smhstartconfig simply modify variables in the /etc/rc.config.d/hpsmh file, which is read by the /sbin/init.d/hpsmh startup script during the boot process. This file can also be edited directly with the vi editor. After making changes, be sure to re-run the startup script.

# vi /etc/rc.config.d/hpsmh
# /sbin/init.d/hpsmh start
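Under the hood, smhstartconfig simply rewrites KEY=value variables in /etc/rc.config.d/hpsmh. A minimal sketch of that edit-in-place pattern, using a temporary file; HPSMH_AUTOSTART and HPSMH_START_ON_BOOT are illustrative variable names, not necessarily the ones the real file uses:

```shell
# Toggle variables in an rc.config.d-style file (KEY=value shell syntax)
# the way smhstartconfig does under the hood. The temp file stands in
# for /etc/rc.config.d/hpsmh; the variable names are assumptions.
cfg=/tmp/hpsmh.conf
cat > "$cfg" <<'EOF'
HPSMH_AUTOSTART=1
HPSMH_START_ON_BOOT=0
EOF

# Flip autostart off and start-on-boot on, as "-a off -b on" would.
sed -e 's/^HPSMH_AUTOSTART=.*/HPSMH_AUTOSTART=0/' \
    -e 's/^HPSMH_START_ON_BOOT=.*/HPSMH_START_ON_BOOT=1/' \
    "$cfg" > "$cfg.new" && mv "$cfg.new" "$cfg"

cat "$cfg"
```

On a live system the startup script would then be re-run to pick up the new values, exactly as the note above describes for manual vi edits.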

Module 2 Navigating the SMH

2-5. SLIDE: Verifying the SMH Certificate

Verifying the SMH Certificate

  Browsers use security certificates to authenticate the identity of HTTPS servers
  By default, SMH uses self-signed security certificates
  Some administrators install certificates signed by a Certificate Authority (CA) instead
  If using self-signed certificates, browsers may display a security warning

  Mozilla security certificate warning
  IE security certificate warning

Student Notes

If the SMH start-on-boot functionality is enabled, users connect directly to the SMH daemon via HTTPS. If SMH autostart functionality is enabled, users initially connect to the smhstartd daemon, then get redirected to the Apache/SMH daemon. In either case, the user ultimately accesses the SMH server through an HTTPS Secure Socket Layer (SSL) connection. Accessing the server via SSL ensures that:

  All communications between the browser and SMH server are encrypted, and
  Users can verify the identity of the SMH server to which they are connected.

Any time a web browser accesses a website via the HTTPS protocol, the web server presents a security certificate. The client browser compares the certificate provided by the web server with information obtained from a trusted certificate authority (CA).

Module 2 Navigating the SMH

By default, SMH uses self-signed certificates, which are signed by the SMH server itself rather than a well-known CA. The browser can't determine the authenticity of self-signed certificates, so it displays a warning similar to the messages shown on the slide.

If you see a security certificate warning message, but your server and client reside on a secure, trusted network, you may choose to ignore the message and proceed with the connection. Security-conscious administrators prefer to install a certificate signed by a trusted CA on the SMH server. The process required to install a signed certificate on an SMH server is described on the SMH Settings->Security->Local Server Certificate screen in the SMH interface.
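The defining property of a self-signed certificate is that its issuer and subject name the same entity, which is exactly what the browser cannot chain back to a trusted CA. A quick way to see this, sketched with a throwaway certificate generated by openssl (the /tmp paths and CN=myhost subject are illustrative):

```shell
# Generate a throwaway self-signed certificate, then show that its
# issuer and subject fields match -- the hallmark of self-signing.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=myhost" \
    -keyout /tmp/smh-key.pem -out /tmp/smh-cert.pem 2>/dev/null

issuer=$(openssl x509 -in /tmp/smh-cert.pem -noout -issuer | sed 's/^issuer=//')
subject=$(openssl x509 -in /tmp/smh-cert.pem -noout -subject | sed 's/^subject=//')

if [ "$issuer" = "$subject" ]; then
    echo "certificate is self-signed"
fi
```

Against a live HTTPS server, the same two fields can be read from the certificate the server presents by piping openssl s_client output through openssl x509.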

Module 2 Navigating the SMH

2-6. SLIDE: Logging into the SMH

Logging into the SMH

  After connecting to the SMH daemon, enter an authorized HP-UX username/password
  By default, only members of the HP-UX root group can log into the SMH
  Other HP-UX groups can optionally be granted access to the SMH, too

Student Notes

After connecting to the SMH daemon, enter an authorized HP-UX username/password. By default, only members of the HP-UX root group can log into the SMH. User root is typically the only member of the root group. To determine which users belong to your system's root group, use nsquery.

# nsquery group root
No policy for group in nsswitch.conf.
Using "files nis" for the group policy.

Searching /etc/group for root
Group name: root
Group Id: 0
Group membership: root

Switch configuration: Terminates Search

A later slide in this chapter explains how to grant other user groups access to the SMH, too.
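When local files are the only name service source, the same membership information can be pulled straight out of an /etc/group-format entry. A minimal sketch; the sample line stands in for the real entry, which a live system would yield via grep '^root:' /etc/group:

```shell
# Split an /etc/group-format line (name:password:gid:member-list) into
# its fields. The sample entry is illustrative, not from a real system.
entry="root::0:root"

group=$(echo "$entry" | cut -d: -f1)
gid=$(echo "$entry" | cut -d: -f3)
members=$(echo "$entry" | cut -d: -f4)

echo "group=$group gid=$gid members=$members"
```

The fourth field is a comma-separated member list, so a multi-member group would print something like members=root,oper1.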

Module 2 Navigating the SMH

2-7. SLIDE: SMH Menus and Tabs

SMH Menus and Tabs

  SMH utilizes a tabbed interface
    Use the Home tab, the default tab, to view hardware/status information
    Use the Settings tab to customize SMH security and add custom menu items
    Use the Tasks tab to execute arbitrary commands on the server
    Use the Tools tab to view and configure OS features
    Use the Logs tab to launch SMH's web-based log file viewers
    Use the Support and Help tabs to get help

Student Notes

The SMH utilizes a tabbed interface.

  Use the Home tab, the default tab, to view summary system status information.
  Use the Settings tab to customize SMH security and add custom menu items.
  Use the Tasks tab to execute arbitrary commands on the server.
  Use the Tools tab to view and configure OS features.
  Use the Logs tab to launch SMH's web-based log file viewers.
  Use the Support tab to access HP's online IT Resource Center and user forums.
  Use the Help tab to learn more about the SMH.

The next few slides describe each tab in detail.

31 Module 2 Navigating the SMH The SMH banner graphic includes links to a number of other resources in the SMH, too. On the far left, the SMH reports which SMH screen you are currently viewing. The next block reports your system hostname and model string. The next block provides a link to the Management Processor, which provides a console login interface that is required for some system administration tasks. Two icons on the far right enable you to select the SMH list or icon menu format. Two links above the menu format buttons take you back to the SMH Home screen, or log you out. The Legend link displays a legend that explains the meaning of the SMH icons. The Refresh link refreshes the current SMH screen when system conditions change. By default, SMH sessions terminate after several minutes of inactivity. Click the checkbox at top right to disable the auto-logout feature

32 Module 2 Navigating the SMH

2-8. SLIDE: SMH->Home (1 of 2)

SMH->Home (1 of 2)

The SMH Home tab summarizes the status of the system's subsystems. Click any subsystem for more detailed information. Contents of the Home tab vary from model to model. Click the Legend link to view an icon legend.

Student Notes

The SMH Home tab summarizes the status of the cooling, power, memory, and other hardware subsystems. The subsystems listed may vary somewhat from system model to system model. To learn more about a subsystem, click the subsystem name. To the left of each subsystem name, the SMH displays a color-coded icon that represents the subsystem's health status. Click the Legend link in the SMH header, or see the legend included on the slide, to determine what each icon represents. The oversize status icon at the top left of the SMH Home page summarizes the overall system status. In the sample system shown on the slide, one of the network interface cards is disconnected, which results in a minor warning for the network subsystem, and for the system as a whole. Though not shown in the screenshot on the slide, the Home tab also includes a System Configuration box containing links to some of the commonly used SMH system administration tools. A slide later in this chapter discusses tools in detail.

33 Module 2 Navigating the SMH

WBEM

The SMH collects status information about the operating system and the system hardware via Web-Based Enterprise Management (WBEM) protocols and standards. WBEM is an industry standard developed and used by multiple vendors. Most HP operating systems, platforms, and devices include WBEM providers that provide information to SMH and other HP management tools. To learn more about HP's WBEM providers and solutions, visit HP's website; to learn more about WBEM standards and protocols, visit the DMTF website.

Use the swlist command to see which WBEM providers are installed on your HP-UX 11i v1, v2, or v3 system.

# swlist -l product | grep -i wbem
LVM-Provider     R  LVM WBEM Provider
SCSI-Provider    B  CIM/WBEM Provider for SCSI HBA
SGWBEMProviders  A  HP Serviceguard WBEM Providers
WBEMP-LAN        B  LAN Provider: CIM/WBEM Provider
WBEMServices     A  WBEM Services CORE Product
vmprovider       A  WBEM Provider for Integrity VM

HP adds new and updated WBEM providers in each media kit release. The latest WBEM providers are also available for download from HP.

34 Module 2 Navigating the SMH

2-9. SLIDE: SMH->Home (2 of 2)

SMH->Home (2 of 2)

From the Home tab, click a hardware subsystem (e.g., Physical Memory) for more details. Output varies from model to model. NOTE: screenshot has been formatted and truncated to fit the slide.

Student Notes

From the SMH Home tab, you can click any subsystem link to view more detailed information about that subsystem. The screenshot on the slide shows the physical memory subsystem detail, including the status, location, capacity, type, and serial number of each DIMM (Dual Inline Memory Module).

35 Module 2 Navigating the SMH SLIDE: SMH->Tools (1 of 4) SMH->Tools (1 of 4) The Tools tab provides GUI interfaces for many common admin tasks Some tools launch GUI interfaces, some launch web interfaces, others run CLIs Supported tools vary from release to release Student Notes The SMH Tools tab provides GUI interfaces for many common system administration tasks. The slide shows some of the tools included by default in the SMH. Some tools launch GUI interfaces, some launch web interfaces, others run command line utilities. In the current release, some SMH tools launch legacy SAM interfaces, too. Supported tools vary from OS release to OS release

36 Module 2 Navigating the SMH

SLIDE: SMH->Tools (2 of 4)

SMH->Tools (2 of 4)

To run a tool: click a tool (e.g., File Systems) on the Tools tab; select an object (e.g., /home) from the resulting object list; select an action (e.g., Unmount) from the resulting action list; provide the information requested in the dialog box that follows.

Student Notes

To launch a tool, simply click the tool's link on the SMH Tools tab. The interface that follows varies from tool to tool. Most of the recently developed tools use a web interface similar to the File Systems tool shown on the slide. Click a tool (e.g., File Systems) on the Tools tab. Select an object (e.g., /home) from the resulting object list. Select an action (e.g., Unmount) from the resulting action list on the right side of the screen. Provide the information requested in the dialog box that follows.

37 Module 2 Navigating the SMH

SLIDE: SMH->Tools (3 of 4)

SMH->Tools (3 of 4)

Dialog boxes vary from tool to tool. Most include an explanation of the tool and its limitations and side effects. Most include a preview button that displays the HP-UX command(s) executed by the tool.

Student Notes

Tool dialog boxes vary from tool to tool. Most include an explanation of the tool's purpose, its limitations, and any potential side effects. Most include a Preview button that displays the HP-UX command(s) that will be executed by the tool.

38 Module 2 Navigating the SMH

SLIDE: SMH->Tools (4 of 4)

SMH->Tools (4 of 4)

Some SMH tools are simply wrappers for external non-web-based applications. Select your preferred language. Enter your desktop system's $DISPLAY variable value. Look at the command preview to determine which command the tool executes. Click Run. NOTE: screenshot has been formatted and truncated to fit the slide.

Student Notes

Some SMH tools simply launch legacy SAM interfaces, or other GUI and CLI applications. Launching these types of tools displays a window similar to the dialog box shown on the slide. To use these tools: select your preferred language from the pull-down menu (English users should select C). If the tool is GUI-based, enter your desktop system's $DISPLAY name; execute echo $DISPLAY in a shell window to determine the appropriate display name. Look at the command preview at the bottom of the screen to determine which command the tool executes. Click Run.

39 Module 2 Navigating the SMH

What happens next varies from tool to tool. CLI-based tools simply execute the command and display the resulting STDOUT/STDERR output. Web-based tools run in a new browser window. X-based applications, such as the swinstall tool shown on the slide, launch an X-based interface similar to the swinstall interface below.

40 Module 2 Navigating the SMH

SLIDE: SMH->Settings

SMH->Settings

The Settings tab allows you to add and remove your own custom tools, too. Access the Settings tab. Click Add Custom Menu. Use the resulting dialog box to create the custom tool. Custom tools may be added to existing SMH tool categories, or new custom categories. Custom tools may launch X applications, CLI commands, or web applications. Custom tools may be configured to run as root when launched by non-root users. Custom tools may be executed just like built-in SMH tools.

Student Notes

The SMH has quite a few built-in tools. For even more flexibility, SMH allows the administrator to add custom tools, too. Access the Settings tab. Click Add Custom Menu. Use the resulting dialog box to create the custom tool. Custom tools may be added to existing tool categories, or new custom categories. Custom tools may launch X applications, non-interactive CLI commands, or web-based applications. Custom tools may be configured to run as root when launched by non-root users. To execute a custom tool, just click the tool's link as you would any other SMH tool. CLI-based tools execute the command non-interactively and display the resulting

41 Module 2 Navigating the SMH STDOUT/STDERR output. Web-based tools run in a new browser window. GUI-based tools open a new X-window

42 Module 2 Navigating the SMH

SLIDE: SMH->Tasks

SMH->Tasks

Use the Tasks tab to execute a single command through the SMH. Access the Tasks tab. Click Launch or Run, and follow the prompts to run the program. SMH reports the command's STDERR and STDOUT output.

Student Notes

The SMH Settings tab allows administrators to create permanent custom tools to execute frequently used commands. The SMH Tasks tab allows administrators to execute one-time commands remotely, without permanently adding a tool to the SMH menus. Access the Tasks tab. Click Launch or Run and follow the prompts to run the program. Select your preferred language from the pull-down menu (English users should select C). If the tool is GUI-based, enter your desktop system's $DISPLAY name; execute echo $DISPLAY in a shell window to determine the appropriate display name. SMH reports the command's STDERR and STDOUT output.
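Conceptually, running a task amounts to executing the command and capturing both output streams for the report. A minimal shell sketch of that behavior (illustrative only; SMH performs this internally and renders the result in the browser, and the run_task helper here is hypothetical):

```shell
# Toy model of "run a command and report its STDOUT/STDERR"
run_task() {
  # Merge stderr into stdout so both streams appear in one report,
  # and record the command's exit status
  output=$( "$@" 2>&1 )
  status=$?
  printf 'exit status: %s\n%s\n' "$status" "$output"
}

# A failing command still produces a report rather than a silent error
run_task ls /nonexistent-path-for-demo
```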

43 Module 2 Navigating the SMH

SLIDE: SMH->Logs

SMH->Logs

SMH provides web-based log file viewers for viewing some common system log files. Access the Logs tab. Select a log file viewer (e.g., System Log Viewer). Use the Select tab to select a log file (e.g., syslog.log vs. OLDsyslog.log). Use the Layout and Filters tabs to customize the column layout. Use the Display tab to view the log contents. Log file viewer features for other log files may vary.

Student Notes

SMH provides web-based log file viewers for viewing and filtering several common system log files. Access the Logs tab. Select a log file viewer (e.g., System Log Viewer). Different log viewers may have slightly different interfaces. The steps below apply to the System Log Viewer, which displays the contents of the /var/adm/syslog/syslog.log log file. The syslog.log file captures error, warning, and status messages from a variety of subsystems and services. Use the Select tab to select a log file (e.g., syslog.log vs. OLDsyslog.log). Use the Layout tab to customize the column layout, and use the Filters tab to filter the log file contents by date and time.

44 Module 2 Navigating the SMH Use the Display tab to view the log file contents. Use the scroll bar to move forwards and backwards through the file. Use the Search text box to search the file for specific patterns. Log file viewer features for other log files may vary. If you want to add log file viewers for other log files into the SMH, use the Add Custom Menu feature described previously, put the tool on the Logs page, and enter /usr/bin/cat /my/log/file/name in the Command/URL field
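The viewer's string search is equivalent to a simple grep over the log file. A sketch with hypothetical syslog-style lines (on a real HP-UX system you would search /var/adm/syslog/syslog.log itself):

```shell
# Hypothetical sample lines standing in for /var/adm/syslog/syslog.log
log='Jan 10 08:00:01 hpux inetd[1234]: ftp/tcp: Connection from client1
Jan 10 08:00:02 hpux syslogd: restart
Jan 10 08:00:03 hpux inetd[1234]: telnet/tcp: Connection from client2'

# Keep only the lines mentioning inetd, like the viewer's string filter
echo "$log" | grep inetd
```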

45 Module 2 Navigating the SMH

SLIDE: SMH Group Access Control

SMH Group Access Control

Users must enter a valid HP-UX username/password in order to access the SMH. SMH determines a user's access rights (if any) via the user's HP-UX group memberships. By default, only members of the root group can access the SMH. Use Settings->Security->User Groups to grant SMH access to other HP-UX groups.

Student Notes

Users must enter a valid HP-UX username/password in order to access the SMH. SMH determines a user's access rights (if any) via the user's HP-UX group memberships. By default, only members of the root group can access the SMH. If other users such as operators, backup administrators, or database administrators need access to the SMH, use the Settings->Security->User Groups menu to grant SMH access to other HP-UX groups.

The User Groups menu offers three different access levels. Members of groups that have SMH Administrator privileges can use all of the SMH tools and features, add custom tools, and grant SMH access rights to other user groups. By default, the SMH grants members of the root group SMH Administrator privileges. Members of groups that have SMH Operator privileges can access most SMH tools and features, but cannot add or remove custom tools, execute arbitrary tasks as root, or modify the SMH user, group, security, and authentication settings. Members of groups that have SMH User privileges can use tools that display information but cannot use SMH tools to modify either the system or SMH configuration.

46 Module 2 Navigating the SMH

Access Control in the SMH TUI

The SMH TUI interface manages access control via a different mechanism. By default, only the administrator can launch the SMH TUI. To provide TUI access to non-root users, launch the TUI-based smh -r restricted SMH user configuration tool and select a user.

# smh -r
The privileges set for the user from the Text User Interface doesn't apply to Graphical User Interface. System Management Homepage (SMH) in Graphical User Interface has a different way of setting the privileges. Please look at smh(1M) man page for more information.
Do you want to continue (y/n) <y>: y

SMH->Restricted SMH->Select users
Login     Primary   Has SAM
users     Group     privileges
====================================================================
user1     users     Yes
user2     users     No
user3     users     No
user4     users     No
user6     users     No
user7     users     No
user8     users     No
user9     users     No
user10    users     No
x-exit smh  ENTER-Select  /-Search  r-remove Privileges  g-display Groups

47 Module 2 Navigating the SMH

Next, specify which SMH functional areas the user should be allowed to access. Be sure to press s to save the selected privileges before exiting.

SMH->Restricted SMH->Functional Areas
Selected user : user

Functional Areas                        Access Status
====================================================================
Resource Management                     Disabled
Disks and File Systems                  Enabled
Display                                 Disabled
Kernel Configuration                    Disabled
Printers and Plotters                   Disabled
Networking and Communications           Disabled
Peripheral Devices                      Disabled
Security Attributes Configuration       Disabled
Software Management                     Disabled
Auditing and Security                   Disabled
Accounts for Users and Groups           Disabled
x-exit smh  Esc-Back  s-Save Privileges  D-Disable All  e-Enable  d-Disable  E-Enable All

The user should then be able to run /usr/sbin/smh and access the selected SMH functional areas.

48 Module 2 Navigating the SMH

SLIDE: SMH Authentication

SMH Authentication

Security-conscious system administrators can enable additional SMH authentication features via other links on the Settings->Security menu:

Anonymous/Local Access: Allow local and/or remote users to access the SMH without providing a username/password
IP Binding: Only allow users to access SMH from selected networks
IP Restricted login: Only allow users to access SMH from selected IP addresses
Local Server Certificate: Import a security certificate for the SMH server from a third party
Timeouts: Specify SMH session timeout values
Trust Mode: Determine how SMH authenticates configuration requests from remote SIM servers
Trusted Management Servers: Import security certificates for SIM servers, if using SIM to remotely manage SMH nodes

Student Notes

Security-conscious system administrators can enable additional SMH authentication features via other links on the Settings->Security menu.

Local/Anonymous Access

Anonymous Access enables a user to access the System Management Homepage without logging in. This feature is disabled by default. HP does not recommend enabling anonymous access. Local Access enables local users to access the System Management Homepage without being challenged for authentication. If Local Access/Anonymous is selected, any local user has access limited to unsecured pages without being challenged for a username and password. If Local Access/Administrator is selected, any user with access to the local console is granted full access to all SMH features.

49 Module 2 Navigating the SMH

IP Binding

IP Binding specifies which IP networks and subnets the System Management Homepage accepts requests from. A maximum of five subnet IP addresses and netmasks can be defined. When IP Binding is enabled, the System Management Homepage only accepts requests from the configured subnets. If IP Binding is enabled and no subnet/mask pairs are configured, then the System Management Homepage is only available to localhost. If IP Binding is not enabled, users can access the SMH from any network or subnet.

IP Restricted login

IP Restricted Login allows the administrator to specify a semicolon-separated list of IP address ranges that should be explicitly allowed or denied SMH access. If an IP address is excluded, it is excluded even if it is also listed in the included box. If there are IP addresses in the inclusion list, then only those IP addresses are allowed login access, with the exception of localhost. If no IP addresses are in the inclusion list, then login access is allowed to any IP addresses not in the exclusion list.

Local Server Certificate

When a user connects to the server's SMH, the client browser uses public/private key authentication to verify that the browser connected to the legitimate server. SMH uses self-signed certificates by default. For greater security, SMH administrators can obtain authentication keys for the SMH server from a third-party Certificate Authority. The SMH help screens explain this process in detail.

Timeouts

Use this feature to change SMH session and interface timeout values.

Trust Mode

HP Systems Insight Manager (SIM) is an HP product that allows administrators to monitor and manage multiple servers and devices from a central management station. The next slide provides a brief overview of SIM functionality. SIM utilizes SMH for some management tasks. The SMH Trust Mode screen determines how SMH authenticates requests received from remote servers.
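The IP Restricted Login precedence rules described above can be modeled in a few lines of shell. This is an illustrative toy only; the real SMH matches IP address ranges, not just exact addresses, and the allowed function is hypothetical:

```shell
# Toy model of SMH's documented precedence: exclusion always wins,
# and a non-empty inclusion list admits only the listed addresses.
allowed() {
  ip="$1"; inc="$2"; exc="$3"
  # An excluded address is denied even if it also appears in the inclusion list
  if echo ";$exc;" | grep -qF ";$ip;"; then echo denied; return; fi
  # A non-empty inclusion list admits only the addresses it lists
  if [ -n "$inc" ]; then
    if echo ";$inc;" | grep -qF ";$ip;"; then echo allowed; else echo denied; fi
  else
    echo allowed
  fi
}

allowed 10.0.0.6 "10.0.0.5;10.0.0.6" "10.0.0.6"   # → denied (exclusion wins)
allowed 10.0.0.5 "10.0.0.5;10.0.0.6" "10.0.0.6"   # → allowed (included)
allowed 10.0.0.9 "" "10.0.0.6"                    # → allowed (empty inclusion list)
```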
Trusted Management Servers

If the SMH Trust Mode described above requires public/private key authentication of SIM servers, use the Trusted Management Servers link in SMH to import certificates from the SIM server.

User Groups

See the previous slide for a discussion of SMH User Groups.

50 Module 2 Navigating the SMH

SLIDE: SMH and SIM Integration Possibilities

SMH and SIM Integration Possibilities

HP SMH provides an intuitive web interface for managing a single system. HP SIM provides an intuitive web interface for managing multiple systems. SIM manages all HP-supported operating systems, and most HP-supported devices. SIM can automatically, seamlessly launch any server's SMH page. SIM consolidates status, log, and other information from multiple nodes. SIM provides robust role-based security and key-based authentication. SIM is included with HP-UX; other licensed plug-ins provide even greater functionality.

Student Notes

HP SMH provides an intuitive web interface for managing a single HP system. HP Systems Insight Manager provides an intuitive web interface for managing multiple HP servers and devices from a consolidated central management interface. SIM manages all HP-supported operating systems and most HP-supported devices, including storage devices, ProLiant Windows/Linux servers, blade enclosures and blade servers, and much more. SIM integrates with the SMH, and can seamlessly launch any HP Windows/Linux/HP-UX server's SMH. SIM consolidates status, log, and other information from multiple nodes. In large environments, this consolidated monitoring greatly simplifies monitoring and troubleshooting tasks. SIM provides robust role-based security, using single-sign-on key-based authentication, so authorized administrators can seamlessly access multiple servers in a secure fashion without entering multiple usernames and passwords.

51 Module 2 Navigating the SMH

Basic SIM functionality is included with HP-UX. Some customers purchase additional SIM plug-ins for even greater flexibility. For more information about SIM, attend HP Education's HB508S HP-UX Systems Insight Manager class, or visit the SIM product page on HP's website.

52 Module 2 Navigating the SMH

SLIDE: For Further Study

For Further Study

Course from HP Customer Education: HB508S HP Systems Insight Manager (SIM) for HP-UX

Manuals: HP System Management Homepage User Guide; HP System Management Homepage Installation Guide; HP System Management Homepage Release Notes

Student Notes

53 Module 2 Navigating the SMH

2-21. LAB: Configuring and Using the System Management Homepage

Directions

Carefully follow the instructions below and record your answers in the spaces provided.

Part 1: Configuring SMH autostart functionality

1. Verify that the SysMgmtWeb product is installed on your system.

# swlist SysMgmtWeb
# swconfig -x reconfigure=true SysMgmtHomepage.*

2. Use smhstartconfig to determine which SMH startup mode is enabled by default.

54 Module 2 Navigating the SMH

Part 2: Accessing the SMH (Internet Explorer)

Depending on your lab equipment setup, your instructor will tell you to do either lab Part 2 or Part 3.

1. Launch the Internet Explorer web browser and point it to the SMH autostart URL, replacing server_ip with your server's IP address.

a. If you are accessing your lab system remotely via a Virtual Lab portal server, launch the portal's Internet Explorer via the browser link on the VL webtop. In some VL environments, there may be an SMH link on the webtop that opens a browser directly to the SMH.
b. If you are accessing your lab system from a PC that has full network connectivity to your lab system, launch Internet Explorer on your PC.

2. If asked if you wish to be redirected to view pages over a secure connection, click [OK]. You should see a Security Alert indicating that the security certificate provided by the SMH server was issued by a company you have not chosen to trust. By default, the SMH uses self-signed authentication certificates, issued by the SMH server itself. It's possible to obtain a security certificate for the SMH server from a third-party Certificate Authority; for the sake of the lab, we'll use the self-signed certificate. When asked if you want to proceed, click [Yes].

3. Login as user root on the SMH login page. If your browser's status bar is enabled, note the padlock icon in the bottom right corner of the browser window indicating that the connection to the server is secure.

55 Module 2 Navigating the SMH

Part 3: Accessing the SMH (Firefox; Mozilla is still available)

Depending on your lab equipment setup, your instructor will tell you to do either lab Part 2 or Part 3.

1. Launch a Firefox web browser.
2. Point your web browser to the SMH autostart URL, replacing server with your fully-qualified server hostname.
3. A window titled Website Certified by an Unknown Authority may appear. By default, the SMH uses self-signed authentication certificates, issued by the SMH server itself. It's possible to obtain a security certificate for the SMH server from a third-party Certificate Authority; for the sake of the lab, we'll use the self-signed certificate.

a. Click the Accept this certificate permanently radio button to permanently accept the self-signed certificate from the SMH server.
b. Click [OK] to proceed past the Website Certified by an Unknown Authority window.
c. A Security Warning message should appear indicating that you have requested an encrypted page. Click [OK] to proceed to the SMH login screen.

4. Login as user root on the SMH login page. Note the padlock icon in the bottom right corner of the browser window, indicating that you are connected to the server via a secure connection.

56 Module 2 Navigating the SMH

Part 4: Navigating the SMH web interface

Use the SMH to complete the tasks below. If you wish, explore other SMH pages of interest, too.

1. Use the SMH Home tab links to view detailed status reports on some of your lab system's hardware components.
2. Use the SMH Home tab links to view detailed reports of your lab system's process information, networking information, and memory utilization.
3. Navigate to the SMH Tools tab and use the Defragment Extents link to defragment the /home file system.
4. Navigate to the SMH Tasks tab and use the Run Command as Root link to execute /usr/bin/passwd -f user1, which forces user1 to change his/her password at next login.
5. Navigate to the SMH Logs tab and use the System Log Viewer link to view all lines in /var/adm/syslog/syslog.log that contain the string inetd.

57 Module 2 Navigating the SMH

Part 5: Creating custom SMH tools (Optional)

SMH includes quite a few built-in features. For even greater flexibility, though, SMH also allows system administrators to create custom SMH tools on any SMH screen.

1. Access the SMH Settings screen.
2. Click Add Custom Menu.
3. From the Type pulldown menu, select Command Line.
4. From the Page pulldown menu, select Tools.
5. In the Category field, enter Disks and File Systems.
6. In the Tool Name field, enter Purge /tmp.
7. In the Command/URL field enter the following command, which purges all files from /tmp which haven't been accessed in at least seven days: /usr/bin/find /tmp -type f -atime +7 -exec rm {} +
8. Click [Add].
9. Access the SMH Tools tab.
10. In the Disks and File Systems category, click the new Purge /tmp tool.
11. Click [Run] to run the tool.
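Before wiring a destructive command like this into SMH, it is worth previewing what it would remove by running the same find expression with -print against a scratch directory. A minimal sketch, assuming a POSIX find and touch (never run the rm form blindly against /tmp):

```shell
# Build a disposable sandbox with one fresh file and one "stale" file
dir=$(mktemp -d)
touch "$dir/fresh.txt"
touch -a -t 202001010000 "$dir/stale.txt"   # push the access time far into the past

# Preview what the purge would delete: regular files not accessed in 7+ days
find "$dir" -type f -atime +7 -print

# The real tool then removes them with: find ... -exec rm {} +
rm -r "$dir"
```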

58 Module 2 Navigating the SMH Part 6: Cleanup Close your SMH browser window before proceeding to the next chapter

59 Module 2 Navigating the SMH

2-22. LAB SOLUTIONS: Configuring and Using the System Management Homepage

Directions

Carefully follow the instructions below and record your answers in the spaces provided.

Part 1: Configuring SMH autostart functionality

1. Verify that the SMH product is installed and configured on your system.

# swlist SysMgmtWeb
# swconfig -x reconfigure=true SysMgmtHomepage.*

2. Use smhstartconfig to determine which SMH startup mode is enabled by default.

Answer:

# smhstartconfig
HPSMH 'autostart url' mode...: ON
HPSMH 'start on boot' mode...: OFF
Start Tomcat when HPSMH starts...: OFF

Autostart mode is the default SMH startup mode.

60 Module 2 Navigating the SMH

Part 2: Accessing the SMH (Internet Explorer)

Depending on your lab equipment setup, your instructor will tell you to do either lab Part 2 or Part 3.

1. Note that when performing these labs in the HP Virtual Lab, there is an SMH button in the HPVL Reservation Window that will open an SMH browser window. The other method is to launch the Internet Explorer web browser and point it to the SMH autostart URL, replacing server_ip with your server's IP address.

a. If you are accessing your lab system remotely via a Virtual Lab portal server, launch the portal's Internet Explorer via the browser link on the VL webtop. In some VL environments, there may be an SMH link on the webtop that opens a browser directly to the SMH.
b. If you are accessing your lab system from a PC that has full network connectivity to your lab system, launch Internet Explorer on your PC.

2. If asked if you wish to be redirected to view pages over a secure connection, click [OK].

a. You should see a Security Alert indicating that the security certificate provided by the SMH server was issued by a company you have not chosen to trust. By default, the SMH uses self-signed authentication certificates, issued by the SMH server itself. It's possible to obtain a security certificate for the SMH server from a third-party Certificate Authority; for the sake of the lab, we'll accept the self-signed certificate. When asked if you want to proceed, click [Yes].

3. Login as user root on the SMH login page. If your browser's status bar is enabled, note the padlock icon in the bottom right corner of the browser window indicating that the connection to the server is secure.

61 Module 2 Navigating the SMH

Part 3: Accessing the SMH (Firefox; Mozilla is still available)

Depending on your lab equipment setup, your instructor will tell you to do either lab Part 2 or Part 3.

1. Launch a Firefox web browser.
2. When performing these labs in the HP Virtual Lab, there is an SMH button in the HPVL Reservation Window that will open an SMH browser window. The other method is to point your web browser to the SMH autostart URL, replacing server with your fully-qualified server hostname.
3. A window titled Website Certified by an Unknown Authority may appear. By default, the SMH uses self-signed authentication certificates, issued by the SMH server itself. It's possible to obtain a security certificate for the SMH server from a third-party Certificate Authority; for the sake of the lab, we'll use the self-signed certificate.

a. Click the Accept this certificate permanently radio button to permanently accept the self-signed certificate from the SMH server.
b. Click [OK] to proceed past the Website Certified by an Unknown Authority window.
c. A Security Warning message should appear indicating that you have requested an encrypted page. Click [OK] to proceed to the SMH login screen.

4. Login as user root on the SMH login page. Note the padlock icon in the bottom right corner of the browser window, indicating that you are connected to the server via a secure connection.

62 Module 2 Navigating the SMH

Part 4: Navigating the SMH web interface

Use the SMH to complete the tasks below. If you wish, explore other SMH pages of interest, too.

1. Use the SMH Home tab links to view detailed status reports on some of your lab system's hardware components.
2. Use the SMH Home tab links to view detailed reports of your lab system's process information, networking information, and memory utilization.
3. Navigate to the SMH Tools tab and use the Defragment Extents link to defragment the /home file system.

Answer:
a. Navigate to the SMH Tools tab.
b. Click the File Systems link.
c. Select the radio button for the /home file system.
d. Click the Defragment Extents link. You may have to scroll to the bottom right corner of the SMH screen to see this link.
e. Review the comments and command preview.
f. Click [Defragment] to proceed with the defragmentation.
g. There shouldn't be any output or errors.
h. Click [Back] to return to the file system list.

4. Navigate to the SMH Tasks tab and use the Run Command as Root link to execute /usr/bin/passwd -f user1, which forces user1 to change his/her password at next login.

Answer:
a. Navigate to the SMH Tasks tab and click the Run Command as Root link.
b. Enter C in the Language field.
c. Enter /usr/bin/passwd -f user1 in the Command field.
d. Click [Run].
e. Click [Back] when the command completes.

63 Module 2 Navigating the SMH 5. Navigate to the SMH Logs tab and use the System Log Viewer link to view all lines in /var/adm/syslog/syslog.log that contain the string inetd. Answer: a. Navigate to the SMH Logs tab and click the System and Consolidated Log Viewer link. b. On the Select tab, select the /var/adm/syslog/syslog.log file. c. On the Filters tab, enter inetd in the Search field. d. Click the Display tab to view the results

64 Module 2 Navigating the SMH

Part 5: Creating custom SMH tools (Optional)

SMH includes quite a few built-in features. For even greater flexibility, though, SMH also allows system administrators to create custom SMH tools on any SMH screen.

1. Access the SMH Settings screen.
2. Click Add Custom Menu.
3. From the Type pulldown menu, select Command Line.
4. From the Page pulldown menu, select Tools.
5. In the Category field, enter Disks and File Systems.
6. In the Tool Name field, enter Purge /tmp.
7. In the Command/URL field enter the following command, which purges all files from /tmp which haven't been accessed in at least seven days: /usr/bin/find /tmp -type f -atime +7 -exec rm {} +
8. Click [Add].
9. Access the SMH Tools tab.
10. Click the new Purge /tmp tool.
11. Click [Run] to run the tool.

65 Module 2 Navigating the SMH Part 6: Cleanup Close your SMH browser window before proceeding to the next chapter

66 Module 2 Navigating the SMH

67 Module 3 Managing Users and Groups

Objectives

Upon completion of this module, you will be able to do the following:

List the minimum requirements for a user account.
Identify each field in the /etc/passwd file.
Identify each field in the /etc/shadow file.
Identify each field in the /etc/group file.
Create, modify, and remove user accounts.
Create, modify, and remove user groups.
Deactivate and reactivate a user account.
Configure shadow passwords.
Configure password aging.
Customize default user account security attributes in /etc/default/security.
Customize default user shell startup scripts in /etc/skel/.

68 Module 3 Managing Users and Groups 3-1. SLIDE: User and Group Concepts (Slide graphic: individual users such as Sue, Jim, Frank, Marie, Jean, Bob, and Ann assigned to the Sales and Develop groups.) Student Notes In order to gain access to an HP-UX system and its resources, users are required to log in. By controlling access to your system, you can prevent unauthorized users from running programs that consume resources, and control access to the data stored on your system. Every user on an HP-UX system is assigned a unique username, password, and User Identification (UID) number. HP-UX uses the user's UID number to determine which files and processes are associated with each user on the system. Every user is also assigned a primary group membership and, optionally, up to 20 additional group memberships. HP-UX grants access to files and directories based on a user's UID and the groups to which the user belongs. Use the id command to determine a user's UID and primary group membership. # id user1 uid=301(user1) gid=301(class)
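The id output shown above can also be retrieved piecewise with standard id flags, which is handy in shell scripts:

```shell
# Pull out individual pieces of the id output for the current user.
uid=$(id -u)       # numeric UID
gid=$(id -g)       # numeric primary GID
group=$(id -gn)    # primary group name
echo "uid=$uid gid=$gid group=$group"
```

Adding a username argument (for example, `id -u user1`) queries another account the same way.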

69 Module 3 Managing Users and Groups Use the groups command to determine a user's secondary group memberships. # groups user1 class class2 users This chapter describes the configuration files that define user accounts and groups, and the commands required to manage those files

70 Module 3 Managing Users and Groups 3-2. SLIDE: What Defines a User Account? What Defines a User Account? /etc/passwd user1:x:1001:20::/home/user1:/usr/bin/sh user2:x:1002:20::/home/user2:/usr/bin/sh user3:x:1003:20::/home/user3:/usr/bin/sh /etc/shadow (optional; strongly recommended to enable) user1:btp2slrck70es:1001:::::: user2:btp2slrck70es:1002:::::: user3:btp2slrck70es:1003:::::: /etc/group users::20: accts::1001:user1,user2 sales::1002:user1,user2,user3,user4,user5,user6 /home user1 user2 user3 Student Notes User accounts are defined in the /etc/passwd file. Each line in the /etc/passwd file identifies a user's username, password, User ID, primary group, home directory, and other critical user-specific information. Some users may belong to multiple user groups. The /etc/passwd file defines each user's primary group membership. The /etc/group file defines additional group memberships. Finally, most users have a home directory under /home, beneath which they can store their personal files and directories

71 Module 3 Managing Users and Groups 3-3. SLIDE: The /etc/passwd File The /etc/passwd File /etc/passwd contains a one-line definition of each valid user account /etc/passwd (r--r--r--) root:qmaj8as.,8a3e:0:3::/:/sbin/sh daemon:*:1:5::/:/sbin/sh user1:adok60aazrgxu:1001:1001::/home/user1:/usr/bin/sh user2:adok60aazrgxu:1002:1001::/home/user2:/usr/bin/sh user3:adok60aazrgxu:1003:1001::/home/user3:/usr/bin/sh Username Password UID GID Comments Home Directory Shell Use /usr/sbin/vipw to edit /etc/passwd Use /usr/sbin/pwck to check the /etc/passwd file syntax Student Notes The /etc/passwd file contains a one-line entry for each authorized user account. All fields are delimited by colons (:). Username The username that is used when a user logs in. The first character in each username should be alphabetic, but remaining characters may be alphabetic or numeric. Usernames are case sensitive. In 11i v1 and v2, the username must be 1-8 characters in length. If a name contains more than eight characters, only the first eight are significant. 11i v3 supports usernames up to 255 characters in length. However, this functionality must be manually enabled by temporarily stopping the pwgrd password hashing daemon, executing the lugadmin (long user/group name) command, and restarting pwgrd. This process shouldn't impact existing users or running processes. Once enabled, long usernames cannot be disabled

72 Module 3 Managing Users and Groups # /sbin/init.d/pwgr stop pwgrd stopped # lugadmin -e Warning: Long user/group name once enabled cannot be disabled in future. Do you want to continue [y/n]: y lugadmin: Note: System is enabled for long user/group name # /sbin/init.d/pwgr start pwgrd started To determine if long usernames are enabled, execute lugadmin -l. 64 indicates that the maximum username length is 8 characters. 256 indicates that long usernames are enabled. # lugadmin -l 256 Commands such as who, ll, and ps that display usernames may truncate usernames greater than 8 characters. The user represented in the who output below has username ThisIsALongName. $ who ThisIsA+ console Jun 13 13:27 Long usernames may cause problems for scripts and applications that attempt to parse the output from these commands or the contents of the /etc/passwd file. Password The encrypted password. You can encrypt a new password for a user via the passwd command. /etc/passwd supports user passwords up to eight characters. If the password field is empty, the user can log in without entering a password. An asterisk (*) in the password field deactivates an account. Nothing you can type will encrypt to an asterisk, so no one can log in using the associated login name. User ID Each user must be assigned a user ID. User ID 0 is reserved for root, and UIDs 1-99 are reserved for other predefined accounts required by the system. SAM, SMH, and ugweb automatically assign UID numbers when creating new user accounts. Version 10.20 of HP-UX introduced support for User IDs as large as 2,147,483,646. Prior to HP-UX 10.20, UIDs greater than 60,000 were not supported. To determine your system's maximum UID, check the MAXUID

73 Module 3 Managing Users and Groups parameter in /usr/include/sys/param.h. Using large UIDs may cause problems when sharing files with other systems that do not support large UIDs. Group ID The user's primary group ID (GID). This number corresponds with an entry in the /etc/group file. See the /etc/group discussion later in the chapter for more information. Comments The comment field allows you to add extra information about the users, such as the user's full name, telephone extension, organization, or building number. Home directory The absolute path to the directory the user will be in when they log in. If this directory does not exist or is invalid, then the user's home directory becomes /. Command The absolute path of a command to be executed when the user logs in. Typically, this is a shell. The shells that are usually used are /usr/bin/sh, /usr/bin/ksh, and /usr/bin/csh. Administrators must use the /sbin/sh POSIX shell. Most non-root users should use the /usr/bin/sh POSIX shell. If the field is empty, the default is /usr/bin/sh. The command entry does not have to be a shell. For example, you can create the following entry in /etc/passwd: date:rc70x.4.hgjdc:20:1::/:/usr/bin/date The command is /usr/bin/date. If you type date at the login prompt, then type the appropriate password, the system will run the /usr/bin/date command, and then log you out. NOTE: The permissions on the passwd file should be read only (r--r--r--) and the owner must be root. Required Entries in /etc/passwd Several entries are required in /etc/passwd to support various system daemons and processes. The list below shows the most critical required user accounts; others may be required, too, to support your system's applications.
root:rz1lps2jyh3ia:0:3::/:/sbin/sh daemon:*:1:5::/:/sbin/sh bin:*:2:2::/usr/bin:/sbin/sh sys:*:3:3::/: adm:*:4:4::/var/adm:/sbin/sh uucp:*:5:3::/var/spool/uucppublic:/usr/lbin/uucp/uucico lp:*:9:7::/var/spool/lp:/sbin/sh nuucp:*:11:11::/var/spool/uucppublic:/usr/lbin/uucp/uucico 3-7
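Because every /etc/passwd entry is a colon-delimited record, awk with `-F:` makes it easy to report on individual fields. A small self-contained sketch (the entries are inlined samples rather than a real password file):

```shell
# Print username, UID, and login shell for each /etc/passwd-style entry.
awk -F: '{ printf "%-8s uid=%-5s shell=%s\n", $1, $3, $7 }' <<'EOF'
root:x:0:3::/:/sbin/sh
daemon:*:1:5::/:/sbin/sh
user1:x:1001:1001::/home/user1:/usr/bin/sh
EOF
```

Swapping the here-document for `/etc/passwd` produces the same report on a live system.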

74 Module 3 Managing Users and Groups hpdb:*:27:1:allbase:/:/sbin/sh nobody:*:-2:60001::/: Editing /etc/passwd If you are using vi to edit /etc/passwd and a user attempts to change a password while you are editing, the user's change will not be entered into the file. To prevent this situation, use vipw when editing /etc/passwd. # vipw This command puts a lock on the /etc/passwd file by copying /etc/passwd to /etc/ptmp. If a user attempts to change a password, he or she will be told that the passwd file is busy. When you leave vipw, some automatic checks are done, and if your changes are correct, the temporary file is moved to /etc/passwd. Otherwise, /etc/passwd will remain unchanged. Checking the /etc/passwd File The consistency of the /etc/passwd file can be checked with the /usr/sbin/pwck command. It checks the number of fields in each entry, verifies that the login directory and optional program name exist, and validates the login name, user ID, and group ID. # pwck [/etc/passwd] user1:fnnmd.dgyptlu:301:301:student:/home/user1 Too many/few fields
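The field-count portion of pwck's checking is easy to mimic with awk, which clarifies why the example entry above is flagged: it has only six of the seven required fields. A minimal sketch with inlined sample entries:

```shell
# A pwck-style sanity check: flag any passwd-format entry that does not
# have exactly 7 colon-separated fields.  The first sample entry below
# is missing its shell field, so it is reported.
awk -F: 'NF != 7 { print $1 ": wrong number of fields (" NF ")" }' <<'EOF'
user1:x:301:301:student:/home/user1
user2:x:302:301:student:/home/user2:/usr/bin/sh
EOF
```

The real pwck performs additional checks (existence of the home directory, validity of UID/GID), so this is a complement, not a replacement.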

75 Module 3 Managing Users and Groups 3-4. SLIDE: The /etc/shadow File The /etc/shadow File Passwords can optionally be stored in /etc/shadow /etc/shadow is more secure than /etc/passwd /etc/shadow (r--------) user1:adok60aazrgxu:12269:70:140:70:35:: Username, Encrypted Password, Last Changed, Min Days, Max Days, Warn Days, Inactive Days, Expiration, Unused Install the ShadowPassword product (only necessary in 11i v1) Use /usr/sbin/pwck to verify your current /etc/passwd file syntax Use /usr/sbin/pwconv to move passwords from /etc/passwd to /etc/shadow Use /usr/sbin/pwunconv to move passwords back to /etc/passwd Student Notes The default permissions on the /etc/passwd file are r--r--r--. Since the file is world-readable, anyone with a valid login can view the file and view encrypted passwords. Hackers sometimes exploit this fact to extract a list of encrypted passwords and run a password cracking utility to gain access to other users' accounts. Unfortunately, removing world-read permission on the /etc/passwd file isn't a viable solution to this problem. Many commands, from login to ps to ll, use the /etc/passwd file to convert UIDs to usernames, and vice versa. Changing the /etc/passwd file permissions to 400 would cause these commands to fail. HP's shadow password functionality addresses this problem by moving encrypted passwords and other password information to the /etc/shadow file, which has 400 permissions to ensure that it is only readable by root. Other user account information (UIDs, GIDs, home directory paths, and startup shells) remains in the /etc/passwd file to ensure that login, ps, ll, and other commands can still convert UIDs to usernames

76 Module 3 Managing Users and Groups Configuring Shadow Passwords By default, the /etc/shadow file doesn't exist. Use the cookbook below to convert to a shadow password system: 1. Shadow password support is included by default in 11i v2 and v3. HP-UX 11i v1 administrators, however, must download and install the ShadowPassword patch bundle. Use the swlist command to determine if the product has already been installed. # swlist ShadowPassword 2. Run pwck to verify that there aren't any syntax errors in your existing /etc/passwd file. # pwck 3. Use the pwconv command to move your passwords to the /etc/shadow file. # pwconv *Warning*: There is a restriction on the use of shadow password functionality in this release of HP-UX. Failure to consider this limitation may lead to an inability to log in to the system after the conversion is performed. A system converted to use shadow passwords is not compatible with any repository other than files and ldap. This means that the passwd entry in the nsswitch.conf file must not contain nis, nis+, or dce. Would you like to proceed with the conversion? (yes/no): yes 4. Verify that the conversion succeeded. The /etc/passwd file should remain world-readable, but the /etc/shadow file should only be readable by root. The encrypted passwords in /etc/passwd should have been replaced by x's. # ll /etc/passwd /etc/shadow -r--r--r-- 1 root sys 914 May 18 14:35 /etc/passwd -r-------- 1 root sys 562 May 18 14:35 /etc/shadow 5. You can revert to the traditional non-shadowed password functionality at any time via the pwunconv command. # pwunconv All of the standard password commands, including passwd, useradd, usermod, userdel, and pwck, are shadow password aware
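Step 4 of the cookbook can be scripted: after pwconv, every password field in /etc/passwd should be an x (or a * for locked system accounts). A minimal sketch of such a check, run against inlined sample entries rather than the live file:

```shell
# Flag any passwd-format entry whose password field is neither "x" nor
# "*", i.e. an encrypted password that pwconv failed to move to
# /etc/shadow.  The third sample entry below is deliberately unconverted.
awk -F: '$2 != "x" && $2 != "*" { print $1 ": password still in /etc/passwd" }' <<'EOF'
root:x:0:3::/:/sbin/sh
daemon:*:1:5::/:/sbin/sh
user1:adok60aazrgxu:1001:20::/home/user1:/usr/bin/sh
EOF
```

On a correctly converted system, running the same awk against /etc/passwd prints nothing.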

77 Module 3 Managing Users and Groups Fields in /etc/shadow The /etc/shadow file is an ASCII file consisting of any number of user entries separated by newlines. Each user entry line consists of the following fields separated by colons: username Each login name must match a username in /etc/passwd. In 11i v3, /etc/shadow is compatible with long usernames as described on the /etc/passwd slide previously. password When you convert to a shadowed system, each password in /etc/passwd is replaced with an x, and the encrypted passwords are copied to the second field in /etc/shadow. If the /etc/shadow password field is null, then there is no password and no password is demanded on login. Login can be prevented by entering a * in the /etc/shadow password field. last changed The number of days since January 1, 1970 that the password was last modified. This field is used by the password aging mechanism, which will be described later in the chapter. min days The minimum number of days that a user must retain a password before it can be changed. This field is used by the password aging mechanism. max days The maximum number of days for which a password is valid. A user who attempts to log in after his password has expired is forced to supply a new one. If min days and max days are both zero, the user is forced to change his password the next time he logs in. If min days is greater than max days, then the password cannot be changed. These restrictions do not apply to the superuser. This field is used by the password aging mechanism. warn days The number of days the user is warned before his password expires. This field is used by the password aging mechanism. inactivity The maximum number of days of inactivity allowed after a password has expired. The account is locked if the password is not changed within the specified number of days after the password expires. If this field is set to zero, then the user is required to change his password. This field is only used by HP-UX trusted systems, which aren't discussed in this course. expiration The absolute number of days since January 1, 1970 after which the account is no longer valid. A value of zero in this field indicates that the account is locked. reserved The reserved field is always null, and is reserved for future use
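The "last changed" and "expiration" fields count days since January 1, 1970, which can be unintuitive to read. Today's value in that unit is simply the epoch-seconds clock divided by 86,400 (seconds per day):

```shell
# Compute today's date as days-since-epoch, the unit used by the
# "last changed" and "expiration" fields in /etc/shadow.
today=$(( $(date +%s) / 86400 ))
echo "days since 1970-01-01: $today"
```

Comparing this value against a user's "last changed" field tells you how many days ago the password was set.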

78 Module 3 Managing Users and Groups Editing /etc/shadow Manually editing the /etc/shadow file isn't recommended. On a shadow password system, you should use the useradd, usermod, userdel, and passwd commands to manage user accounts in both /etc/passwd and /etc/shadow. These commands will be described in detail later in the chapter. Enabling SHA-512 Passwords in /etc/shadow Traditionally, HP-UX has used a variation of the DES encryption algorithm to encrypt user passwords in /etc/passwd. HP-UX 11i v2 and v3 now support the more secure SHA-512 algorithm if you install the Password Hashing Infrastructure patch bundle. HP-UX 11i v3 also supports long passwords up to 255 characters if you add the LongPass11i3 patch bundle, too. Use the following commands to determine if your system has these patch bundles: In 11i v2: # swlist SHA In 11i v3: # swlist PHI11i3 LongPass11i3 These patches are not available for 11i v1. After installing the software, add the following two lines to /etc/default/security to enable SHA-512 password hashing: # vi /etc/default/security CRYPT_DEFAULT=6 CRYPT_ALGORITHMS_DEPRECATE="unix" The lines above ensure that when passwords are created or changed, HP-UX always uses the new SHA-512 algorithm rather than the legacy 3DES "unix" algorithm. Existing users can continue using their legacy passwords until their passwords expire, or until they manually change their passwords. As users change their passwords, note that the resulting passwords in /etc/shadow become much longer. The $6$ prefix in the second password field below indicates that the password was encrypted via SHA-512. Before: user1:9otpronwckt9w:14370:::::: After: user1:$6$at65drdj$e9mfdcrnmmyjp1oeaolzgslsyaxmzms1tggdni8SUqrYYPvGSZXZNh/Ov0O5RdMgCe3Vap5DApx0zpr6XB190.:14370:::::: This functionality only works on systems that store passwords in /etc/shadow rather than /etc/passwd

79 Module 3 Managing Users and Groups NIS and NIS+ are incompatible with this feature, as are some third-party applications that directly parse encrypted passwords. Enabling Long Passwords in /etc/shadow On 11i v3 systems, you can also enable long passwords up to 255 characters in length by adding this line to /etc/default/security: # vi /etc/default/security CRYPT_DEFAULT=6 CRYPT_ALGORITHMS_DEPRECATE="unix" LONG_PASSWORD=1 This functionality only works on systems that store passwords in /etc/shadow, and that have the SHA-512 password functionality enabled. See the HP-UX Password Hashing Infrastructure Release Notes, the HP-UX LongPassword documentation, and the HP-UX Security (H3541S) course to learn more about these features
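Because the hash scheme is encoded in the prefix of the password field ($6$ for SHA-512, a bare 13-character string for a legacy DES-style hash), a script can classify a shadow entry at a glance. A small sketch (the helper function name and sample hashes are made up for illustration):

```shell
# Classify a shadow password hash by its prefix.  $6$ marks SHA-512;
# $1$ marks MD5; anything else is treated as a legacy DES-style hash.
hash_type() {
    case "$1" in
        '$6$'*) echo SHA-512 ;;
        '$1$'*) echo MD5 ;;
        *)      echo legacy-DES ;;
    esac
}

hash_type '$6$at65drdj$e9mfdcrn...'   # sample SHA-512 hash (truncated)
hash_type '9otpronwckt9w'             # sample legacy 13-character hash
```

Combined with awk over /etc/shadow's second field, this makes it easy to audit which accounts have migrated to SHA-512 after enabling the feature.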

80 Module 3 Managing Users and Groups 3-5. SLIDE: The /etc/group File The /etc/group File /etc/passwd determines a user's primary group membership /etc/group determines a user's secondary group memberships other::1:root,daemon,uucp,sync users::20: accts::1001:user1,user2 sales::1002:user1,user2,user3,user4,user5,user6 Group GID Members Use /usr/bin/vi to edit /etc/group Use /usr/sbin/grpck to check the /etc/group file syntax Student Notes When a user logs in on an HP-UX system, HP-UX checks the GID field in the user's /etc/passwd entry to determine the user's primary group membership. The /etc/group file determines a user's secondary group memberships. Users will be granted group access rights to any file associated with either their primary or secondary groups. New files and directories that the user creates will, by default, be assigned to the user's primary group. Users who prefer to associate new files and directories with a secondary group can use the newgrp command to temporarily change their GID. # newgrp sales

81 Module 3 Managing Users and Groups To return to the primary group, run newgrp without any options. # newgrp To determine which groups a user belongs to, use the groups command. # groups user1 sales accts /etc/group File Format The colon-delimited /etc/group file defines user groups. group_name is the mnemonic name associated with the group. If you run ll on a file, you will see this name printed in the group field. In 11i v1 and v2, group names may only be 8 characters in length. In 11i v3, the lugadmin command enables long group names up to 255 characters. password may contain an encrypted group-level password in earlier versions, but is no longer used. group_id is the group ID (GID). This is the number that should be placed in the group_id field of the /etc/passwd file. GIDs 1-99 are reserved for predefined groups required by the system. SAM, SMH, and ugweb automatically assign GID numbers when creating new groups. Version 10.20 of HP-UX introduced support for GIDs as large as 2,147,483,646. Prior to HP-UX 10.20, GIDs greater than 60,000 were not supported. To determine your system's maximum GID, check the MAXUID parameter in /usr/include/sys/param.h. Using large GIDs may cause problems when sharing files with other systems that don't support large GIDs. group_list is a list of usernames of users who are members of the group. A user's primary group is defined in the fourth field of /etc/passwd, not in the /etc/group file. Each user can be a member of up to 20 secondary groups. This limit is determined by the NGROUPS_MAX parameter in /usr/include/limits.h. Also, each line in the /etc/group file can be no more than 2048 characters, as defined by the LINE_MAX parameter in /usr/include/limits.h

82 Module 3 Managing Users and Groups Required Entries in /etc/group root::0:root other::1:root,hpdb bin::2:root,bin sys::3:root,uucp adm::4:root,adm daemon::5:root,daemon mail::6:root lp::7:root,lp tty::10: nuucp::11:nuucp nogroup:*:-2: For more information on the /etc/group file, see group(4) in the HP-UX Reference manual. Checking the /etc/group File The consistency of the /etc/group file can be checked with the /usr/sbin/grpck command. It will check for the number of fields in each entry, and whether all login names appear in the password file. # grpck users::20:root,user101 user101 - Logname not found in password file
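The cross-check that grpck performs (every member listed in /etc/group must exist in /etc/passwd) can be approximated with a two-file awk pass. A self-contained sketch using temporary sample files rather than the live configuration:

```shell
# A grpck-style check: report /etc/group members with no /etc/passwd
# entry.  Sample files stand in for the real ones.
passwd_f=$(mktemp); group_f=$(mktemp)
cat > "$passwd_f" <<'EOF'
root:x:0:3::/:/sbin/sh
user1:x:301:20::/home/user1:/usr/bin/sh
EOF
cat > "$group_f" <<'EOF'
users::20:root,user1,user101
EOF

# First pass records passwd usernames; second pass splits each group's
# member list and flags names never seen in the first pass.
awk -F: 'NR == FNR { seen[$1]; next }
         { n = split($4, m, ","); for (i = 1; i <= n; i++)
             if (!(m[i] in seen)) print m[i] " - not in passwd file" }' \
    "$passwd_f" "$group_f"

rm -f "$passwd_f" "$group_f"
```

Running it on the samples above reports user101, mirroring the grpck output shown earlier.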

83 Module 3 Managing Users and Groups 3-6. SLIDE: Creating User Accounts Creating User Accounts Use useradd to create new user accounts Create a user account: # useradd -o \ # allow a duplicate UID -u 101 \ # define the UID -g users \ # define the primary group -G class,training \ # define secondary groups -c "student user" \ # define the comment field -m -d /home/user1 \ # make a home directory for the user -s /usr/bin/sh \ # define the default shell -e 1/2/09 \ # define an account expiration date -p fnnmd.dgyptlu \ # specify an encrypted password -t /etc/default/useradd \ # specify a template user1 # define the username Interactively set a password for the new account: # passwd user1 # interactively specify a password or # passwd -d user1 # set a null password # passwd -f user1 # force a password change at first login Student Notes The useradd command provides a convenient mechanism for adding user accounts. Without any options, useradd simply adds a user to the /etc/passwd file using all of the user account defaults: # useradd user1 # grep user1 /etc/passwd user1:x:101:20::/home/user1:/sbin/sh Most administrators choose to override one or more of these defaults via some combination of the command line options listed below: -o -u uid -u specifies the User ID (UID) for the new user. uid must be a nonnegative integer less than MAXUID as it is defined in the /usr/include/sys/param.h header file. uid defaults to the next available unique number above the maximum currently assigned number. UIDs from 0-99 are reserved

84 Module 3 Managing Users and Groups The -o option allows the UID to be non-unique. This is most useful when creating multiple user accounts with UID 0 administrator privileges. -g group Specifies the integer group ID or character string name of an existing group. This defines the primary group membership of the new login. -G group Specifies a comma-separated list of additional GIDs or group names. This defines the supplemental group memberships of the new login. Duplicates within the -g and -G options are ignored. -c comment Specifies the comment field in the /etc/passwd entry for this login. This can be any text string. A short description of the new login is suggested for this field. The field may be used to record users' names, telephone numbers, office locations, employee numbers, or other information. The field isn't referenced by the system. -k skeldir Specifies the skeleton directory containing files that should be copied to all new user accounts. Defaults to /etc/skel. See the /etc/skel discussion later in this chapter for more information. -m -d dir -d specifies the new user's home directory path. The home directory path defaults to /home/username. With the optional -m (make) option, useradd also creates the home directory. -s shell Specifies the full pathname of the new user's login shell. By default, the system uses /sbin/sh as the login shell. /sbin/sh is a POSIX shell, but it's a statically linked executable that consumes more system resources than the dynamically linked /usr/bin/sh shell. /sbin/sh is required for the root account, but other accounts should use /usr/bin/sh. -e expire Specifies the date after which this login can no longer be used. After expire, no user will be able to access this login. Use this option to create temporary logins. expire, which is a date, may be typed in a variety of formats, including mm/dd/yy. See the man page for other supported formats.
This option only works on systems configured to use the /etc/shadow file. -f inactive Specifies the maximum number of days of continuous inactivity of the login before the login is declared invalid. This option is only supported on trusted systems. To learn more about HP's trusted system functionality, attend HP Customer Education's H3541S course. -p password Specifies an encrypted password for the account. The argument passed to -p must be a valid encrypted password, created via the crypt() Perl/C function. The example below uses command substitution to execute a perl command that encrypts the password hp for user1. Although this solution is convenient, beware that the command (which includes the user's cleartext password) will appear in the process table and in ~/.sh_history
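When many accounts must be created at once, the useradd options above are often driven from a data file in a loop. A dry-run sketch that only echoes the commands it would run (the usernames, comments, and group are made up for illustration; remove the echo and run as root to create accounts for real):

```shell
# Generate useradd command lines from "username:comment" pairs.
# The echo makes this a harmless dry run.
while IFS=: read -r name comment; do
    echo useradd -m -g users -c "$comment" -s /usr/bin/sh "$name"
done <<'EOF'
user1:first student
user2:second student
EOF
```

Reviewing the echoed commands before executing them is a cheap safeguard against a malformed input file.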

85 Module 3 Managing Users and Groups useradd -p $(perl -e "print crypt('hp','xx')") user1 For a description of the Perl function, type perldoc -f crypt. For a description of the equivalent C function, type man 2 crypt. If -p isn't specified, useradd creates the user account, but doesn't enable it. Execute the passwd username command to interactively assign a password to the new account. -t template Specifies a template file, which establishes default options for the command. See the user template discussion below. /etc/default/useradd is the default template file. username Specifies the new user's username. The username should be between one and eight characters in length. The first character should be alphabetic. If the name contains more than eight characters, only the first eight are significant. The slide shows a complete example using many of these options. Setting a User Password The useradd command creates a user account, but unless the -p option was specified, the passwd command must be used to define a password for the new account before the user can log in. The administrator can either define a password for the user or set a null password: # passwd user1 # interactively specify a password for the user or # passwd -d user1 # set a null password In either case, most administrators force new users to choose a new, memorable password the first time they log in. # passwd -f user1 # force a password change at first login Creating useradd Templates in /etc/default/ Administrators who manage many user accounts often configure useradd template files in the /etc/default/ directory. Template files establish default values for many of the useradd options. The useradd command consults the /etc/default/useradd template by default, but additional templates can be created as well with different default parameters for different types of users.
The example below creates a useradd template that might be used when creating user accounts for C application developers who prefer to use the C shell and need to belong to the developer group. The example only demonstrates a few options. See the useradd(1m) man page for additional options. # useradd -D \ # update defaults for a template -t /etc/default/useradd.cusers \ # template file location -b /home \ # base for home directories

86 Module 3 Managing Users and Groups -c "C programmer" \ # comment -g developer \ # primary group -s /usr/bin/csh # default shell To verify that the template was created, execute useradd with just the -D and -t options, or simply cat the file. # useradd -D -t /etc/default/useradd.cusers GROUPID 20 BASEDIR /home SKEL /etc/skel SHELL /usr/bin/csh INACTIVE -1 EXPIRE COMMENT C programmer CHOWN_HOMEDIR no CREAT_HOMEDIR no ALLOW_DUP_UIDS no The example below uses the new template to create a user account. Recall that -m creates a home directory for the new user. # useradd -m -t /etc/default/useradd.cusers user1 # tail -1 /etc/passwd user1:*:101:20:C programmer:/home/user1:/usr/bin/csh
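The useradd -D listing is a simple "KEY value" report, so scripts can look up an individual default with awk. A sketch against an inlined copy of that output format (the temporary file stands in for a captured `useradd -D -t ...` listing; the on-disk template file's own syntax may differ):

```shell
# Look up the default shell from a saved "useradd -D" style listing.
tmpl=$(mktemp)
cat > "$tmpl" <<'EOF'
GROUPID 20
BASEDIR /home
SHELL /usr/bin/csh
EOF

shell=$(awk '$1 == "SHELL" { print $2 }' "$tmpl")
echo "default shell: $shell"
rm -f "$tmpl"
```

The same pattern extracts GROUPID, BASEDIR, or any other key.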

87 Module 3 Managing Users and Groups 3-7. SLIDE: Modifying User Accounts Modifying User Accounts The administrator can use usermod to modify user accounts Users can modify some attributes of their own accounts via passwd, chsh, and chfn Modify a user account (Administrators): # usermod -l user01 user1 # change the user's username # usermod -o -u 101 user1 # change the user's UID # usermod -g users user1 # change the user's primary group # usermod -G class,training user1 # change the user's secondary group(s) # usermod -c student user1 # change the user's comment field # usermod -m -d /users/user01 user1 # move the user's home directory # usermod -s /usr/bin/ksh user1 # change the user's default shell # usermod -e 1/3/09 user1 # change the user's account expiration # usermod -p fnnmd.dgyptlu user1 # non-interactively change a password Modify a user password (Administrators): # passwd user1 # interactively change a password Modify a user account or password (Users): $ passwd # change the user's password $ chsh user1 /usr/bin/ksh # change the user's shell $ chfn user1 # change the user's comment field Student Notes User account settings may be modified by the administrator, or, to a lesser extent, by users. Modifying a User Account (Administrators) The system administrator can change any user's account settings via the passwd and usermod commands. -l username Changes the user's username. This option doesn't, however, change the user's home directory name. See the -m and -d options below. -o -u uid -u changes the user's User ID (UID). Changing a user's UID via usermod automatically changes the ownership of the files in the user's home directory to match the new UID. If the user owns files in other directories, though, be sure to use the chown command to change the ownership of those files to match the new UID

88 Module 3 Managing Users and Groups The -o option allows the new UID to be non-unique (i.e., allows duplicate UIDs). This is most useful when creating multiple user accounts with UID 0 administrator privileges. -g group Changes the user's primary group membership. -G group Replaces the user's existing secondary group memberships in /etc/group with a new list of secondary group memberships. Multiple groups may be specified as a comma-separated list. -c comment Specifies the comment field in the /etc/passwd entry for this login. This can be any text string. A short description of the new login is suggested for this field. The field may be used to record users' names, telephone numbers, office locations, employee numbers, or other information. The field isn't referenced by the system. -m -d dir -d changes the user's home directory path in /etc/passwd. The -m option moves the user's existing home directory to the new location specified by -d. Without the -m option, the user's home directory path is changed in /etc/passwd, but no files are moved. If the -m option isn't specified, the directory following -d must be an existing directory. -p password Specifies an encrypted password for the account. The argument passed to -p must be a valid encrypted password, created via the crypt() Perl/C function. The example below uses command substitution to execute a perl command that encrypts the password hp for a new user1 account. # useradd -p $(perl -e "print crypt('hp','xx')") user1 For a description of the Perl function, type perldoc -f crypt. For a description of the equivalent C function, type man 2 crypt. -p is mostly used in scripts designed to modify multiple account passwords in an automated fashion. To interactively modify a user's password, use the passwd command instead. # passwd user1 Changing password for user1 New password: ****** Re-enter new password: ****** Passwd successfully changed -s shell Specifies the full pathname of the user's login shell.
By default, the system uses /sbin/sh as the login shell. /sbin/sh is a POSIX shell, but it's a statically linked executable that consumes more system resources than the dynamically linked /usr/bin/sh shell. /sbin/sh is required for the root account, but other accounts should use /usr/bin/sh. -e expire Specifies the date after which this login can no longer be used. After expire, no user will be able to access this login. Use this option to create

89 Module 3 Managing Users and Groups temporary logins. expire, which is a date, may be typed in a variety of formats, including mm/dd/yy. See the man page for other supported formats. This option only works on systems configured to use the /etc/shadow file. -f inactive Specifies the maximum number of days of continuous inactivity of the login before the login is declared invalid. This option is only supported on trusted systems. To learn more about HP's trusted system functionality, attend HP Customer Education's H3541S course. Modifying a User Password (Administrators) Administrators can change any user's password. The administrator isn't prompted for the user's existing password. $ passwd user1 Changing password for user1 New password: ****** Re-enter new password: ****** Passwd successfully changed Alternatively, use the -d option to set a null password. Users with null passwords aren't prompted to enter a password at login. # passwd -d user1 In either case, consider using the -f option to force the user to personally select a new password at next login. # passwd -f user1 Modifying a User Account (Users) Users can change their own passwords via the passwd command, but must know their current password. $ passwd Changing password for user1 Old password: ****** New password: ****** Re-enter new password: ****** Passwd successfully changed Users can modify some of their other account attributes, too, via the chsh and chfn commands. $ passwd # change the user's password $ chsh user1 /usr/bin/ksh # change the user's shell $ chfn user1 # change the user's comment field interactively

3-8. SLIDE: Deactivating User Accounts

Deactivating User Accounts

Deactivating a user account prevents the user from logging in
However, the user's entry remains in the /etc/passwd file and can be reactivated
The user's files can be left as-is, removed, or transferred to another user

Deactivate a user account
# passwd -l user1

Reactivate a user account
# passwd user1

Remove a user's home directory
# rm -rf /home/user1

Or remove the user's files from every directory
# find / -user user1 -type f -exec rm -i {} +
# find / -user user1 -type d -exec rmdir {} +

Or transfer ownership to a different user
# find / -user user1 -exec chown user2 {} +

Student Notes

If a user is going on leave, or no longer needs access to the system, deactivate/lock their account. Deactivating an account places an * in the user's password field and prevents the user from logging in.

# passwd -l user1

If the user returns, simply choose a new password for the user to reactivate their account.

# passwd user1

If a user's account has been deactivated and the user's files will never be used by another user, reclaim the user's disk space by removing their home directory.

# rm -rf /home/user1

Some users may have files scattered across other directories as well. Use the find command to find and remove the user's files and directories. The -i option provides an opportunity to review each file before removing it.

# find / -user user1 -type f -exec rm -i {} +
# find / -user user1 -type d -exec rmdir -i {} +

Alternatively, consider reassigning the user's files to a different user. The example below chowns all files owned by user1 to user2.

# find / -user user1 -exec chown user2 {} +

3-9. SLIDE: Removing User Accounts

Removing User Accounts

Removing a user removes the user from /etc/passwd and /etc/group
The user's files can be left as-is, removed, or transferred to another user

Delete a user account, but leave the user's files untouched
# userdel user1

Delete a user account and remove the user's home directory
# userdel -r user1

Or remove the user's files from every directory
# find / -user user1 -type f -exec rm -i {} +
# find / -user user1 -type d -exec rmdir {} +

Or transfer ownership to a different user
# find / -user user1 -exec chown user2 {} +

Find files owned by non-existent users or groups
# find / -nouser -exec ll -d {} +
# find / -nogroup -exec ll -d {} +

Student Notes

If you are certain that a user will never need access to your system again, you may prefer to remove the user's account from the /etc/passwd file entirely.

# userdel user1

If you want to remove the user's home directory, too, include the -r (recursive remove) option.

# userdel -r user1

Some users may have files scattered across other directories as well. You can use the find command to find and remove the user's other files and directories.

# find / -user user1 -type f -exec rm -i {} +
# find / -user user1 -type d -exec rmdir -i {} +

Alternatively, consider reassigning the user's files to a different user.

# find / -user user1 -exec chown user2 {} +

Or, perhaps simply leave the files on disk as-is. If you choose this approach, the ll command will report the old user's userid rather than username in the file owner field. Use the find command to generate a list of all such orphaned files.

# find / -nouser -exec ll -d {} +
# find / -nogroup -exec ll -d {} +

SLIDE: Configuring Password Aging

Configuring Password Aging

Password aging forces users to change their passwords on a regular basis

# passwd -n 7 -x 70 -w 14 user1   # enable password aging for a user
# passwd -s user1                 # check a user's password status
# passwd -sa                      # check the status of all users

(Slide timeline: password change prohibited from t=0 days; password change allowed from t=7 days; password warning appears at t=56 days; password change required at t=70 days. Requires /etc/shadow.)

Student Notes

Many administrators force users to change their passwords on a regular basis via password aging. Thus, even if a hacker were to obtain a copy of the /etc/passwd file, passwords gleaned from that file would only be useful for a short period of time. Password aging may be enabled via the /usr/bin/passwd command:

# passwd -n 7 -x 70 -w 14 user1
<min> argument rounded up to nearest week
<max> argument rounded up to nearest week
<warn> argument rounded up to nearest week

The -x option defines the maximum number of days a user is allowed to retain a password. In the example on the slide, user1 will be forced to change his or her password every 70 days. The -n option defines the minimum number of days a user is required to retain a password after a password change. This, too, is rounded to the nearest week. In the example on the slide, user1 must retain each new password for a minimum of 7 days. This prevents a user from changing their password, then immediately reverting to their previously used password each time their password expires.

-n   Sets the minimum number of days between password changes. Although this parameter must be specified in days, passwd rounds up to the nearest week. In the example on the slide, user1 must retain each new password for a minimum of 7 days. This prevents a user from changing their password, then immediately reverting to their previous password.

-x   Sets the maximum number of days allowed between password changes. Although this parameter must be specified in days, passwd rounds up to the nearest week.

-w   Sets the password expiration warning period. The -w option causes the system to display a login warning message one or more weeks before a user's password expires. The number of days is configurable, but must be specified in multiples of seven days. The -w option is only available on systems configured to use the /etc/shadow file.

You can check the password status of a user's account with the -s option.

# passwd -s user1
user1 PS 03/21/

This generates a one-line summary indicating the minimum and maximum password aging parameters, as well as the week when the password was last changed. To view the aging status of all user accounts, execute:

# passwd -sa
user1 PS 03/21/
user2 PS
user3 PS

Password Aging Fields in the /etc/passwd and /etc/shadow Files

On a non-shadowed system, password aging is put in effect for a particular user if the user's encrypted password in the passwd file is followed by a comma and a non-null string of characters. This string defines the age used to implement password aging. The characters used to represent numbers of weeks are as follows:

Characters   Number of weeks
.            0
/            1
0-9          2-11
A-Z          12-37
a-z          38-63

The first character of the age, M, denotes the maximum number of weeks for which a password is valid. A user who attempts to login after the password has expired is forced to supply a new one. The next character, m, denotes the minimum period in weeks that must expire before the password can be changed. The remaining characters define the week (counted from the beginning of 1970) when the password was last changed (a null string is equivalent to zero). If m = M = 0, the user is forced to change the password at the next login (and the age disappears from the password entry). If m > M (the string "./"), only a superuser (not the user) can change the password.

On a shadow password system, password aging information is recorded in the /etc/shadow file rather than /etc/passwd. See the /etc/shadow slide elsewhere in the chapter for more information. Although these parameters may be set manually, it's much easier to use the /usr/bin/passwd command!
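The week-number encoding described above can be decoded with a few lines of shell. This is an illustrative sketch, not an HP-UX utility; the function name age_char_to_weeks is invented for the example.

```shell
# Sketch: map one character of a passwd aging string to its value in weeks,
# using the 64-character alphabet described above (. / 0-9 A-Z a-z = 0-63).
# The helper name is hypothetical.
age_char_to_weeks() {
    alphabet='./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
    prefix=${alphabet%%"$1"*}   # everything before the first occurrence of $1
    echo ${#prefix}             # its length is the character's numeric value
}

# For an aging string such as "W/", the first character is the maximum age
# and the second is the minimum age:
age_char_to_weeks W   # prints 34 (maximum: 34 weeks)
age_char_to_weeks /   # prints 1  (minimum: 1 week)
age_char_to_weeks .   # prints 0
```

The same helper could be applied to the remaining characters to recover the week of the last password change.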

SLIDE: Configuring Password Policies

Configuring Password Policies

Use /etc/default/security to establish default password & security policies

# vi /etc/default/security
MIN_PASSWORD_LENGTH=
PASSWORD_MIN_UPPER_CASE_CHARS=
PASSWORD_MIN_LOWER_CASE_CHARS=
PASSWORD_MIN_DIGIT_CHARS=
PASSWORD_MIN_SPECIAL_CHARS=
PASSWORD_MAXDAYS=
PASSWORD_MINDAYS=
PASSWORD_WARNDAYS=

Student Notes

In order to ensure that users choose secure passwords, HP-UX supports a configuration file called /etc/default/security that may be used to define a variety of security policies. To use these policies in 11i v1, install the ShadowPassword patch bundle and PHCO_24606. 11i v3, and the SecurityExt software bundle in 11i v2, provide support for several additional parameters not shown on the slide. See the security(4) man page for a complete list of policies and parameters available on your system.

MIN_PASSWORD_LENGTH=N
New passwords must contain at least N characters.

PASSWORD_MIN_UPPER_CASE_CHARS=N
New passwords must contain a minimum of N upper-case characters. In 11i v1, this only applies if PHCO_24606 is installed.

PASSWORD_MIN_LOWER_CASE_CHARS=N
New passwords must contain a minimum of N lower-case characters. This only applies if PHCO_24606 is installed.

PASSWORD_MIN_DIGIT_CHARS=N
New passwords must contain a minimum of N digit characters. This only applies if PHCO_24606 is installed on your system.

PASSWORD_MIN_SPECIAL_CHARS=N
Specifies that a minimum of N special characters are required in a password when changed.

PASSWORD_MAXDAYS=N
This parameter controls the default maximum number of days that passwords are valid. This parameter applies only to local users and does not apply to trusted systems. The passwd -x option can be used to override this value for a specific user.

PASSWORD_MINDAYS=N
This parameter controls the default minimum number of days before a password can be changed. This parameter applies only to local users and does not apply to trusted systems. The passwd -n option can be used to override this value for a specific user.

PASSWORD_WARNDAYS=N
This parameter controls the default number of days before password expiration that a user is to be warned that the password must be changed. This parameter applies only to local users on Shadow Password systems. The passwd -w option can be used to override this value for a specific user.
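Putting several of these parameters together, a complete policy stanza might look like the following. The values shown are examples only, not recommendations from this course.

```shell
# Illustrative /etc/default/security fragment (example values only)
MIN_PASSWORD_LENGTH=8
PASSWORD_MIN_UPPER_CASE_CHARS=1
PASSWORD_MIN_LOWER_CASE_CHARS=1
PASSWORD_MIN_DIGIT_CHARS=1
PASSWORD_MIN_SPECIAL_CHARS=1
PASSWORD_MAXDAYS=180
PASSWORD_MINDAYS=7
PASSWORD_WARNDAYS=7
```

Remember that the per-user passwd options (-n, -x, -w) override these system-wide defaults.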

SLIDE: Managing Groups

Managing Groups

Each user can belong to one or more groups
Groups can be managed via groupadd/groupmod/groupdel
Group memberships can be managed via usermod and groups

Create a new group
# groupadd -g 200 accts

Change a group name
# groupmod -n accounts accts

Add, replace, or delete a list of users in a group
# groupmod -a -l user1,user2 accounts   # add a list of users to a group
# groupmod -m -l user3,user4 accounts   # replace the list of users in a group
# groupmod -d -l user3,user4 accounts   # delete a list of users from a group

Delete a group
# groupdel accounts

Change a specific user's primary and secondary group membership
# usermod -g users user1
# usermod -G class,training user1

View a user's group memberships
# groups user1

Student Notes

Each user on an HP-UX system may belong to one or more groups. Groups may be managed via the groupadd/groupmod/groupdel command line utilities. Group membership may be managed via the usermod and groups commands.

Create a new group:
# groupadd -g 200 accts

Change a group name:
# groupmod -n accounts accts

Add a list of users to a group:
# groupmod -a -l user1,user2 accounts

Replace the current list of users in a group with a new list of users:
# groupmod -m -l user3,user4 accounts

Delete a list of users from a group:
# groupmod -d -l user3,user4 accounts

Delete a group:
# groupdel accounts

Change a user's primary and secondary group membership:
# usermod -g users user1
# usermod -G class,training user1

View a user's group memberships:
# groups user1

SLIDE: Managing /etc/skel

Managing /etc/skel

~/.profile and other hidden files establish a user's environment at login
/etc/skel/ contains template files to be copied to every new user account
Files can be added/modified/removed from /etc/skel as necessary
Changes in /etc/skel don't affect existing user accounts

(Slide diagram: the template files /etc/skel/.profile, .shrc, and .exrc are copied to new accounts such as /home/user1.)

Student Notes

When a user logs into a UNIX system, several scripts execute to establish the user's shell environment. The list below describes the scripts that execute during the POSIX and Korn shell login process. Login processes for other shells may vary.

1. After the user enters a username and password, the /usr/bin/login program checks the /etc/passwd file to verify that the user has a valid account. If the user's username and password are correct, the login program launches a shell for the user.

2. Next, the newly launched shell executes a script called /etc/profile. /etc/profile is a POSIX/Korn shell script that is maintained by the system administrator to configure a default environment for all users. The script accesses the /etc/path, /etc/manpath, and /etc/timezone files to set initial values for the PATH, MANPATH, and TZ variables. The script attempts to define the TERM variable automatically, too. Since /etc/profile executes every time any user logs in, the administrator can modify this file to set global default environment variables for all users at login time.

3. Next, the user's personal ~/.profile script executes. Each user has a .profile script that executes at login time to define additional environment variables, or to override the default environment variable values that the administrator defined in /etc/profile.

4. Finally, the shell looks for an environment variable called ENV. The ENV variable identifies a personal shell startup program that users may optionally choose to configure. POSIX shell users often create a ~/.shrc shell startup script, while Korn shell users typically define a ~/.kshrc shell startup script. Unlike the ~/.profile script, which only executes at login, the shell startup script executes every time the user logs in, runs a shell script, opens a terminal emulator window, or launches a shell. The POSIX and Korn shell startup scripts are typically used to define shell aliases.

Users can modify their personal ~/.profile and ~/.shrc scripts. The administrator can create a template version of these in the /etc/skel directory. useradd automatically copies the files found in this directory to each new user home directory. Thus, if you wish to change the default configuration files that are copied to new users' home directories, simply modify the files in /etc/skel. Note that changes made in /etc/skel won't affect existing users' home directories. Updated files will only be copied to new user accounts.

Additional files can be copied into /etc/skel as well, if your applications require configuration files in users' home directories. The /etc/skel directory on the slide includes a .exrc file which defines vi macros and keyboard shortcuts.

Administrators on very large systems may choose to create subdirectories under /etc/skel for different user account types. Then, when creating a user account, use the useradd -k skeldir option to specify which skeleton directory useradd should copy files from.

NOTE: There is no CDE .dtprofile script in /etc/skel. The first time a user logs in via CDE, HP-UX attempts to copy either /etc/dt/config/sys.dtprofile (if it exists) or /usr/dt/config/sys.dtprofile to the user's ~/.dtprofile. Use the following procedure to customize the default .dtprofile:

# cp -p /usr/dt/config/sys.dtprofile \
     /etc/dt/config/sys.dtprofile
# vi /etc/dt/config/sys.dtprofile

Some Common Environment Variables

The .profile script establishes a user's environment by setting environment variables. The table below lists some of the most commonly modified environment variables.

TERM
The TERM variable defines the user's terminal type. If the TERM variable is set incorrectly, applications may not be able to write to the user's terminal properly.

Valid terminal types are listed in the /usr/lib/terminfo/* directories. You can explicitly set an appropriate TERM value using a command similar to the following:

export TERM=vt100    # for a vt100 type terminal
export TERM=hp       # for an HP ASCII terminal
export TERM=dtterm   # for a dtterm terminal emulator window

More commonly, however, the TERM variable is set using the ttytype command, which can usually automatically determine your terminal type. The following portion of code can be included in one of the scripts that runs at login to set your terminal type for you:

if [ "$TERM" = "" -o \
     "$TERM" = "unknown" -o \
     "$TERM" = "dialup" -o \
     "$TERM" = "network" ]
then
    eval `ttytype -s -a`
fi
export TERM

PS1
The PS1 variable defines your shell prompt string. This, too, can be changed by the user. Some useful sample PS1 values are shown below:

export PS1='$ '                  # use a simple "$ " prompt
export PS1='$PWD $'              # include the user's pwd in the prompt
export PS1='$PWD ($LOGNAME) $'   # include the user's username, too

LPDEST
LPDEST defines the user's default printer. The printer named in LPDEST takes precedence over the system-wide default printer configured by the system administrator. Examples:

export LPDEST=laser      # use "laser" as the default printer
export LPDEST=printera   # use "printera" as the default printer

PATH
Every time the user enters a command, the shell must find the executable associated with the requested command. The PATH variable contains a ":" separated list of directories that the shell should search for executables. If users need access to new applications and utilities, you may need to modify their PATH variables. You can append a new directory to the user's PATH using syntax similar to the following:

PATH=$PATH:/usr/local/bin   # adds /usr/local/bin to the existing PATH

The initial PATH variable value is usually taken from the /etc/path file. Oftentimes installing an application automatically updates the /etc/path file for you, so it may not be necessary to update individual users' PATHs.
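A common refinement of the append shown above is to guard against adding the same directory twice, so PATH does not grow on repeated logins. The helper name below is invented for this sketch.

```shell
# Sketch: append a directory to PATH only if it is not already present.
# (Hypothetical helper, not a standard HP-UX command.)
append_path() {
    case ":$PATH:" in
        *:"$1":*) ;;                # already in PATH: do nothing
        *) PATH="$PATH:$1" ;;
    esac
}

PATH=/usr/bin:/usr/sbin
append_path /usr/local/bin
append_path /usr/local/bin          # second call is a no-op
echo "$PATH"                        # prints /usr/bin:/usr/sbin:/usr/local/bin
```

A function like this could live in /etc/profile or a user's ~/.profile.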

EDITOR
Three variables must be defined if your users want to use command line editing:

export EDITOR=vi
export HISTFILE=~/.sh_history
export HISTSIZE=50

EDITOR defines the user's preferred command line editor. emacs and vi are the only allowed values. HISTFILE determines the file that should be used to log commands entered by the user. HISTSIZE determines the number of commands retained in the shell's command buffer.

TZ
Defines the user's time zone. Internally, UNIX records timestamps as the number of seconds since January 1, 1970 UTC. Commands that display timestamps (date, who, ll, etc.) display dates and times relative to the timezone specified in the user's TZ variable. The administrator can establish a system-wide default value in /etc/timezone, but individual users may wish to customize the variable to match their local time zone. See the /usr/lib/tztab file for a list of recognized time zones. The example below establishes a TZ value appropriate for users in Chicago.

export TZ=CST6CDT

These are just some of the more commonly defined environment variables that you can define for your users. Other environment variables are defined in the man page for the POSIX shell (man 1 sh-posix), and still others may be required by your applications. Environment variables can be set from the command line, but are more commonly defined in the login configuration files, which will be covered later in this chapter. You can view a list of currently defined environment variables by executing the env command:

# env
LAB: Managing User Accounts

Directions

Perform the following tasks. Record the commands you use, and answer all questions. The password for user accounts user1-24 is class1.

Part 1: Creating and Modifying Users and Groups

1. Use the useradd command to create a user account for user25 on your system. Include the option to create a home directory for the user, and use /usr/bin/sh as the user's startup shell. Accept defaults for the other options.

2. Do you see an entry for the new user in the /etc/passwd file? Do you see an entry for the new user in the /etc/group file? Explain.

3. Can the user login at this point?

4. Choose and set a password for the new user.

5. Force the user to choose a new password the first time they login.

6. Login as user25 to verify that the new account works. What happens?

7. Return to the root account.

8. Oops! We forgot to define the comment field for user25. Set user25's comment field to "student account".

9. user25 needs to collaborate with user24 on a project. Create a group called project, and ensure that user24 and user25 both have access to the group.

10. Create a /home/project directory that user24 and user25 can use to store and manage files associated with their project. Ensure that the administrator and members of the project group are the only users who can access the shared directory.

# mkdir /home/project
# chown root:project /home/project
# chmod 770 /home/project

11. Verify that user24 and user25 have access to the group, and that other users don't.

# su user23 -c "touch /home/project/f23"   # should fail!
# su user24 -c "touch /home/project/f24"   # should succeed!
# su user25 -c "touch /home/project/f25"   # should succeed!

Part 2: Deactivating and Removing User Accounts

1. Deactivate user24's account.

2. Remove user25's account without removing user25's home directory.

3. What changed in the /etc/passwd file because of the commands in the previous two questions?

4. What happens now when user24 and user25 attempt to log in? telnet to your local host, and try to login using both usernames. What happens?

# telnet localhost

5. What happened to the users' home directories? Do a long listing of /home. Can you explain what you see?

# ll -d /home/user24 /home/user25

6. Re-enable user24's account. Choose a new password as you wish.

Part 3: Implementing Shadow Passwords and Password Aging

1. Run pwconv to create the /etc/shadow file. You may see a warning noting that shadow passwords are incompatible with NIS. Since we're not using NIS, ignore the message.

a. What is in the password field in /etc/passwd now?
b. What fields are populated in /etc/shadow?
c. What are the permissions on /etc/shadow? Why is this significant?

2. Enable shadow password aging on the user1 account.

a. Ensure that the password is changed at least twice per year.
b. Ensure that users wait at least one week between password changes.
c. Provide a one-week warning before the user's password expires.

3. Apply the same password aging parameters to all users by modifying the appropriate variables in /etc/default/security. Also require users to choose passwords that are at least eight characters.

4. Before you continue on to the next part, revert to a non-shadowed password file.

Part 4: (Optional) Automating User Account Creation

Pretend for a moment that you are a system administrator at a large university. Fifty students have just enrolled to start classes, and you need to create user accounts for them. Can you write a simple shell script to automatically create the user accounts? Initially, you can assign the students null passwords, but force them to change their passwords after their first successful login. Assign /usr/bin/sh as the users' startup shell.

Hint: Try running the sample shell script below. What must be changed in the shell script to automatically create the desired accounts?

#!/usr/bin/sh
n=1
while ((n<=50))
do
    echo stud$n
    ((n=n+1))
done

Part 5: (Optional) Managing Users and Groups via the SMH

If time permits, explore the Accounts for Users and Groups functional area in the SMH:

# smh
-> Accounts for Users and Groups

or...

# ugweb

A similar Accounts for Users and Groups functional area exists in sam in earlier versions of HP-UX.

LAB SOLUTIONS: Managing User Accounts

Perform the following tasks. Record the commands you use, and answer all questions. The password for user accounts user1-24 is class1.

Part 1: Creating and Modifying Users and Groups

1. Use the useradd command to create a user account for user25 on your system. Include the option to create a home directory for the user, and use /usr/bin/sh as the user's startup shell. Accept defaults for the other options.

Answer:

# useradd -m -s /usr/bin/sh user25

2. Do you see an entry for the new user in the /etc/passwd file? Do you see an entry for the new user in the /etc/group file? Explain.

Answer: There should be an entry in the /etc/passwd file for the new user. However, the user isn't listed in /etc/group. A user's primary group membership is recorded in the /etc/passwd GID field; /etc/group only records secondary group memberships.

3. Can the user login at this point?

Answer: The user can't login at this point since the user's password hasn't been defined yet.

4. Choose and set a password for the new user.

Answer:

# passwd user25

5. Force the user to choose a new password the first time they login.

Answer:

# passwd -f user25

6. Login as user25 to verify that the new account works. What happens?

# login

Answer: The system should have required a password change for user25.

7. Return to the root account.

Answer:

$ exit

Log back in again as root.

8. Oops! We forgot to define the comment field for user25. Set user25's comment field to "student account".

Answer:

# usermod -c "student account" user25

9. user25 needs to collaborate with user24 on a project. Create a group called project, and ensure that user24 and user25 both have access to the group.

Answer:

# groupadd project
# usermod -G project user24
# usermod -G project user25

10. Create a /home/project directory that user24 and user25 can use to store and manage files associated with their project. Ensure that the administrator and members of the project group are the only users who can access the shared directory.

# mkdir /home/project
# chown root:project /home/project
# chmod 770 /home/project

11. Verify that user24 and user25 have access to the group, and that other users don't.

# su user23 -c "touch /home/project/f23"   # should fail!
# su user24 -c "touch /home/project/f24"   # should succeed!
# su user25 -c "touch /home/project/f25"   # should succeed!

Part 2: Deactivating and Removing User Accounts

1. Deactivate user24's account.

Answer:

# passwd -l user24

Now try to log in as user user24. It should fail.

2. Remove user25's account without removing user25's home directory.

Answer:

# userdel user25

3. What changed in the /etc/passwd file because of the commands in the previous two questions?

Answer: user24's password field is set to "*" to indicate that the account is disabled. user25's /etc/passwd entry disappeared entirely.

4. What happens now when user24 and user25 attempt to log in? telnet to your local host, and try to login using both usernames. What happens?

# telnet localhost

Answer: Both login attempts should fail.

5. What happened to the users' home directories? Do a long listing of /home. Can you explain what you see?

# ll -d /home/user24 /home/user25

Answer: Both directories are still there, but the owner field for user25's directory lists a number rather than user25's username. Internally, HP-UX identifies file ownership by UID rather than username. ll attempts to resolve these UIDs into usernames. However, since user25 is no longer listed in /etc/passwd, the ll command has no way of determining which username is associated with the /home/user25 directory.

6. Re-enable user24's account. Choose a new password as you wish.

Answer:

# passwd user24

Part 3: Implementing Shadow Passwords and Password Aging

1. Run pwconv to create the /etc/shadow file. You may see a warning noting that shadow passwords are incompatible with NIS. Since we're not using NIS, ignore the message.

a. What is in the password field in /etc/passwd now?
b. What fields are populated in /etc/shadow?
c. What are the permissions on /etc/shadow? Why is this significant?

Answer: The password fields in /etc/passwd should contain x's. Each /etc/shadow entry should contain a user name, an encrypted password, and a timestamp field that indicates when the password was last changed. The other fields should be empty. The permissions on /etc/shadow should be r-------- (readable by root only), so hackers can't view user password information.

2. Enable shadow password aging on the user1 account.

a. Ensure that the password is changed at least twice per year.
b. Ensure that users wait at least one week between password changes.
c. Provide a one-week warning before the user's password expires.

Answer:

# passwd -x 180 -n 7 -w 7 user1

3. Apply the same password aging parameters to all users by modifying the appropriate variables in /etc/default/security. Also require users to choose passwords that are at least eight characters.

Answer:

# vi /etc/default/security
MIN_PASSWORD_LENGTH=8
PASSWORD_MAXDAYS=180
PASSWORD_MINDAYS=7
PASSWORD_WARNDAYS=7

The file is read-only by default, so a :w! followed by :q is needed if the vi(1) editor is used.

4. Before you continue on to the next part, revert to a non-shadowed password file.

Answer:

# pwunconv

Part 4: (Optional) Automating User Account Creation

1. Pretend for a moment that you are a system administrator at a large university. Fifty students have just enrolled to start classes, and you need to create user accounts for them. Can you write a simple shell script to automatically create the user accounts? Initially, you can assign the students null passwords, but force them to change their passwords after their first successful login. Assign /usr/bin/sh as the users' startup shell.

Answer: Create a shell script useradd_stud_accts.sh:

#!/usr/bin/sh
n=1
while ((n<=50))
do
    echo stud$n
    useradd -m -s /usr/bin/sh stud$n
    passwd -d -f stud$n
    ((n=n+1))
done

Make the script executable and run it:

# chmod +x useradd_stud_accts.sh
# ./useradd_stud_accts.sh

To clean up the accounts, create the script userdel_stud_accts.sh:

#!/usr/bin/sh
n=1
while ((n<=50))
do
    echo stud$n
    userdel stud$n
    rm -rf /home/stud$n
    ((n=n+1))
done

Part 5: (Optional) Managing Users and Groups via the SMH

If time permits, explore the Accounts for Users and Groups functional area in the SMH. From the Home Page, click "System Configuration." From the System Configuration window, click "Accounts for Users and Groups". When this exercise is complete, sign out of the SMH utility and close the browser window.

A similar Accounts for Users and Groups functional area exists in sam in earlier versions of HP-UX.


Module 4 Navigating the HP-UX File System

Objectives

Upon completion of this module, you will be able to do the following:

Describe the reasons for separating dynamic and static file systems.
Describe the key contents of /sbin, /usr, /stand, /etc, /dev, /var (OS-related directories).
Describe the key contents of /opt, /etc/opt, and /var/opt (application-related directories).
Use find, whereis, and which to find files in the HP-UX file system.

4-1. SLIDE: Introducing the File System Paradigm

Introducing the File System Paradigm

(Slide diagram: static files include OS and application executables, libraries, and system startup files; dynamic files include OS and application configuration, temporary, and user files.)

Student Notes

Many HP-UX system administration tasks require the administrator to find and manipulate system and application configuration and log files. Understanding the philosophy behind the organization of the file system will ensure that you can successfully find the resources you need to perform administration tasks.

Files in the HP-UX file system are organized by various categories. Static files are separated from dynamic files. Executable files are separated from configuration files. This philosophy provides a logical structure for the file system and simplifies administration as well.

HP-UX Separates Static and Dynamic Portions of the File System

Files and directories in HP-UX may be categorized as static or dynamic. The contents of static files and directories rarely change, except when patching or installing the operating system or applications. Executable files, libraries, and system start-up utilities are all considered to be static. Dynamic files and directories change frequently. They are stored in a separate portion of the file system. Configuration, temporary, and user files are all considered to be dynamic.

123 Module 4 Navigating the HP-UX File System Separating dynamic and static data offers the following advantages: System backups are easier. Disk space management is simplified. HP-UX Separates Executable Files from Configuration Files Configuration data is kept separate from the executable code that uses that data. Separating executable files from configuration files offers the following advantages: Changes made to configuration data are not lost when updating the operating system. Executable files can be easily shared across the network, while host-specific configuration data is stored locally on each host. HP-UX Follows the AT&T SVR4 Standard File System Layout Though there are minor differences from vendor to vendor, the file system layout used in HP-UX is very similar to that used in other flavors of UNIX. This simplifies administration for administrators with responsibilities on multiple vendors' machines.

124 Module 4 Navigating the HP-UX File System 4-2. SLIDE: System Directories

[Slide diagram: the root (/) file system with its principal subdirectories -- /opt (App1, App2), /var, /dev, /mnt, /usr, /sbin, /etc, /stand, /tmp, /home -- shaded to distinguish static from dynamic directories]

Student Notes The shaded directories in the diagram on the slide contain static data, while unshaded directories in the diagram contain dynamic data. The sharable portion of the operating system is located beneath /usr and /sbin. Only the operating system can install files into these directories. Applications are located beneath /opt. The directories /usr, /sbin, and the application subdirectories below /opt can be shared among networked hosts. Therefore, they must not contain host-specific information. The host-specific information is located in directories in the dynamic area of the file system. General definitions for these directories are:

    /usr     Sharable operating system commands, libraries, and documentation.
    /sbin    Minimum commands needed to boot the system and mount other file systems.

125 Module 4 Navigating the HP-UX File System

    /opt     Applications.
    /etc     System configuration files. No longer contains executable files.
    /dev     Device files.
    /var     Dynamic information such as logs and spooler files (previously in /usr).
    /mnt     Local mounts.
    /tmp     Operating system temporary files.
    /stand   Kernel and boot loader.
    /home    User directories.

A Closer Look at /usr The /usr directory contains the bulk of the operating system, including commands, libraries, executable files, and ASCII documentation. The allowed subdirectories in /usr are defined below; no additional subdirectories should be created. Examples of files that live here are:

    /usr/bin         Operating system user commands.
    /usr/conf        Kernel configuration.
    /usr/contrib     Unsupported contributed software.
    /usr/lbin        Back-ends to other commands.
    /usr/local       User-contributed software.
    /usr/newconfig   Default operating system configuration data files.
    /usr/sbin        System administration commands.
    /usr/share       Architecture-independent sharable files.
    /usr/share/man   Operating system man pages.
    /usr/share/doc   Release notes.

126 Module 4 Navigating the HP-UX File System A Closer Look at /var The /var directory is for multipurpose log, temporary, transient, variable-sized, and spool files. The /var directory is extremely variable in size, hence the name. In general, any files that an application or command creates at runtime, and that are not critical to the operation of the system, should be placed in a directory that resides under /var. For example, /var/adm will contain log files and other runtime-created files related to system administration. /var will also contain variable-size files like crontabs, and print and mail spooling areas. In general, files beneath /var are somewhat temporary. System administrators who wish to free up disk space are likely to search the /var hierarchy for files that can be purged. Some sites may choose not to make automatic backups of the /var directories. Examples of files that reside here are:

    /var/adm        Common administrative files and log files.
    /var/adm/crash  Kernel crash dumps.
    /var/mail       Incoming mail.
    /var/opt/       Application-specific runtime files (e.g. logs, temporary files). Each application will have its own directory.
    /var/spool      Spooled files used by subsystems such as lp, cron, and the Software Distributor.
    /var/tmp        Temporary files generated by commands in the /usr hierarchy.

A Closer Look at /var/adm This directory hierarchy is used for common administrative files, logs, and databases. For example, files generated by syslog(3c), files used by cron(1m), and kernel crash dumps will be kept here and in subdirectories. Examples of files that reside here are:

    /var/adm/crash  Kernel crash dumps will be located in this directory.
    /var/adm/cron   Log files maintained by cron. cron is a subsystem that allows you to schedule processes to run at a specific time or at regular intervals.
    /var/adm/sw     Log files maintained by the Software Distributor.
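Since /var is where disk space problems usually surface, a quick way to see where the space is going is du piped through sort. The sketch below builds a miniature /var-like tree (the paths and file names are invented for the demo) so it can run anywhere without root access; on a real system you would point it at /var itself.

```shell
# Rank a /var-like tree's subdirectories by disk usage, smallest first.
# VARROOT stands in for /var so the sketch needs no root privileges.
VARROOT=$(mktemp -d)
mkdir -p "$VARROOT/adm" "$VARROOT/spool/lp"
head -c 200000 /dev/zero > "$VARROOT/adm/syslog.log"   # a "large" log file
head -c 1000   /dev/zero > "$VARROOT/spool/lp/job1"    # a small spool file

# Largest consumers appear last; on HP-UX: du -k /var/* | sort -n
du -k "$VARROOT"/* | sort -n
```

The same pattern, rooted at /var on a live system, shows at a glance which subsystem (logs, mail, spoolers) is consuming the space before any files are purged.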

127 Module 4 Navigating the HP-UX File System

    /var/adm/syslog   System log files. Applications as well as the kernel can log messages here. The syslogd daemon is responsible for writing the log messages. The behavior of the syslogd daemon can be customized with the /etc/syslog.conf file. The name of the default log file is /var/adm/syslog/syslog.log. At boot time this file is copied to OLDsyslog.log, and a new syslog.log is started. The syslog.log file is an ASCII file.
    /var/adm/sulog    A history of all invocations of the switch user (su) command. sulog is an ASCII log file.
    /var/adm/wtmp     On an 11i v1 system, a history of successful logins. This file is not ASCII; the last command is used to display this information. The wtmp file will continue to grow and should be trimmed by the administrator from time to time.
    /var/adm/btmp     On an 11i v1 system, a history of unsuccessful logins. This file is not ASCII; the lastb command is used to display this information. The btmp file will continue to grow and should be trimmed by the administrator from time to time.
    /etc/utmp         On an 11i v1 system, a record of all users logged onto the system. This file is used by commands such as write and who. It is not an ASCII file and cannot be directly viewed.
    /var/adm/wtmps    On an 11i v2 system, a history of successful logins. This file is not ASCII; the last command is used to display this information. The wtmps file will continue to grow and should be trimmed by the administrator from time to time.
    /var/adm/btmps    On an 11i v2 system, a history of unsuccessful logins. This file is not ASCII; the lastb command is used to display this information. The btmps file will continue to grow and should be trimmed by the administrator from time to time.
    /etc/utmps        On an 11i v2 system, a record of all users logged onto the system. This file is used by commands such as write and who. It is not an ASCII file and cannot be directly viewed.
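Because wtmp and btmp must be trimmed by hand, a common technique is to truncate the file in place with shell redirection rather than removing and recreating it: truncation preserves the file's ownership and permissions, and does not disturb any daemon that holds the file open. The sketch below uses a scratch file rather than the real /var/adm/wtmp so it is safe to run anywhere.

```shell
# Trim a continually growing accounting file by truncating it in place.
# LOG is a stand-in path; on an 11i v1 system it would be /var/adm/wtmp.
LOG=$(mktemp)
head -c 1024 /dev/zero > "$LOG"   # simulate accumulated binary records
ls -l "$LOG"                      # nonzero size before trimming

: > "$LOG"                        # truncate in place -- do not rm and recreate
wc -c < "$LOG"                    # now 0 bytes; ownership/permissions intact
```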

128 Module 4 Navigating the HP-UX File System 4-3. SLIDE: Application Directories

[Slide diagram: static files under /opt/<application>/ (bin, lbin, lib, share, newconfig -- a layout that looks like /usr); dynamic files under /etc/opt/<appl> and /var/opt/<appl>]

Student Notes Each application will have its own subdirectory under /opt, /etc/opt, and /var/opt. The sharable, or static, part of the application is self-contained in its own /opt/application directory, which has the same hierarchy as the operating system layout:

    /opt/application/bin         User commands.
    /opt/application/share/man   man pages.
    /opt/application/lib         Libraries.
    /opt/application/lbin        Back-end commands.
    /opt/application/newconfig   Master copies of configuration files.

The application's host-specific log files are located under /var/opt/application, and host-specific configuration files are located under /etc/opt/application.
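The layout above can be sketched with mkdir -p. Here "acmedb" is a made-up application name, and PREFIX stands in for / so the sketch runs without root privileges; a real product's installer would create the same skeleton directly under /opt, /etc/opt, and /var/opt.

```shell
# Sketch of the SVR4-style layout for a hypothetical application "acmedb".
PREFIX=$(mktemp -d)   # stand-in for / -- no root access needed
for d in opt/acmedb/bin opt/acmedb/lbin opt/acmedb/lib \
         opt/acmedb/share/man opt/acmedb/newconfig \
         etc/opt/acmedb var/opt/acmedb
do
    mkdir -p "$PREFIX/$d"   # static part under opt, dynamic under etc/opt, var/opt
done
find "$PREFIX" -type d | sort
```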

129 Module 4 Navigating the HP-UX File System 4-4. SLIDE: Commands to Help You Navigate

    find      Searches the file hierarchy
    whereis   Locates source, binaries, and man pages
    which     Locates an executable in your PATH
    file      Determines file type
    strings   Displays ASCII characters in binary files

Student Notes As a system administrator, you will need to reference files in directories all over the HP-UX file system. HP-UX offers several tools for finding the files and executable files you need to perform administration tasks. The find Command The find command is a powerful tool for system administrators. It searches the file hierarchy starting at a specified point and finds files that match the criteria you select. You can search for files by name, owner, size, modification time, and so on. find also allows you to execute a command with the files found used as an argument. Examples Find all files belonging to the user greg: # find / -user greg Find files in /tmp that have not been accessed in 7 days:

130 Module 4 Navigating the HP-UX File System # find /tmp -type f -atime +7 Remove core files: # find / -name core -exec rm -i {} \; The whereis Command The whereis command is useful when you receive "not found" error messages. It searches a predefined list of directories. By default, whereis looks for source, binaries, and man pages. You can limit the search to binary files by using the -b option. Example # whereis -b sam sam: /usr/sbin/sam The which Command The which command is useful for determining which version of a command will be used. Some commands have multiple homes. Which version you execute is determined by the order of the directories in your PATH variable. The file Command The file command performs a series of tests on a file and attempts to classify it. It can be useful for determining if a command is a shell script or a binary executable. Examples # file /sbin/shutdown /sbin/shutdown: s800 shared executable # file /etc/passwd /etc/passwd: ascii text The strings Command The strings command is useful when trying to find information in a binary file. It prints any printable character strings found in the file.
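The find -exec example above can be tried safely in a scratch directory; the paths and file names below are invented for the demo. The demo drops the -i flag so it runs unattended, but on a live system the interactive -i shown above is the safer habit. command -v is shown as the portable shell equivalent of the PATH lookup that which performs.

```shell
# Rehearse find -exec against a sandbox instead of the live file system.
SCRATCH=$(mktemp -d)
touch "$SCRATCH/core" "$SCRATCH/keep.txt"

# Remove every file named "core" beneath $SCRATCH.
# (No -i here so the demo runs unattended; keep -i on a real system.)
find "$SCRATCH" -name core -type f -exec rm {} \;
ls "$SCRATCH"                    # keep.txt survives, core is gone

# which resolves a command through PATH; command -v is the portable equivalent.
command -v sh
```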

131 Module 4 Navigating the HP-UX File System 4-5. LAB: HP-UX File System Hierarchy Directions Answer all the questions below.

1. Which of the following directories are dynamic? /etc /usr /sbin /dev /tmp

2. Viewing a report on your disk space usage, you note that /usr, /var, and /opt are all nearing 90% capacity. Which of these directories should you be most concerned about? Why?

3. Match the directory with its contents:

    1. /usr/share/man    A. kernel, boot loader
    2. /stand            B. system configuration files
    3. /var/adm          C. sharable operating system commands
    4. /etc              D. man pages
    5. /usr              E. application directories
    6. /opt              F. common admin files and logs

4. Where would you expect to find the cp and rm OS user executables? See if you are correct.

5. Where would you expect to find the smh, useradd, and userdel executables? See if you are correct.

132 Module 4 Navigating the HP-UX File System

6. The pre_init_rc utility executes in the early stages of the system start-up procedure to check for file system corruption. Where would you expect to find this executable? See if you are correct.

7. There is a system log file that maintains a record of system shutdowns. Where would you expect to find the shutdown log file? See if you are correct.

8. In which directory would you expect to find the "hosts" configuration file, which contains network host names and addresses? See if you are correct.

9. Though many utilities and daemons maintain independent log files, many daemons and services write their errors and other messages to a log file called syslog.log. See if you can find the path for this file, then check to see if any messages have been written to the file in the last day.

10. Find all of the directories (if any) under /home that are owned by root.

11. (Optional) Find all the files under /tmp that haven't been accessed within the last day.

12. (Optional) Find all the files on your system that are greater than bytes in size. If you needed to make some disk space available on your system, would it be safe to simply remove these large files?

133 Module 4 Navigating the HP-UX File System 4-6. LAB SOLUTIONS: HP-UX File System Hierarchy Directions Answer all the questions below.

1. Which of the following directories are dynamic? /etc /usr /sbin /dev /tmp

   Answer: /etc /dev /tmp

2. Viewing a report on your disk space usage, you note that /usr, /var, and /opt are all nearing 90% capacity. Which of these directories should you be most concerned about? Why?

   Answer: /var deserves the most attention here because it is a dynamic file system that could grow quite quickly in case of an error condition that creates entries in the system log files. /usr and /opt are static file systems that are less likely to cause problems.

3. Match the directory with its contents:

    1. /usr/share/man    A. kernel, boot loader
    2. /stand            B. system configuration files
    3. /var/adm          C. sharable operating system commands
    4. /etc              D. man pages
    5. /usr              E. application directories
    6. /opt              F. common admin files and logs

134 Module 4 Navigating the HP-UX File System

   Answer:
    1. /usr/share/man    D. man pages
    2. /stand            A. kernel, boot loader
    3. /var/adm          F. common admin files and logs
    4. /etc              B. system configuration files
    5. /usr              C. sharable operating system commands
    6. /opt              E. application directories

4. Where would you expect to find the cp and rm OS user executables? See if you are correct.

   Answer: Both are in /usr/bin, along with all the other user executables.

5. Where would you expect to find the smh, useradd, and userdel executables? See if you are correct.

   Answer: All three are in /usr/sbin, along with many other administrative utilities.

6. The pre_init_rc utility executes in the early stages of the system start-up procedure to check for file system corruption. Where would you expect to find this executable? See if you are correct.

   Answer: pre_init_rc is in the /sbin directory, along with other files used during the boot process.

7. There is a system log file that maintains a record of system shutdowns. Where would you expect to find the shutdown log file? See if you are correct.

   Answer: The full path name is /etc/shutdownlog (/var/adm/shutdownlog is a symbolic link). Most OS log files are kept in /var/adm.

8. In which directory would you expect to find the "hosts" configuration file, which contains network host names and addresses? See if you are correct.

   Answer: The path name for the hosts file is /etc/hosts.

135 Module 4 Navigating the HP-UX File System

9. Though many utilities and daemons maintain independent log files, many daemons and services write their errors and other messages to a log file called syslog.log. See if you can find the path for this file, then check to see if any messages have been written to the file in the last day.

   Answer: # more /var/adm/syslog/syslog.log

10. Find all of the directories (if any) under /home that are owned by root.

   Answer: # find /home -type d -user root

11. (Optional) Find all the files under /tmp that haven't been accessed within the last day.

   Answer: # find /tmp -atime +1 -type f

12. (Optional) Find all the files on your system that are greater than bytes in size. If you needed to make some disk space available on your system, would it be safe to simply remove these large files?

   Answer: # find / -size c -type f

   Before removing these files, be sure to investigate each file's purpose.
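The size search in question 12 can be rehearsed in a sandbox before running it against /. The 500000-byte threshold below is an arbitrary stand-in (the lab's actual byte count is not shown above); the point is the -size +Nc syntax, where the c suffix means "characters", i.e. bytes.

```shell
# Rehearse a size-based find in a sandbox with two files of known size.
SANDBOX=$(mktemp -d)
head -c 600000 /dev/zero > "$SANDBOX/huge.dat"   # above the demo threshold
head -c 100    /dev/zero > "$SANDBOX/tiny.dat"   # below it

# Files strictly larger than 500000 bytes; only huge.dat should match.
find "$SANDBOX" -size +500000c -type f
```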

136 Module 4 Navigating the HP-UX File System

137 Module 5 Configuring Hardware Objectives Upon completion of this module, you will be able to do the following: Describe the major hardware components of an HP-UX system Describe the high-level features of HP's current Integrity server products Describe the components of HP-UX legacy and Agile View hardware paths Describe the features of HP's nPar, vPar, VM, and Secure Resource Partitions View a system's hardware model and configuration with machinfo and model View a system's peripheral devices and buses with ioscan and scsimgr View slots and interface cards with rad and olrad Add and replace interface cards with and without HP OL* functionality Add and remove hot-pluggable and non-hot-pluggable devices

138 Module 5 Configuring Hardware 5-1. SLIDE: Hardware Components HP-UX systems have several hardware components: One or more Itanium single-, dual-, or quad-core CPUs for processing data One or more Cell Boards or Blades hosting CPU and memory One or more System/Local Bus Adapters that provide connectivity to expansion buses One or more PCI I/O expansion buses with slots for add-on Host Bus Adapters One or more Host Bus Adapter cards for connecting peripheral devices One or more Core I/O cards with built-in LAN, console, and boot disk connectivity An iLO / Management Processor to provide console access and system management

[Slide diagram: Blade Link / Crossbar interconnecting Cell Boards or Blades (CPUs, memory); each board's SBA feeding LBAs and PCI-X buses that host the iLO/MP, Core I/O (LAN, serial, SCSI), and FC HBA cards connecting to SAN disk, DVD, and LUNs]

Student Notes Every recent HP-UX system has several hardware components: One or more PA-RISC or Itanium single-, dual-, or quad-core CPUs for processing data. One or more Cell Boards or Blades hosting CPU and memory. One or more System/Local Bus Adapters that provide connectivity to expansion buses. One or more PCI I/O expansion buses with slots for add-on Host Bus Adapters. One or more Host Bus Adapter cards for connecting peripheral devices. One or more Core I/O cards with built-in LAN, console, and boot disk connectivity. An Integrated Lights Out / Management Processor (iLO/MP) card to provide local and remote console access and system management functionality. The slides that follow describe these components in detail.

139 Module 5 Configuring Hardware 5-2. SLIDE: CPUs HP's current Integrity servers use Intel's 64-bit EPIC-architecture Itanium 2 processors. HP's older hp9000 servers used HP's proprietary 64-bit PA-RISC processors. HP provides binary compatibility across processor types and generations. Current Itanium 2 processors and clock speeds:

    Intel Itanium Quad-Core 9300 Series (Tukwila)     1.3 GHz, 1.6 GHz, 1.7 GHz
    Intel Itanium Dual-Core 9200 Series (Montvale)    1.4 GHz or 1.6 GHz
    Intel Itanium Dual-Core 9100 Series (Montecito)   1.4 GHz or 1.6 GHz

Student Notes HP's HP-UX systems utilize two different processor families. The Itanium Processor Family (IPF) All of HP's current HP-UX servers utilize Itanium Processor Family (IPF) processors developed by Intel. All HP servers that utilize the IPF processors carry the HP Integrity brand name. The Itanium 2 architecture uses a variety of techniques to increase parallelism, the ability to execute multiple instructions during each machine cycle. Parallelism improves performance because it allows multiple instructions to be executed simultaneously. The Itanium 2 architecture is designed to make certain the processor can execute as many instructions per cycle as possible. A key to the high performance of the IPF processors is the design philosophy at the heart of the processor, Explicitly Parallel Instruction Computing (EPIC). (IPF is a registered trademark of the Intel Corporation.)

140 Module 5 Configuring Hardware The EPIC philosophy is a major reason why Itanium 2 processors are different from other 64-bit processors, providing much higher instruction-level parallelism without unacceptable increases in hardware complexity. EPIC achieves such performance by placing the burden of finding parallelism squarely on the compiler. Although processor hardware can extract a limited sort of parallelism, the best approach is to let the compiler, which can see the whole code stream, find the parallelism and make global optimizations. The compiler communicates this parallelism explicitly to the processor hardware by creating a three-instruction bundle with directions on how the instructions should be executed. The hardware focuses almost entirely on executing the code as quickly as possible. The EPIC architecture, together with several other architecture innovations, gives the IPF processors a significant advantage over both IA32 and 64-bit RISC systems. As co-developer of the Itanium 2 architecture, HP has been able to take the lead in bringing production-ready Itanium 2 based servers to market. As shown on the slide, Intel has already released several generations of Itanium 2 processors. The latest generation of Itanium processors, the 9300 series Tukwila processors, features four processor cores on a single chip die, which increases computing density and delivers significant performance gains over earlier single- and dual-core processors. HP's newest systems utilize the 9300 series processor chips. Older models utilize the dual-core 9100 and 9200 series Itanium processors. These multi-core processors are further enhanced by increasing the on-chip cache sizes in each successive processor generation. The PA-RISC Processor Family Earlier model HP-UX systems utilized HP's proprietary Precision Architecture RISC (PA-RISC) processors. All recent HP servers that utilized PA-RISC carried the HP 9000 brand name.
PA-RISC used Reduced Instruction Set Computing (RISC) principles to provide high performance and high reliability. HP offered several iterations of its PA-RISC technology over the years. The early PA7000 series of chips used a 32-bit architecture, while the newer PA8000 series chips used a 64-bit architecture. HP's PA8800 and PA8900 processors are dual-core processors. A single PA8800 or PA8900 processor may contain one or two PA-RISC processor cores, thus allowing twice as many processors in a single system as was previously possible. The hp9000 Superdome supported up to 64 processor modules, a total of up to 128 PA8900 processor cores. The PA8900 processor was the last processor in the PA-RISC family. HP stopped selling PA-RISC servers at the end of 2008, but will continue to support PA-RISC for several years beyond that. PA-RISC / Integrity Application Compatibility Compatibility is an important feature that HP has always recognized and that HP customers have come to expect. For user space applications that utilize published APIs, HP: Maintains forward data, source, build environment, and binary compatibility across all hardware platforms of the same architecture family (e.g. Intel Itanium or PA-RISC) which are supported by the same version of HP-UX;

141 Module 5 Configuring Hardware Provides forward data, source, build environment, and binary compatibility across HP-UX release versions and updates on HP 9000 servers and Integrity servers on their respective architectures. This is true for 32-bit or 64-bit applications on either architecture family; Delivers new features and improved performance with each new HP-UX release. Binary compatibility across operating system releases applies to legacy features (features that were present in the earlier release). There are some instances, however, where applications may be required to recompile in order to use or leverage a new feature. See the HP-UX release notes for information on new features that may require changes to applications. NOTE: This binary compatibility does not apply to kernel-intrusive applications or applications that rely on proprietary data structures inside HP-UX. Although most well-behaved PA-RISC binaries execute successfully on an Integrity system, the performance of a PA-RISC application running in compatibility mode may be less than that of the same application recompiled and running in native mode. PA-RISC applications that are largely interactive or I/O intensive should experience little to no noticeable degradation in performance, while those that perform heavy computation may run noticeably slower on an Integrity system than on a recent PA-RISC system. HP recommends recompilation for all applications and libraries where performance is a concern. Additionally, there is complete data compatibility between the HP-UX 11i releases for PA-RISC and Itanium-based systems. No data conversion is required when transferring data between releases of HP-UX 11i on PA-RISC and Integrity servers.
For a more complete discussion of HP-UX compatibility, see the HP-UX 11i compatibility for HP Integrity and HP 9000 servers white paper on HP's documentation website. HP Integrity servers with Intel Itanium 2 processors offer the best HP-UX performance, scalability, and investment protection available. HP encourages current PA-RISC customers to consider upgrading their systems to Itanium. Consult your sales representative for details. Determining your Processor Type On 11i v1 and v2 systems, you can determine your processor type via the SAM system properties screen. # sam -> Performance Monitors -> System Properties -> Processor On Integrity systems, you can determine your processor type and configuration via the machinfo command. # machinfo CPU info:

142 Module 5 Configuring Hardware
    2 Intel(R) Itanium(R) Processor 9340s (1.6 GHz, 20 MB)
    4.79 GT/s QPI, CPU version E0
    8 logical processors (4 per socket)
Memory: MB (31.9 GB)
Firmware info:
    Firmware revision:
    FP SWA driver revision: 1.18
    IPMI is supported on this system.
    BMC firmware revision: 1.00
Platform info:
    Model: "ia64 hp Integrity BL860c i2"
    Machine ID number: 669ab3af-3d4c-11df-abc1-1a4b5386cd07
    Machine serial number: USE008XX06
OS info:
    Nodename: bl860-1
    Release: HP-UX B.11.31
    Version: U (unlimited-user license)
    Machine: ia64
    ID Number:
    vmunix $Revision: vmunix: B.11.31_LR FLAVOR=perf

For More Information For more information on HP's Itanium strategy, visit HP's IPF home page. To learn more about HP's PA-RISC to Integrity migration program, visit HP's website.
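machinfo's output is line-oriented, so individual fields are easy to extract with awk when scripting an inventory across systems. The sketch below feeds awk a canned sample that mimics the excerpt above, since machinfo only exists on HP-UX; on a live Integrity system you would pipe machinfo itself into the same awk filter.

```shell
# Extract the quoted model string from machinfo-style output.
# The here-sample mimics machinfo; on HP-UX: machinfo | awk -F'"' '/Model:/ {print $2}'
model=$(printf '%s\n' \
    'Platform info:' \
    '    Model: "ia64 hp Integrity BL860c i2"' \
    '    Machine serial number: USE008XX06' \
  | awk -F'"' '/Model:/ {print $2}')   # field 2 = text between the quotes
echo "$model"
```

Splitting on the double-quote character (-F'"') sidesteps the spaces inside the model string, which would defeat the default whitespace field splitting.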

143 Module 5 Configuring Hardware 5-3. SLIDE: Cell Boards, Blades, Crossbars, and Blade Links On HP's mid-range and high-end servers, and on newer blade servers: Each system is comprised of one or more cell boards or blades. Each cell board or blade contains a portion of the system's memory and CPU resources. All cell boards or blades are interconnected via a low latency crossbar or blade link. Result: any processor core can access resources on any blade or cell board. Student Notes On HP's mid-range and high-end servers, and on newer blade servers, each system is comprised of one or more cell boards or blades. Each cell board or blade contains a portion of the system's memory and CPU resources. All of the system's cell boards or blades are interconnected via a low latency crossbar (on mid-range and high end servers) or blade link (on the blade servers). HP's crossbar and blade link technologies ensure that any processor core on a system can access resources on any other blade or cell board on that same system.

144 Module 5 Configuring Hardware The diagram below shows the blade link used to interconnect foundation blades in HP's newer Integrity blade servers: The diagram below shows the HP sx2000 crossbar technology used to interconnect cell boards in HP's cell-based midrange and high-end Superdome servers:

145 Module 5 Configuring Hardware The diagram below shows the HP sx3000 crossbar technology used to interconnect Superdome 2 blades on the new Superdome 2 server:

146 Module 5 Configuring Hardware 5-4. SLIDE: SBAs, LBAs, and PCI Expansion Buses System and Local Bus Adapters provide connectivity to I/O expansion buses. I/O expansion buses provide one or more slots for device adapter cards. HP supports PCI, PCI-X, and PCI-E bus types, and slot speeds up to ~2 GB/sec. HP OL* functionality on some servers facilitates adding/removing cards online. Dedicated buses minimize downtime and maximize performance. Student Notes Every cell, system board, or blade has a System Bus Adapter (SBA) that provides connectivity between the system's processors and the I/O expansion buses. The SBA connects to one or more Local Bus Adapters (LBAs) on the system's I/O backplane via a high-speed communications channel known as a rope. Some LBAs have a single rope connection to the SBA. Other LBAs utilize two ropes to the SBA for greater bandwidth. Each LBA provides an I/O bus to support one or more interface adapters or Host Bus Adapters (HBAs). PCI, PCI-X, and PCI-Express Expansion Buses HP's current servers utilize Peripheral Component Interconnect (PCI)-based I/O buses. PCI is a bus architecture that provides high-speed connectivity to and between interface adapters. PCI was developed by Intel, but has become an industry standard that is used on many platforms.

147 Module 5 Configuring Hardware Since it was first introduced, the PCI standard has been enhanced several times to accommodate the greater bandwidth and shorter response times demanded from the input/output (I/O) subsystems of enterprise computers. The table below lists the PCI bus types available on recent Integrity servers.

    Slot Type        Bus Width   Bus Frequency   Bandwidth
    PCI              32 bits     33.3 MHz        133 MB/s
    PCI 2x / Turbo   64 bits     33.3 MHz        266 MB/s
    PCI-X 66         64 bits     66.6 MHz        0.5 GB/s
    PCI-X 133        64 bits     133 MHz         1.1 GB/s
    PCI-X 266        64 bits     266 MHz         2.1 GB/s
    PCI-Express      64 bits     266 MHz         2.6 GB/s

The architecture diagram below shows the bus types provided on an Integrity rx6600 entry-class server. Model-specific technical white papers on HP's website provide similar technical details for other server models, too. Expansion Slots, I/O Chassis, I/O Expansion Enclosures, and Mezzanine Cards Rackmount entry-class and mid-range servers have card slots on the backplane of the server which host the expansion cards.
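The bandwidth column can be sanity-checked with shell arithmetic: peak transfer rate is roughly the bus width in bytes times the clock frequency. (Some later bus types use double-data-rate clocking, so their entries deviate from this simple product.)

```shell
# Rough check of the bandwidth figures: bytes-per-transfer x clock MHz.

# Classic PCI: 32-bit (4-byte) bus at ~33.3 MHz.
echo "PCI:       $((4 * 33)) MB/s"    # ~133 MB/s, matching the table

# PCI-X 133: 64-bit (8-byte) bus at 133 MHz.
echo "PCI-X 133: $((8 * 133)) MB/s"   # 1064 MB/s, i.e. ~1.1 GB/s
```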

148 Module 5 Configuring Hardware Superdome servers host expansion cards in one or more I/O chassis accessible from the front and rear of the server. Superdome 2 servers have no internal expansion card slots. Rather, Superdome 2 servers host expansion cards in one or more external I/O expansion enclosures. HP Integrity blade server administrators can add additional interfaces via the mezzanine expansion card slots located directly on the server blades. Slides later in the module describe each of these expansion solutions in greater detail. Learning More about Your Server's Expansion Buses To learn more about the expansion slots and cards available for your server, review your model's QuickSpecs on HP's website.

149 Module 5 Configuring Hardware 5-5. SLIDE: iLO / MP Cards All current HP servers support an Integrated Lights Out Management Processor. The iLO/MP provides: Local console access via a local serial port; Remote console access via modem or via telnet, HTTPS*, or SSH* network services; Hardware monitoring and logging; Power management and control. (* Not supported on all models) Student Notes The next few slides discuss some of the cards and adapters that occupy PCI, PCI-X, and PCI-Express buses. All of HP's recent server models support an Integrated Lights Out / Management Processor (iLO/MP). The iLO/MP provides several important features: Local console access via a local serial port: Attach an ASCII terminal to the MP serial port to install, update, boot, and reboot. Remote console access via modem or via telnet, HTTPS, or SSH network services: Remote administrators can use these iLO/MP features to remotely install, update, boot, reboot, and perform other administration tasks. Hardware monitoring and logging: The iLO/MP captures system hardware-level diagnostics and system messages. Power management and control: Use the iLO/MP to view power status and power system components on and off.

- And much more...

The iLO/MP chapter elsewhere in this course describes these and many other iLO/MP features in detail.

5-6. SLIDE: Core I/O Cards

Core I/O Cards

All HP servers include at least one Core I/O card or equivalent built-in interfaces.

Common Core I/O Functions      Typical Usage
Parallel SCSI                  Boot disk, tape, and DVD connectivity
Serial Attach SCSI             Boot disk connectivity
10/100/1000BaseT adapter       LAN connectivity
Serial                         Serial terminal/modem connectivity
USB                            Keyboard & mouse
Graphics/VGA                   VGA monitor
Audio                          Speakers & headphones

Student Notes

All Integrity servers include a Core I/O card or equivalent built-in interfaces that provide basic server connectivity. Cell-based servers may have multiple Core I/O cards to support node partitioning. Core I/O configurations vary, but typically include some combination of the following:

- One or more Parallel Small Computer System Interface (SCSI) interfaces for connecting the internal disk(s), tape drive, and optional DVD.
- A Serial Attach SCSI (SAS) interface for connecting the internal disk(s). SAS provides greater expandability and better performance than parallel SCSI technology. Newer systems include SAS rather than parallel SCSI interfaces.
- One or two 10/100/1000BaseT interfaces for connecting the system to a Local Area Network. Newer blade servers include standard, built-in LAN on Motherboard (LOM) dual-port 10Gb Ethernet interfaces.
- One or more serial ports for connecting a terminal, modem, or serial printer.

- One or more USB ports for connecting a local keyboard and/or mouse.
- A graphics/VGA adapter for connecting a local VGA monitor. This feature is only available on some entry-class servers.
- Audio ports for connecting a headphone, microphone, and/or speakers. This feature is only available on some entry-class servers.

To learn more about your server's Core I/O features, review your model's QuickSpecs on HP's website.

5-7. SLIDE: Internal Disks, Tapes, and DVDs

Internal Disks, Tapes, and DVDs

- Blade and rackmount servers support two or more internal hot-plug SCSI or SAS disks
- Rackmount servers also support one or more internal hot-plug DVD or DDS drives
- Most server models support an optional SmartArray controller
- The SmartArray controller provides RAID 1, 5, and 6 functionality

Student Notes

The Core I/O / integrated parallel SCSI and SAS interfaces are commonly used to connect internal mass storage devices. Entry-class, mid-range, and Integrity blade server models support at least two internal SAS or SCSI disks. Entry-class servers support at least one internal DVD drive; some support one or more optional internal DDS tape drives, too. HP's high-end Superdome and Superdome 2 servers do not include any internal disk or tape drives; they rely on external devices or devices installed in an adjacent I/O expansion cabinet.

On all current systems, the internal disk and tape devices are hot-pluggable, enabling the administrator to service the devices while the server remains running in most cases. See your server's user service manual for details.

Many models now support HP's SmartArray controller cards. The SCSI and SAS SmartArray cards provide hardware-based mirroring functionality using the server's internal disks. This useful feature ensures that the system continues running even if an internal disk fails.

To learn more about your server's internal mass storage options, review your model's QuickSpecs on HP's website.

5-8. SLIDE: Interface Adapter Cards

Interface Adapter Cards

Interface adapters provide connectivity to additional devices:

Interface Adapter Type                            Typical Usage
Parallel SCSI Host Bus Adapters                   Disks, tapes, CD-ROMs, DVDs
Serial Attached SCSI Host Bus Adapters            Disks, tapes, CD-ROMs, DVDs
Smart Array Adapters                              Disks
1Gb, 2Gb, 4Gb Fibre Channel Host Bus Adapters     Disk arrays, tape libraries
10Mb, 1Gb, 10Gb, and FLEX10 Ethernet Adapters     LAN connectivity
ATM, X.25 Adapters                                WAN connectivity
Multi-function Adapters                           Fibre Channel + Ethernet
Graphics/VGA Adapters                             VGA monitors
Audio Adapters                                    Headphones/Microphones/Speakers

Student Notes

The Core I/O card provides basic LAN and storage connectivity. Adding interface adapter cards makes it possible to connect to additional LANs, SANs, and external devices. The slide lists some of the interface adapter card types commonly found on HP-UX systems today. Supported cards vary by server model and OS type and version; see your model's QuickSpecs on HP's website for details. If you plan to use an interface card to boot from a SAN device or a network-based Ignite-UX install server, check the QuickSpecs to verify that the card provides boot support for your OS version.

Online Replacement, Addition, Deletion (Interface Card OL*)

Some of the entry-class servers, and all of the current mid-range and high-end servers, now support HP's Interface Card OL* functionality, which makes it possible to add and replace (11i v1, v2, and v3) or remove (11i v3 only) interface cards without shutting down the system. If a card needs to be replaced, and the card isn't currently in use, the administrator can power down the card slot and replace the card while the OS and other slots remain functional.

To determine whether your server supports OL*, execute rad -q (11i v1) or olrad -q (11i v2 and v3). If the command yields an error message, your server doesn't support OL*. The olrad output below suggests that three card slots on this server are unoccupied; five slots are occupied and support OL* functionality.

# olrad -q
                                                Driver(s) Capable
Slot  Path     Bus Num  Max Spd  Pwr  Occu  Susp  OLAR  OLD   Max Spd  Mode
      /0/8/                      Off  No    N/A   N/A   N/A   PCI-X    PCI-X
      /0/10/                     Off  No    N/A   N/A   N/A   PCI-X    PCI-X
      /0/12/                     Off  No    N/A   N/A   N/A   PCI-X    PCI-X
      /0/14/                     On   Yes   No    Yes   Yes   PCI-X    PCI
      /0/6/                      On   Yes   No    Yes   Yes   PCI-X    PCI
      /0/4/                      On   Yes   No    Yes   Yes   PCI-X    PCI-X
      /0/2/                      On   Yes   No    Yes   Yes   PCI-X    PCI-X
      /0/1/                      On   Yes   No    Yes   Yes   PCI-X    PCI-X

In order to add or replace a card online, both the server's card slot and the interface card's driver must support OL*. To determine whether an interface card's driver supports OL*, check the documentation accompanying the card.
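Captured olrad -q output can also be filtered in a script to pick out the slots that actually permit online replacement. The sketch below is purely illustrative: the slot names, hardware paths, and simplified column layout are invented, and on a real system the file would instead be produced by olrad -q itself.

```shell
# Hypothetical sketch: list occupied slots whose OLAR column reads "Yes",
# i.e. slots eligible for online card replacement. Sample data is invented
# and simplified; on a real server: olrad -q > /tmp/olrad.out
cat > /tmp/olrad.out <<'EOF'
Slot     Path        Pwr  Occu  Susp  OLAR
0-0-1-8  0/0/8/1/0   Off  No    N/A   N/A
0-0-1-14 0/0/14/1/0  On   Yes   No    Yes
0-0-1-6  0/0/6/1/0   On   Yes   No    Yes
EOF
# Skip the header row; keep rows where Occu ($4) and OLAR ($6) are both Yes:
awk 'NR > 1 && $4 == "Yes" && $6 == "Yes" { print $1 }' /tmp/olrad.out
```

With the sample data above, the script prints the two OL*-capable occupied slots, 0-0-1-14 and 0-0-1-6. Real olrad output has more columns, so the field numbers would need adjusting.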

5-9. SLIDE: Disk Arrays and LUNs

Disk Arrays and LUNs

- Most HP-UX servers today store application and user data on external disk arrays
- Storage in an array is subdivided into Logical Units (LUNs)
- Each LUN represents a virtual partition of disk space
- The array assigns each LUN a globally unique WW Identifier (WWID)
- From the operating system's perspective, a LUN is just another disk
- LUNs provide performance and high availability via RAID technology

Student Notes

Disk Arrays

A disk array is a storage system consisting of multiple disk drive mechanisms managed by an array controller that makes the resulting disk space available to one or more hosts. As the volume of data managed on HP-UX systems has increased from megabytes, to gigabytes, to terabytes, disk arrays have become increasingly popular. Though many administrators still choose to configure internal disks as boot disks, most application and user data today is stored on external disk arrays.

LUNs

Disk arrays often have dozens, or even hundreds, of disk devices. Management software running on the array enables the array administrator to subdivide the array's disk space into one, two, or even hundreds of Logical Units (LUNs), or virtual disks.

LUNs and WWIDs

The disk array automatically assigns every LUN a globally unique, 64-bit WW Identifier (WWID) that is typically displayed in hexadecimal form. Array administrators also assign each LUN an easier-to-remember LUN ID number. When troubleshooting issues with array administrators, you may be asked to provide a LUN's WWID or LUN ID. In 11i v3, administrators can easily view both numbers via the scsimgr command. A later slide in the chapter discusses the scsimgr command in detail.

# scsimgr get_attr -a wwid -H 64000/0xfa00/0x4
        name = wwid
        current = 0x600508b400012fd20000a
        default =
        saved =

# scsimgr get_attr -a lunid -H 1/0/2/1/0.0x50001fe c.0x
        name = lunid
        current = 0x (LUN # 1, Flat Space Addressing)
        default =
        saved =

11i v1 and v2 administrators must use utilities supplied by the array vendor to obtain a LUN's WWID and LUN ID. 11i v1 and v2 servers accessing HP disk arrays via HP's SecurePath software product can view WWIDs and other LUN attributes via the spmgr command. 11i v1 and v2 servers accessing HP disk arrays via HP's AutoPath software product can view WWIDs and other LUN attributes via the autopath command.

LUNs and HP-UX

HP-UX sees each LUN as a disk device. The same commands used to configure and manage a simple internal SCSI disk can also be used to manage a disk array LUN. Note, however, that HP-UX has no visibility into the underlying disks within the array that comprise the LUN.

LUN RAID Levels

The array administrator assigns each LUN a RAID level, which determines the level of performance and reliability offered by the LUN. RAID (Redundant Arrays of Independent Disks, originally Redundant Arrays of Inexpensive Disks) is a technology used to efficiently and redundantly manage data spread across multiple independent disks. By distributing data across multiple disks, I/O operations can overlap in a balanced way, improving performance. In most cases, RAID solutions also maintain redundant data on multiple disks to increase fault tolerance. Many different RAID technologies have been proposed over the years. Each level specifies a different disk array configuration and data protection method, and each provides a different level of reliability and performance. Only a few of these configurations are typically implemented in today's arrays:

- RAID 0: Striping
- RAID 1: Mirroring
- RAID 1+0: Mirroring + striping
- RAID 3: Striping with parity
- RAID 5: Striping with distributed parity
- RAID 5DP: Striping with redundant distributed parity

Array Benefits

Disk arrays offer several advantages over traditional disk storage:

- Improved scalability: Many disk arrays provide hundreds of terabytes of disk space.
- Improved data availability: Disk arrays have multiple redundant components to ensure that data remains available even when a disk, controller, or power supply fails. When disks do fail, the array management software makes it very easy to replace the failed component without causing downtime.
- Improved performance: Disk arrays provide a variety of striping options to ensure load balancing across multiple disks. Most disk arrays include very large caches, too, so I/O requests can be serviced with very low latency.
- Improved flexibility: Disk arrays make it very easy to make additional space available when necessary, and to re-allocate space that is underutilized.
- Improved manageability: Today's arrays include sophisticated management software to simplify monitoring, performance tuning, and troubleshooting.

For Further Study

To learn more about RAID technology, attend HP Education's Accelerated SAN Essentials (UC434S), Managing HP StorageWorks Enterprise Virtual Array (UC420S), or HP StorageWorks XP Disk Array (H6773S) courses.
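The parity idea behind the RAID levels above can be seen with a little shell arithmetic. This is a toy illustration only (a real array controller computes parity in hardware across whole blocks, not single bytes):

```shell
# Toy RAID 5 parity demonstration: the parity block is the XOR of the data
# blocks, so any single lost block can be rebuilt from the survivors.
d1=170   # data byte on disk 1 (binary 10101010)
d2=204   # data byte on disk 2 (binary 11001100)
parity=$(( d1 ^ d2 ))                  # parity byte stored on disk 3
echo "parity=$parity"                  # prints parity=102
# Simulate losing disk 1, then rebuild its byte from disk 2 and the parity:
echo "recovered=$(( d2 ^ parity ))"    # prints recovered=170
```

The same XOR relationship is what lets a RAID 5 LUN keep serving I/O after one member disk fails, at the cost of recomputing missing data on every read until the disk is replaced.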

5-10. SLIDE: SANs and Multipathing

SANs and Multipathing

- A Storage Area Network (SAN) is a special-purpose network of servers, arrays, tape libraries, and SAN switches that allows administrators to more flexibly configure and manage disk and tape resources
- Most servers and arrays connect to the SAN via multiple HBAs to provide redundancy and high availability
- Result: each LUN may be accessed by multiple paths through the SAN
- 11i v1 and v2 rely on the volume managers to manage multipathing
- 11i v3 implements a new mass storage stack with native OS multipathing

(Diagram: a server with 2 HBAs connects through redundant SAN switches to an array with 2 controllers, yielding 4 paths to each LUN.)

Student Notes

SANs

For even greater flexibility, arrays are oftentimes connected to multiple hosts via a Storage Area Network (SAN): a special-purpose network of servers and storage devices that allows administrators to more flexibly configure and manage disk and tape resources. Array administrators can control which LUNs are presented to each host on the SAN.

Multipathing

In high-availability environments, administrators often configure multiple physical paths to a disk array. Each path follows a unique route from one of the server's Host Bus Adapters (HBAs), through the SAN, to an array controller. Depending on the complexity of your SAN, you may have two, four, eight, or even more paths to each LUN.

Redundant links ensure that if an HBA or array controller fails, the server can maintain connectivity to the array LUNs via the remaining link(s). Utilizing multiple paths to an array concurrently may provide performance benefits, too: if any single path to a LUN becomes overloaded, I/O can be redirected down one of the other paths.

In 11i v1 and v2, the kernel isn't multipath-aware. It views each path to a multipathed LUN as an independent device, and relies on LVM Physical Volume Links (PV Links), VxVM Dynamic Multipathing (DMP), or path management software from the array vendor to determine which paths are redundant and how those paths should be used. Many disk array vendors offer additional software that can be added to the 11i v1 or v2 kernel to provide array-specific multipathing capabilities independent of LVM or VxVM. HP's StorageWorks Secure Path product provides this functionality for HP's XP, EVA, and VA disk arrays. The PowerPath product from EMC provides similar functionality for EMC disk arrays. To learn more about HP StorageWorks Secure Path, visit HP's website.

11i v3 implements a new mass storage stack that provides native OS multipathing. In the new mass storage stack, the kernel automatically recognizes, configures, and manages redundant LUN paths. LVM PV Links and third-party path management software are no longer required.
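Because 11i v3 exposes the redundant lunpaths to the OS itself, the per-LUN path count can be tallied with ordinary text tools. As a hedged sketch, the listing below uses invented device names and hardware paths in a simplified one-pair-per-line format; on a real 11i v3 server the raw data would come from something like ioscan -m lunpath.

```shell
# Hypothetical sketch: count redundant paths per LUN from a captured,
# simplified path listing ("device  lunpath" per line). The values are
# invented; a real 11i v3 system would derive them from ioscan output.
cat > /tmp/lunpaths.txt <<'EOF'
disk4 0/2/1/0.0x50001fe15004ab38.0x4001000000000000
disk4 0/5/1/0.0x50001fe15004ab3c.0x4001000000000000
disk5 0/2/1/0.0x50001fe15004ab38.0x4002000000000000
disk5 0/5/1/0.0x50001fe15004ab3c.0x4002000000000000
EOF
# Tally how many lunpaths each LUN has:
awk '{ n[$1]++ } END { for (d in n) print d, n[d], "paths" }' /tmp/lunpaths.txt | sort
```

With the sample data, each LUN reports 2 paths, one per HBA; a drop to 1 path would be an early warning that an HBA, switch, or controller link has failed.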

5-11. SLIDE: Partitioning Overview

Partitioning Overview

HP partitioning technologies allow multiple applications to run on a single server with dedicated CPU/memory/I/O resources that can be flexibly reallocated as necessary.

- Without partitioning: each app runs on a separate server; no resource sharing
- With partitioning: apps run in separate partitions on a shared server; resources can be reallocated as necessary

(Diagram: two standalone servers — Partition #1 Transaction Processing and Partition #2 Batch Processing — versus one shared server whose CPU/memory/I/O is divided between the same two partitions.)

Student Notes

In the past, most organizations deployed a dedicated server for each application. Allocating a dedicated server for each application guaranteed that the application didn't compete for resources with other applications, and ensured that hardware, software, or security issues on the server would only impact one application.

Unfortunately, this approach generally resulted in over-provisioning. Most applications experience peak usage periods and low usage periods. Administrators purchase systems to accommodate the peaks, but find that system resources are underutilized during low usage periods. If every application runs on a separate server, the only way to reallocate CPU, memory, and other resources from an under-utilized system to an overtaxed system is to shut down both machines and physically move components between the system chassis.

HP partitioning technologies allow multiple applications to run on a single server with dedicated CPU/memory/I/O resources that can be flexibly reallocated as necessary. Partitioning also provides fault isolation, ensuring that application, OS, or hardware errors in one partition don't impact workloads running in other partitions on the same server.[1]

[1] The level of fault isolation provided varies, depending on the partitioning technology selected.

5-12. SLIDE: npar, vpar, VM, and Secure Resource Partition Overview

npar, vpar, VM, & Secure Resource Partition Overview

HP offers a variety of flexible partitioning technologies:

Feature               npars                   vpars                   VMs                     SRPs
CPU Granularity       Cell/Blade              CPU                     Sub-CPU                 CPU %age
I/O Granularity       I/O chassis             LBA                     Sub-LBA                 Bandwidth %age
HW Fault Isolation?   Yes                     No                      No                      No
OS Fault Isolation?   Yes                     Yes                     Yes                     No
Resource Isolation?   Yes                     Yes                     Yes                     Yes
HW Support            Cell-based, Superdome,  Cell-based, Superdome,  All IA servers          All IA/PA servers
                      & Superdome 2           & Superdome 2
                      IA/PA servers           IA/PA servers
OS Support            HP-UX, Windows,         HP-UX                   HP-UX, Windows,         HP-UX
                      Linux, OpenVMS                                  Linux, OpenVMS

Student Notes

HP offers a variety of partitioning solutions.

Node Partitions (npars)

Mid-range, Superdome, and Superdome 2 server administrators can improve both utilization and flexibility by configuring multiple electrically isolated, hardware-based npar partitions on a server, each containing one or more cell boards and the cell boards' associated CPU, memory, and I/O resources.

npar Advantages: npars allow the administrator to run multiple OS instances on a server, and move cell boards between npars to balance utilization, while still guaranteeing hardware and OS fault isolation. Applications running in one npar can't access resources in another npar. An OS panic, hardware failure, or security breach in one npar has no impact on the other npars.

npar Disadvantages: In comparison to some of the other partitioning solutions below, npars provide a bit less flexibility, since they only allow blade-/cell-level partition granularity.

npar Support: npars are only supported on mid-range, Superdome, and Superdome 2 servers. On Integrity servers, npars support the HP-UX, Windows, Linux, and OpenVMS operating systems. Servers with multiple npars can run a different OS in each npar.

Virtual Partitions (vpars)

Virtual Partitions (vpars) enable administrators to carve a server or npar into one or more vpar partitions, each running a separate instance of the operating system.

vpar Advantages: vpars provide greater flexibility than npars, since each vpar can be assigned individual processors, individual LBAs, and a percentage of physical memory. Applications running in one vpar can't access resources in another vpar, and an OS panic in one vpar has no impact on other vpars. Since individual hardware components are assigned to each vpar, vpars have little impact on an application's performance. vpars allow the administrator to move CPUs between vpars very easily. The latest version of vpars also supports dynamic memory migration between vpars.

vpar Disadvantages: Unlike npars, vpars don't provide hardware fault isolation. When a cell board fails, multiple vpars on the cell board may panic as a result.

vpar Support: vpars are supported on all current mid-range, Superdome, and Superdome 2 servers. However, not all models support the latest version of the vpars software, and not all interface cards provide vpar support. See the vpars documentation for details. vpars only support HP-UX.

Integrity Virtual Machines (VMs)

Integrity VMs enable administrators to carve a server or npar into one or more Virtual Machine guests, each running a separate instance of the operating system.

VM Advantages: VM guests provide fully virtualized hardware, allowing the administrator to allocate resources at the sub-CPU level and share interface cards between VMs. VMs provide software fault isolation; OS problems on one VM should have no impact on other VM guests. VM CPU and memory entitlements guarantee each VM a minimum amount of memory and CPU resources; remaining resources may be shared by multiple VMs, potentially improving utilization. VM guests can also be moved between physical servers, often without modifying the applications inside the VM! Moving a VM does require stopping and restarting the OS running inside the VM.

VM Disadvantages: VMs provide software, but not hardware, fault isolation. Also, VMs incur greater performance penalties than vpars, particularly for I/O-bound applications.

VM Support: VMs are supported on all Integrity servers, including entry-class systems and Integrity blades. At the time this book went to press, Integrity VMs supported HP-UX, Windows, and Linux; OpenVMS will eventually be supported as a guest OS. Check the current QuickSpecs for the latest support list.

vpars and VMs are mutually incompatible within an npar, though a server with multiple npars can run vpars in one npar and VMs in another.

Secure Resource Partitions

Secure Resource Partitions enable the administrator to run multiple applications within a single npar or vpar OS instance, while still providing each application guaranteed CPU, memory, and I/O resources. Secure Resource Partitions utilize several HP-UX products:

- Process Resource Manager (PRM) enforces minimum and maximum CPU, memory, and disk I/O bandwidth entitlements for each application. The administrator controls what percentage of system resources each Secure Resource Partition can utilize.
- Processor Sets (PSETs) enable the administrator to assign one or more dedicated processors to an application, and to reallocate PSET assignments when necessary.
- Security Containment, a product introduced in 11i v2, facilitates the creation of security compartments that limit the network interfaces, sockets, files, directories, and kernel functions available to an application. Configuring each application in a separate security compartment ensures that applications cannot intentionally or unintentionally interfere with other applications' resources.
- IPFilter, an open source firewall solution, restricts network traffic flowing in and out of the SRP's network interfaces.
- IPSec, a standards-based HP product, can optionally encrypt and authenticate network traffic flowing in and out of the SRP's network interfaces.
- Secure Resource Partitions, an intuitive CLI / menu interface that automatically integrates and manages the components described above.

Secure Resource Partition Advantages: Secure Resource Partitions enforce minimum and maximum CPU, memory, and disk I/O bandwidth entitlements for each application, and ensure that each application can only access its own files, directories, network interfaces, and other resources.

Secure Resource Partition Disadvantages: Secure Resource Partitions guarantee resource entitlements, but don't provide hardware or OS fault isolation. An OS panic or hardware failure causes all Secure Resource Partitions in the OS instance to fail.

Secure Resource Partition Support: PRM and PSETs are supported on all HP-UX 11i v1, v2, and v3 servers. Security Containment is only supported on 11i v2 and v3. The Secure Resource Partitions CLI/TUI interface is only supported on 11i v

5-13. SLIDE: Part 2: System Types

Configuring Hardware: Part 2: System Types

Student Notes

5-14. SLIDE: Integrity Server Overview

Integrity Server Overview

- HP currently offers a wide variety of Itanium-based Integrity servers
- Traditionally, Integrity servers were rackmount / cell-based systems
- Most newer Integrity servers are blade-based servers
- The table below provides an overview of the current Integrity server models
- The following slides describe these architectures in greater detail

Rackmount & Cell-Based Integrity Servers
  High-End Cell-Based Server:
    HP Integrity Superdome (64p/128c)
  Mid-Range Cell-Based Servers:
    HP Integrity rx8640 (16p/32c)
    HP Integrity rx7640 (8p/16c)
  Entry-Class Rackmount Servers:
    HP Integrity rx2800 i2 (2p/8c)  New!
    HP Integrity rx6600 (4p/8c)
    HP Integrity rx3600 (2p/4c)
    HP Integrity rx2660 (2p/4c)

Blade-Based Integrity Servers
  High-End Server:
    HP Integrity Superdome 2 (32p/128c)  New!
  Blade Servers:
    Integrity BL890c i2 Blades (8p/32c)  New!
    Integrity BL870c i2 Blades (4p/16c)  New!
    Integrity BL860c i2 Blades (2p/8c)  New!
    Integrity BL870c Blades (4p/8c)
    Integrity BL860c Blades (2p/4c)

NOTE: HP regularly introduces new system models and configurations. For the latest information, and more details, see HP's website.

Student Notes

HP offers a wide variety of Integrity servers, from dual-processor entry-class servers to high-end servers that can accommodate several thousand concurrent users.

Rackmount and Cell-Based Integrity Servers

HP's entry-class servers are self-contained, rackmounted servers. Each server chassis includes processors and memory, as well as power, cooling, and management components. HP's largest entry-class servers support up to eight cores.

HP's mid-range servers are self-contained, rackmounted servers that utilize a cell-based architecture. Each server chassis contains one or more cell boards, as well as power, cooling, and management components. HP's largest mid-range, rack-mounted server supports up to 32 cores.
HP's entry-class and mid-range rackmount servers all have model names that begin with "HP Integrity rx", in which the rx is followed by a four-digit number. In general, servers with higher model numbers (e.g., rx7640) offer greater power and expandability than servers with lower model numbers (e.g., rx2660).

HP's high-end Integrity Superdome server also utilizes a cell-based architecture. The current cell-based Superdome model supports up to 128 cores.

Blade-Based Integrity Servers

Many organizations today deploy HP's blade server solutions rather than rackmounted servers. A blade server is a compact, high-density server that has its own CPU and memory, but shares networking cables, switches, power, and storage with other blade servers in a specially designed HP BladeSystem enclosure. All of the components in the enclosure connect to a common midplane, eliminating the need for power, LAN, and SAN cables to individual server blades. Blade solutions often provide greater flexibility, faster server deployments, better manageability, less downtime, lower power consumption, and lower costs than similar rackmounted solutions. The slide notes that HP currently offers a variety of Integrity blade servers with as many as 32 processor cores.

HP's new high-end HP Integrity Superdome 2 server leverages HP BladeSystem technology, too. The HP Integrity Superdome 2 supports up to 128 cores.

HP 9000 Servers

In the past, HP offered a variety of PA-RISC based HP-UX server solutions. HP no longer sells new PA-RISC servers, but does continue to support existing PA-RISC servers.

Upgrade Paths

Customers often find that as their business grows, their transaction volumes demand greater capacity and performance. Hewlett-Packard provides a comprehensive upgrade program that protects customers' investments in hardware, software, and training. The upgrade program includes simple board upgrades, system swaps with aggressive trade-in credits, and 100 percent return credit on most software upgrades.
Utility Pricing Solutions

HP offers a number of practical, cost-effective pricing solutions to meet the needs of customers with growing or fluctuating demand. To learn more about our Instant Capacity and Pay Per Use solutions, visit HP's website.

Determining Your System Model Type

You can determine your system's model type and number via the model command.

# model
ia64 hp server rx2660

OS Version Support

Each HP-UX release only supports certain server models. To determine which hardware models support each operating system release, see HP's website.

For Further Information

This slide is just an overview; upcoming slides provide a bit more detail about the server categories. Hardware products change frequently. For the most current information on HP's hardware products, visit HP's product website or contact your local HP sales representative.
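Scripts sometimes need just the bare model number rather than the full model string. A small sketch using shell parameter expansion on output shaped like the example above (the sample string is hard-coded here; on a real system it would come from the model command itself):

```shell
# Extract the trailing model token from "model"-style output.
# On a real HP-UX system: out=$(model)
out="ia64 hp server rx2660"
echo "${out##* }"    # strips everything up to the last space -> rx2660
```

The ${out##* } expansion removes the longest prefix ending in a space, leaving only the final whitespace-delimited token.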

5-15. SLIDE: Entry-Class Rackmount Server Overview

Entry-Class Rackmount Server Overview

HP's entry-class rackmount Integrity servers are ideal for customers who require flexibility, high availability, and scalability up to eight processor cores in a traditional rackmount form factor; administrators often deploy entry-class rackmount servers in smaller branch office locations.

Common Features:
- Integrated LAN interface
- Integrated Management Processor
- Redundant hot-swap power supplies
- Redundant hot-swap cooling

HP Integrity rx2800 i2 — 2 rack units, 2 processors, 8 cores, other features TBA
HP Integrity rx — 2 processors, 4 cores, 8 DIMMs, 8 internal disks, 3 PCIe/PCI-X slots
HP Integrity rx — 4 processors, 8 cores, 24 DIMMs, 8 internal disks, 8 PCIe/PCI-X slots
HP Integrity rx — 4 processors, 8 cores, 48 DIMMs, 16 internal disks, 8 PCIe/PCI-X slots

Student Notes

HP's entry-class Integrity servers are ideal for customers who require flexibility, high availability, and scalability up to eight processor cores in a traditional rackmount form factor. Most are also available in a pedestal mount for deskside use. Administrators often deploy entry-class servers in smaller branch office locations.

The entry-class servers offer one to four processor sockets. Most models support dual-core Itanium processors; the rx2800 i2 supports a quad-core Itanium processor. PCI-X and PCI Express expansion slots allow the administrator to easily add LAN and mass storage interface cards to connect additional peripheral devices.

All of the entry-class servers include internal disks. Older servers used SCSI controllers/disks; newer servers use Serial Attach SCSI (SAS) controllers/disks. Some servers also support HP's SmartArray controllers, which provide hardware mirroring. All internal disks on current server models are hot-pluggable, so failed disks can usually be replaced without shutting down the operating system.

All of the entry-class servers offer a slimline DVD drive. The DVD is included standard on some servers and as an option on others. Some models accommodate a tape drive in place of the DVD if desired.

All of the entry-class servers support an Integrated Lights Out (iLO) Management Processor card (though the card is an add-on option on some models). The iLO/MP enables the administrator to remotely access the system console, view system hardware status messages, reset the system, and power the system on and off. All iLO/MP cards provide remote access via telnet and HTTPS. The iLO web interface is very similar to the web interface provided by HP's ProLiant servers. Some models offer an SSH access option, too, for enhanced security.

All of the current entry-class servers include redundant, hot-plug power supplies, fans, and disks to minimize downtime.

For detailed specifications of the entry-class servers, go to HP's website.

5-16. SLIDE: Entry-Class Rackmount Server Example: HP Integrity rx2660 (front)

The next couple of slides show the layout of an Integrity rx2660 entry-class rackmount server. For descriptions of other entry-class servers, visit HP's website.

1. DVD
2. Redundant Fans
3. VGA
4. USB
5. System Reset Button
6. Indicator Lights
7. Power Button
8. SAS Disks

Student Notes

The slide above shows the major components visible from the front of an rx2660 entry-class rackmount server. To learn more about the rx2660 and other entry-class servers, go to HP's website.

5-17. SLIDE: Entry-Class Rackmount Server Example: HP Integrity rx2660 (rear)

1. PCI-X/PCIe expansion slots
   a. Dual 1Gb Ethernet HBA
   b. Audio Adapter
   c. Dual U320 SCSI HBA
2. Core I/O dual LAN ports
3. Smart Array Controller slot (empty)
4. Core I/O serial port
5. Core I/O VGA port
6. Core I/O USB port
7. MP serial port
8. MP LAN port
9. MP status LEDs
10. MP reset button
11. Redundant hot-swap power supplies

Student Notes

The photo on the slide above shows a rear view of the rx2660 entry-class rackmount server. To learn more about the rx2660 and other entry-class servers, consult HP's product documentation.

5-18. SLIDE: Mid-Range Cell-Based Server Overview

HP's mid-range, rackmount cell-based servers are ideal for mission-critical, consolidation, and scale-up deployments that require up to 32 processor cores in a rackmount form factor.

Common features: integrated LAN interfaces, integrated SCSI interface, integrated Management Processor, redundant hot-swap power supplies, redundant hot-swap cooling.

HP Integrity rx7640: 2 cell boards, 8 processors, 16 cores, 32 DIMMs, 4 internal disks, 15 PCIe/PCI-X slots
HP Integrity rx8640: 4 cell boards, 16 processors, 32 cores, 64 DIMMs, 8 internal disks, 32 PCIe/PCI-X slots

Student Notes

HP's mid-range rackmount servers utilize HP's cell-based server technology, in which each server contains one or more cell boards. Each cell board may be connected to an optional I/O chassis that contains eight expansion slots. A low-latency crossbar backplane provides connectivity between the cell boards.

The cell-based architecture provides tremendous expandability. As the need for processing power, expansion slots, and memory increases, additional cell boards may be added to the system. The rx7640 cell-based server supports up to two cell boards and 16 cores. The rx8640 cell-based server supports up to four cell boards and 32 cores.

The mid-range, rackmount cell-based servers are ideal for mission-critical, consolidation, and scale-up deployments that require up to 32 processor cores in a rackmount form factor. For detailed specifications of this and other mid-range servers, consult HP's product documentation.

5-19. SLIDE: Mid-Range Cell-Based Server Example: HP Integrity rx8640 (front)

The graphic on the slide shows the physical layout of an Integrity rx8640 server:
- Two DDS or DVD drives
- I/O power supplies
- Four hot-pluggable disks
- Redundant hot-swap fans
- Four cell boards
- Redundant hot-swap power supplies

Student Notes

The graphic on the slide shows the physical layout of a rack-mounted, mid-range rx8640 server. The rx8640 supports four cell boards, two 8-slot I/O chassis in the rear, two DDS/DVD bays, and four internal disks. Customers who require additional interface cards can purchase a System Expansion Unit (SEU) that provides two additional 8-slot I/O chassis, four additional internal disks, and two additional DDS/DVD bays. The rx7640 is similar, but supports two rather than four cell boards.

For detailed specifications of this and other mid-range servers, consult HP's product documentation.

5-20. SLIDE: Mid-Range Cell-Based Server Example: HP Integrity rx8640 (rear)

- I/O bay with two 8-slot I/O chassis
- MP/Core I/O
- Redundant hot-swap fans
- Crossbar backplane
- Power inputs

Student Notes

The photo on the slide above shows a rear view of the rx8640 mid-range server. For detailed specifications of this and other mid-range servers, consult HP's product documentation.

5-21. SLIDE: High-End Cell-Based Server Overview

For over a decade, enterprise customers have trusted HP's mission-critical cell-based Integrity Superdome server to provide maximum performance, scalability, and flexibility.

Integrity Superdome:
- Up to two compute cabinets
- Up to two I/O expansion cabinets
- Up to 16 cell boards
- Up to 64 dual-core processors
- Up to 128 cores
- Up to 512 DIMMs
- Up to 192 PCIe/PCI-X slots
- Integrated iLO/MP
- Redundant hot-swap power supplies
- Redundant hot-swap cooling

Student Notes

For over a decade, enterprise customers have trusted HP's mission-critical cell-based Integrity Superdome server to provide maximum performance, scalability, and flexibility. HP's high-end Superdome servers support up to 16 cell boards, with 128 processor cores and 192 expansion slots.

The cell-based architecture provides a great deal of expandability. As the need for processing power, expansion slots, and memory increases, additional cell boards may be added to the system. Node partitioning enables the administrator to assign (and re-assign!) cell boards to one or more functionally isolated nPar partitions for even greater flexibility.

For detailed specifications of this and other Superdome server configurations, consult HP's product documentation.

5-22. SLIDE: High-End Cell-Based Server Example: HP Integrity Superdome (front)

The graphic on the slide shows the front view of an 8-cell Integrity Superdome server compute cabinet:
- Blowers 0-1
- Cell slots 0-7
- I/O bay 0, with I/O fans 0-4 and I/O chassis 1 and 3, each with 12 expansion slots
- Power supplies
- Leveling feet

Student Notes

The graphic on the slide shows the physical layout of an 8-cell Superdome server. Each Superdome compute cabinet contains up to eight cell boards, with four dual-core Montecito processors per cell, and two I/O bays, each containing two 12-slot I/O chassis. Customers who require larger configurations can purchase two side-by-side compute cabinets to support up to 16 cell boards and 96 I/O expansion slots, as shown below. Optional I/O expansion units provide additional I/O expandability.

Cabinet 0: 8 cells, 2 I/O bays, 4 I/O chassis, 48 slots
Cabinet 1: 8 cells, 2 I/O bays, 4 I/O chassis, 48 slots
IOX Cabinet 8: peripherals, 3 I/O bays, 6 I/O chassis, 72 slots
IOX Cabinet 9: peripherals, 3 I/O bays, 6 I/O chassis, 72 slots

For detailed specifications of this and other Superdome server configurations, consult HP's product documentation.

5-23. SLIDE: High-End Cell-Based Server Example: HP Integrity Superdome (rear)

- Blowers 2-3
- Crossbar backplane
- MP
- I/O bay 1, with I/O chassis 1 and 3, each with 12 expansion slots
- Cable groomer

Student Notes

The photo on the slide above shows a rear view of the Integrity Superdome server compute cabinet.

5-24. SLIDE: HP BladeSystem Overview

For maximum flexibility, consider HP's BladeSystem solution. A blade server is a compact, high-density server that has its own CPU and memory resources, but that shares network, power, cooling, and storage resources with other blade servers in an HP BladeSystem enclosure.

HP BladeSystem advantages:
- Manageability: sophisticated integrated management and monitoring tools simplify administration of the blade enclosure, and of the blades within the enclosure
- Availability: redundant power, cooling, and interconnects eliminate single points of failure
- Flexibility: the HP BladeSystem supports Integrity, ProLiant, and storage blades in a single enclosure. HP's Virtual Connect technology allows you to quickly deploy (and redeploy) without rewiring!
- Serviceability: simple tool-less replacement for most components; powerful, intuitive, proactive diagnostic tools
- Scalability: consolidated power and cooling, and the BladeSystem's dense form factor, enable you to deploy more servers, more quickly and more cost effectively

Student Notes

For maximum flexibility, consider the HP BladeSystem Integrity blade server solutions. A blade server is a compact, high-density server that has its own CPU and memory, but that shares power, cooling, and an intuitive management interface in a specially designed HP BladeSystem enclosure. All of the components in the enclosure connect to a common midplane, eliminating the need for power, LAN, and SAN cables to individual server blades. The servers and all the components of the enclosure work together as a seamless unit, increasing efficiency and reducing costs by eliminating many of the overlapping resources required to support stacks of individual rack servers. The list below describes some of the most important features of HP's latest BladeSystems.

Manageability:

HP's c-Class BladeSystem Onboard Administrator provides a consolidated interface for the administrator to manage BladeSystem components. This intuitive interface may be used to configure server, storage, network, and power settings locally through an interactive LCD panel or remotely through an easy-to-use web interface. It also facilitates blade infrastructure firmware updates and consolidates access to all of the iLO management processors. Detailed visual renderings of HP BladeSystem hardware, as well as pre-programmed field-replaceable-unit (FRU) information, help expedite identification and replacement of faulty components.

For IT organizations that need to manage large numbers of HP BladeSystem enclosures, the Onboard Administrator command-line interface facilitates scripting of key management operations, and multiple Onboard Administrator modules can be discovered and launched from within HP's Systems Insight Manager. Also, Thermal Logic monitoring software in the HP BladeSystem enclosure can automatically monitor and manage an enclosure's power, cooling, and other resources, enabling and disabling power supplies and fans as necessary based on the blades' requirements.

Availability: All power supplies, fans, and other critical enclosure components are redundant and hot-pluggable to ensure maximum uptime.

Flexibility: Enclosures may contain a mix of ProLiant, Integrity, and storage blades, allowing the administrator to easily match the blade mix in the enclosure to the needs of the organization. BladeSystem mezzanine expansion cards are interchangeable: many of the Fibre Channel and Ethernet cards used on HP's c-Class ProLiant blades are supported on c-Class Integrity server blades. HP's c-Class BladeSystem Virtual Connect technology can drastically reduce and simplify cabling requirements, too.

Densely stacked rack-mounted servers with many Ethernet and Fibre Channel (FC) connections can result in hundreds of cables coming out of a rack. Installing and maintaining multitudes of cables is time-consuming and costly. When you add, move, or replace a traditional server, you must typically add new power and cooling units and modify the LAN and SAN, which may require assistance from your LAN, SAN, and facility administrators. This may delay server changes and deployments.

Virtual Connect is an industry standard-based implementation of server-edge I/O virtualization. It puts an abstraction layer between the servers and the external networks so that the LAN and SAN see a pool of servers rather than individual servers. Once the LAN and SAN connections are made to the pool of servers, the server administrator uses the Virtual Connect Manager interface to create an I/O connection profile for each server. Instead of using the default media access control (MAC) addresses for all network interface controllers (NICs) and the default World Wide Names (WWNs) for all host bus adapters (HBAs), the Virtual Connect Manager creates bay-specific I/O profiles, assigns unique MAC addresses and WWNs to these profiles, and administers them locally.

Virtual Connect technology provides a simple, easy-to-use tool for managing the connections between HP BladeSystem c-Class servers and external networks. It cleanly separates server enclosure administration from LAN and SAN administration, relieving LAN and SAN administrators from server maintenance, and makes HP BladeSystem c-Class server blades change-ready, so that blade enclosure administrators can rapidly add, move, and replace server blades with minimal assistance from LAN/SAN administrators.

Serviceability: HP's BladeSystem enclosures support tool-less removal of most components without removing the enclosure from the rack or the blades from the enclosure.
Powerful, intuitive diagnostic tools simplify troubleshooting, too.

Scalability: HP's BladeSystem provides more efficient power and cooling than rack-mounted server solutions, since consolidated power supplies and zone-based cooling components in the enclosure provide power and cooling for multiple server blades. As a result, BladeSystem solutions may enable organizations to deploy more servers, much more quickly and affordably, in a smaller datacenter footprint than would otherwise be possible with rack-mounted servers.

The graphic on the slide above is an HP BladeSystem c7000 blade enclosure with eight half-height ProLiant blades and four full-height Integrity BL860c blades in the slots on the right.

5-25. SLIDE: HP BladeSystem Enclosure Overview

HP currently offers two HP BladeSystem enclosures: the HP BladeSystem c7000 and c3000. Both enclosures utilize the same blades, interconnects, power, cooling, and other components, and both enable you to mix and match a variety of Integrity and ProLiant server blades.

c7000 blade enclosure: 10 rack units; up to 8 full-height blades. As shown: 8 half-height ProLiant blades and 4 Integrity BL860c blades.
c3000 blade enclosure: rack and tower configurations; 6 rack units; up to 4 full-height blades. As shown: 4 Integrity BL860c blades.

Student Notes

HP currently offers two HP BladeSystem enclosures: the HP BladeSystem c7000 and c3000. Both enclosures utilize the same blades, interconnects, power, cooling, and other components, and both allow you to mix and match a variety of Integrity and ProLiant server blades in the enclosure.

The c7000 is a 10 rack unit enclosure that can accommodate up to eight full-height blades or sixteen half-height blades. The enclosure on the slide has eight half-height ProLiant blades on the left and four full-height HP Integrity BL860c blades on the right.

The c3000 is a 6 rack unit enclosure that can accommodate up to four full-height blades or eight half-height blades. The c3000 is also available in a tower configuration for small office deployments. The c3000 on the slide has four full-height HP Integrity BL860c blades on the right.

HP's HC590S Integrity Blade Server Administration course discusses the c-Class blade enclosures, Integrity blade models, and management tools in much greater detail.

For detailed product specifications, see the enclosure datasheets on HP's website. These two white papers provide additional information about the c-Class BladeSystem enclosures:
- Technologies in the HP BladeSystem c7000 Enclosure technology brief
- HP BladeSystem c-Class architecture technology brief

5-26. SLIDE: HP BladeSystem Enclosure Example: HP BladeSystem c7000 Enclosure

The graphic highlights some of the important components of the HP BladeSystem c7000 blade enclosure.

Blade enclosure front:
- One or more server blades, each with independent CPU, memory, and disks
- Insight Display, which provides an intuitive interface to manage the enclosure (a web interface is available, too!)
- Redundant power supplies, managed and monitored by the enclosure, which efficiently provide power for the blades in the enclosure

Blade enclosure back:
- Redundant cooling fans, managed and monitored by the enclosure, which efficiently provide cooling for the blades in the enclosure
- Interconnects that provide flexible LAN/SAN connectivity between blades (no cabling required!), and to external LANs and SANs, too!

Student Notes

The slide above highlights some of the critical components of the c7000 BladeSystem enclosure.

5-27. SLIDE: HP Integrity Blade Server Model Overview

HP offers a complete line of Integrity server blades, from 2 to 32 cores. All are compatible with the HP BladeSystem c3000 and c7000 enclosures, and all leverage HP BladeSystem's manageability, availability, flexibility, and serviceability features.

BL860c: 1 blade slot, 2 processors, 4 cores, 24 DIMMs
BL870c: 2 blade slots, 4 processors, 8 cores, 48 DIMMs
BL860c i2: 1 blade slot, 2 processors, 8 cores, 24 DIMMs
BL870c i2: 2 blade slots, 4 processors, 16 cores, 48 DIMMs
BL890c i2: 4 blade slots, 8 processors, 32 cores, 96 DIMMs

Student Notes

HP offers a complete line of Integrity server blades, from 2 to 32 cores. All are compatible with the HP BladeSystem c3000 and c7000 enclosures, and all leverage the HP BladeSystem manageability, availability, flexibility, and serviceability features described previously.

5-28. SLIDE: HP Integrity Server Blade Example: HP Integrity BL890c i2

HP's BL860c i2, BL870c i2, and BL890c i2 blades all utilize a common foundation blade. The Integrity Blade Link, using Intel's QPI fabric technology, conjoins 1, 2, or 4 foundation blades. Each blade in the QPI fabric has full access to resources on the other blades via the QPI fabric.

BL890c i2 = 4 x foundation blades, each providing:
- Up to 2 processors / 8 cores
- Up to 24 DIMMs
- Two internal SAS disks
- Two dual-port 10Gb Flex-10 LAN interfaces
- Three mezzanine expansion card slots

Student Notes

The graphic on the slide shows an Integrity BL890c i2. HP's BL860c i2, BL870c i2, and BL890c i2 blades all utilize a common foundation blade. Each foundation blade hosts:
- Up to 2 dual- or quad-core processors
- Up to 24 DIMMs
- Two internal SAS disks
- Two dual-port 10Gb Flex-10 LAN interfaces
- Three internal mezzanine expansion card slots

The graphic below shows the foundation blade's internal architecture:

The Integrity Blade Link, using Intel's QuickPath Interconnect (QPI) fabric technology, may be used to conjoin one to four foundation blades. Each blade in the QPI fabric has full access to resources on the other blades via the QPI fabric. This approach allows HP's Integrity blades to easily scale from 2 to 32 processor cores.

- The BL860c i2 utilizes one foundation blade
- The BL870c i2 conjoins two foundation blades
- The BL890c i2 conjoins four foundation blades

The graphic below shows the architecture of the QPI fabric in a BL890c i2:

HP's HC590S Integrity Blade Server Administration course discusses the c-Class blade enclosures, Integrity blade models, and management tools in much greater detail. For detailed product specifications, see the datasheets on HP's website. Also read the following white papers to learn more about the Integrity blade architecture:
- Why Scalable Blades: HP Integrity Server Blades (BL860c i2, BL870c i2, and BL890c i2)
- Technologies in HP Integrity server blades (BL860c i2, BL870c i2, and BL890c i2)

5-29. SLIDE: HP Integrity Superdome 2 Overview

For maximum scalability, availability, and flexibility, consider HP's Superdome 2.
- Superdome 2 leverages its lower midplane, power, cooling, interconnects, and other modular components from the HP BladeSystem c7000 enclosure
- But adds a fault-tolerant, low-latency crossbar fabric that facilitates the creation of nPars with up to 128 cores
- And an upper midplane, unique to the Superdome 2, to connect external I/O expansion enclosures, with up to 96 external PCIe I/O expansion cards

Superdome 2 8-socket, Superdome 2 16-socket, Superdome 2 32-socket

Student Notes

For maximum scalability, availability, and flexibility, consider the HP Superdome 2 server. Superdome 2 leverages its lower midplane, power, cooling, interconnects, and other modular components from the HP BladeSystem c7000 enclosure. It adds a fault-tolerant, low-latency crossbar fabric that facilitates the creation of nPars with up to 128 cores, and a Superdome 2-specific upper midplane that connects external I/O expansion enclosures, each with up to 12 PCIe I/O expansion cards.

HP offers three versions of the Superdome 2 server:
- The 8-socket / 32-core Superdome 2 has four Superdome 2 blades in a single Superdome 2 enclosure.
- The 16-socket / 64-core Superdome 2 has eight Superdome 2 blades in a single Superdome 2 enclosure.
- The 32-socket / 128-core Superdome 2 has sixteen Superdome 2 blades in two Superdome 2 enclosures.

5-30. SLIDE: HP Integrity Superdome 2 Example: HP Integrity Superdome 2

Superdome 2 compute enclosure (up to 2 per complex, each with up to 8 blades):
- The Superdome 2 compute enclosure contains up to eight 2-socket Superdome 2 blades
- The lower midplane is highly leveraged from the c7000: same power, cooling, and interconnects
- The upper midplane provides a fault-tolerant crossbar and connectivity to I/O expansion enclosures
- External I/O expansion enclosures house additional PCIe expansion cards (each enclosure provides 12 additional PCIe expansion slots)

Student Notes

The graphic on the slide shows a close-up view of an 8-socket / 32-core Superdome 2. The Superdome 2 compute enclosure contains up to eight 2-socket Superdome 2 blades. Each Superdome 2 blade houses two quad-core Itanium processors, 32 DIMMs, two dual-port 10Gb LAN interfaces, and up to three PCIe mezzanine expansion cards.

The lower midplane is highly leveraged from the c7000, using many of the same power, cooling, and LAN/SAN interconnect modules. The upper midplane, designed specifically for the Superdome 2, provides a fault-tolerant crossbar and connectivity to external I/O expansion enclosures. Eight external I/O expansion enclosures each house up to twelve additional PCIe expansion cards.

The diagram below shows the Superdome 2's internal architecture. To learn more about Superdome 2, attend HP Education's HK713S Superdome 2 Administration course. For detailed product specifications, see the datasheet on HP's website, or read more about the Superdome 2 architecture in the HP Superdome 2: the Ultimate Mission-critical Platform white paper.

5-31. SLIDE: Viewing the System Configuration

HP-UX provides several commands for viewing your system configuration.

View the system model string:
# model
# uname -a

View processor, memory, and firmware configuration information:
# machinfo

View cell boards, interface cards, peripheral devices, and other components:
# ioscan                   all components
# ioscan -C cell           cell board class components
# ioscan -C lan            LAN interface class components
# ioscan -C disk           disk class devices
# ioscan -C fc             Fibre Channel interfaces
# ioscan -C ext_bus        SCSI buses
# ioscan -C processor      processors
# ioscan -C tty            serial (teletype) class components

SAM and the SMH can also provide detailed hardware information.

Student Notes

HP-UX provides several commands for viewing your system configuration. Execute the model command to determine your system's hardware model string.

# model
ia64 hp server rx2600

In 11i v2 and 11i v3, the machinfo command reports detailed processor, memory, firmware, model, and operating system information.

# machinfo
CPU info:
  1 Intel(R) Itanium 2 processor (1.4 GHz, 1.5 MB)
  400 MT/s bus, CPU version B1
Memory: 4084 MB (3.99 GB)

Firmware info:
  Firmware revision:
  FP SWA driver revision: 1.18
  IPMI is supported on this system.
  BMC firmware revision: 1.53
Platform info:
  Model: "ia64 hp server rx2600"
  Machine ID number: e85c91a d8-b1ce-0f6d684be9ae
  Machine serial number: US
OS info:
  Nodename: myhost
  Release: HP-UX B
  Version: U (unlimited-user license)
  Machine: ia64
  ID Number:
  vmunix $Revision: vmunix: B.11.31_LR FLAVOR=perf

The ioscan command presents a hierarchical list of cell boards, interface cards, peripheral devices, and other components on your system. By default, ioscan reports each component's hardware path, class, and description. Add the -C option to view a specific device class such as cell, disk, lan, or processor. Slides later in the chapter describe HP-UX hardware paths and other ioscan options in detail.

# ioscan
H/W Path        Class       Description
==============================================================
                root
1               cell
1/0             ioa         System Bus Adapter (804)
1/0/0           ba          Local PCI Bus Adapter (782)
1/0/2           ba          Local PCI Bus Adapter (782)
1/0/2/0/0       ext_bus     SCSI C1010 Ultra160
1/0/2/0/0.8     target
1/0/2/0/0.8.0   disk        HP 36.4GST336607LC
1/0/2/0/0.10    target
1/0/2/0/0.10.0  disk        HP 36.4GST336607LC
1/0/14          ba          Local PCI Bus Adapter (782)
1/0/14/0/0      lan         HP A5230A 10/100Base-TX
1/5             memory      Memory
1/10            processor   Processor
1/11            processor   Processor
1/12            processor   Processor
1/13            processor   Processor

The SMH (11i v2 and v3) can also provide detailed hardware information. Click the Processors, Memory, and other links on the SMH Home tab.
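Because ioscan output is columnar, it can also be sliced with standard text tools. The sketch below tallies devices per class from a captured listing; the here-string reproduces a few rows of the sample output above, but on a live HP-UX system you would pipe ioscan itself (or simply run ioscan -C with the class you want).

```shell
# Sample rows in ioscan's "path class description" column order.
ioscan_out='1/0/2/0/0 ext_bus SCSI C1010 Ultra160
1/0/2/0/0.8.0 disk HP 36.4GST336607LC
1/0/2/0/0.10.0 disk HP 36.4GST336607LC
1/0/14/0/0 lan HP A5230A 10/100Base-TX
1/10 processor Processor
1/11 processor Processor'

# Tally column 2 (the device class) and print one "class count" line each.
printf '%s\n' "$ioscan_out" |
    awk '{count[$2]++} END {for (c in count) print c, count[c]}' |
    sort
```

This prints, for the sample data: disk 2, ext_bus 1, lan 1, processor 2 (one per line). The same pipeline works against real ioscan output as long as the class remains in the second column.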


5-32. SLIDE: Viewing nPar, vPar, and VM Hardware

Hardware resources allocated to one partition aren't visible to other partitions; HP-UX commands only display the devices and resources in the current partition.

Q: Why do I only see half of my interface cards and cell boards?
A: ioscan only reports the devices available in my current partition!

Student Notes

Viewing system hardware resources becomes more complicated on partitioned systems. Hardware resources allocated to one nPar, vPar, or Integrity VM are not visible to other partitions. The peripheral device and interface card management commands discussed in the remaining slides of the chapter, such as ioscan, scsimgr, rad, olrad, pdweb, and sam, only display devices in the current partition.

To determine which resources have been assigned to other nPars on the system, run the parstatus command. Similarly, vparstatus reports which resources have been allocated to other virtual partitions, and hpvmstatus reports which resources have been allocated to Integrity virtual machine guests.
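The command-to-partition-type mapping above can be captured in a small wrapper. The helper function below is hypothetical (it is not an HP-UX command); it only selects the appropriate status command name, and parstatus, vparstatus, and hpvmstatus themselves run only on HP-UX systems with the relevant partitioning products installed.

```shell
# Hypothetical helper: map a partition type to the status command
# named in the student notes above.
partition_status_cmd() {
    case "$1" in
        npar) echo "parstatus"  ;;   # node partitions
        vpar) echo "vparstatus" ;;   # virtual partitions
        vm)   echo "hpvmstatus" ;;   # Integrity VM guests
        *)    echo "unknown partition type: $1" >&2; return 1 ;;
    esac
}

partition_status_cmd vpar    # prints: vparstatus
```

A wrapper like this is mainly useful in site scripts that must inspect mixed environments, where the same inventory job runs inside nPars, vPars, and VM guests.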

5-33. SLIDE: Configuring Hardware, Part 3: HP-UX Hardware Addressing

Student Notes

5-34. SLIDE: Hardware Addresses

In order to successfully configure and manage devices on an HP-UX system, administrators must understand the addressing mechanism used to identify devices.

"syslog.log tells me that one of my interface cards has failed, but how can I tell which one I need to replace?"

Student Notes

During the HP-UX startup process, the kernel automatically scans the system hardware and assigns a unique HP-UX hardware address to every bus adapter, interface card, and device. In order to configure new devices on your system, you need to be able to read and understand these hardware addresses. The next few slides discuss HP-UX hardware addressing in detail.

5-35. SLIDE: Legacy vs. Agile View Hardware Addresses

- 11i v1 and v2 implement a legacy mass storage stack and addressing scheme
- 11i v3 implements a new mass storage stack, with many new enhancements
- 11i v3 uses new agile view addresses, but still supports legacy addresses, too
- 11i v3's mass storage stack enhancements include: increased scalability, enhanced adaptability, native multipathing, better management tools, improved performance

Student Notes

11i v1 and v2 implement a legacy mass storage stack and hardware addressing scheme. 11i v3 implements a new mass storage stack, with many enhancements and a new hardware addressing scheme, to better support the SAN-based storage used on most HP-UX servers today. To ensure backward compatibility, 11i v3 still supports legacy hardware addresses, but HP encourages administrators to begin using the new Agile View hardware addresses. The notes below highlight some of the most important new features provided by the new mass storage stack and hardware addressing scheme.

Increased Scalability

The new mass storage stack significantly increases the operating system's mass storage capacity, as shown in the table below.

Feature                          HP-UX 11i v2    HP-UX 11i v3
Max I/O buses per server         256             No limit
Max LUNs per server                              (architectural limit 16M)
Max LUN size                     2TB             >2TB
Max I/O paths to a single LUN

In addition, the mass storage stack has been enhanced to take advantage of large multi-CPU server configurations for greater parallelism. Adding more mass storage to a server does not appreciably slow down the boot process or the ioscan command that administrators use to view available hardware. See the HP-UX 11i v3 Mass Storage I/O Scalability white paper for details.

Enhanced Adaptability

The new mass storage stack enhances a server's ability to adapt dynamically to hardware changes without shutting down the server or reconfiguring software. 11i v3 servers automatically detect the creation or modification of LUNs. If new LUNs are added, the new mass storage stack recognizes and configures them automatically. If an existing LUN's addressing, size, or I/O block size changes, the mass storage stack detects this without user intervention. When such changes occur, the mass storage stack notifies the relevant subsystems. For example, if a LUN expands, its associated disk driver, volume manager, and file system are notified. The volume manager volume or file system can then automatically expand accordingly.

The new mass storage stack can also remove PCI host bus adapters (HBAs) without shutting down the server. Coupled with existing online addition and replacement features, online deletion enables you to replace a PCI card with a different PCI card, as long as the HBA slot permits it and no system-critical devices are affected. You can also change the driver associated with a LUN; if the software drivers don't support rebinding online, the system remembers the changes and defers them until the next server reboot.

Native Multipathing

11i v3 agile addressing creates a single virtualized hardware address for each disk or LUN, regardless of the number of hardware paths to the device. The administrator can use the single virtualized hardware path, rather than the underlying hardware paths, when configuring the disk or LUN.
When a volume manager, file system, or application accesses the device, the new mass storage stack transparently distributes I/O requests across all available hardware paths to the LUN, using a choice of load balancing algorithms. If a path fails, the mass storage stack automatically disables the failed path and redistributes I/O requests across the remaining paths. The kernel monitors failed or non-responsive paths, so that when a failed path recovers it is automatically and transparently reincorporated into any load balancing. The mass storage stack automatically discovers and incorporates new paths, too. 11i v1 and v2 administrators typically rely on add-on multipathing products from array vendors to provide this functionality.

The new mass storage stack simplifies management of LUN device special files (DSFs), too. The next chapter discusses these DSF enhancements in detail.

Better Management Tools

The new mass storage stack includes several new tools for monitoring and managing devices. The improved ioscan command allows the administrator to easily correlate paths and LUNs. Another new ioscan option reports the health of each LUN and the underlying hardware paths. A new utility called scsimgr allows the administrator to easily view LUN attributes and usage statistics, and to modify the load balancing algorithm used when accessing the LUN. The new tools and features are integrated with other systems and storage management utilities such as the Systems Management Homepage (SMH), Systems Insight Manager (SIM), and Storage Essentials.

Improved Performance

The new mass storage stack achieves better performance by using high levels of concurrent I/O operations and parallel processing, processor allegiance algorithms, and unique HP server hardware features such as Cell Local Memory. The operating system provides a choice of load balancing algorithms, too, so administrators can tune performance to meet each server's requirements.

Compatibility

11i v3 supports both legacy addressing and Agile View addressing. HP encourages customers to begin using the new addressing scheme, though legacy hardware addresses are still available. HP-UX includes two commands to ease the migration to the new mass storage stack. The iofind command automatically identifies configuration files that reference legacy addresses, and optionally replaces them with equivalent Agile View addresses. The ioscan -m hwpath command may be used to list Agile View LUN hardware paths and their equivalent legacy addresses.
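Captured ioscan -m hwpath output can be reduced to a simple legacy-to-agile lookup with standard text tools. The sketch below assumes a three-column layout (agile LUN path, lunpath, legacy path); the two sample rows are made up for illustration, including the WWN-style lunpaths, so run ioscan -m hwpath on an 11i v3 system and check the real column layout before relying on this.

```shell
# Illustrative rows: agile LUN hw path, lunpath hw path, legacy hw path.
hwpath_map='64000/0xfa00/0x0 0/1/1/0.0x5000c5000abc0001.0x0 0/1/1/0.0.0
64000/0xfa00/0x1 0/1/1/0.0x5000c5000abc0002.0x0 0/1/1/0.1.0'

# Print "legacy-address -> agile-address" for each LUN (columns 3 and 1).
printf '%s\n' "$hwpath_map" | awk '{print $3, "->", $1}'
```

A table like this is handy when auditing configuration files by hand; for automated migration, the iofind command described above is the supported route.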

SLIDE: Legacy HBA Hardware Addresses

HBAs are identified by hardware addresses that encode the HBA's cell/SBA/LBA/device/function location in the kernel's I/O tree structure:

    1/0/0/2/0
    Cell / SBA / LBA / device/function

[slide graphic: a cell board's CPUs and memory connect through the crossbar to an SBA; the SBA fans out to several LBAs driving PCI-X buses that host FC HBAs (to SAN LUNs), Core I/O (MP, LAN, serial, SCSI disk and DVD), and LAN/serial cards]

Student Notes

The next few slides discuss the legacy hardware addressing scheme used in 11i v1 and v2. Later slides discuss the addressing scheme used in 11i v3's new mass storage stack.

All current systems based on PCI/PCI-X/PCI-Express expansion buses use a fairly consistent hardware addressing scheme, which we will focus on here. Every LAN, LUN, disk, or tape drive hardware address begins with an HBA hardware address. An HBA hardware address encodes the HBA's location in the kernel's I/O tree structure:

    Cell/SBA/LBA/device/function

The notes below describe each hardware path component in detail.

Cell

On cell-based servers, the first component of the HP-UX hardware path identifies the global cell number of the cell board to which the device is attached. On Superdome systems, cells are numbered 0-15. rx/rp7xxx servers have two cell boards, numbered 0-1. rx/rp8xxx servers have four cell boards, numbered 0-3. Non-cell-based systems don't include cell numbers in their hardware paths.

SBA

The next portion of the HP-UX hardware address identifies the address of the System Bus Adapter (SBA). This portion of the address will always be 0 in HBA and peripheral device hardware paths. Hardware paths for processors and memory modules typically display a non-zero number in this component of the hardware path.

LBA

The SBA connects to one or more Local Bus Adapters (LBAs) via high-speed communication channels known as ropes. Some LBAs have just one rope to the SBA; others have two ropes to the SBA to provide enhanced throughput. The LBA component in the HP-UX hardware path identifies the rope number that connects the LBA to the SBA. When an LBA has two ropes to the SBA, the lower rope number is used in the hardware path. Because some LBAs utilize two ropes and others utilize just one, an HBA's rope/LBA number typically isn't the same as its physical slot number.

Device/Function

Each LBA typically provides connectivity to one or two PCI, PCI-X, or PCI-E expansion slots, each accommodating an interface card with one or more functions. The device and function numbers together uniquely identify a specific function on a specific card. If a card isn't a multi-function card, a device/function combination of 0/0 indicates that it is a PCI card, and 1/0 indicates that it is PCI-X.

Slides later in the chapter describe the ioscan command, which lists a system's hardware paths, and the rad and olrad commands, which translate hardware paths into physical slot locations. Service manuals often include system-specific hardware addressing information.

SLIDE: Legacy Parallel SCSI Hardware Addresses

Some servers use a parallel SCSI bus to connect internal disks, DVDs, and tapes.
Each SCSI bus supports multiple devices, each identified by a unique target address.
Each device on a SCSI bus may support multiple LUNs, identified by unique LUN IDs.
Legacy SCSI hardware addresses encode a SCSI device's HBA, target, and LUN ID:

    1/0/0/2/0.1.0
    HBA hardware address . Target . LUN ID

Example: the following hardware addresses represent three distinct devices on a SCSI bus:

    1/0/0/2/0.2.0     target 2
    1/0/0/2/0.6.0     target 6
    1/0/0/2/0.10.0    target 10

Student Notes

Some servers use a parallel SCSI bus to connect internal disks, DVDs, and tapes. Some Core I/O cards provide an external SCSI port, too, which may be used to connect additional SCSI devices. The server in the graphic on the slide has a SCSI HBA connected to three external SCSI devices.

As shown in the graphic on the slide, legacy SCSI hardware addresses encode a SCSI device's HBA address, target address, and LUN ID.

HBA Addresses

The first part of a legacy SCSI hardware address identifies the address of the SCSI HBA to which the device is attached. In the example on the slide, all three SCSI devices are connected to the SCSI HBA at address 1/0/0/2/0, so the hardware addresses for all three devices begin with 1/0/0/2/0:

    1/0/0/2/0.2.0
    1/0/0/2/0.6.0
    1/0/0/2/0.10.0

SCSI Target Addresses

Each SCSI bus supports multiple devices, each identified by a unique target address. Legacy addressing supports up to 16 targets, numbered 0-15, per SCSI bus. The next part of a SCSI device's legacy hardware address identifies the device's SCSI target address. The graphic on the slide shows a SCSI bus with three external devices identified by target addresses 2, 6, and 10:

    1/0/0/2/0.2.0
    1/0/0/2/0.6.0
    1/0/0/2/0.10.0

When attaching an external SCSI device, it may be necessary to manually assign a target address. Some devices use a series of binary DIP switches to set the address; others use a series of jumper pins. Consult your device documentation to determine how to set your device's SCSI target address.

LUN IDs

Some SCSI devices have a single SCSI target address with multiple addressable units within the device. For example, tape autochangers often provide access to the autochanger's tape drive via one LUN ID, and access to the robotic mechanism via a second LUN ID. A SCSI disk array may present multiple virtual disks, each identified by a unique LUN ID. HP-UX uses the last component in the legacy SCSI hardware address to identify the LUN ID. Most autochangers and disk arrays today are connected via SAS or Fibre Channel interfaces rather than parallel SCSI, so the LUN ID portion of most parallel SCSI device hardware paths is typically 0:

    1/0/0/2/0.2.0
    1/0/0/2/0.6.0
    1/0/0/2/0.10.0

SLIDE: Legacy FC Hardware Addresses (1 of 2)

Disk array LUNs are often accessible via multiple paths through a SAN. HP-UX assigns each path a legacy hardware path that encodes:
- the legacy hardware address of the server HBA used to access the LUN
- the SAN domain/area/port used to access the array
- the LUN ID of the target LUN within the array

The 11i v1 and v2 kernels provide no automated path correlation or management.

    1/0/2/1/0 . domain.area.port . controller.target.LUN
    (HBA hardware address . SAN domain/area/port . Array LUN ID)

Example: the array below has three LUNs, each accessible via four SAN paths. The next slide lists all legacy hardware paths to the first LUN.

[slide graphic: a disk array presenting LUN 1, LUN 2, and LUN 3 through four SAN paths]

Student Notes

Fibre Channel disk array LUNs are often accessible via multiple paths through a SAN. The graphic on the slide shows a disk array with three LUNs, each accessible via four different paths. It isn't uncommon today to have four, eight, or even more paths to a LUN. The next slide lists the legacy hardware paths that would be used to represent LUN 1 in the graphic. Each legacy hardware address encodes:

- The legacy hardware address of the server HBA used to access the LUN. See the HBA addressing discussion earlier in the chapter.
- The SAN domain/area/port used to access the array. Administrators may use the legacy hardware address's 8-bit domain, 8-bit area, and 8-bit port fields to associate a hardware address with a specific path through the SAN from the server HBA to the target array controller. Different SAN switch vendors use the domain/area/port fields differently; HP Customer Education's Accelerated SAN Essentials class (UC434S) discusses these differences in detail.

- The LUN ID of the target LUN within the array. When presenting LUNs to a server, the array administrator assigns each LUN a LUN ID.

The legacy addressing scheme was designed to accommodate SCSI-2 bus addresses, in which each device on a bus was uniquely identified by a 7-bit controller number (ranging from 0 to 127), a 4-bit target number (ranging from 0 to 15), and a 3-bit LUN number (ranging from 0 to 7). Since today's arrays routinely present more than eight LUNs, the original 3-bit representation of the LUN ID is insufficient. Thus, legacy addresses now use all 14 controller/target/LUN bits at the end of an FC hardware path to represent the LUN ID. The last three components of the legacy hardware addresses for LUN IDs 0-16 are therefore represented as follows (x/x/x/x/x.x.x.x stands for the HBA address and SAN domain/area/port):

    LUN ID (decimal)   LUN ID (14-bit binary)   Legacy controller/target/LUN representation
    0                  00000000000000           x/x/x/x/x.x.x.x.0.0.0
    1                  00000000000001           x/x/x/x/x.x.x.x.0.0.1
    2                  00000000000010           x/x/x/x/x.x.x.x.0.0.2
    3                  00000000000011           x/x/x/x/x.x.x.x.0.0.3
    4                  00000000000100           x/x/x/x/x.x.x.x.0.0.4
    5                  00000000000101           x/x/x/x/x.x.x.x.0.0.5
    6                  00000000000110           x/x/x/x/x.x.x.x.0.0.6
    7                  00000000000111           x/x/x/x/x.x.x.x.0.0.7
    8                  00000000001000           x/x/x/x/x.x.x.x.0.1.0
    9                  00000000001001           x/x/x/x/x.x.x.x.0.1.1
    10                 00000000001010           x/x/x/x/x.x.x.x.0.1.2
    11                 00000000001011           x/x/x/x/x.x.x.x.0.1.3
    12                 00000000001100           x/x/x/x/x.x.x.x.0.1.4
    13                 00000000001101           x/x/x/x/x.x.x.x.0.1.5
    14                 00000000001110           x/x/x/x/x.x.x.x.0.1.6
    15                 00000000001111           x/x/x/x/x.x.x.x.0.1.7
    16                 00000000010000           x/x/x/x/x.x.x.x.0.2.0

The 11i v1 and v2 kernels provide no automated path correlation or management; they treat each path as if it were an independent device. 11i v1 and v2 rely on the LVM and VxVM volume managers, or add-on path management solutions such as HP's SecurePath product or EMC's PowerPath product, to correlate redundant paths, ensure path failover when an HBA fails, and provide load balancing across paths to a LUN.
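The bit splitting described above is mechanical, so it can be checked with a few lines of POSIX shell. The following is an illustrative sketch only; the lun_to_legacy helper is defined here for demonstration and is not an HP-UX command.

```shell
# Hypothetical helper (not an HP-UX command): split a decimal LUN ID into
# the controller.target.LUN triple that terminates a legacy FC hardware path.
# Layout: 14 bits total = 7-bit controller | 4-bit target | 3-bit LUN.
lun_to_legacy() {
    lun=$1
    echo "$(( lun >> 7 )).$(( (lun >> 3) & 15 )).$(( lun & 7 ))"
}

lun_to_legacy 1     # LUN ID 1  -> 0.0.1
lun_to_legacy 10    # LUN ID 10 -> 0.1.2
lun_to_legacy 16    # LUN ID 16 -> 0.2.0
```

The results match the table above: LUN IDs 0-7 fit entirely in the 3-bit LUN field, and the target field starts absorbing the overflow at LUN ID 8.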

SLIDE: Legacy FC Hardware Addresses (2 of 2)

    1/0/2/1/0 . domain.area.port . controller.target.LUN
    (HBA hardware address . SAN domain/area/port . Array LUN ID)

[slide graphic: four legacy hardware paths to LUN 1, two via HBA 1/0/2/1/0 and two via HBA 1/0/2/1/1, each through a different SAN domain/area/port, all ending in the same LUN ID]

Student Notes

The example on the slide shows four different SAN paths to LUN ID 1, and each path's corresponding legacy hardware path. The heavy black lines represent the physical path through the SAN for each address. Note that the LUN ID is the same in all four paths; each is simply a different path to the same LUN.

SLIDE: Viewing Legacy HP-UX Hardware Addresses

Use ioscan to view devices' legacy HP-UX hardware addresses, properties, and states:

    # ioscan                  short listing of all devices
    # ioscan -f               full listing of all devices
    # ioscan -kf              full listing, using cached information
    # ioscan -kfH 0/0/0/3/0   full listing of all devices below 0/0/0/3/0
    # ioscan -kfC disk        full listing of "disk" class devices

    # ioscan -f
    Class    I  H/W Path       Driver    S/W State  H/W Type   Description
    ========================================================================
    root     0                 root      CLAIMED    BUS_NEXUS
    cell     0  1              cell      CLAIMED    BUS_NEXUS
    ioa      0  1/0            sba       CLAIMED    BUS_NEXUS  SBA
    ba       0  1/0/0          lba       CLAIMED    BUS_NEXUS  LBA
    slot     0  1/0/0/3        pci_slot  CLAIMED    SLOT       PCI Slot
    ext_bus  0  1/0/0/3/0      mpt       CLAIMED    INTERFACE  U320 SCSI
    target   0  1/0/0/3/0.6    tgt       CLAIMED    DEVICE
    disk     0  1/0/0/3/0.6.0  sdisk     CLAIMED    DEVICE     HP Disk

Student Notes

You can view a list of the devices on your system and their legacy HP-UX hardware addresses via the ioscan command. ioscan supports a number of useful options.

# ioscan
Scans hardware and lists all devices and other hardware components found. Shows the hardware path, class, and a brief description of each component.

# ioscan -f
Scans and lists the system hardware as before, but displays a "full" listing including several additional columns of information. See the ioscan output field descriptions below for more information.

# ioscan -kf
Lists the system hardware as before, but uses cached information. On a large system with dozens of disks and interface cards, ioscan -kf is much faster than ioscan -f.

# ioscan -kfH 0/0/0/3/0
Shows a full listing of the component at the specified hardware address, and all nodes in the I/O tree below that node. The example shown here would display a full listing of both the HBA at address 0/0/0/3/0 and the targets and devices attached to that HBA (if any). -H is very useful on a large system if you just need to view information about a single device or bus.

# ioscan -kfC disk
Lists devices of the specified class only. Two other common classes are "tape" and "lan". The optional -k option displays cached information.

# ioscan -kfn
Lists device file names associated with each device. Device files are discussed at length in the next chapter. The optional -k option displays cached information.

Fields in the ioscan Output

Class
The device class associated with the device or card. For example, all LAN cards, from Ethernet cards to Token Ring cards, belong to the lan class. All tape devices belong to the tape class. A device's class is determined by the device's kernel driver.

Instance
The instance number associated with the device or card. It is a unique number assigned by the kernel to each card or device within a class. If no driver is available for the hardware component, or if an error occurs binding the driver, the kernel will not assign an instance number and a -1 will be listed. Instance numbers are used when creating device files, and will be discussed in more detail in the next chapter.

H/W Path
A string of numbers, separated by slashes and dots, that uniquely identifies each hardware component on the system. Each number in the string represents the location of a hardware component on the path to the device. Some administrative tasks require the administrator to identify devices by hardware path.

Driver
The name of the kernel driver that controls the hardware component. If no driver is available to control the hardware component, a question mark (?) is displayed in the output.

S/W State
The result of software binding.
- CLAIMED means that a kernel driver successfully bound to the device.
- UNCLAIMED means that no driver was found to bind to the device. Add the device's driver to the kernel, then re-run ioscan.
- UNUSABLE means that the hardware at this address is no longer usable due to some irrecoverable error condition; a system reboot may clear this condition.
- SUSPENDED means that the associated software and hardware are in a suspended state. See the OL* discussion later in this chapter.
- DIFF_HW means that the hardware found does not match the associated software.
- NO_HW means that the hardware at this address is no longer responding.
- ERROR means that the hardware at this address is responding but is in an error state.
- SCAN means that a scan operation is in progress for this node in the I/O tree.

H/W Type
Entity identifier for the hardware component. Common examples include INTERFACE (for interface cards), DEVICE (for peripheral devices), BUS_NEXUS (for bus adapters), and PROCESSOR (for processors).

Description
Describes the device or interface card and, in some cases, identifies the device's manufacturer and model number.

Troubleshooting with ioscan -f

After adding an interface card or SCSI device to your system, execute ioscan to see if your system recognizes the device. First, simply check that your new device appears in the ioscan output. If not, shut down your machine and check that all the cables are connected properly. In the case of an interface card, ensure that the card is firmly inserted in the interface card slot in the backplane of your machine. Next, ensure that the hardware path is correct. Did you set the correct SCSI address? Add the device, its hardware path, and its description to the hardware diagram in your system log book.

Assuming the hardware path is correct, check the S/W State column in the ioscan -f output. In order to communicate with your new device or interface card, your kernel must have the proper device drivers configured. If the proper driver already exists in your kernel, the S/W State column should say CLAIMED. If it doesn't, you will have to add the driver to the kernel; a later chapter discusses kernel configuration. If your new device appears to be CLAIMED by the kernel, proceed to the next chapter to learn how to create and use device files to access your new device.
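Since the S/W State column drives most of this troubleshooting, a one-line filter is handy on large systems. The following is a portable sketch run against a canned two-line sample so it executes anywhere; on a real system you would pipe live `ioscan -f` output into the awk command instead, skipping the two header lines first (for example with `NR > 2`).

```shell
# List devices whose S/W State is not CLAIMED.
# Field 5 of ioscan -f output is the S/W State column.
# Canned sample data used here purely for illustration.
sample='disk 0 1/0/0/3/0.6.0 sdisk CLAIMED DEVICE HP Disk
disk 1 1/0/0/3/0.5.0 ? UNCLAIMED DEVICE HP Disk'

printf '%s\n' "$sample" | awk '$5 != "CLAIMED" { print $3, $5 }'
# -> 1/0/0/3/0.5.0 UNCLAIMED
```

A device flagged this way is the one to chase: an UNCLAIMED entry points at a missing kernel driver, per the S/W State discussion above.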

SLIDE: Agile View HBA Hardware Addresses

Like earlier versions of HP-UX, Agile View HBA hardware addresses encode the HBA's cell/SBA/LBA/device/function location in the kernel's I/O tree structure:

    1/0/0/2/0
    Cell / SBA / LBA / device/function

[slide graphic: a cell board's CPUs and memory connect through the crossbar to an SBA; the SBA fans out to several LBAs driving PCI-X buses that host FC HBAs (to SAN LUNs), Core I/O (MP, LAN, serial, SCSI disk and DVD), and LAN/serial cards]

Student Notes

The last few slides described the legacy hardware addressing scheme used in 11i v1 and v2. The next few slides discuss the Agile View addressing scheme introduced by the new mass storage stack in 11i v3.

Like the addressing scheme used in earlier versions of HP-UX, Agile View HBA hardware addresses encode the HBA's cell/SBA/LBA/device/function location in the kernel's I/O tree structure:

    Cell/SBA/LBA/device/function

The notes below describe each hardware path component in detail.

Cell

On cell-based servers, the first component of the HP-UX hardware path identifies the global cell number of the cell board to which the device is attached. On Superdome systems, cells are numbered 0-15. rx/rp7xxx servers have two cell boards, numbered 0-1. rx/rp8xxx servers have four cell boards, numbered 0-3. Non-cell-based systems don't include cell numbers in their hardware paths.

SBA

The next portion of the HP-UX hardware address identifies the address of the System Bus Adapter (SBA). This portion of the address will always be 0 in HBA and peripheral device hardware paths. Hardware paths for processors and memory modules typically display a non-zero number in this component of the hardware path.

LBA

The SBA connects to one or more Local Bus Adapters (LBAs) via high-speed communication channels known as ropes. Some LBAs have just one rope to the SBA; others have two ropes to the SBA to provide enhanced throughput. The LBA component in the HP-UX hardware path identifies the rope number that connects the LBA to the SBA. When an LBA has two ropes to the SBA, the lower rope number is used in the hardware path. Because some LBAs utilize two ropes and others utilize just one, an HBA's rope/LBA number typically isn't the same as its physical slot number.

Device/Function

Each LBA typically provides connectivity to one or two PCI, PCI-X, or PCI-E expansion slots, each accommodating an interface card with one or more functions. The device and function numbers together uniquely identify a specific function on a specific card. If a card isn't a multi-function card, a device/function combination of 0/0 indicates that it is a PCI card, and 1/0 indicates that it is PCI-X.

Slides later in the chapter describe the ioscan command, which lists a system's hardware paths, and the rad and olrad commands, which translate hardware paths into physical slot locations. Service manuals often include system-specific hardware addressing information.

SLIDE: Agile View Parallel SCSI Hardware Addresses

Agile View SCSI hardware addresses are similar to legacy SCSI hardware addresses, but Agile View represents target and LUN numbers in hex rather than decimal form:

    1/0/0/2/0.0xa.0x0
    HBA hardware address . Target . LUN ID

Example: the following hardware addresses represent three distinct devices on a SCSI bus:

    1/0/0/2/0.0x2.0x0    target 2
    1/0/0/2/0.0x6.0x0    target 6
    1/0/0/2/0.0xa.0x0    target 10

Student Notes

Agile View hardware addresses for parallel SCSI devices are similar to legacy hardware addresses for parallel SCSI devices, but Agile View represents target and LUN numbers in hexadecimal rather than decimal form. As shown in the graphic on the slide, the Agile View parallel SCSI hardware address encodes the device's HBA address, target address, and LUN ID.

HBA Addresses

The first part of an Agile View SCSI hardware address identifies the address of the SCSI HBA to which the device is attached. In the example on the slide, all three SCSI devices are connected to the SCSI HBA at address 1/0/0/2/0, so the hardware paths for all three devices begin with 1/0/0/2/0:

    1/0/0/2/0.0x2.0x0
    1/0/0/2/0.0x6.0x0
    1/0/0/2/0.0xa.0x0

SCSI Target Addresses

Each SCSI bus supports multiple devices, each identified by a unique target address. The next part of a SCSI device's Agile View hardware address identifies the device's SCSI target address in hexadecimal form. The graphic on the slide shows a SCSI bus with three external devices identified by target addresses 2, 6, and 10:

    1/0/0/2/0.0x2.0x0
    1/0/0/2/0.0x6.0x0
    1/0/0/2/0.0xa.0x0

When attaching an external SCSI device, it may be necessary to manually assign a target address. Some devices use a series of binary DIP switches to set the address; others use a series of jumper pins. Consult your device documentation to determine how to set your device's SCSI target address.

LUN IDs

Some SCSI devices have a single SCSI target address with multiple addressable units within the device. HP-UX uses the last component in the Agile View SCSI hardware address to identify the LUN ID in hexadecimal form. For example, tape autochangers often provide access to the autochanger's tape drive via one LUN ID, and access to the robotic mechanism via a second LUN ID. A SCSI disk array may present multiple virtual disks, each identified by a unique LUN ID. Most autochangers and disk arrays today are connected via SAS or Fibre Channel interfaces rather than parallel SCSI, so the LUN ID portion of most parallel SCSI device hardware paths is typically 0x0:

    1/0/0/2/0.0x2.0x0
    1/0/0/2/0.0x6.0x0
    1/0/0/2/0.0xa.0x0
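The only difference from the legacy form is the number base, which printf(1) can make concrete. A small sketch (the bus address 1/0/0/2/0 is simply the example from the slide):

```shell
# Render the slide's three target addresses (2, 6, 10) in Agile View's
# hexadecimal notation. Decimal 10 becomes 0xa.
for target in 2 6 10; do
    printf '1/0/0/2/0.0x%x.0x0\n' "$target"
done
# -> 1/0/0/2/0.0x2.0x0
#    1/0/0/2/0.0x6.0x0
#    1/0/0/2/0.0xa.0x0
```

Targets 0-9 look identical in both schemes; only targets 10-15 (0xa-0xf) visibly change, which is worth remembering when comparing legacy and Agile View listings side by side.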

SLIDE: Agile View FC Lunpath Hardware Addresses (1 of 2)

Agile View provides a lunpath hardware address for each path to each LUN. Lunpath hardware addresses encode:
- the hardware address of the server HBA used to access the LUN
- the WW Port Name of the array controller FC port used to access the LUN
- the LUN address of the target LUN

The mass storage stack automatically recognizes and manages redundant paths.

    1/0/2/1/0.0x<64 bits>.0x<64 bits>
    HBA hardware address . WW Port Name . LUN Address

Example: the array below has three LUNs, each accessible via four SAN paths. The next slide lists all Agile View lunpath addresses for the first LUN.

[slide graphic: a disk array presenting LUN 1, LUN 2, and LUN 3 through four SAN paths]

Student Notes

Like the legacy mass storage stack, Agile View provides a hardware address for each path to each LUN. Agile View calls these path-specific hardware addresses lunpath addresses. The graphic on the slide shows a disk array with three LUNs, each accessible via four different paths. The next slide lists the Agile View hardware paths that would be used to represent LUN 1 in the graphic. Each Agile View lunpath hardware address encodes:

- The hardware address of the server HBA used to access the LUN. See the HBA addressing discussion earlier in the chapter.
- The 64-bit WW Port Name of the array controller FC port used to access the LUN. Disk arrays connect to a SAN via Fibre Channel ports on array controller cards. Arrays typically have redundant controllers, and each controller may have multiple ports connected to the SAN. Each array controller FC port is identified by a globally unique WWPN, which is included in the Agile View lunpath address.

- The target LUN's 64-bit LUN address. The first two bits in this number identify the LUN's addressing method, the next 14 bits represent the LUN ID number assigned to the LUN by the array administrator, and the last 48 bits are reserved for future use. Fortunately, the 11i v3 scsimgr command may be used to extract the decimal LUN ID from the lunpath address automatically:

    # scsimgr get_attr \
        -a lunid \
        -H 1/0/2/1/0.0x50001fe c.0x

    name    = lunid
    current = 0x (LUN # 1, Flat Space Addressing)
    default =
    saved   =

Unlike the legacy mass storage stack, the new mass storage stack automatically recognizes redundant paths and load balances I/O requests across lunpaths.
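The field layout makes the scsimgr result easy to verify by hand: shift out the 48 reserved bits, then mask off the 2-bit addressing-method field. A sketch in plain shell arithmetic; the 0x4001... constant below is an illustrative flat-space address for LUN ID 1, not a value taken from a real array.

```shell
# 64-bit LUN address layout:
#   2 bits addressing method | 14 bits LUN ID | 48 bits reserved
# 0x4001... encodes method 01 (flat space addressing) and LUN ID 1.
lun_addr=$(( 0x4001000000000000 ))

echo $(( (lun_addr >> 48) & 0x3FFF ))   # -> 1
```

This mirrors what `scsimgr get_attr -a lunid` reports, without needing a live HP-UX system at hand.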

SLIDE: Agile View FC Lunpath Hardware Addresses (2 of 2)

    1/0/2/1/0.0x<64 bits>.0x<64 bits>
    HBA hardware address . WW Port Name . LUN Address

[slide graphic: four lunpaths to LUN 1, two via HBA 1/0/2/1/0 and two via HBA 1/0/2/1/1, each through a different array controller WW Port Name, all carrying the same LUN address]

Student Notes

The example on the slide shows four different SAN paths to LUN ID 1, and each path's corresponding lunpath hardware address. The heavy black lines represent the physical path through the SAN for each address. Note that the LUN address is the same in all four paths, but the HBA and WWPN portions of the lunpath address vary.

SLIDE: Agile View FC LUN Hardware Path Addresses

Agile View also provides a virtual LUN hardware address for each disk/tape/LUN. The LUN hardware address represents the device/LUN itself, not a path to the LUN.

Advantages:
- LUN hardware paths are unaffected by changes to the SAN topology.
- The mass storage stack automatically correlates and manages redundant paths.

    64000/0xfa00/0x4
    virtual root node / virtual bus / virtual LUN ID

Example: the following shows a LUN hardware address and its associated lunpaths:

    64000/0xfa00/0x4
        1/0/2/1/0.0x50001fe c.0x
        1/0/2/1/0.0x50001fe x
        1/0/2/1/1.0x50001fe d.0x
        1/0/2/1/1.0x50001fe x

Student Notes

In addition to the lunpath hardware addresses discussed on the previous slide, Agile View presents a virtualized LUN hardware path for each parallel SCSI device, SAS disk, and Fibre Channel LUN. The LUN hardware path represents the device or LUN itself rather than a single physical path to the device or LUN. In the example on the slide, 64000/0xfa00/0x4 is a LUN hardware path representing a disk array LUN. The four addresses below the LUN hardware path represent the four lunpaths used to access the LUN.

A LUN hardware path address has three components:

- LUN hardware addresses always start with 64000, the virtual root node of all Agile View LUN hardware addresses. This portion of the address is the same in every LUN hardware path.
- The second component is always 0xfa00, the virtual bus address used by all Agile View LUN hardware paths. This portion of the address is the same in every LUN hardware path, too.
- The last component is a virtual LUN ID. The kernel automatically assigns virtual LUN IDs, sequentially, as it identifies new LUNs. Note that the virtual LUN ID is exactly that, virtual; it is not related to the LUN ID encoded in lunpath hardware addresses. The kernel maintains a persistent WWID-to-virtual-LUN-ID map to ensure that LUN hardware paths remain consistent across reboots, even if the SAN topology changes.

Agile View LUN hardware paths offer several significant advantages over legacy path-based addressing, particularly in SAN environments:

- On systems using the legacy mass storage stack, changes in the SAN topology change device hardware paths, and may require administrator intervention to update the volume manager and file system configuration. LUN hardware paths are unaffected by changes to the SAN topology, so changing the SAN topology no longer requires manual changes to the volume manager or file system configuration.
- On systems using the legacy mass storage stack, when configuring disks for use in the Logical Volume Manager, the administrator must manually add each lunpath to the LVM configuration. The new mass storage stack automatically recognizes and manages redundant paths.

HP encourages customers to begin using the new Agile View LUN hardware paths, although legacy hardware paths and lunpath addresses are still supported.
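Because only the last component varies, composing a LUN hardware path is trivial. A sketch for illustration; make_lun_hwpath is our own helper, not an HP-UX command.

```shell
# Compose an Agile View LUN hardware path from a decimal virtual LUN ID.
# All such paths share the 64000 virtual root node and the 0xfa00 virtual bus;
# only the trailing virtual LUN ID (printed in hex) differs per LUN.
make_lun_hwpath() {
    printf '64000/0xfa00/0x%x\n' "$1"
}

make_lun_hwpath 4    # -> 64000/0xfa00/0x4
make_lun_hwpath 10   # -> 64000/0xfa00/0xa
```

The fixed 64000/0xfa00 prefix is also a quick visual cue in ioscan output: any path starting with it is a virtual LUN hardware path, not a physical lunpath.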
The next few slides describe several 11i v3 commands for viewing the new hardware paths, and for converting legacy addresses to their Agile View equivalents.

SLIDE: Viewing LUN Hardware Paths via Agile View

Question: Which disks/LUNs are available on the system?
Answer: Execute ioscan -N to view Agile View LUN hardware paths. Standard ioscan options such as -k, -f, -C, and -H may be included, too.

    # ioscan -kfN [-C disk] [-H 64000/0xfa00/0x4]
    Class  I  H/W Path          Driver  S/W State  H/W Type  Description
    =====================================================================
    disk      64000/0xfa00/0x0  esdisk  CLAIMED    DEVICE    HP 73.4G
    disk      64000/0xfa00/0x1  esdisk  CLAIMED    DEVICE    DVD+-RW
    disk      64000/0xfa00/0x2  esdisk  CLAIMED    DEVICE    HP 73.4G
    disk      64000/0xfa00/0x4  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x5  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x6  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x7  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x8  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x9  esdisk  CLAIMED    DEVICE    HP HSV101

Student Notes

When configuring additional disk space for applications, administrators frequently need to know which disks are available on the system. Use the ioscan command to view legacy device hardware addresses. Add the -N option to view Agile View LUN hardware paths and lunpaths instead. To view a kernel-cached, full listing, add -k and -f. Adding the -C disk option limits the output to disk class devices.

Search and list all devices using legacy hardware addresses:

    # ioscan

Search and list all devices using Agile View addresses:

    # ioscan -N

Display a kernel-cached full list of devices using Agile View addressing:

    # ioscan -kfN

Display a kernel-cached listing of disk class devices using Agile View addressing:

    # ioscan -kfNC disk
    Class  I  H/W Path          Driver  S/W State  H/W Type  Description
    =====================================================================
    disk      64000/0xfa00/0x0  esdisk  CLAIMED    DEVICE    HP 73.4G
    disk      64000/0xfa00/0x1  esdisk  CLAIMED    DEVICE    DVD+-RW
    disk      64000/0xfa00/0x2  esdisk  CLAIMED    DEVICE    HP 73.4G
    disk      64000/0xfa00/0x4  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x5  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x6  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x7  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x8  esdisk  CLAIMED    DEVICE    HP HSV101
    disk      64000/0xfa00/0x9  esdisk  CLAIMED    DEVICE    HP HSV101

Display a kernel-cached listing of the device at a specific hardware path:

    # ioscan -kfNH 64000/0xfa00/0x4
    Class  I  H/W Path          Driver  S/W State  H/W Type  Description
    =====================================================================
    disk      64000/0xfa00/0x4  esdisk  CLAIMED    DEVICE    HP HSV101

SLIDE: Viewing LUNs and their lunpaths via Agile View

Question: Which lunpaths are associated with each LUN hardware path?
Answer: Execute ioscan -m lun. Optionally provide a specific LUN hardware path via the -H option. The command also reports the health status of each LUN.

    # ioscan -m lun [-H 64000/0xfa00/0x4]
    disk      64000/0xfa00/0x4  esdisk  CLAIMED  DEVICE  online  HP HSV101
        1/0/2/1/0.0x50001fe c.0x
        1/0/2/1/0.0x50001fe x
        1/0/2/1/1.0x50001fe d.0x
        1/0/2/1/1.0x50001fe x
        /dev/disk/disk30  /dev/rdisk/disk30
    disk      64000/0xfa00/0x5  esdisk  CLAIMED  DEVICE  online  HP HSV101
        1/0/2/1/0.0x50001fe c.0x
        1/0/2/1/0.0x50001fe x
        1/0/2/1/1.0x50001fe d.0x
        1/0/2/1/1.0x50001fe x
        /dev/disk/disk31  /dev/rdisk/disk31

Student Notes

The new -m lun option is specifically designed to display LUNs and lunpaths. Like the legacy ioscan command, ioscan -m lun reports each device's class, instance, hardware path, driver, software state, hardware type, and description. Between the hardware type and description fields, ioscan -m lun also reports the disk's health status: online indicates that the disk or LUN is fully functional, while limited, unusable, disabled, or offline indicate that there may be a problem. See the ioscan(1M) man page or the Monitoring LUN Health slide later in the module for details.

Below the LUN hardware path, the command reports all of the lunpath hardware addresses available to access the LUN. Below the lunpath hardware addresses, the command reports each device's device special files, too (e.g. /dev/disk/disk30). The next module discusses device special files in detail.

# ioscan -m lun
Class  I   H/W Path          Driver  SW State  H/W Type  Health  Description
====================================================================
disk   30  64000/0xfa00/0x4  esdisk  CLAIMED   DEVICE    online  HP HSV101
           1/0/2/1/0.0x50001fe c.0x
           1/0/2/1/0.0x50001fe x
           1/0/2/1/1.0x50001fe d.0x
           1/0/2/1/1.0x50001fe x
           /dev/disk/disk30  /dev/rdisk/disk30
disk   31  64000/0xfa00/0x5  esdisk  CLAIMED   DEVICE    online  HP HSV101
           1/0/2/1/0.0x50001fe c.0x
           1/0/2/1/0.0x50001fe x
           1/0/2/1/1.0x50001fe d.0x
           1/0/2/1/1.0x50001fe x
           /dev/disk/disk31  /dev/rdisk/disk31

By default, the command displays all disks and LUNs. Add the -H option to view a specific disk or LUN.

# ioscan -m lun -H 64000/0xfa00/0x4
Class  I   H/W Path          Driver  SW State  H/W Type  Health  Description
====================================================================
disk   30  64000/0xfa00/0x4  esdisk  CLAIMED   DEVICE    online  HP HSV101
           1/0/2/1/0.0x50001fe c.0x
           1/0/2/1/0.0x50001fe x
           1/0/2/1/1.0x50001fe d.0x
           1/0/2/1/1.0x50001fe x
           /dev/disk/disk30  /dev/rdisk/disk30
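Because `ioscan -m lun` groups the lunpaths and device special files beneath each LUN header row, the output is easy to post-process. A minimal sketch, assuming a simplified output shape; the sample text and its WWPN/LUN digits are invented placeholders, not a real array:

```shell
# Count lunpaths per LUN in "ioscan -m lun"-style output.
# Sample data only: addresses below are made up for illustration.
sample='disk  30  64000/0xfa00/0x4  esdisk  CLAIMED  DEVICE  online  HP HSV101
1/0/2/1/0.0x50001fe1aaaa0001.0x4001000000000000
1/0/2/1/1.0x50001fe1aaaa0002.0x4001000000000000
/dev/disk/disk30 /dev/rdisk/disk30
disk  31  64000/0xfa00/0x5  esdisk  CLAIMED  DEVICE  online  HP HSV101
1/0/2/1/0.0x50001fe1aaaa0001.0x4002000000000000
/dev/disk/disk31 /dev/rdisk/disk31'

counts=$(printf '%s\n' "$sample" | awk '
  $1 == "disk" { lun = $3; next }   # LUN header row: remember its H/W path
  /^\/dev\//   { next }             # device special file row
  lun != ""    { paths[lun]++ }     # remaining rows are lunpaths
  END { for (l in paths) printf "%s has %d lunpath(s)\n", l, paths[l] }')
printf '%s\n' "$counts"
```

A LUN that suddenly reports fewer lunpaths than expected is a good hint that one leg of the SAN fabric is down.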

SLIDE: Viewing HBAs and their lunpaths via Agile View

Viewing HBAs and their lunpaths via Agile View

Question: Which lunpaths are associated with each HBA?
Answer: Execute ioscan -m hwpath
Optionally provide a specific HBA address via the -H option

# ioscan -kfnH 1/0/2/1/0
Class    I   H/W Path                 Driver  S/W State  H/W Type   Description
===================================================================
fc       0   1/0/2/1/0                fcd     CLAIMED    INTERFACE  4Gb Dual Port FC
tgtpath  4   1/0/2/1/0.0x50001fe      estp    CLAIMED    TGT_PATH   fibre_channel target
lunpath  4   1/0/2/1/0.0x50001fe x0   eslpt   CLAIMED    LUN_PATH   LUN path for ctl8
lunpath  8   1/0/2/1/0.0x50001fe x    eslpt   CLAIMED    LUN_PATH   LUN path for disk30
lunpath  9   1/0/2/1/0.0x50001fe x    eslpt   CLAIMED    LUN_PATH   LUN path for disk31
lunpath  10  1/0/2/1/0.0x50001fe x    eslpt   CLAIMED    LUN_PATH   LUN path for disk32
(continues)

Student Notes

When troubleshooting SAN issues, it may also be helpful to know which lunpaths utilize a given HBA. Use the ioscan -kfnH command, followed by the HBA hardware address, to find out. The example below lists the lunpaths serviced by the 1/0/2/1/0 HBA. Following each lunpath, the command reports the device special file name of the disk or LUN associated with that lunpath. The next module discusses device special files in detail.

# ioscan -kfnH 1/0/2/1/0
Class    I  H/W Path                 Driver  S/W State  H/W Type   Description
===================================================================
fc       0  1/0/2/1/0                fcd     CLAIMED    INTERFACE  HP AB Gb Dual Port PCI/PCI-X Fibre Channel Adapter (FC Port 1)
tgtpath  4  1/0/2/1/0.0x50001fe      estp    CLAIMED    TGT_PATH   fibre_channel target served by fcd driver
lunpath  4  1/0/2/1/0.0x50001fe x0   eslpt   CLAIMED    LUN_PATH   LUN path for ctl8
lunpath  8  1/0/2/1/0.0x50001fe x    eslpt   CLAIMED    LUN_PATH   LUN path for disk30
lunpath  9  1/0/2/1/0.0x50001fe x    eslpt   CLAIMED    LUN_PATH   LUN path for disk31

lunpath  10  1/0/2/1/0.0x50001fe x     eslpt  CLAIMED  LUN_PATH  LUN path for disk32
tgtpath  3   1/0/2/1/0.0x50001fe c     estp   CLAIMED  TGT_PATH  fibre_channel target served by fcd driver
lunpath  3   1/0/2/1/0.0x50001fe c.0x0 eslpt  CLAIMED  LUN_PATH  LUN path for ctl8
lunpath  5   1/0/2/1/0.0x50001fe c.0x  eslpt  CLAIMED  LUN_PATH  LUN path for disk30
lunpath  6   1/0/2/1/0.0x50001fe c.0x  eslpt  CLAIMED  LUN_PATH  LUN path for disk31
lunpath  7   1/0/2/1/0.0x50001fe c.0x  eslpt  CLAIMED  LUN_PATH  LUN path for disk32

SLIDE: Viewing LUN Health via Agile View

Viewing LUN Health via Agile View

Question: Are there any failed mass storage interface cards, paths, or devices?
Answer: Execute ioscan -P health
Optionally provide a specific LUN hardware path via -H or a specific class via -C

Check the health of disks and LUNs:

# ioscan -P health [-C disk] [-H 64000/0xfa00/0x4]
Class  I   H/W Path          health
=====================================
disk   30  64000/0xfa00/0x4  online

Check the health of a fibre channel adapter and its lunpaths:

# ioscan -P health [-H 1/0/2/1/0]
Class    I  H/W Path               health
====================================================================
fc       0  1/0/2/1/0              online
tgtpath  3  1/0/2/1/0.0x50001fe    online
lunpath  1  1/0/2/1/0.0x50001fe x0 online
lunpath  6  1/0/2/1/0.0x50001fe x  online
lunpath  7  1/0/2/1/0.0x50001fe x  standby
lunpath  8  1/0/2/1/0.0x50001fe x  online
lunpath  9  1/0/2/1/0.0x50001fe x  standby
(continues)

Student Notes

Identifying failed interfaces and devices is a critical system administration task. HP-UX automatically displays messages in /var/adm/syslog/syslog.log, and sometimes on the console, when the operating system encounters hardware problems. 11i v3 administrators can proactively check the state of the system's HBAs, controllers, disks, and LUNs any time via the ioscan -P health command. The command reports one of the following health states for each HBA and mass storage component node in the I/O tree.

online    node is online and functional
offline   node has gone offline and is inaccessible
limited   node is online but performance is degraded due to some links, paths, and connections being offline
unusable  an error condition occurred which requires manual intervention (for example, authentication failure, hardware failure, and so on)

testing   node is being diagnosed
disabled  node has been disabled or suspended
standby   node is functional but not in use

The command may be executed several different ways.

Report the health status of all disks/LUNs:

# ioscan -P health -C disk

Report the health status of a specific disk/LUN, or fibre channel adapter:

# ioscan -P health -H 64000/0xfa00/0x4

Report the status of all fibre channel adapters:

# ioscan -P health -C fc

Report the health status of a specific fibre channel adapter and its lunpaths:

# ioscan -P health -H 1/0/2/1/0
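A routine check boils down to flagging any node whose health is neither online nor an expected passive state such as standby. A minimal sketch over invented sample data (the addresses and states below are placeholders; a real check would pipe in live ioscan -P health output):

```shell
# Flag I/O tree nodes whose last field (health) is neither "online"
# nor "standby". Sample listing only: paths/states are made up.
health='fc       0  1/0/2/1/0                          online
tgtpath  3  1/0/2/1/0.0x50001fe1aaaa0001           online
lunpath  6  1/0/2/1/0.0x50001fe1aaaa0001.0x4001    online
lunpath  7  1/0/2/1/0.0x50001fe1aaaa0001.0x4002    standby
lunpath  9  1/0/2/1/0.0x50001fe1aaaa0001.0x4003    offline'

suspect=$(printf '%s\n' "$health" |
          awk '$NF != "online" && $NF != "standby" { print $1, $3, $NF }')
printf '%s\n' "$suspect"
```

Empty output from the filter means every node in the sample is in an acceptable state.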

SLIDE: Viewing LUN Attributes via Agile View

Viewing LUN Attributes via Agile View

Question: What is my LUN's WWID and LUN ID?
Answer: Use scsimgr

Use a LUN hardware path to determine a disk's WWID:

# scsimgr get_attr -a wwid [all_lun] [-H 64000/0xfa00/0x4]
name = wwid
current = 0x600508b400012fd20000a
default =
saved =

Use one of the LUN's lunpath hardware addresses to determine a disk's LUN ID:

# scsimgr get_attr \
  -a lunid \
  -H 1/0/2/1/0.0x50001fe c.0x
name = lunid
current = 0x (LUN # 1, Flat Space Addressing)
default =
saved =

Student Notes

HP-UX administrators identify devices by hardware address, but SAN administrators identify LUNs by their globally unique WWID names and array administrator-assigned LUN IDs. To translate Agile View addresses into WWIDs and LUN IDs, use the scsimgr command.

Obtaining WWIDs

The first example on the slide displays LUN WWID attributes. To view all LUN WWIDs, specify the all_lun argument. Or, to view a specific LUN's WWID, include the -H option and a specific LUN hardware path.

# scsimgr get_attr -a wwid all_lun

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30
name = wwid
current = 0x600508b400012fd20000a
default =
saved =

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk31
name = wwid
current = 0x600508b400012fd
default =
saved =

# scsimgr get_attr -a wwid -H 64000/0xfa00/0x4

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30
name = wwid
current = 0x600508b400012fd20000a
default =
saved =

Obtaining LUN IDs

The second example displays a LUN's LUN ID attribute using the LUN's Agile View lunpath address. In order to view the LUN ID, you must provide a specific lunpath. Recall that you can obtain a LUN's lunpaths via the ioscan -m lun command.

# scsimgr get_attr \
  -a lunid \
  -H 1/0/2/1/0.0x50001fe c.0x
name = lunid
current = 0x (LUN # 1, Flat Space Addressing)
default =
saved =

These are just a few of the many attributes and statistics provided by the scsimgr command. See the scsimgr(1M) man page for more options.
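Since scsimgr prints attributes as `name = value` blocks, the WWID can be pulled out for scripting with a one-line awk filter. A sketch, assuming the block layout shown above; the WWID value in the sample is an invented placeholder, not a real device:

```shell
# Extract the "current" value from a scsimgr-style attribute block.
# Sample data: the WWID below is a made-up placeholder.
attrs='SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk30
name = wwid
current = 0x600508b4cafe0001
default =
saved ='

wwid=$(printf '%s\n' "$attrs" |
       awk -F' = ' '$1 == "current" { print $2; exit }')
echo "wwid: $wwid"
```

The same filter works for any attribute block (lunid, load balancing policy, and so on), because scsimgr keeps the `current =` line format consistent.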

SLIDE: Enabling and Disabling lunpaths via Agile View

Enabling and Disabling lunpaths via Agile View

Goal: Disable a lunpath in preparation for removing an interface card
Solution: Use scsimgr disable | enable

Disable a lunpath:

# scsimgr -f disable -H 1/0/2/1/0.0x50001fe c.0x
LUN path 1/0/2/1/0.0x50001fe c.0x disabled successfully

Determine lunpath status:

# ioscan -P health -H 1/0/2/1/0.0x50001fe c.0x
Class    I  H/W Path                  health
===================================================================
lunpath  5  1/0/2/1/0.0x50001fe c.0x  disabled

Reenable a lunpath:

# scsimgr enable -H 1/0/2/1/0.0x50001fe c.0x
LUN path 1/0/2/1/0.0x50001fe c.0x enabled successfully

Student Notes

By default, the new mass storage stack interleaves access requests among lunpaths to a LUN. When planning to remove an interface card, or when troubleshooting SAN connectivity issues, the administrator may choose to temporarily or permanently disable one or more lunpaths to a LUN. Use the scsimgr commands below. As long as at least one path to a LUN remains functional, the LUN should remain accessible.

Disable a lunpath (-f = force):

# scsimgr -f disable \
  -H 1/0/2/1/0.0x50001fe c.0x
LUN path 1/0/2/1/0.0x50001fe c.0x disabled successfully

Determine the lunpath's status:

# ioscan -P health \
  -H 1/0/2/1/0.0x50001fe c.0x
Class    I  H/W Path                  health
===================================================================
lunpath  5  1/0/2/1/0.0x50001fe c.0x  disabled

Reenable a lunpath:

# scsimgr enable \
  -H 1/0/2/1/0.0x50001fe c.0x
LUN path 1/0/2/1/0.0x50001fe c.0x enabled successfully
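The warning above ("as long as at least one path remains functional") can be turned into a pre-flight check: before disabling a lunpath, confirm the LUN has at least one other enabled path. A text-only sketch over invented sample data; on a live system the path list would come from `ioscan -P health` for the LUN in question:

```shell
# Refuse to disable a lunpath if it appears to be the LUN's last
# online path. Paths below are made-up samples, not a real fabric.
target='1/0/2/1/0.0x50001fe1aaaa0001.0x4001'
paths='1/0/2/1/0.0x50001fe1aaaa0001.0x4001 online
1/0/2/1/1.0x50001fe1aaaa0002.0x4001 online'

others=$(printf '%s\n' "$paths" |
         awk -v t="$target" '$1 != t && $2 == "online" { n++ } END { print n + 0 }')
if [ "$others" -gt 0 ]; then
  verdict="safe to disable $target ($others other online path(s))"
else
  verdict="refusing: $target looks like the last online path"
fi
echo "$verdict"
```

Only after the check passes would the real `scsimgr -f disable -H <lunpath>` be issued.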

SLIDE: Part 4: Slot Addressing

Configuring Hardware: Part 4: Slot Addressing

Student Notes

SLIDE: Slot Address Overview

Student Notes

The previous section of this chapter discussed HP-UX hardware addresses. HP-UX hardware addresses are useful for viewing and managing peripheral devices, LUNs, and disks. However, the SBA/LBA/device/function components of an HP-UX hardware address don't provide sufficient information to determine where an interface card is physically located on a server. When replacing a failed interface card on a Superdome server with 192 expansion slots, identifying the right slot can be challenging!

Some entry-class servers and all mid-range and high-end HP-UX servers now enable administrators to map an interface card's HP-UX hardware address to a more meaningful HP-UX slot address that identifies the card's physical cabinet, bay, chassis, and slot address.

SLIDE: Slot Address Components

Slot Address Components

[Diagram: front and rear views of a Superdome cabinet (#0), showing blowers, cell boards, I/O bay 0 (front) and I/O bay 1 (rear), chassis 1 and chassis 3 in each bay (slots 0-11 each), the backplane power utility subsystem, power supplies, and power and cabling.]

Slot Address Format: Cabinet-Bay-Chassis-Slot
Slot Address Example: 0-1-3-4
Slot Address Explanation: Cabinet 0, Bay 1, Chassis 3, Slot 4

Student Notes

The slot address consists of four components, which identify an interface card's exact location on a system.

The first portion of the slot address identifies a slot's cabinet number. As shown on the slide, a Superdome complex may have one or two system cabinets, and two additional I/O expansion (IOX) cabinets. Superdome interface card slots in the first cabinet will have a 0 in the cabinet portion of the slot address. Superdome interface card slots in the second cabinet will have a 1 in the cabinet portion of the slot address. Interface cards in the expansion cabinets will have an 8 or 9 in the first portion of the slot address. On non-Superdome systems, the cabinet number will always be 0.

The second component of the slot address identifies the slot's I/O bay. Each Superdome cabinet has two I/O bays. I/O bay 0 is located on the front of the cabinet, and I/O bay 1 is located in the rear of the cabinet. Each Superdome IOX cabinet can have three vertically stacked I/O bays, numbered 1 to 3 from bottom to top. Additional space in the IOX can be used to install peripheral devices. On non-Superdome systems, the I/O bay number will always be 0. The diagram below shows the location of the I/O bays on the front and back of a Superdome cabinet.

The third component of the slot address identifies the slot's I/O chassis number. Each I/O bay contains up to two I/O chassis. On Superdome systems, the I/O chassis are physically distinct components; the chassis on the left is I/O chassis 1 and the I/O chassis on the right is I/O chassis 3. On the rp7xxx, rx7xxx, rp8xxx, and rx8xxx servers, there are two logical I/O chassis, numbered 0 and 1, but they are located in a single physical card cage.

The fourth component of the slot address identifies the slot number. Each Superdome I/O chassis has twelve slots, numbered 0-11. On rp7xxx, rx7xxx, rp8xxx, and rx8xxx servers, each I/O chassis has eight slots.
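The four components above can be peeled apart mechanically. A minimal sketch using the slide's example (cabinet 0, bay 1, chassis 3, slot 4) in the dashed Cabinet-Bay-Chassis-Slot form; the dash separator is the format described on the slide:

```shell
# Split a Cabinet-Bay-Chassis-Slot address into its four components.
slot_addr='0-1-3-4'

# Use "-" as the field separator just for this read.
IFS=- read -r cabinet bay chassis slot <<EOF
$slot_addr
EOF

echo "cabinet=$cabinet bay=$bay chassis=$chassis slot=$slot"
```

Reading the components back left to right mirrors how you would physically walk to the card: find the cabinet, then the bay, then the chassis, then the slot.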

SLIDE: Viewing Slot Addresses

Viewing Slot Addresses

Use rad (11i v1) or olrad (11i v2 and v3) to view slot addresses and properties.

# olrad -q
                                                 Driver(s) Capable
Slot  Path    Bus  Max  Spd  Pwr  Occu  Susp  OLAR  OLD  Max Mode  Mode
              Num  Spd
      /0/8/             Off  No   N/A   N/A   N/A   PCI-X  PCI-X
      /0/10/            Off  No   N/A   N/A   N/A   PCI-X  PCI-X
      /0/12/            Off  No   N/A   N/A   N/A   PCI-X  PCI-X
      /0/14/            On   Yes  No    Yes   Yes   PCI-X  PCI
      /0/6/             On   Yes  No    Yes   Yes   PCI-X  PCI
      /0/4/             On   Yes  No    Yes   Yes   PCI-X  PCI-X
      /0/2/             On   Yes  No    Yes   Yes   PCI-X  PCI-X
      /0/1/             On   Yes  No    Yes   Yes   PCI-X  PCI-X

# rad -q
Slot  Path    Bus  Speed  Power  Occupied  Suspended  Capable
      /0/8/          On     Yes       No         Yes
      /0/10/         On     Yes       No         Yes
      /0/12/         On     Yes       No         Yes

Student Notes

You can view slot addresses and correlate those addresses to HP-UX hardware addresses via the rad command (11i v1) or olrad command (11i v2 and v3). These commands are only available on servers that support slot addressing. For each slot, the commands report the following columns:

Slot     The Slot column reports the slot's slot address.
Path     The Path column reports the slot's corresponding HP-UX hardware path.
Bus Num  The Bus Num column reports the slot's bus number.
Max Spd  The Max Spd column reports the maximum speed (MHz) supported by the slot.
Spd      The Spd column reports the maximum speed (MHz) supported by the card currently in the slot.

Pwr       The Pwr column reports the power status of the slot. Slots can be powered up/down to facilitate I/O card replacement. This functionality will be described in detail later in the chapter.
Occu      Is the slot currently occupied by an interface card?
Susp      The Susp column identifies slots in the suspended state. A card must be suspended before it can be replaced.
OLAR      The OLAR column (olrad) or Capable column (rad) indicates if the slot supports HP's OL* online card addition and replacement functionality.
OLD       The OLD column (olrad) indicates if the slot supports HP's OL* online card delete functionality. This feature is new in 11i v3.
Max Mode  The Max Mode column distinguishes PCI-X slots from PCI slots.
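Using the column definitions above, olrad -q output can be filtered for slots that are candidates for online operations. A sketch over an invented two-row sample (the slot numbers and column order here are assumptions based on the Slot, Path, Bus Num, Max Spd, Spd, Pwr, Occu, Susp, OLAR, OLD ordering described above, not captured output):

```shell
# List slots that are powered on (Pwr), occupied (Occu), and
# OLAR-capable. Sample rows are invented for illustration.
olrad_q='0-0-0-1  0/0/8/0/0   8   133  66  On   Yes  No   Yes  Yes
0-0-0-2  0/0/10/0/0  10  133  66  Off  No   N/A  N/A  N/A'

eligible=$(printf '%s\n' "$olrad_q" |
           awk '$6 == "On" && $7 == "Yes" && $9 == "Yes" { print $1, "->", $2 }')
echo "$eligible"
```

The slot-to-path mapping printed here is exactly the correlation the Student Notes describe: a slot address you can walk to, paired with the HP-UX hardware path you manage in software.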

SLIDE: Part 6: Managing Cards and Devices

Configuring Hardware: Part 6: Managing Cards and Devices

Student Notes

SLIDE: Installing Interface Cards w/out OL* (11i v1, v2, v3)

Installing Interface Cards w/out OL* (11i v1, v2, v3)

In order to add, replace, or remove non-OL* interface cards, shut down and power off the system, then add/replace/remove the card.

Installing a new interface card without OL*:
Verify card compatibility
Verify that the required driver is configured in the kernel
Properly shut down and power off the system
Install the interface card
Power up
Run ioscan to verify that the card is recognized

Student Notes

The procedures on this and the following two slides describe how to properly install additional interface cards. On some entry-class servers, you must shut down and power off your system as described below in order to add, remove, or replace an interface card. On servers that support HP's Online Addition and Replacement (OL*) functionality, you can shut down a single card slot without shutting down your entire server. The OL* process is described over the next couple of slides.

Installing a new interface card without OL* is a multi-step process:

1. Verify card compatibility. HP servers support a variety of I/O expansion buses. Make sure your new interface card matches the card slot that you intend to use. See your owner's manual or server QuickSpecs to determine which card types can be used in each card slot.

2. Verify that the required driver is configured in the kernel. Check your interface card documentation to determine what driver is required, then use sam or kcweb to determine if the required driver is configured in your kernel. A later chapter in this course discusses kernel configuration in detail.

3. Use the shutdown command to properly shut down the system. When you see a message indicating that it is safe to power off, either press the power button, or use the Management Processor power control (pc) command to power off your system.

# shutdown -hy 0

4. Install the interface card. Static discharge can easily damage interface cards. Be sure to follow the anti-static guidelines that come with your interface card.

5. Power up the system.

6. During the system boot process, the kernel should scan the system for new interface cards and devices. Use ioscan -kfn to verify that the system recognized the new interface card. Does your new device appear in the device list? Is the new device CLAIMED? If not, it may be necessary to add a new driver to the kernel. See our kernel configuration chapter later in the course!

WARNING: Always check your support agreement before opening the cabinet of any HP system. Attempting to service hardware components without the assistance of an HP engineer may invalidate your warranty or support agreement.

SLIDE: Installing Interface Cards with OL* (11i v1)

Installing Interface Cards with OL* (11i v1)

HP's OL* technology makes it possible to add and replace interface cards without rebooting.

Installing a new interface card with OL* in 11i v1:
Verify card compatibility
Verify that the required driver is configured in the kernel
Go to the SAM "Peripheral Devices -> Cards" screen
Select an empty slot from the object list
Select "Actions -> Light Slot LED" to identify the card slot
Select "Actions -> Add" to analyze the slot
Insert the card as directed
Wait for SAM to power on, bind, and configure the card
Check ioscan to verify that the card is recognized

Student Notes

Prior to HP-UX 11i, adding or removing an interface card always required a system reboot, as described in the process on the preceding slide. HP-UX 11i v1 introduced a new technology called "Interface Card Online Addition and Replacement" (OL*), which provides the ability to add and replace PCI interface cards without a system reboot. HP-UX 11i v3 enables the administrator to permanently remove interface cards without rebooting, too. The OL* functionality is currently only supported by selected interface cards, on selected servers running HP-UX 11i v1, v2, and v3. See your system hardware manual to determine if your server supports this functionality. The notes below describe the OL* process on 11i v1. The next slide describes the 11i v2 and v3 OL* process.

Installing a New Interface Card with OL*

Installing an interface card using the new OL* functionality is a multi-step process that may be performed from the command line via the /usr/bin/rad command, or by using the SAM GUI/TUI. HP strongly recommends using SAM for OL* administration. The procedure for adding an OL* interface card via SAM is described below. If you prefer to work from the

command line, see chapter 2 in HP's Configuring Peripherals for HP-UX manual.

1. Verify card compatibility. Check the documentation accompanying your interface card to verify that the card is OL* compatible. Check your system owner's manual for details.

2. Verify that the required driver is configured in the kernel. Without the proper driver configured, you may be able to physically install the card, but the card will be unusable. Check your interface card documentation to determine what driver is required, then use the sam -> Kernel Configuration -> Drivers screen to determine if the required driver is configured in your kernel. A later chapter will describe the process required to add a new driver.

3. Go to the sam -> Peripheral Devices -> Cards screen. This screen lists all of the interface cards installed on your system, and includes several items in the Actions menu for managing OL* interface cards.

Cards
File View Options Actions Help
I/O Cards                                      2 of 23 selected
Slot  Hardware Path  Driver  Slot State     Power  Description
-     /0/2/1         c720    not OLAR-able  -      SCSI C87x Fast Wid
-     0/0/4/0        func0   not OLAR-able  -      PCI BaseSystem (10
-     0/0/4/1        asio0   not OLAR-able  -      Service Processor
5     0/2/0          -       -              on     empty slot
6     0/5/0          -       -              on     empty slot
7     0/1/0          -       -              on     empty slot
8     0/3/0          -       -              on     empty slot
9     0/9/0/0        c8xx    active         on     SCSI C1010 Ultra W
9     0/9/0/1        c8xx    active         on     SCSI C1010 Ultra W
10    0/8/0/0        c8xx    active         on     SCSI C1010 Ultra W
10    0/8/0/1        c8xx    active         on     SCSI C1010 Ultra W

4. Select an empty slot from the object list. Slots that are available for use by new interface cards should be marked either "empty slot" or "unclaimed card" in the Description field.
Select one of these slots.

Cards
File View Options Actions Help
I/O Cards                                      2 of 23 selected
Slot  Hardware Path  Driver  Slot State     Power  Description
-     /0/2/1         c720    not OLAR-able  -      SCSI C87x Fast Wid
-     0/0/4/0        func0   not OLAR-able  -      PCI BaseSystem (10
-     0/0/4/1        asio0   not OLAR-able  -      Service Processor
5     0/2/0          -       -              on     empty slot
6     0/5/0          -       -              on     empty slot
7     0/1/0          -       -              on     empty slot
8     0/3/0          -       -              on     empty slot
9     0/9/0/0        c8xx    active         on     SCSI C1010 Ultra W

245 Module 5 Configuring Hardware 9 0/9/0/1 c8xx active on SCSI C1010 Ultra W 10 0/8/0/0 c8xx active on SCSI C1010 Ultra W 10 0/8/0/1 c8xx active on SCSI C1010 Ultra W Select Actions -> Light Slot LED to identify the card slot on the system backplane. This should light an LED on the selected PCI slot to indicate which slot should be used for the new interface card. 6. Select Actions -> Add to analyze the slot. In order to insert the new interface card, the selected PCI slot must be powered down. On some servers, multiple slots may share a common "power domain". Slots within a power domain are powered on or off as a unit. Powering off the power domain containing the interface card for the system boot disk or other critical system resources could be disastrous! SAM automatically analyzes the selected slot's power domain to ensure that it is safe to temporarily disable the power domain while the new card is being added. 7. Insert the card as directed by SAM. 8. SAM will power-on the card, identify and "bind" an appropriate kernel driver, and run a post addition script to finish configuring the card, if necessary. 9. Check ioscan to verify that the card is recognized. Does the card appear in the hardware list? Is it CLAIMED? Other OL* Possibilities OL* also makes it possible to replace interface cards without rebooting. Simply select the Actions -> Replace menu item rather than Actions -> Add. There are some restrictions, however. Generally speaking, the replacement card must be identical to the original card. For Further Study See the Configuring Peripherals for HP-UX" manual on for more information regarding OL* procedures. WARNING: Be sure to check your support agreement before opening the cabinet of an HP system. Attempting to service hardware components without the assistance of an HP engineer may invalidate your warranty or support agreement

246 Module 5 Configuring Hardware SLIDE: Installing Interface Cards with OL* (11i v2, v3) Installing Interface Cards w/ OL* (11i v2, v3) HP s mid- and high-end servers OLAR technology make it possible to add, replace, and remove interface cards without rebooting Installing a new interface card with OLAR: Verify card compatibility Verify that the required driver is configured in the kernel Go to the SMH "Peripheral Device Tool -> OLRAD Cards" screen Select an empty slot Click Turn On/Off Slot LED Click Add Card Online Click Run Critical Resource Analysis Click Power Off to power-off the slot Insert the new card Click Bring Card Online Check ioscan to verify that the card is recognized Student Notes Installing an OL* capable interface card in 11i v2 and v3 is a multi-step process that may be performed from the command line via the /usr/bin/olrad CLI utility or the SMH GUI/TUI interfaces. The procedure for adding an OL* interface card via the SMH is described below. 1. Verify card compatibility. Check the documentation accompanying your interface card to verify that the card is OL* compatible. Check your system's hardware manual for details. 2. Verify that the required driver is configured in the kernel. Without the proper driver configured, you may be able to physically install the card, but the card will be unusable. Check your interface card documentation to determine what driver is required. A later chapter will describe the process required to view and add drivers. 3. Launch the SMH and access the OLRAD Cards tab on the Peripheral Device Tool. To learn more about enabling and launching the SMH see the SMH chapter elsewhere in this course. Login using the root username and password. # firefox Launch the GUI # smh Launch the TUI

247 Module 5 Configuring Hardware Select an empty slot from the object list. Slots that are available for use by new interface cards should report no in the Occupied column. 4. Click Turn On/Off Slot LED. Check the slot LEDs on the backplane of your system to verify that you selected the right slot. 5. Click Add Card Online. This should display a dialog box similar to the following: 6. Click Run CRA (Critical Resource Analysis) to analyze the slot. In order to insert the new interface card, the selected PCI slot must be powered down. On some servers, multiple slots may share a common OL* "power domain". Slots within a power domain are powered on or off as a unit. Powering off the power domain containing the interface card for the system boot disk or other critical system resources could be disastrous! pdweb automatically analyzes the selected slot's power domain to ensure that it is safe to temporarily disable the power domain while the new card is being added. A CRA may report several different outcomes:

248 Module 5 Configuring Hardware System Critical Impacts: Performing an OL* operation on the selected slot will likely impact /, /stand, /usr, or /etc file systems, or a swap device. Proceeding with the OL* operation may crash or significantly degrade system performance. Data Critical Impacts: Other Impacts: Success: Performing an OL* operation on the selected slot will likely impact one or more locally mounted file systems, open device files, or non-suspended network interface cards. Proceeding with the OL* operation may cause data corruption. Loss of a CDFS file system will not trigger a Data Critical CRA warning. Performing an OL* operation on the selected slot may impact unused logical volumes, CDFS file systems, cards protected by high-availability resources, networking cards that are suspended, or one path to a multi-pathed logical volume. Performing an OLR operation on the selected slot won t impact any currently used resources. Error: An internal olrad error occurred. 7. If the CRA succeeds, you should see a screen like this: 8. Insert the new card. Ensure that the card slot latch is closed firmly. 9. Click Bring Card Online

249 Module 5 Configuring Hardware 10. Finally, check ioscan -f to verify that the card is recognized. Does the card appear in the hardware list? Is it CLAIMED? Replacing an Interface Card OL* also makes it possible to replace interface cards without rebooting. Simply select an OL* capable card from the SMH OLRAD Cards tab, and select the Replace Card Online link on the main menu and follow the prompts in the dialog boxes that follow. There are some restrictions, however. When replacing an interface card online, you must use an identical replacement card. This is referred to as like-for-like replacement. Using a similar but not identical card can cause unpredictable results. For example, a newer version of the target card with identical hardware may use a newer firmware version that may conflict with the current driver. If a new card is not acceptable, the system will report that the card cannot be resumed, and olrad/pdweb will return an error. During the replacement process, the driver instance for each port on the target card runs in a suspended state. I/O to the ports is either queued or failed while the drivers are suspended. When the replacement card comes online, the driver instances resume normal operation. Each driver instance must be capable of resuming and controlling the corresponding port on the replacement card. The PCI specification enables a single physical card to contain more than one port. Attempting to replace a card with another card that has more ports can result in the additional ports being claimed by other drivers if an ioscan occurs when slot power is on. Removing a Card In 11i v3, OL* also enables the administrator to permanently remove an interface card without rebooting. Ensure that the card isn t currently being used, then select the card in the SMH interface and select the Delete Card Online link on the main menu. During the deletion process, the driver instance for each port on the target card is suspended. 
I/O to the ports is either queued or failed while the drivers are suspended. When the card is removed, the driver instances are deleted.

For Further Study

See the Interface Card OL* Support Guide for more information regarding OL* procedures.

WARNING: Be sure to check your support agreement before opening the cabinet of any HP system. Attempting to service hardware components without the assistance of an HP engineer may invalidate your warranty or support agreement.

SLIDE: Installing New Devices (11i v1, v2, v3)

Installing New Devices (11i v1, v2, v3)

LUNs and hot-pluggable devices can be added to a system without rebooting.
Non-hot-pluggable devices require a system reboot.

Configuring a new LUN or hot-pluggable device:
Verify device compatibility
Verify that the required driver is configured in the kernel
Connect or configure the device
Run ioscan to add the device to the kernel I/O tree (not necessary in 11i v3)
Run insf to create device files (not necessary in 11i v3)
Run ioscan -kfn or ioscan -kfnN to verify the configuration

Configuring a new non-hot-pluggable device:
Verify device compatibility
Verify that the required driver is configured in the kernel
Shut down and power off the system
Connect the device
Power on and boot the system
Run ioscan -kfn or ioscan -kfnN to verify the configuration

Student Notes

After installing a new interface card, you may choose to attach new devices, too. The procedures below explain the process to install both hot-pluggable and non-hot-pluggable devices.

Configuring a New LUN or Hot-Pluggable Device

Many devices today, such as the media bays on most of the current servers, are hot-pluggable. Hot-pluggable devices can be installed or removed without shutting down the system. If you create or remove LUNs on a Storage Area Network, this same procedure may be used to force HP-UX to recognize the new configuration.

1. Verify device compatibility. Check HP's website or call the response center to verify that the device is supported on your system.

2. Verify that the required driver is configured in the kernel via sam (11i v1) or kcweb (11i v2 and v3).

3. Connect or configure the device.

4. Run ioscan to add the device to the kernel I/O tree. Don't include the -u or -k options; in order to recognize the new device, ioscan must scan the buses rather than simply report the devices already recorded in the I/O tree.

5. Run insf to create device files. Device files allow users and applications to access peripheral devices. Device files are discussed in detail in the next chapter.

NOTE: Even if a card is hot-pluggable, you must shut down any daemons using the device before you remove the device.

Configuring a New Non-Hot-Pluggable Device

In order to install some devices, a reboot may be required.

1. Verify device compatibility. Check HP's website or call the response center to verify that the device is supported on your system.

2. Verify that the required driver is configured in the kernel via sam (11i v1) or kcweb (11i v2 and v3).

3. Shut down and power off the system.

4. Connect the device.

5. Power on and boot the system.

6. Run ioscan to verify auto-configuration. Verify that the device appears in the ioscan output and is CLAIMED.

Additional Configuration

Some additional configuration may be required after physically connecting a new device. Terminals and modems may require new device files. Disks may need to be partitioned before they may be used. The next couple of chapters will discuss these additional configuration tasks in detail.
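One convenient way to confirm that the scan-and-verify steps above actually found something new is to capture the hardware paths before and after the rescan and compare the two lists. A sketch over invented sample paths; on a live system each list would be extracted from ioscan output (for example, the H/W Path column of a disk-class listing):

```shell
# Diff hardware-path lists captured before and after a bus scan.
# The two lists below are made-up samples, not captured output.
before='64000/0xfa00/0x4
64000/0xfa00/0x5'
after='64000/0xfa00/0x4
64000/0xfa00/0x5
64000/0xfa00/0x6'

# Lines that appear exactly once across both lists are the new ones,
# since the "before" set is a subset of the "after" set.
new=$(printf '%s\n' "$before" "$after" | sort | uniq -u)
echo "newly discovered: $new"
```

If the comparison prints nothing, the rescan found no new devices, which usually points back to cabling, array presentation, or a missing driver.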

LAB: Exploring the System Hardware

Directions

The ioscan command is a powerful tool for exploring your system's hardware configuration.

Part 1: Exploring your System Configuration

You may do this part of the lab after your instructor completes lecture Part 2. Your goal in this part of the lab is to explore your assigned lab system's configuration. Carefully record the commands you use to obtain the information requested below.

1. Login as root on your assigned server.
2. Execute the model command to determine your system's model string. Consult the table of HP server types earlier in the chapter to determine whether your system is an entry class, blade, mid-range, or high-end server.
3. Execute machinfo to determine your system's processor type and speed. Some older PA-RISC systems do not support machinfo. If your system generates an error message, skip this question.
4. Execute machinfo to determine the amount of physical memory on your system. Some older PA-RISC systems do not support machinfo. If your system generates an error message, you can determine the amount of physical memory by executing dmesg | grep -i physical.

5. Execute ioscan -C cell to determine how many (if any) cell boards you have on your system.
6. Execute ioscan -C processor to determine how many processor cores you have on your system.
7. Execute ioscan -C lan to determine how many LAN interfaces you have on your system.
8. Execute ioscan -C disk to determine how many disk class devices you have on your lab system.
9. DVDs and CDROMs are disk class devices, too. Execute ioscan -C disk and look in the Description column for the string DVD or DV to determine if you have a DVD drive.
10. Are there any parallel SCSI buses on your system? Execute ioscan -C ext_bus to view external bus type components. Look in the Description column for the string SCSI.
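Counting devices in a class amounts to counting ioscan output lines, and finding a DVD amounts to grepping the Description column. A hedged sketch against invented sample lines (real answers come from piping live `ioscan -C disk` output through the same filters):

```shell
# Invented sample of "ioscan -C disk" style lines: class, instance, path, description.
disk_sample='disk  0  0/0/2/0.0.0.0  HP DVD-ROM
disk  1  0/1/1/0.1.0.1  HP HSV101
disk  2  0/1/1/0.1.0.2  HP HSV101'

disks=$(printf '%s\n' "$disk_sample" | grep -c '^disk')   # one line per disk device
dvds=$(printf '%s\n' "$disk_sample" | grep -c 'DVD')      # DVD in the Description column
echo "disk class devices: $disks"
echo "DVD drives: $dvds"
```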

Part 2: Legacy and Agile View Hardware Addressing

You may do this part of the lab after your instructor completes lecture Part 3.

1. Execute the ioscan command to view your system configuration. Try the command with each of the options listed below. View the results and explain the significance of each option. Check the man page if you need to.
# ioscan
# ioscan -f
# ioscan -N
# ioscan -k
# ioscan -kfn
2. Does your system have any SCSI ext_buses? If so, can you determine their hardware paths?
# ioscan -kfnC ext_bus
3. Skip this question if your system does not have SCSI buses. If your system does have one or more SCSI buses, how many devices are on the first bus? Execute the command below to find out. Replace the hardware path below with the first SCSI bus hardware path you discovered in the previous step.
# ioscan -kfnH n/n/n/n
4. Skip this question if your system does not have any SCSI buses. If you add a new device to the SCSI bus you explored in the previous step, which SCSI target addresses have already been claimed by existing devices on the bus?
5. 11i v3's new mass storage stack introduced some helpful new tools for managing disks and LUNs, particularly on systems with multi-pathed devices. Execute ioscan -m lun
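For question 4, the target address is the digits between "t" and "d" in each legacy DSF name, so the claimed targets on a bus can be extracted mechanically. A hedged sketch; the DSF names below are invented examples for controller instance 5:

```shell
# Invented legacy DSF names on controller instance c5.
dsfs='/dev/dsk/c5t0d0
/dev/dsk/c5t2d0
/dev/dsk/c5t2d1
/dev/dsk/c5t6d0'

# Pull out the digits between "t" and "d", then sort/deduplicate numerically.
targets=$(printf '%s\n' "$dsfs" | sed -n 's:.*/c5t\([0-9]*\)d.*:\1:p' | sort -un)
echo "claimed targets:" $targets
```

On a live system you would feed `ls /dev/dsk/c5t*` through the same sed filter.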

to determine which disks (if any) on your system are multi-pathed. If so, how many paths lead to each disk/LUN?
# ioscan -m lun
6. Choose a disk or LUN from the ioscan -m lun output above and record its LUN hardware path and one of its lunpath hardware addresses below. If your system has multi-pathed LUNs, use one of the multi-pathed LUNs.
LUN hardware path:
lunpath hardware path:
Conceptually, what is the difference between a LUN hardware address and a lunpath hardware address?
7. Recall that ioscan -m lun also reports each LUN's health status. Are any of your LUNs currently disabled?
# ioscan -m lun
8. When troubleshooting SAN problems, your storage administrators may ask you to determine a LUN's WWID. Execute the command below to determine the WWID of the disk or LUN you selected in the previous question.
# scsimgr get_attr -a wwid -H 64000/0xfa00/0x

9. You may also be asked to determine a LUN's LUN ID. Use the lunpath hardware address that you selected previously to determine the LUN's LUN ID.
# scsimgr get_attr -a lunid -H

Part 3: HP-UX Slot Addressing

You may do this part of the lab after your instructor completes lecture Part 4. Since some lab systems may not support OL* functionality, use the sample olrad -q output below to answer the OL* questions that follow. This abbreviated output came from a Superdome server. A message in the /var/adm/syslog/syslog.log file suggests the interface card at hardware address 2/0/3/1 may need to be replaced. Your goal in the questions below is to determine where the problem card is physically located in the Superdome complex.

# olrad -q
Slot   Path     Bus  Max Spd  Pwr  Occu  Susp  OLAR  OLD  Max Mode
       2/0/1/1                Off  No    N/A   N/A   N/A  PCI-X  PCI-X
       2/0/2/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X
       2/0/3/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X
       2/0/4/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X

1. Which slot address corresponds to hardware path 2/0/3/1?
2. If the complex has two cabinets, which cabinet is the card in?
3. Is the card's I/O bay in the front or back of the cabinet?
4. Within the I/O bay, is the card's I/O chassis on the left or right?
5. Which slot is the card in?

6. Based on the output above, is it safe to remove the card from the chassis now?
7. Execute the olrad -q command. Does your lab system support OL* functionality?
# olrad -q

Part 4: (Optional) Viewing Peripheral Devices via the SMH

You may do this part of the lab after your instructor completes lecture Part 5. If time permits, explore the Peripheral Devices functional area in the SMH. In the HP Virtual Lab, use the SMH button that is available from the reservation window to open an SMH browser. From the Home Page, click "System Configuration." From the System Configuration window, click "Peripheral Devices." A similar Peripheral Devices functional area exists in SAM in earlier versions of HP-UX.

Part 6: (Optional) Exploring QuickSpecs

You may do this part of the lab after your instructor completes lecture Part 2. External web sites are not available from within the HP Virtual Lab, and may not be accessible from all classrooms. Skip this part of the lab if you do not have public Internet access.

1. If time permits, visit the website. Go to the Integrity servers page on the website. See if you can find the QuickSpecs page for one or two server models. If you need help finding the QuickSpecs, ask your instructor.
2. Also have a look at some of the hardware documentation available at

5-62. LAB SOLUTIONS: Exploring the System Hardware

Directions

The ioscan command is a powerful tool for exploring your system's hardware configuration.

Part 1: Exploring your System Configuration

You may do this part of the lab after your instructor completes lecture Part 2. Your goal in this part of the lab is to explore your assigned lab system's configuration. Carefully record the commands you use to obtain the information requested below.

1. Login as root on your assigned server.
2. Execute the model command to determine your system's model string. Consult the table of HP server types earlier in the chapter to determine whether your system is an entry class, blade, mid-range, or high-end server.
Answer: # model
3. Execute machinfo to determine your system's processor type and speed. Some older PA-RISC systems do not support machinfo. If your system generates an error message, skip this question.
Answer: # machinfo
4. Execute machinfo to determine the amount of physical memory on your system. Some older PA-RISC systems do not support machinfo. If your system generates an error message, you can determine the amount of physical memory by executing dmesg | grep -i physical.
Answer: # machinfo
5. Execute ioscan -C cell to determine how many (if any) cell boards you have on your system.
Answer: # ioscan -C cell
6. Execute ioscan -C processor to determine how many processor cores you have on your system.

Answer: # ioscan -C processor
7. Execute ioscan -C lan to determine how many LAN interfaces you have on your system.
Answer: # ioscan -C lan
8. Execute ioscan -C disk to determine how many disk class devices you have on your lab system.
Answer: # ioscan -C disk
9. DVDs and CDROMs are disk class devices, too. Execute ioscan -C disk and look in the Description column for the string DVD or DV to determine if you have a DVD drive.
Answer: # ioscan -C disk
10. Are there any parallel SCSI buses on your system? Execute ioscan -C ext_bus to view external bus type components. Look in the Description column for the string SCSI.
Answer: # ioscan -C ext_bus

Part 2: Legacy and Agile View Hardware Addressing

You may do this part of the lab after your instructor completes lecture Part 3.

1. Execute the ioscan command to view your system configuration. Try the command with each of the options listed below. View the results and explain the significance of each option. Check the man page if you need to.
# ioscan
# ioscan -f
# ioscan -N
# ioscan -k
# ioscan -kfn
Answer: When executed without any options, ioscan scans the buses and reports each hardware component's legacy hardware path, class, and description. The -f option adds several additional columns to the output, including the driver name, instance number, S/W State, and H/W Type. The -N option displays Agile View hardware addresses rather than legacy hardware addresses. The -k option displays cached information. On a large system, ioscan executes significantly faster with the -k option than it does without it. The last example combines the last three options to display a full listing of hardware paths, with device file names, using kernel cached information. This is one of the most popular permutations of the ioscan command.
2. Does your system have any SCSI ext_buses? If so, can you determine their hardware paths?
# ioscan -kfnC ext_bus
Answer: Answers will vary.
3. Skip this question if your system does not have SCSI buses. If your system does have one or more SCSI buses, how many devices are on the first bus? Execute the command below to find out. Replace the hardware path below with the first SCSI bus hardware path you discovered in the previous step.
# ioscan -kfnH n/n/n/n
Answer: Answers will vary.

4. Skip this question if your system does not have any SCSI buses. If you add a new device to the SCSI bus you explored in the previous step, which SCSI target addresses have already been claimed by existing devices on the bus?
Answer: Look at the second-to-last component in each SCSI device address to determine which target addresses are already taken. There must not be duplicate SCSI target addresses on a SCSI bus.
5. 11i v3's new mass storage stack introduced some helpful new tools for managing disks and LUNs, particularly on systems with multi-pathed devices. Execute ioscan -m lun to determine which disks (if any) on your system are multi-pathed. If so, how many paths lead to each disk/LUN?
# ioscan -m lun
Answer: If ioscan lists multiple lunpaths below an Agile View LUN hardware path, the LUN is multi-pathed.
6. Choose a disk or LUN from the ioscan -m lun output above and record its LUN hardware path and one of its lunpath hardware addresses below. If your system has multi-pathed LUNs, use one of the multi-pathed LUNs.
LUN hardware path:
lunpath hardware path:
Conceptually, what is the difference between a LUN hardware address and a lunpath hardware address?
Answer: A LUN hardware path represents a disk or LUN. A lunpath hardware address represents a single path to a disk or LUN. Each LUN has one LUN hardware path, but may have multiple lunpath hardware addresses.
7. Recall that ioscan -m lun also reports each LUN's health status. Are any of your LUNs currently disabled?
# ioscan -m lun
Answer: All of the LUNs should be online.
8. When troubleshooting SAN problems, your storage administrators may ask you to determine a LUN's WWID. Execute the command below to determine the WWID of the disk or LUN you selected in the previous question.

# scsimgr get_attr -a wwid -H 64000/0xfa00/0x
Answer: Answers may vary.
9. You may also be asked to determine a LUN's LUN ID. Use the lunpath hardware address that you selected previously to determine the LUN's LUN ID.
# scsimgr get_attr -a lunid -H
Answer: Answers may vary.

Part 3: HP-UX Slot Addressing

You may do this part of the lab after your instructor completes lecture Part 4. Since some lab systems may not support OL* functionality, use the sample olrad -q output below to answer the OL* questions that follow. This abbreviated output came from a Superdome server. A message in the /var/adm/syslog/syslog.log file suggests the interface card at hardware address 2/0/3/1 may need to be replaced. Your goal in the questions below is to determine where the problem card is physically located in the Superdome complex.

# olrad -q
Slot   Path     Bus  Max Spd  Pwr  Occu  Susp  OLAR  OLD  Max Mode
       2/0/1/1                Off  No    N/A   N/A   N/A  PCI-X  PCI-X
       2/0/2/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X
       2/0/3/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X
       2/0/4/1                On   Yes   No    Yes   Yes  PCI-X  PCI-X

1. Which slot address corresponds to hardware path 2/0/3/1?
Answer: Slot address
2. If the complex has two cabinets, which cabinet is the card in?
Answer: Cabinet 0, which is usually on the left when facing the front of the complex.
3. Is the card's I/O bay in the front or back of the cabinet?
Answer: Bay 1, which is on the rear side of the cabinet.
4. Within the I/O bay, is the card's I/O chassis on the left or right?
Answer: Chassis 3, which is on the right side of the I/O bay.
5. Which slot is the card in?
Answer: Slot
6. Based on the output above, is it safe to remove the card from the chassis now?

Answer: Since the card is powered on and is not suspended, you should not remove the card.
7. Execute the olrad -q command. Does your lab system support OL* functionality?
# olrad -q
Answer: If you get a message reporting "Capability not implemented; Could not obtain information of all slots", your server doesn't support OL*. If you get a list of card slots, your server does support OL*.

Part 4: (Optional) Viewing Peripheral Devices via the SMH

You may do this part of the lab after your instructor completes lecture Part 5. If time permits, explore the Peripheral Devices functional area in the SMH. In the HP Virtual Lab, use the SMH button that is available from the reservation window to open an SMH browser. From the Home Page, click "System Configuration." From the System Configuration window, click "Peripheral Devices." A similar Peripheral Devices functional area exists in SAM in earlier versions of HP-UX.

Part 6: (Optional) Exploring QuickSpecs

You may do this part of the lab after your instructor completes lecture Part 2. External web sites are not available from within the HP Virtual Lab, and may not be accessible from all classrooms. Skip this part of the lab if you do not have public Internet access.

1. If time permits, visit the website. Go to the Integrity servers page on the website. See if you can find the QuickSpecs page for one or two server models. If you need help finding the QuickSpecs, ask your instructor.
2. Also have a look at some of the hardware documentation available at


Module 6 Configuring Device Files

Objectives

Upon completion of this module, you will be able to do the following:
- Explain the purpose of a Device Special File (DSF).
- Explain the significance of major and minor numbers, and block and character I/O.
- Compare and contrast legacy versus persistent DSFs.
- Describe the legacy DSF naming convention for disks, LUNs, DVDs, tapes, autochangers, terminals, and modems.
- Describe the persistent DSF naming convention for disks, LUNs, DVDs, tapes, and autochangers.
- Use lsdev to list kernel driver major numbers.
- Use ll to determine a device file's major and minor numbers.
- Use ioscan to list legacy and persistent DSFs associated with devices.
- Use lssf to interpret the characteristics of legacy and persistent DSFs.
- Create DSFs via autoconfiguration, insf, mksf, and mknod.
- Remove DSFs via rmsf.

6-1. SLIDE: Device Special File Overview

Applications access peripheral devices via Device Special Files (DSFs). Advantages:
- Applications can access devices using standard file access system calls
- Applications can access devices with minimal knowledge of the system hardware

UNIX commands and applications reference device special files such as /dev/rtape/tape0_best; the kernel maps each device special file to a physical device.

Student Notes

UNIX applications access peripheral devices such as tape drives, disk drives, printers, terminals, and modems via special files in the /dev directory called Device Special Files (DSFs). Every peripheral device typically has one or more DSFs. The same read() and write() system calls used to read or write data to a disk-based file can also be used to read or write data to a tape drive, terminal device, or any other device via the device's DSF. This allows application developers to easily access peripheral devices using familiar system calls, with minimal knowledge of the system's underlying hardware architecture. The following examples demonstrate the use of DSFs by HP-UX commands:

# tar -cf /dev/rtape/tape0_best /home

The tar application creates (-c) a backup archive on the file specified by the -f option. Since device files allow applications to access devices using the same system calls that are

used to access files, the -f option may be used to write a tar archive to either a tape drive (e.g.: /dev/rtape/tape0_best) or a disk-based file (e.g.: /tmp/archive.tar). This second example redirects standard output of the echo command to a terminal via the terminal's device file.

# echo hello > /dev/tty0p0

NOTE: The terms Device Special File, DSF, Device File, and Special File are used interchangeably.
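Because /dev/null is a character DSF present on virtually every UNIX system, it makes a safe stand-in for experimenting with the tape and terminal examples above. A minimal sketch:

```shell
# Writing to a device file uses exactly the same syntax as writing to a
# regular file; here the device simply discards the data.
printf 'hello\n' > /dev/null

# The first character of the long-listing mode field identifies the file
# type: "c" marks a character (raw) DSF.
ftype=$(ls -l /dev/null | cut -c1)
echo "file type: $ftype"
```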

6-2. SLIDE: DSF Attributes

DSF file attributes determine which device a DSF accesses, and how:
- Type: Access the device in block or character mode?
- Permissions: Who can access the device?
- Major#: Which kernel driver does the DSF use?
- Minor#: Which device does the DSF use? And how?
- Name: What is the DSF name?

Use ll to view a device file's attributes (the fields shown are the type, permissions, major#, minor#, and device file name):

# ll /dev/*disk/disk*
brw-r-----   1 bin  sys    3 0x... Jun 23 00:34 /dev/disk/disk30
brw-r-----   1 bin  sys    3 0x... Jun 23 00:34 /dev/disk/disk31
crw-r-----   1 bin  sys   22 0x... Jun 23 00:34 /dev/rdisk/disk30
crw-r-----   1 bin  sys   22 0x... Jun 23 00:34 /dev/rdisk/disk31

Translate major numbers to device driver names with lsdev:

# lsdev
Character  Block  Driver  Class
       22      3  esdisk  disk
       23     -1  estape  tape

Student Notes

Every file on a UNIX system has an associated structure called an inode that records the file's owner, group, permissions, size, and other attributes. Every DSF also has an inode. Some DSF file attributes are similar to regular file attributes; others are DSF-specific. The ll command may be used to view the file attributes associated with both data files and DSFs. The notes below highlight some of the significant DSF file attributes.

DSF File Types

The very first character in the ll output for a device file indicates the device file type. File type d identifies directories. File type - identifies regular files. File types c and b are DSF-specific.

Character Device Files

File type "c" identifies character mode DSFs. Character mode DSFs transfer data to the device one character at a time. Devices such as terminals, printers, plotters, modems, and tape drives are typically accessed via character mode DSFs. Character mode DSFs are sometimes called "raw" device files.

Block Device Files

File type "b" identifies a block mode DSF. When accessing a device via a block mode DSF, the system reads and writes data through a buffer in memory, rather than transferring the data directly to the physical disk. This can significantly improve I/O for disks, LUNs, and CD-ROMs. Block device files are sometimes called "cooked" device files. Terminals, modems, printers, plotters, and tape drives typically only have character device files. Disks, LUNs, and CD-ROMs may be accessed in either character or block mode, and thus typically have both types of device files. Some applications and utilities prefer to access disks directly via character mode DSFs. Other utilities require a block mode DSF. Read the application or utility documentation to determine which device file is required.

DSF File Permissions

Just as file permissions determine which users can access regular files, file permissions also determine which users can access DSFs. In the example below, world write privileges on the /dev/console device file suggest that any user could write messages to the administrator's system console device.

# ll /dev/console
crw--w--w-   1 root  sys  0 0x... Jun 27 13:17 /dev/console

Recall that the mesg n command prevents other users from sending messages to the local terminal device. mesg accomplishes this by changing the permissions on the user's terminal device file.

# mesg n
# ll /dev/console
crw-------   1 root  sys  0 0x... Jun 27 13:17 /dev/console

Though administrators can change DSF file permissions, it's generally best to retain the permissions applied by the insf and mksf commands when they initially create DSFs.

Device File Major Numbers

Every device file has a "major number" that appears in the fifth field of the ll output. The major number identifies the "kernel driver" used to access the device. A kernel driver is a portion of code in the HP-UX kernel that manages I/O for a particular type of device.
The lsdev command lists the drivers configured in the kernel, and their associated major numbers. The third column in the lsdev output reports driver names. The first column reports each driver's character major number. The second column reports each driver's block major number. Block major number -1 indicates that the driver doesn't support block mode access.

Device File Minor Numbers

Every device file has a 24-bit hexadecimal minor number. Minor numbers are formulated differently for different types of devices. Some of the bits in the minor number identify which device the DSF is associated with. Other bits in the minor number may represent device-specific access options. Tape drives, for instance, have special access options that enable/disable hardware compression and define the density format used when writing to the tape. Fortunately, HP-UX auto-configures most device files, so administrators very rarely have to manually assign minor numbers anymore. Also, the lssf command automatically translates a DSF's hexadecimal minor number into human-readable format.

Device File Names

Every DSF has a DSF file name. DSF file names follow a standard naming convention, which will be described in detail later in this chapter.
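Putting the pieces together, the type, major number, and minor number can all be read out of a single ll line. A hedged sketch against an invented sample listing line (on a real system, lssf is the supported way to decode the minor number):

```shell
# Invented ll output line for a block disk DSF: the mode string is field 1,
# the major number field 5, and the hex minor number field 6.
line='brw-r-----   1 bin   sys   3 0x000030 Jun 23 00:34 /dev/disk/disk30'

set -- $line                            # split the line into positional fields
ftype=$(printf '%s' "$1" | cut -c1)     # b = block DSF, c = character DSF
echo "type=$ftype major=$5 minor=$6"
```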

6-3. SLIDE: DSF Types: Legacy vs. Persistent

11i v1 and v2 only support legacy DSFs. 11i v3 still supports legacy device files, but introduces new persistent DSFs. Persistent device files provide many advantages in today's SAN environments.

11i v1 and v2 Legacy DSFs:
- DSFs are created for each LUN path
- DSFs change if SAN topology changes
- DSFs are only auto-configured at startup
- DSFs support up to 8192 active LUNs

11i v3 Persistent DSFs:
- DSFs are created for each WWID
- DSFs are unaffected by SAN topology changes
- DSFs are auto-configured after LUN creation
- DSFs support up to 16,777,216 LUNs

Student Notes

HP-UX 11i v3 supports two different types of DSFs. Legacy DSFs are supported in HP-UX 11i v1, v2, and v3, but will be deprecated in a future release. In the legacy addressing scheme, each device path is represented by a minor number, a legacy DSF, and a legacy hardware path. The legacy DSF's minor number directly encodes the corresponding device path's bus, target, and LUN numbers, as well as device access options. Persistent DSFs are new in 11i v3. Persistent DSFs provide a persistent, path-independent representation of a device bound to the device's Agile View LUN hardware path and World Wide Identifier (WWID). The notes below highlight the significant differences between the two DSF types.

Path-based versus WWID-based DSFs

Because legacy DSFs are based on physical paths, a single multi-pathed LUN often yields multiple legacy DSFs. The OS relies on volume managers and third party applications to determine which legacy DSFs represent redundant paths to a LUN, and which DSFs represent distinct devices. The new mass storage stack creates a single Agile View LUN hardware path for each disk/tape/LUN regardless of the number of underlying paths to the device. The new stack also creates a block and a raw persistent DSF for each Agile View LUN hardware path / WWID. The persistent DSFs represent the LUN itself, rather than a specific path to the LUN. This approach greatly simplifies volume and system management.

System/SAN Topology Changes

Because legacy DSFs are based on physical paths, changes in the system or SAN topology may change legacy LUNs' DSF names and minor numbers, which may then require manual changes to the system's volume and file system configurations. Because a persistent DSF represents a LUN rather than a lunpath, persistent DSFs aren't affected when a device is moved to a different HBA or SAN switch.

Auto-configuration

11i v1 and v2 automatically create device files for new devices during system startup. When adding LUNs or other devices to a running system, though, the administrator must execute the insf command to auto-configure DSFs. 11i v3 recognizes new LUNs and creates DSFs automatically.

Scalability

Minor numbers are 24-bit numbers. In legacy DSFs, 15 bits in the minor number represent the device address, and 9 bits represent the DSF's special access options. With just 15 address bits, legacy DSFs can represent 2^15 = 32,768 LUN paths. The legacy storage stack further limits the number of concurrently active LUNs to 8192. Persistent DSF minor numbers are also 24 bits. However, persistent DSFs use all 24 bits to identify the device itself. As a result, persistent DSFs can represent 2^24 = 16,777,216 LUNs.
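The address-space arithmetic above can be checked directly with shell arithmetic:

```shell
# 15 address bits in a legacy minor number vs. all 24 bits in a persistent one.
legacy=$(( 1 << 15 ))
persistent=$(( 1 << 24 ))
echo "legacy (15 address bits):     $legacy LUN paths"
echo "persistent (24 address bits): $persistent LUNs"
```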

6-4. SLIDE: DSF Directories

DSFs are stored in a directory structure under /dev. Disk, DVD, tape, and related DSFs are stored in subdirectories under /dev. Most other device files (LAN, terminal, modem, ...) are stored directly under /dev.

Persistent DSFs            Legacy DSFs            More Legacy DSFs (under /dev)
/dev/disk   block disk     /dev/dsk   block disk   ttyxpx    terminal
/dev/rdisk  raw disk       /dev/rdsk  raw disk     ttydxpx   modem
/dev/rtape  tape drive     /dev/rmt   tape drive   cuaxpx    modem
/dev/rchgr  auto changer   /dev/rac   auto changer culxpx    modem
                                                   cxtx_lp   printer

Student Notes

The next few slides introduce the structure of the /dev directory, and the standard naming convention used to assign names to legacy and persistent DSFs. An understanding of the device file naming convention will allow you to more easily manage and use DSFs on your system. The slide above describes the structure of the /dev directory. The next few slides describe the contents of the /dev subdirectories in detail.

6-5. SLIDE: Legacy DSF Names

Legacy DSF names are based on a device path's controller instance, target, and LUN numbers. Multi-pathed LUNs require separate legacy DSFs for each path. Legacy DSFs change when the SAN or system topology changes.

# ioscan -kf
Class    I   H/W Path   Description
=====================================================
ext_bus  5   1/0/2/1/   FCP Array Interface
disk     3   1/0/2/1/   HP HSV101
ext_bus  7   1/0/2/1/   FCP Array Interface
disk     6   1/0/2/1/   HP HSV101
ext_bus  9   1/0/2/1/   FCP Array Interface
disk     9   1/0/2/1/   HP HSV101
ext_bus  11  1/0/2/1/   FCP Array Interface
disk     12  1/0/2/1/   HP HSV101

A legacy DSF name such as /dev/dsk/c11t0d1 encodes the bus instance ("c11"), target ("t0"), LUN ("d1"), and any device-dependent options.

Student Notes

Legacy disk, LUN, DVD, auto-changer, and tape DSF names follow the convention shown on the slide, in which the DSF name and minor number encode the associated hardware path's bus or controller instance number, target number, and LUN number, plus the associated device access options. Devices accessed via multiple paths have separate legacy DSFs representing each path.

# ioscan -kf
Class    I   H/W Path   Description
=====================================================
ext_bus  5   1/0/2/1/   FCP Array Interface
disk     3   1/0/2/1/   HP HSV101    1st path
ext_bus  7   1/0/2/1/   FCP Array Interface
disk     6   1/0/2/1/   HP HSV101    2nd path
ext_bus  9   1/0/2/1/   FCP Array Interface
disk     9   1/0/2/1/   HP HSV101    3rd path
ext_bus  11  1/0/2/1/   FCP Array Interface
disk     12  1/0/2/1/   HP HSV101    4th path

The ioscan output on the slide represents four paths to a single LUN. The legacy DSF scheme assigns independent DSFs to each path. The notes below describe each component in the fourth hardware path.

Bus Instance Numbers

The kernel automatically assigns an instance number to every bus, controller, device, and interface card on an HP-UX system. Instance numbers are assigned sequentially within each device class, as new devices are recognized by the kernel. Thus, the first three ext_bus class hardware components would be assigned instance numbers 0, 1, and 2. The first three disk class devices would also be assigned instance numbers 0, 1, and 2. The first three tape class devices would also be assigned instance numbers 0, 1, and 2. The binary /etc/ioconfig file ensures that these instance number assignments persist across reboots. To view assigned instance numbers, look in the I column of the ioscan -kf output. To improve readability, the screenshot on the slide only shows selected columns and rows from the ioscan -kf output.

The number following the "c" in a LUN, disk, tape, or DVD DSF name identifies the device path's SCSI bus or fiber channel array controller ext_bus instance number. The disk path represented by the fourth legacy hardware path on the slide would have device files beginning with "c11", since the instance number of its ext_bus is "11".

ext_bus  11  1/0/2/1/  FCP Array Interface
disk     12  1/0/2/1/  HP HSV101

Note that each device also has an instance number. Legacy DSF names utilize the ext_bus instance number rather than the device instance number. Legacy DSFs allocate 8 bits to represent the bus/controller portion of the device address in the minor number. Thus, legacy DSFs support up to 256 bus/controller instances.
Target Numbers

The number following the "t" in a LUN, disk, tape, or DVD DSF name identifies the device's target address, which appears in the second-to-last component of the device's hardware path. The target address for the example hardware path is 0.

disk  12  1/0/2/1/  HP HSV101

LUN Numbers

The number following the "d" in a LUN, disk, tape, or DVD DSF name identifies the device's LUN number, which appears in the last component of the device's hardware path. The LUN number for the example hardware path is 1.

disk  12  1/0/2/1/  HP HSV101
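The instance, target, and LUN fields described above can be split out of a legacy DSF basename mechanically. A minimal sketch using sed:

```shell
# Decode a legacy DSF basename into its three numeric fields.
dsf=c11t0d1
decoded=$(echo "$dsf" | \
    sed 's/^c\([0-9]*\)t\([0-9]*\)d\([0-9]*\)$/instance=\1 target=\2 lun=\3/')
echo "$decoded"
```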

Recall that the LUN number in the last component of the hardware path, and following the "d" in the legacy DSF name, doesn't fully represent the LUN ID assigned by the array administrator. The legacy addressing scheme only provides 3 bits (eight addresses) in the last component of an HP-UX hardware path, while today's arrays often present hundreds of LUNs. Legacy hardware addresses therefore use the last three components of the hardware path, together, to represent the LUN ID. The table below shows the legacy hardware paths and DSF names that would be used to represent LUN IDs 0 through 16 in an array.

LUN ID (decimal)  LUN ID (binary)  Legacy HP-UX target.lun address  Legacy DSF
 0                00000            x/x/x/x/x.x.0.0                  cxt0d0
 1                00001            x/x/x/x/x.x.0.1                  cxt0d1
 2                00010            x/x/x/x/x.x.0.2                  cxt0d2
 3                00011            x/x/x/x/x.x.0.3                  cxt0d3
 4                00100            x/x/x/x/x.x.0.4                  cxt0d4
 5                00101            x/x/x/x/x.x.0.5                  cxt0d5
 6                00110            x/x/x/x/x.x.0.6                  cxt0d6
 7                00111            x/x/x/x/x.x.0.7                  cxt0d7
 8                01000            x/x/x/x/x.x.1.0                  cxt1d0
 9                01001            x/x/x/x/x.x.1.1                  cxt1d1
10                01010            x/x/x/x/x.x.1.2                  cxt1d2
11                01011            x/x/x/x/x.x.1.3                  cxt1d3
12                01100            x/x/x/x/x.x.1.4                  cxt1d4
13                01101            x/x/x/x/x.x.1.5                  cxt1d5
14                01110            x/x/x/x/x.x.1.6                  cxt1d6
15                01111            x/x/x/x/x.x.1.7                  cxt1d7
16                10000            x/x/x/x/x.x.2.0                  cxt2d0

Device-Dependent Access Options

The last part of the device file name lists device-specific access options enabled by the device file. Tape drive device file names may have a variety of options listed in this portion of the device file name. Access options vary from device to device.

Limitations

Legacy DSF minor numbers only allocate 15 bits to identify the DSF's associated device. These 15 bits allow legacy DSFs to address at most 2^15 = 32,768 LUN paths per system. Above the legacy addressing scheme limits, only persistent device special files are created.

Multi-pathed Device DSFs

Each path to a multi-pathed LUN yields a separate legacy DSF. Since the LUN on the slide has four distinct paths, it has four different legacy DSF names:
c5t0d1
c7t0d1
c9t0d1
c11t0d1
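The target/LUN split in the table follows directly from the 3-bit last component: "t" carries the array LUN ID divided by 8, and "d" carries the remainder. A quick check with shell arithmetic:

```shell
# Map an array LUN ID onto the legacy tXdY fields (3 LUN bits per component).
lun_id=9
t=$(( lun_id / 8 ))   # high-order bits go into the target field
d=$(( lun_id % 8 ))   # low-order three bits go into the LUN field
echo "LUN ID $lun_id -> cXt${t}d${d}"
```

LUN ID 9 lands on t1d1, matching the ninth row of the table above.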

Block versus Raw DSFs

Since LUNs may be accessed in either block or raw mode, each disk or LUN requires both a block and a raw legacy DSF for each device path. /dev/dsk/ contains legacy block DSFs; /dev/rdsk/ contains legacy raw DSFs:

/dev/dsk/c5t0d1
/dev/dsk/c7t0d1
/dev/dsk/c9t0d1
/dev/dsk/c11t0d1
/dev/rdsk/c5t0d1
/dev/rdsk/c7t0d1
/dev/rdsk/c9t0d1
/dev/rdsk/c11t0d1

Legacy DSFs for Other Devices

The example on the slide represents the DSF naming scheme for a LUN. Legacy DSFs for disk, CDROM, and DVD devices follow the same naming convention, and also reside in the /dev/dsk/ and /dev/rdsk/ directories. Legacy DSF names for tape drives and autochangers also follow a cxtxdx format, but reside in /dev/rmt/ and /dev/rac/. Kernel drivers for tape drives and autochangers typically only support raw device files. Terminals, modems, and printers follow a very different format. Later slides describe each device type's unique DSF requirements in detail.
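Because every legacy block DSF in /dev/dsk/ has a raw twin in /dev/rdsk/ with the same cxtxdx name, the two paths differ only in the directory component. A minimal sketch of that relationship, using the example LUN path from the text:

```shell
#!/bin/sh
# Sketch: derive the raw (character) DSF path from a legacy block DSF
# path. Only the directory differs: /dev/dsk/ becomes /dev/rdsk/; the
# cxtxdx portion of the name is identical for both.

block_to_raw() {
    echo "$1" | sed 's|/dev/dsk/|/dev/rdsk/|'
}

block_to_raw /dev/dsk/c5t0d1    # -> /dev/rdsk/c5t0d1
```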

6-6. SLIDE: Persistent DSF Names

Persistent DSF Names

Persistent DSF names are based on a device's LUN hardware path instance number
Multi-pathed devices require just one block and one raw persistent DSF
Persistent DSFs remain unchanged when the SAN or system topology changes

# ioscan -kfnN
Class  I   H/W Path          Driver  S/W State  H/W Type  Description
=====================================================================
disk   30  64000/0xfa00/0x4  esdisk  CLAIMED    DEVICE    HP HSV101

/dev/disk/disk30[options]
(disk30 encodes the LUN hardware path instance number; [options] represents device-dependent options)

Student Notes

Legacy DSF names and minor numbers encode a hardware path's bus or controller instance number, target number, and LUN number. The legacy scheme has a number of significant limitations:

Multi-pathed LUNs require separate legacy DSFs for each LUN path
Legacy DSF names change when the SAN topology changes, since the DSF names encode the device's physical hardware path
The minor number addressing scheme supports at most 32,768 total LUN addresses, of which only 8192 LUNs can be active at any given time

Persistent DSFs resolve these issues by providing path-independent, WWID-based DSF representations of up to 16,777,216 disks, LUNs, tape drives, and DVDs. The agile view ioscan -kfnN output on this slide represents the same LUN described in the legacy view ioscan -kf output on the previous slide. There are still four paths to the LUN, but agile view reports a single, path-independent view of the LUN.

The LUN's persistent DSFs encode the instance number of the agile view LUN hardware address rather than the bus/target/LUN numbers of the underlying legacy hardware paths leading to the LUN. The agile view LUN hardware address ultimately maps to a LUN WorldWide Identifier (WWID), which should remain consistent no matter which path one uses to access the LUN. In the example on the slide, ioscan reports that the LUN's agile view hardware address instance number is 30. Thus, the LUN's persistent DSF name is simply disk30. Since LUNs may be accessed in either block or raw mode, each LUN has two persistent DSFs:

/dev/disk/disk30    (persistent block DSF for disk30)
/dev/rdisk/disk30   (persistent raw DSF for disk30)

Persistent DSFs for Other Devices

The example on the slide represents the persistent DSF naming scheme for a LUN. Persistent DSFs for disks, CDROMs, and DVDs follow the naming convention described on the slide, and also reside in the /dev/disk/ and /dev/rdisk/ directories. Persistent DSF names for tape drives and autochangers also encode the instance number of the device's agile view LUN hardware address, but require a slightly different device prefix and reside in the /dev/rtape/ and /dev/rchgr/ directories. Kernel drivers for tape drives and autochangers typically only support raw device files. Tape drives usually have a suffix representing the compression, format, and other options enabled by the device file.

/dev/rtape/tape0_best
/dev/rchgr/autoch1

Terminals, modems, and printers only require legacy DSFs.
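Since a persistent disk DSF name is just the class prefix plus the agile view instance number, the block and raw names for any instance can be composed mechanically. The function below only illustrates the naming pattern; on a live 11i v3 system, ioscan -m dsf reports the authoritative legacy-to-persistent mapping.

```shell
#!/bin/sh
# Sketch: compose the persistent block and raw DSF paths for a disk/LUN
# from its agile view instance number. This only models the naming
# pattern; on HP-UX 11i v3 itself, "ioscan -m dsf" shows the real
# legacy-to-persistent DSF mapping.

persistent_dsfs() {
    instance=$1
    echo "/dev/disk/disk${instance} /dev/rdisk/disk${instance}"
}

persistent_dsfs 30   # -> /dev/disk/disk30 /dev/rdisk/disk30
```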

6-7. SLIDE: LUN, Disk, and DVD DSF Names

LUN, Disk, and DVD DSF Names

The table below shows DSFs created for a multi-pathed LUN with four paths. Disks, DVDs, WORM drives, and optical memory drives all use similar DSF names.

                  Block DSFs           Raw DSFs
Legacy DSFs       /dev/dsk/c5t0d1      /dev/rdsk/c5t0d1
                  /dev/dsk/c7t0d1      /dev/rdsk/c7t0d1
                  /dev/dsk/c9t0d1      /dev/rdsk/c9t0d1
                  /dev/dsk/c11t0d1     /dev/rdsk/c11t0d1
Persistent DSFs   /dev/disk/disk30     /dev/rdisk/disk30

(Figure: a server with two HBAs connects through a SAN to an array with two controllers, yielding four paths to the LUN.)

Student Notes

LUNs, disks, CDROMs, and DVDs all follow the standard cxtxdx legacy DSF naming convention and the diskx persistent DSF naming convention. The drivers for these devices support both block and character mode access. Legacy block and raw DSFs reside in /dev/dsk/ and /dev/rdsk/, respectively. Persistent block and raw DSFs reside in /dev/disk/ and /dev/rdisk/, respectively. Using the legacy DSF scheme, every path to a LUN generates a block and a raw DSF. Using the persistent DSF scheme, each LUN requires just one block and one raw DSF regardless of the number of underlying paths.

6-8. SLIDE: Boot Disk DSF Names

Boot Disk DSF Names

Integrity boot disks are subdivided into three EFI disk partitions
Each EFI partition requires block and raw DSFs
Legacy DSFs identify EFI partitions via suffixes s1, s2, s3
Persistent DSFs identify EFI partitions via suffixes _p1, _p2, _p3
Though not shown below, boot disks may be multi-pathed, too

                  Block DSFs              Raw DSFs
Legacy DSFs       /dev/dsk/c0t1d0         /dev/rdsk/c0t1d0
                  /dev/dsk/c0t1d0s1       /dev/rdsk/c0t1d0s1
                  /dev/dsk/c0t1d0s2       /dev/rdsk/c0t1d0s2
                  /dev/dsk/c0t1d0s3       /dev/rdsk/c0t1d0s3
Persistent DSFs   /dev/disk/disk27        /dev/rdisk/disk27
                  /dev/disk/disk27_p1     /dev/rdisk/disk27_p1
                  /dev/disk/disk27_p2     /dev/rdisk/disk27_p2
                  /dev/disk/disk27_p3     /dev/rdisk/disk27_p3

(Figure: an Integrity boot disk containing a partition table followed by the system partition (partition 1), OS partition (partition 2), and service partition (partition 3).)

Student Notes

PA-RISC boot disk DSFs follow the standard legacy and persistent disk DSF naming conventions described on the previous slide. Integrity boot disks, however, are subdivided into Extensible Firmware Interface (EFI) disk partitions. A partition table at the top of each disk records the locations of the partitions. Each partition requires additional block and raw device files.

The EFI system partition contains the OS loader that is responsible for loading the OS into memory during the boot process, and several supporting files. cxtxdxs1 is the system partition's legacy DSF name; diskx_p1 is the system partition's persistent DSF name.

The EFI OS partition contains the LVM or VxVM volumes that contain the kernel and other operating system files and directories. cxtxdxs2 is the OS partition's legacy DSF name; diskx_p2 is the OS partition's persistent DSF name.

The optional EFI HP Service Partition (HPSP) contains offline diagnostic utilities that may be used to troubleshoot an unbootable system. cxtxdxs3 is the service partition's legacy DSF name; diskx_p3 is the service partition's persistent DSF name.

To learn more about Integrity boot disks and EFI partitions, see the Integrity boot process chapter later in this book.
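The boot disk partition naming differs only in the suffix convention: legacy DSFs append s1/s2/s3 directly to the cxtxdx name, while persistent DSFs append _p1/_p2/_p3 to the diskx name. A sketch that lists the block DSFs for an assumed boot disk, using the example names from the slide (legacy c0t1d0, persistent disk27):

```shell
#!/bin/sh
# Sketch: generate the block DSF names for a whole Integrity boot disk
# and its three EFI partitions, in both the legacy (sN suffix) and
# persistent (_pN suffix) conventions. c0t1d0 and disk27 are the
# example names from the slide, not values derived from a real system.

boot_disk_dsfs() {
    legacy=$1        # e.g. c0t1d0
    persistent=$2    # e.g. disk27
    echo "/dev/dsk/${legacy} /dev/disk/${persistent}"   # whole disk
    for p in 1 2 3; do                                  # EFI partitions
        echo "/dev/dsk/${legacy}s${p} /dev/disk/${persistent}_p${p}"
    done
}

boot_disk_dsfs c0t1d0 disk27
```

The matching raw DSFs follow the same pattern with /dev/rdsk/ and /dev/rdisk/ directories.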

6-9. SLIDE: Tape Drive DSF Names

Tape Drive DSF Names

Tape drive DSFs follow the standard legacy/persistent DSF naming convention. Suffixes identify the DSF's density, compression, rewind, and write semantic features.

Features                                      Legacy DSFs in /dev/rmt/   Persistent DSF in /dev/rtape/
Best density, autorewind, AT&T style          c0t0d0best (and 0m)        tape0_best
Best density, no autorewind, AT&T style       c0t0d0bestn (and 0mn)      tape0_bestn
Best density, autorewind, Berkeley style      c0t0d0bestb (and 0mb)      tape0_bestb
Best density, no autorewind, Berkeley style   c0t0d0bestnb (and 0mnb)    tape0_bestnb

Student Notes

Tape drive DSF names are very similar to LUN DSF names.

Legacy Tape Drive DSFs

Legacy tape drive DSFs reside in the /dev/rmt/ directory and encode bus/target/LUN addresses just like legacy disk and LUN DSFs. However, unlike LUNs, tape drives often support numerous access options in the [options] portion of the device file name. The DSF below accesses the tape drive located at bus instance 0, target 0, LUN 0 using the best density and compression features supported by the tape drive.

/dev/rmt/c0t0d0best

Note that the stape kernel driver doesn't support block mode access to tape drives, so there isn't a /dev/mt/ device file directory.

Persistent Tape Drive DSFs

Much like persistent LUN DSFs, persistent tape drive DSFs encode the device's agile view hardware address instance number. However, persistent tape drive DSFs reside in the /dev/rtape/ directory, and the DSF names include the prefix tape rather than disk. The DSF below accesses the tape drive with instance number 0 using the best density and compression features supported by the tape drive.

/dev/rtape/tape0_best

Note that the estape kernel driver doesn't support block mode access to tape drives, so there isn't a /dev/tape/ device file directory.

Tape Drive DSF Options

Unlike LUN DSFs, tape drive DSFs often support numerous access options via the [options] portion of the DSF name. Common options, which are supported on both legacy and persistent DSFs, include:

density   Specifies the density or format used when writing to the tape. 11i v3 only supports the BEST density; 11i v1 and v2 support several other formats. The list below describes only some of the common 11i v1 and v2 density formats. See the mt(7) man page for a complete list.

          BEST    Use the highest density/compression features available
          NOMOD   Maintain the density/compression features used previously on the tape
          DDS1    Use DDS1 format to ensure compatibility with older DDS1 tape drives
          DDS2    Use DDS2 format to ensure compatibility with older DDS2 tape drives
          C[n]    Write data in compressed mode, on tape drives that support data compression. Compression is automatically enabled when the density field is set to BEST.

n         No rewind on close. Unless this mode is requested, the driver automatically rewinds the tape when the device file is closed. When a file is closed after servicing a read request and the no-rewind bit is not set, the tape drive automatically rewinds the tape. If the no-rewind bit is set, the behavior depends on the style mode: AT&T-style devices position the tape after the EOF following the data just read (unless already at BOT or a filemark); Berkeley-style devices do not reposition the tape in any way.

b         Specifies Berkeley-style tape mode. When the b is absent, the tape drive follows AT&T-style behavior.

w         Immediate reporting disabled. A write request waits until the data are physically written on the medium before returning status. The default behavior (buffered mode, also called immediate reporting mode) allows the tape device to buffer the data and return immediately with successful status.
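Putting the pieces together, a persistent tape DSF name is the tape prefix, the instance number, an underscore, the density keyword, and then the optional n and b flags. The sketch below only models that composition, with the flag order taken from the examples in the text:

```shell
#!/bin/sh
# Sketch: compose a persistent tape DSF name from an instance number,
# a density keyword, and the optional no-rewind (n) and Berkeley-style
# (b) flags. Flag order (density, then n, then b) follows the examples
# in the text; pass "yes" to enable a flag, "" to omit it.

tape_dsf() {
    instance=$1; density=$2; norewind=$3; berkeley=$4
    name="tape${instance}_${density}"
    [ "$norewind" = "yes" ] && name="${name}n"
    [ "$berkeley" = "yes" ] && name="${name}b"
    echo "/dev/rtape/${name}"
}

tape_dsf 0 best yes yes   # -> /dev/rtape/tape0_bestnb
tape_dsf 0 best "" ""     # -> /dev/rtape/tape0_best
```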

See the examples on the slide and the mksf(1m) man page for more information.

9.x Compatibility

Prior to HP-UX 10.01, tape drive DSFs followed an entirely different naming convention:

/dev/rmt/0m     First tape drive on the system
/dev/rmt/1m     Second tape drive on the system
/dev/rmt/2m     Third tape drive on the system
/dev/rmt/2mn    Third tape drive on the system, "no-rewind" feature enabled
/dev/rmt/2mnb   Third tape drive, "no-rewind" feature and Berkeley semantics enabled

Each DSF name includes an instance number to distinguish the DSF from all other tape drive DSFs, the letter "m", and a series of access options as described previously. 11i v1, v2, and v3 automatically create the following tape drive DSFs, but they are simply links to the equivalent legacy cxtxdxbest DSFs:

/dev/rmt/0m     links to /dev/rmt/cxtxdxbest
/dev/rmt/0mn    links to /dev/rmt/cxtxdxbestn
/dev/rmt/0mb    links to /dev/rmt/cxtxdxbestb
/dev/rmt/0mnb   links to /dev/rmt/cxtxdxbestnb

SLIDE: Tape Autochanger DSF Names

Tape Autochanger DSF Names

Many administrators today use tape libraries with tape autochangers
Autochanger legacy DSF names are based on controller/target/LUN numbers
Autochanger persistent DSF names are based on the autochanger's instance number

Legacy DSFs          Persistent DSF
/dev/rac/c5t0d2      /dev/rchgr/autoch1
/dev/rac/c7t0d2
/dev/rac/c9t0d2
/dev/rac/c11t0d2

Student Notes

Many administrators today use tape libraries with tape autochangers to manage system backups. These devices typically include one or more tape drives, magazines for storing multiple tapes, and a robotic autochanger mechanism to move tapes between the magazines and drives. Backup utilities access the tape drives via standard tape DSFs in /dev/rmt/ and /dev/rtape/. Robotic autochangers typically have their own DSFs in /dev/rac/ and/or /dev/rchgr/.

Legacy Autochanger DSFs

Legacy autochanger DSFs reside in the /dev/rac/ directory and encode bus/target/LUN addresses just like legacy disk/LUN DSFs. Example: /dev/rac/c0t0d

Persistent Autochanger DSFs

Much like persistent LUN DSFs, persistent autochanger DSFs encode the device's agile view hardware address instance number. However, persistent autochanger DSFs reside in the /dev/rchgr/ directory, and the DSF names include the prefix autoch rather than disk. The DSF below accesses the autochanger with instance number 0.

/dev/rchgr/autoch0

SLIDE: Terminal, Modem, and Printer DSF Names

Terminal, Modem, and Printer DSF Names

Some terminals, modems, and printers connect directly to a serial interface card. Others connect to the server via a multiplexer device and interface card. Terminal, modem, and serial printer DSF names include two numbers: the interface card instance number, and the multiplexer port number (0 if not connected to a multiplexer).

Device Type             DSF Examples
Terminal device file    /dev/tty0p0
Modem dial-in           /dev/ttyd0p0
Modem dial-out          /dev/cul0p0
Modem direct connect    /dev/cua0p0
Serial printer          /dev/c0p0_lp

(Figure: terminals attached to a 16-port multiplexer, which connects to a server via a MUX interface card.)

Student Notes

Though many users access systems exclusively via network services today, some systems still include hardwired terminals, modems, and printers. Most servers still include a built-in DB9 serial port on the Core I/O card that can be used to connect a single hardwired terminal or modem. Administrators who require multiple serial devices can purchase an add-on multiplexer (MUX) interface card. The interface card occupies one expansion slot on the server and typically connects to an external box that provides 8, 16, 32, or 64 RJ45, DB25, or DB9 ports for connecting external devices. Alternatively, it may be possible to connect serial devices to the multiplexer card directly via a MUX fan-out cable like the one shown below.

Hardwired Terminal DSFs

Hardwired terminal DSFs reside directly in the /dev/ directory. The DSF names have two numeric components. The first number (which immediately follows the tty prefix) identifies the instance number of the interface card to which the device is attached. The second number (which immediately follows the p) identifies the multiplexer port number to which the device is attached. If the device is attached directly to a built-in serial port rather than a multiplexer, HP-UX uses port 0 in the DSF name.

/dev/tty0p0   terminal attached to the Core I/O serial port
/dev/tty2p3   terminal attached to MUX interface instance 2, port 3
/dev/tty2p4   terminal attached to MUX interface instance 2, port 4

See HP's Configuring HP-UX for Peripherals manual for additional information.

Hardwired Modem DSFs

Hardwired modem DSFs also reside directly in the /dev/ directory and, like terminal DSFs, have two numeric components. The first number identifies the instance number of the interface card to which the device is attached. The second number identifies the multiplexer port number to which the device is attached. If the device is attached directly to a built-in serial port rather than a multiplexer, HP-UX uses port 0 in the DSF name. A fully functional modem requires three device files:

/dev/ttydxpx is required for dial-in modem service.
/dev/culxpx is required for dial-out service.
/dev/cuaxpx is required for direct-connect service.

See HP's Configuring HP-UX for Peripherals manual for additional information.

Pseudo Terminals

Pseudo terminals are used by applications that provide terminal emulation capabilities, such as hpterm, xterm, and telnet. The pseudo terminal driver provides support for a device pair termed a pseudo terminal: a pair of character devices, a master device and a slave device.
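Returning to the hardwired terminal examples above, the two numeric components of a terminal DSF name can be composed from the interface card instance and the MUX port number (port 0 for a built-in serial port). A minimal sketch of the pattern:

```shell
#!/bin/sh
# Sketch: compose a hardwired terminal DSF name from the interface card
# instance number and the multiplexer port number. Devices attached to
# a built-in serial port (no MUX) use port 0.

terminal_dsf() {
    card=$1; port=$2
    echo "/dev/tty${card}p${port}"
}

terminal_dsf 0 0   # Core I/O serial port    -> /dev/tty0p0
terminal_dsf 2 3   # MUX instance 2, port 3  -> /dev/tty2p3
```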


More information

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions:

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions: Currently shipping versions: HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 HP Integrity VM (HP-UX 11i v3 VM Host) v4.0 Integrity Virtual Machines (Integrity VM) is a soft partitioning and virtualization

More information

HP ALM. Software Version: Tutorial

HP ALM. Software Version: Tutorial HP ALM Software Version: 12.20 Tutorial Document Release Date: December 2014 Software Release Date: December 2014 Legal Notices Warranty The only warranties for HP products and services are set forth in

More information

HPE ALM Excel Add-in. Microsoft Excel Add-in Guide. Software Version: Go to HELP CENTER ONLINE

HPE ALM Excel Add-in. Microsoft Excel Add-in Guide. Software Version: Go to HELP CENTER ONLINE HPE ALM Excel Add-in Software Version: 12.55 Microsoft Excel Add-in Guide Go to HELP CENTER ONLINE http://alm-help.saas.hpe.com Document Release Date: August 2017 Software Release Date: August 2017 Legal

More information

HP Data Protector A Support for Windows Vista and Windows Server 2008 Clients Whitepaper

HP Data Protector A Support for Windows Vista and Windows Server 2008 Clients Whitepaper HP Data Protector A.06.00 Support for Windows Vista and Windows Server 2008 Clients Whitepaper 1 Index Introduction... 3 Data Protector A.06.00 Installation on Windows Vista and Windows Server 2008 systems...

More information

CommandCenter Secure Gateway User Guide Release 5.2

CommandCenter Secure Gateway User Guide Release 5.2 CommandCenter Secure Gateway User Guide Release 5.2 Copyright 2011 Raritan, Inc. CC-0U-v5.2-E July 2011 255-80-3100-00 This document contains proprietary information that is protected by copyright. All

More information

LVM Migration from Legacy to Agile Naming Model HP-UX 11i v3

LVM Migration from Legacy to Agile Naming Model HP-UX 11i v3 LVM Migration from Legacy to Agile Naming Model HP-UX 11i v3 Abstract...2 Legacy and Agile Naming Models...3 LVM support for Dual Naming Models...3 Naming Model specific Commands/Options...3 Disabling

More information

Novell Access Manager

Novell Access Manager Quick Start AUTHORIZED DOCUMENTATION Novell Access Manager 3.1 SP2 June 11, 2010 www.novell.com Novell Access Manager 3.1 SP2 Quick Start Legal Notices Novell, Inc., makes no representations or warranties

More information

HPE Serviceguard I H6487S

HPE Serviceguard I H6487S Course data sheet HPE course number Course length Delivery mode View schedule, local pricing, and register View related courses H6487S 5 days ILT View now View now HPE Serviceguard I H6487S This course

More information

QuickSpecs. HP Integrated Lights-Out Overview

QuickSpecs. HP Integrated Lights-Out Overview Overview is an HP innovation that integrates industry leading Lights-Out functionality and basic system board management capabilities on selected ProLiant servers. consists of an intelligent processor

More information

HP IDOL Site Admin. Software Version: Installation Guide

HP IDOL Site Admin. Software Version: Installation Guide HP IDOL Site Admin Software Version: 10.9 Installation Guide Document Release Date: March 2015 Software Release Date: March 2015 Legal Notices Warranty The only warranties for HP products and services

More information

HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide

HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide HP-UX 11i v3 HP Part Number: 762798-001 Published: March 2014 Edition: 1 Legal Notices Copyright 2003, 2014 Hewlett-Packard Development

More information

HPE OneView for VMware vcenter Release Notes (8.2 and 8.2.1)

HPE OneView for VMware vcenter Release Notes (8.2 and 8.2.1) HPE OneView for VMware vcenter Release Notes (8.2 and 8.2.1) Abstract This document describes changes in HPE OneView for VMware vcenter to help administrators understand the benefits of obtaining the 8.2

More information

It is also available as part of the HP IS DVD and the Management DVD/HPSIM install.

It is also available as part of the HP IS DVD and the Management DVD/HPSIM install. Overview The HP is a web-based interface that consolidates and simplifies the management of individual ProLiant and Integrity servers running Microsoft Windows or Linux operating systems. By aggregating

More information

Introduction to HP-UX Operating System

Introduction to HP-UX Operating System Welcome Introduction to HP-UX Operating System HP-UX (Hewlett Packard UniX) is Hewlett-Packard's proprietary implementation of the UNIX operating system, It runs on the HP PA-RISC and Integrity ( Itanium

More information

3 Mobility Pack Installation Instructions

3 Mobility Pack Installation Instructions Novell Data Synchronizer Mobility Pack Readme Novell September 10, 2010 1 Overview The Novell Data Synchronizer Mobility Pack creates a new Synchronizer system that consists of the Synchronizer services,

More information

HPE Factory Express Customized Integration with Onsite Startup Service

HPE Factory Express Customized Integration with Onsite Startup Service Data sheet HPE Factory Express Customized Integration with Onsite Startup Service HPE Lifecycle Event Services HPE Factory Express Customized Integration with Onsite Startup Service (formerly known as

More information

HP AutoPass License Server

HP AutoPass License Server HP AutoPass License Server Software Version: 9.0 Windows, Linux and CentOS operating systems Support Matrix Document Release Date: October 2015 Software Release Date: October 2015 Page 2 of 10 Legal Notices

More information

HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide

HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide HP-UX SysFaultMgmt (System Fault Management) (SFM) Administrator Guide HP-UX 11i v3 Part Number: 762798-002 Published: August 2016 Edition: 1 Legal Notices Copyright 2003, 2016 Hewlett-Packard Development

More information

SPECTRUM. Control Panel User Guide (5029) r9.0.1

SPECTRUM. Control Panel User Guide (5029) r9.0.1 SPECTRUM Control Panel User Guide (5029) r9.0.1 This documentation and any related computer software help programs (hereinafter referred to as the Documentation ) is for the end user s informational purposes

More information

QLogic iscsi Boot for HP FlexFabric Adapters User Guide

QLogic iscsi Boot for HP FlexFabric Adapters User Guide QLogic iscsi Boot for HP FlexFabric Adapters User Guide Abstract This document is for the person who installs, administers, and troubleshoots servers and storage systems. HP assumes you are qualified in

More information

HPE Knowledge Article

HPE Knowledge Article HPE Knowledge Article HPE Integrated Lights-Out 4 (ilo 4) - How to Reset ilo Management Processor and ilo Password? Article Number mmr_sf-en_us000012649 Environment HPE Integrated Lights-Out 4 Issue Reset

More information

HPE Digital Learner Server Management Content Pack

HPE Digital Learner Server Management Content Pack Content Pack data sheet HPE Digital Learner Server Management Content Pack HPE Content Pack number Content Pack category Content Pack length Learn more CP002 Category 1 20 Hours View now This Content Pack

More information

version on HP-UX 11i v3 March 2014 Operating Environment Updat e Release

version on HP-UX 11i v3 March 2014 Operating Environment Updat e Release Technical white paper Installation of non-def ault VxFS and VxVM soft ware version on HP-UX 11i v3 March 2014 Operating Environment Updat e Release Table of contents Introduction... 3 Installation Instructions...

More information

Nimsoft Monitor Server

Nimsoft Monitor Server Nimsoft Monitor Server Configuration Guide v6.00 Document Revision History Version Date Changes 1.0 10/20/2011 Initial version of Nimsoft Server Configuration Guide, containing configuration and usage

More information

HP OneView for VMware vcenter User Guide

HP OneView for VMware vcenter User Guide HP OneView for VMware vcenter User Guide Abstract This document contains detailed instructions for configuring and using HP OneView for VMware vcenter (formerly HP Insight Control for VMware vcenter Server).

More information

ProLiant Cluster HA/F500 for Enterprise Virtual Array Introduction Software and Hardware Pre-Checks Gathering Information...

ProLiant Cluster HA/F500 for Enterprise Virtual Array Introduction Software and Hardware Pre-Checks Gathering Information... Installation Checklist HP ProLiant Cluster F500 for Enterprise Virtual Array 4000/6000/8000 using Microsoft Windows Server 2003, Enterprise Edition Stretch Cluster May 2005 Table of Contents ProLiant Cluster

More information

DtS Data Migration to the MSA1000

DtS Data Migration to the MSA1000 White Paper September 2002 Document Number Prepared by: Network Storage Solutions Hewlett Packard Company Contents Migrating Data from Smart Array controllers and RA4100 controllers...3 Installation Notes

More information

HPE Knowledge Article

HPE Knowledge Article HPE Knowledge Article HPE Integrated Lights Out (ilo 5) for Gen10 Servers - What is System Recovery Set? Article Number mmr_sf-en_us000021097 Environment HPE Integrated Lights Out (ilo 5) HPE ProLiant

More information

HP-UX Software and Patching Management Using HP Server Automation

HP-UX Software and Patching Management Using HP Server Automation HP-UX Software and Patching Management Using HP Server Automation Software Version 7.84, released August 2010 Overview... 2 Patch Management for HP-UX Prerequisites... 2 HP-UX Patching Features... 2 Importing

More information

System Administration

System Administration Most of SocialMiner system administration is performed using the panel. This section describes the parts of the panel as well as other administrative procedures including backup and restore, managing certificates,

More information

Using ZENworks with Novell Service Desk

Using ZENworks with Novell Service Desk www.novell.com/documentation Using ZENworks with Novell Service Desk Novell Service Desk 7.1 April 2015 Legal Notices Novell, Inc. makes no representations or warranties with respect to the contents or

More information

QuickSpecs. What's New Support for HP software and hardware server support. Models. HP Insight Server Migration software for ProLiant v.3.

QuickSpecs. What's New Support for HP software and hardware server support. Models. HP Insight Server Migration software for ProLiant v.3. Overview NOTE: Effective November 16, 2009, HP Insight Server Migration software for ProLiant offered exclusively as part of HP Insight Control suites. For additional information please visit: http://www.hp.com/go/insightcontrol

More information

QuickSpecs. HPE Library and Tape Tools. Overview. Features & Benefits. What's New

QuickSpecs. HPE Library and Tape Tools. Overview. Features & Benefits. What's New Overview (L&TT) is a free, robust diagnostic tool for HPE StoreEver Tape Family. Targeted for a wide range of users, it is ideal for customers who want to verify their installation, ensure product reliability,

More information

NetExtender for SSL-VPN

NetExtender for SSL-VPN NetExtender for SSL-VPN Document Scope This document describes how to plan, design, implement, and manage the NetExtender feature in a SonicWALL SSL-VPN Environment. This document contains the following

More information

HP Network Node Manager i Software Step-by-Step Guide to Scheduling Reports using Network Performance Server

HP Network Node Manager i Software Step-by-Step Guide to Scheduling Reports using Network Performance Server HP Network Node Manager i Software Step-by-Step Guide to Scheduling Reports using Network Performance Server NNMi 9.1x Patch 2 This document shows an example of building a daily report for the ispi Performance

More information

Web Client Manual. for Macintosh and Windows. Group Logic Inc Fax: Internet:

Web Client Manual. for Macintosh and Windows. Group Logic Inc Fax: Internet: Web Client Manual for Macintosh and Windows Group Logic Inc. 703-528-1555 Fax: 703-527-2567 Email: info@grouplogic.com Internet: www.grouplogic.com Copyright (C) 1995-2007 Group Logic Incorporated. All

More information

Interface Card OL* Support Guide

Interface Card OL* Support Guide Interface Card OL* Support Guide HP-UX 11i v3 HP Part Number: 5992-1723 Published: September 2007 Edition: E 0709 Copyright 2007 Hewlett-Packard Development Company L.P Legal Notices The information contained

More information

HP Serviceguard for Linux Certification Matrix

HP Serviceguard for Linux Certification Matrix Technical Support Matrix HP Serviceguard for Linux Certification Matrix Version 04.05, April 10 th, 2015 How to use this document This document describes OS, Server and Storage support with the listed

More information

QuickSpecs. HP Advanced Server V5.1B-5 for UNIX. Overview. Retired

QuickSpecs. HP Advanced Server V5.1B-5 for UNIX. Overview. Retired Overview The Advanced Server for UNIX (ASU) software is a Tru64 UNIX layered application that provides seamless interoperability between systems running the Tru64 UNIX operating system software and systems

More information

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions:

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions: Currently shipping versions: HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 HP Integrity VM (HP-UX 11i v3 VM Host) v4.1 Integrity Virtual Machines (Integrity VM) is a soft partitioning and virtualization

More information

QuickSpecs. HPE Integrity Integrated Lights-Out (ilo) for HPE Integrity Servers. Overview

QuickSpecs. HPE Integrity Integrated Lights-Out (ilo) for HPE Integrity Servers. Overview HPE Integrity Integrated Lights-Out (ilo) management processors for HPE Integrity servers provide remote server control and monitoring that is independent of the server's operating system. This document

More information

Novell Identity Manager

Novell Identity Manager Role Mapping Administrator User Guide AUTHORIZED DOCUMENTATION Novell Identity Manager 1.0 August 28, 2009 www.novell.com Novell Identity Manager Role Mapping Administrator 1.0 User GuideNovell Identity

More information

HPE 3PAR OS GA Patch 12

HPE 3PAR OS GA Patch 12 HPE 3PAR OS 3.3.1 GA Patch 12 Upgrade Instructions Abstract This upgrade instructions document is for installing Patch 12 on the HPE 3PAR Operating System Software OS-3.3.1.215-GA. This document is for

More information

Data Protector Express Hewlett-Packard Company

Data Protector Express Hewlett-Packard Company Installation Guide Data Protector Express Hewlett-Packard Company ii Data Protector Express Installation Guide Copyright Copyright 2005/2006 by Hewlett-Packard Limited. March 2006 Part Number BB116-90024

More information

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP Services Technical data The HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service provides the necessary

More information

HP Virtual Connect Enterprise Manager

HP Virtual Connect Enterprise Manager HP Virtual Connect Enterprise Manager Data Migration Guide HP Part Number: 487488-001 Published: April 2008, first edition Copyright 2008 Hewlett-Packard Development Company, L.P. Legal Notices Confidential

More information

Integrating HP tools for Linux deployment (HP SIM, SSSTK, LinuxCOE, and PSP)

Integrating HP tools for Linux deployment (HP SIM, SSSTK, LinuxCOE, and PSP) Integrating HP tools for Linux deployment (HP SIM, SSSTK, LinuxCOE, and PSP) HOWTO Abstract... 2 Pre-integration tasks... 2 Pre-integration configuration... 2 Dynamic Host Configuration Protocol (DHCP)...3

More information

HP Data Protector Integration with Autonomy IDOL Server

HP Data Protector Integration with Autonomy IDOL Server Technical white paper HP Data Protector Integration with Autonomy IDOL Server Introducing e-discovery for HP Data Protector environments Table of contents Summary 2 Introduction 2 Integration concepts

More information

Virtual Infrastructure Web Access Administrator s Guide ESX Server 3.0 and VirtualCenter 2.0

Virtual Infrastructure Web Access Administrator s Guide ESX Server 3.0 and VirtualCenter 2.0 Virtual Infrastructure Web Access Administrator s Guide ESX Server 3.0 and VirtualCenter 2.0 Virtual Infrastructure Web Access Administrator s Guide Revision: 20060615 Item: VI-ENG-Q206-217 You can find

More information

Quick KVM 1.1. User s Guide. ClearCube Technology, Inc.

Quick KVM 1.1. User s Guide. ClearCube Technology, Inc. Quick KVM 1.1 User s Guide ClearCube Technology, Inc. Copyright 2005, ClearCube Technology, Inc. All rights reserved. Under copyright laws, this publication may not be reproduced or transmitted in any

More information

HPE Insight Online User Guide

HPE Insight Online User Guide HPE Insight Online User Guide Document Release Date: October 2017 Software Release Date: October 2017 Legal Notices Warranty The only warranties for Hewlett Packard Enterprise Development LP products and

More information