HP XP7 High Availability User Guide

1 HP XP7 High Availability User Guide Abstract HP XP7 High Availability helps you create and maintain a synchronous copy of critical data in a remote location. This document describes and provides instructions for planning, configuring, and maintaining a High Availability system on HP XP7 Storage systems. HP Part Number: H6F Published: October 2014 Edition: 4

2 Copyright 2014 Hewlett-Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Acknowledgments Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft, Windows, Windows XP, and Windows NT are U.S. registered trademarks of Microsoft Corporation. Adobe and Acrobat are trademarks of Adobe Systems Incorporated. Java and Oracle are registered trademarks of Oracle and/or its affiliates. UNIX is a registered trademark of The Open Group. Revision History Revision 1 May 2014 Applies to microcode version /02 or later. Revision 2 September 2014 Applies to microcode version /00 or later. Revision 3 October 2014 Applies to microcode version /00 or later. Revision 4 October 2014 Applies to microcode version /00 or later.

3 Contents 1 Overview of High Availability...10 About High Availability...10 High Availability solutions...11 Fault-tolerant storage infrastructure...11 Failover clustering without storage impact...11 Server load balancing without storage impact...12 System configurations for HA solutions...13 HA and multi-array virtualization...15 About the virtual ID...15 Monitoring HA status...16 HA status...16 HA status transitions...17 Pair status...18 I/O modes...18 Relationship between HA status, pair status, and I/O mode...19 High Availability and server I/O...19 Server I/O when the HA status is Mirrored...20 Server I/O when the HA status is Mirroring...20 Server I/O when the HA status is Suspended...21 Server I/O when the HA status is Blocked...22 Quorum disk and server I/O...22 I/O stoppage detected in the counterpart system...23 I/O stoppage not detected in the counterpart system...24 HA pair operations...24 Initial copy and differential copy...25 HA components...25 User interfaces for High Availability operations...27 HP XP7 Command View Advanced Edition...27 RAID Manager...27 Configuration workflow for High Availability Planning for High Availability...29 Storage system preparation...29 Cache and additional shared memory...29 System option modes...29 Planning system performance...30 Hitachi Dynamic Link Manager...30 Planning physical paths...31 Bandwidth...31 Fibre Channel connections...32 Connection types...32 Direct connection...32 Connection using switches...33 Connection using channel extenders...34 Planning ports...34 Port attributes...35 Planning the quorum disk...35 Installation of external storage system for quorum disks...35 Relationship between the quorum disk and remote connection...36 Notes for the response time from the external storage system for quorum disks...38 Planning HA pairs and pair volumes...38 Maximum number of HA pairs...38 Contents 3

4 Calculating the number of cylinders...39 Calculating the number of bitmap areas...39 Calculating the number of available bitmap areas...39 Calculating the maximum number of pairs...40 Notes for creation of HA pairs when the volume specified as an S-VOL is in a resource group whose serial number and model are same as the storage system System requirements...41 Requirements and restrictions...41 Relationship between resource group and availability of HA pair creation...43 Interoperability requirements...44 Volume types that can be used for HA...44 Thin Provisioning / Smart Tiers...46 Business Copy...46 Limitations when sharing HA and Business Copy volumes...47 BC operations and HA pair status...47 HA operations and BC pair status...49 Fast Snap...50 Limitations for using both HA and Fast Snap...51 Fast Snap operations and HA status...51 HA operations and Fast Snap pair status...53 Use cases for pairing HA volumes with BC or FS...54 Data Retention...54 HA status and I/O allowance by access attribute...55 LUN Manager...55 Cache Partition...56 Volume Shredder...56 Performance Monitor...56 HA I/O added to Performance Monitor...56 Number of I/Os to the port to be added to Performance Monitor Planning for High Availability...59 Storage system preparation...59 Cache and additional shared memory...59 System option modes...59 Planning system performance...60 Hitachi Dynamic Link Manager...60 Planning physical paths...61 Bandwidth...61 Fibre-Channel connections...61 Connection types...62 Direct connection...62 Connection using switches...63 Connection using channel extenders...64 Planning ports...64 Port attributes...65 Planning the quorum disk...65 Installation of external storage system for quorum disks...65 Relationship between the quorum disk and remote connection...66 Suspended pairs depending on failure locations (when a quorum disk is not shared)...68 Suspended pairs depending on failure locations (when a quorum disk is shared)...69 Response time from the external storage system for quorum disks...70 Planning HA pairs and pair volumes...70 Maximum number of HA pairs...70 Calculating the number of cylinders...71 Calculating the number of bitmap areas Contents

5 Calculating the number of available bitmap areas...71 Calculating the maximum number of pairs...72 When the S-VOL's resource group and storage system have the same serial number and model Configuration and pair management using RAID Manager...73 High Availability system configuration...73 Primary storage system settings...74 Secondary storage system settings...75 RAID Manager server configuration...76 External storage system settings...76 Workflow for creating an HA environment...76 Initial state...77 Adding the external system for the quorum disk...78 Verifying the physical data paths...78 Creating the command devices...79 Creating the configuration definition files...80 Starting RAID Manager...81 Connecting the primary and secondary storage systems...82 Setting the port attributes...82 Adding remote connections...83 Creating the quorum disk...85 Setting the port attributes for connecting the external storage system...85 Creating external volume groups...87 Creating external volumes...88 Setting external volumes as quorum disks...90 Setting up the secondary system...93 Creating a resource group...94 Reserving a host group ID...95 Deleting the virtual LDEV ID of the S-VOL...97 Reserving an LDEV ID for the S-VOL...98 Setting the reservation attribute on the S-VOL...99 Creating additional host groups in a VSM Creating a pool Creating the S-VOL Adding an LU path to the S-VOL Updating the RAID Manager configuration definition files Shutting down RAID Manager Editing RAID Manager configuration definition files Restarting RAID Manager Creating the HA pair Verifying the virtual LDEV ID in the virtual storage machine of the secondary site Revising the virtual LDEV ID in the virtual storage machine of the secondary site Creating a High Availability pair Adding an alternate path to the S-VOL Disaster recovery of High Availability Failure locations SIMs related to HA Pair condition before failure Pair condition and recovery: server failures Pair condition and recovery: path failure between the server and the storage system Recovering from a path failure between the server and the primary system Recovering from a path failure between the server and the secondary system Pair condition and recovery: P-VOL failure (LDEV blockade) Recovering the P-VOL (pair status: PAIR) Pair condition and recovery: S-VOL failure (LDEV blockade) Contents 5

6 Recovering the S-VOL (pair status: PAIR) Pair condition and recovery: full pool for the P-VOL Recovering a full pool for the P-VOL (pair status: PAIR) Pair condition and recovery: full pool for the S-VOL Recovering a full pool for the S-VOL (pair status: PAIR) Pair condition and recovery: path failure from the primary to the secondary system Recovering paths from the primary to the secondary system (pair status: PAIR) Pair condition and recovery: path failure from the secondary to the primary system Recovering paths from the secondary to the primary system (pair status: PAIR) Pair condition and recovery: primary system failure Pair condition and recovery: secondary system failure Pair condition and recovery: path failure from the primary to the external system Recovering the path from the primary system to the external system (pair status: PAIR) Pair condition and recovery: path failure from the secondary to the external system Recovering the path from the secondary system to the external system (pair status: PAIR) Pair condition and recovery: quorum disk failure Recovering from a failure of a storage system or a physical path between storage systems Recovering from a failure of a physical path from the primary storage system to the secondary storage system Recovering from a failure of a physical path from the secondary storage system to the primary storage system Recovery of the quorum disk (pair status: PAIR) Pair condition and recovery: external system failure Pair condition and recovery: other failures Recovery procedure when an HA pair is suspended due to other failures Recovering the storage systems at the primary site from a failure (external storage system at the primary site) Reversing the P-VOL and S-VOL Resolving failures in multiple locations Planned outage of High Availability storage systems Planned power off/on of the primary storage system Powering off the primary storage system Powering on the primary storage system Planned power off/on of the secondary storage system Powering off the secondary storage system Powering on the secondary storage system Planned power off/on of the external storage system (I/O continues at the primary site) Powering off the external storage system for the quorum disks (I/O continues at the primary site) Powering on the external storage system for the quorum disk (I/O continues at the primary site)..162 Planned power off/on of the external storage system (I/O continues at the secondary site) Powering off the external storage system for the quorum disk (I/O continues at the secondary site) Powering on the external storage system for the quorum disk (I/O continues at the secondary site) Planned power off/on of the primary and secondary storage systems Powering off the primary and secondary storage systems Powering on the primary and secondary storage systems Planned power off/on of the primary and external storage systems Powering off the primary and external storage systems Powering on the primary and external storage systems Planned power off/on of the secondary and external storage systems Powering off the secondary and external storage systems Powering on the secondary and external storage systems Contents

7 Planned power off/on of all HA storage systems Powering off the primary, secondary, and external storage systems Powering on the primary, secondary, and external storage systems Data migration using High Availability Workflow for data migration Reusing volumes after data migration Reusing a volume that was an S-VOL Reusing a volume that was a P-VOL Troubleshooting General troubleshooting Troubleshooting related to remote path statuses Error codes and messages Troubleshooting for RAID Manager SIM reports of HA operations Procedure for recovering pinned track of an HA volume Support and other resources Contacting HP Related information Websites Typographic conventions Customer self repair Documentation feedback A Correspondence between GUI operations and CLI commands Correspondence between Remote Web Console operations and RAID Manager commands B Performing configuration operations using Remote Web Console Defining the attribute for a Fibre-Channel port Adding a remote connection Determining the Round Trip Time Adding the quorum disk Assigning the HA reservation attribute C Performing pair operations using Remote Web Console Types of pair operations Creating HA pairs Suspending HA pairs Resynchronizing HA pairs Deleting HA pairs Deleting HA pairs (Normal deletion) Forcibly deleting HA pairs (for paired volumes) Forcibly deleting HA pairs(for nonpaired volumes) D Performing monitoring operations using Remote Web Console Virtual storage machines displayed in Remote Web Console Checking the status of an HA pair Checking the detailed status of an HA pair Checking the synchronous rate of an HA pair Checking the operation history of HA pairs Messages displayed in Description of the Histories window Checking the licensed capacity Monitoring copy operation and I/O statistics Checking the remote connection status Checking the detailed status of remote connections and paths Contents 7

8 E Changing settings using Remote Web Console Editing remote replica options Maximum initial copy activities Removing quorum disks Editing remote connection options Adding remote paths Removing remote paths Removing remote connections Releasing the HA reservation attribute F Remote Web Console GUI reference for HA Replication window Remote Replication window Remote Connections window View Pair Synchronous Rate window View Pair Properties window View Remote Connection Properties window Histories window Add Remote Connection wizard Add Remote Connection window Confirm window Add Quorum Disks wizard Add Quorum Disks window Confirm window Assign HA Reserves window Create HA Pairs wizard Create HA Pairs window Change Settings window Confirm window Suspend Pairs window Resync Pairs wizard Resync Pairs window Confirm window Delete Pairs wizard Delete Pairs window Confirm window Edit Remote Replica Options wizard Edit Remote Replica Options window Confirm window Remove Quorum Disks window Force Delete Pairs (HA Pairs) window Edit Remote Connection Options wizard Edit Remote Connection Options window Confirm window Add Remote Paths wizard Add Remote Paths window Confirm window Remove Remote Paths wizard Remove Remote Paths window Confirm window Remove Remote Connections window Release HA Reserved window G Regulatory information Belarus Kazakhstan Russia marking Turkey RoHS material content declaration Contents

9 Ukraine RoHS material content declaration Warranty information Index Contents 9

10 1 Overview of High Availability Abstract This chapter provides an overview of the High Availability feature of the HP XP7 Storage system. About High Availability High Availability (HA) enables you to create and maintain synchronous, remote copies of data volumes on the HP XP7 Storage (HP XP7) system. A virtual storage machine is configured in the primary and secondary storage systems using the actual information of the primary system, and the High Availability primary and secondary volumes are assigned the same virtual LDEV number in the virtual storage machine. Because of this, the pair volumes are seen by the host as a single volume on a single storage system, and both volumes receive the same data from the host. A quorum disk located in a third and external storage system is used to monitor the HA pair volumes. The quorum disk acts as a heartbeat for the HA pair, with both storage systems accessing the quorum disk to check on each other. A communication failure between systems results in a series of checks with the quorum disk to identify the problem for the system able to receive host updates. Alternate path software on the host runs in the Active/Active configuration. While this configuration works well at campus distances, at metro distances Hitachi Dynamic Link Manager (HDLM) is required to support preferred/nonpreferred paths and ensure that the shortest path is used. If the host cannot access the primary volume (P-VOL) or secondary volume (S-VOL), host I/O is redirected by the alternate path software to the appropriate volume without any impact to the host applications. High Availability provides the following benefits: Continuous server I/O when a failure prevents access to a data volume Server failover and failback without storage impact Load balancing through migration of virtual storage machines without storage impact 10 Overview of High Availability

11 Related topics High Availability solutions (page 11) High Availability solutions Fault-tolerant storage infrastructure If a failure prevents host access to a volume in an HA pair, read and write I/O can continue to the pair volume in the other storage system, as shown in the following illustration, to provide continuous server I/O to the data volume. Failover clustering without storage impact In a server-cluster configuration with High Availability, the cluster software is used to perform server failover and failback operations, and the High Availability pairs do not need to be suspended or resynchronized. High Availability solutions 11

12 Server load balancing without storage impact When the I/O load on a virtual storage machine at the primary site increases, as shown below, High Availability enables you to migrate the virtual machine to the paired server without performing any operations on the storage systems. 12 Overview of High Availability

13 As shown in this example, the server virtualization function is used to migrate virtual machine VM 3 from the primary-site server to the secondary-site server. Because the HA primary and secondary volumes contain the same data, you do not need to migrate any data between the storage systems. System configurations for HA solutions The system configuration depends on the HA solution that you are implementing. The following table lists the HA solutions and specifies the system configuration for each solution.

HA solution | Alternate path software | Cluster software | System configuration
Continuous server I/O (if a failure occurs in a storage system) | Required | Not required | Single-server configuration
Failover and failback on the servers without using the storage systems | Not required | Required | Server-cluster configuration
Migration of a virtual machine of a server without using the storage systems | Not required | Required | Server-cluster configuration
Both of the following: Continuous server I/O (if a failure occurs in a storage system); Migration of a virtual storage machine of a server without using the storage systems | Required | Required | Cross-path configuration

High Availability solutions 13

14 Single-server configuration In a single-server configuration, the primary and secondary storage systems connect to the host server at the primary site. If a failure occurs in one storage system, you can use alternate path software to switch server I/O to the other site. Server-cluster configuration In a server-cluster configuration, servers are located at both the primary and secondary sites. The primary system connects to the primary-site server, and the secondary system connects to the secondary-site server. The cluster software is used for failover and failback. When I/O on the virtual machine of one server increases, you can migrate the virtual machine to the paired server to balance the load. 14 Overview of High Availability

15 Cross-path configuration In a cross-path configuration, primary-site and secondary-site servers are connected to both the primary and secondary storage systems. If a failure occurs in one storage system, alternate path software is used to switch server I/O to the paired site. The cluster software is used for failover and failback. HA and multi-array virtualization HA operations are based on the multi-array virtualization function. When virtual information is sent to the server in response to the SCSI Inquiry command, the server views multiple storage systems as multiple paths to a single storage array. The multi-array virtualization function is enabled when you install the license for Resource Partition. For more information about Resource Partition, see the HP XP7 Provisioning for Open Systems User Guide. Related topics About the virtual ID (page 15) About the virtual ID The server is able to identify multiple storage systems as a single virtual storage machine when the resources listed below are virtualized and the virtual identification (virtual ID) information is set. You can set virtual IDs on resource groups and on individual volumes, as described in the following table.

Virtual information required by the server | Resource on which virtual IDs are set
Serial number | Resource group
Product | Resource group
LDEV ID* | Volume
Emulation type | Volume
Number of concatenated LUs of LUN Expansion (LUSE) | Volume

HA and multi-array virtualization 15

16 Virtual information required by the server | Resource on which virtual IDs are set
SSID | Volume

*A volume whose virtual LDEV ID has been deleted cannot accept I/O from a server. The virtual LDEV ID is temporarily deleted on a volume to be used as an HA S-VOL because, when the pair is created, the P-VOL's physical LDEV ID is set as the S-VOL's virtual LDEV ID. When using multi-array virtualization you can set the following:
The same serial number or product as the virtual ID for more than one resource group
Up to eight types of virtual IDs for resource groups in a single storage system
Virtual IDs for a maximum of 1,023 resource groups (excluding resource group #0)
Virtual IDs for a maximum of 65,279 volumes
For instructions on setting virtual IDs, see the HP XP7 RAID Manager Reference Guide. Related topics HA and multi-array virtualization (page 15) Monitoring HA status HA status HA operations are managed based on the following information: Pair status I/O mode of the P-VOL and S-VOL HA status, which is a combination of pair status and I/O mode Related topics HA status (page 16) HA status transitions (page 17) I/O modes (page 18) Relationship between HA status, pair status, and I/O mode (page 19) High Availability and server I/O (page 19) The following table lists and describes the HA statuses.

HA status | Description | Data redundancy | Updated volume | Volume with latest data*
Simplex | The volume is not a pair volume. | No | Not applicable | Not applicable
Mirroring | The pair is changing to Mirrored status. This status is issued when you do the following: Prepare a quorum disk. Copy data from the P-VOL to the S-VOL. | No | P-VOL and S-VOL | P-VOL
Mirrored | The pair is operating normally. | Yes | P-VOL and S-VOL | P-VOL and S-VOL
Suspended | The pair is suspended. I/O from the server is sent to the volume with the latest data. When a failure occurs or the pair is suspended, the status changes to Suspended. | No | P-VOL or S-VOL | P-VOL or S-VOL

16 Overview of High Availability

17 HA status | Description | Data redundancy | Updated volume | Volume with latest data*
Blocked | I/O is not accepted by either pair volume. This status occurs when: A failure occurs in the primary or secondary storage system, and I/O to the volume in the paired system is also stopped. If more than one failure occurs at the same time, the HA status changes to Blocked. | No | None | P-VOL and S-VOL (Both the P-VOL and S-VOL have the latest data. If the pair is forcibly deleted, I/O can be restarted in either of the volumes.)

* For details on how to determine which volume has the latest data, see Relationship between HA status, pair status, and I/O mode (page 19). Related topics Monitoring HA status (page 16) HA status transitions The HA status changes depending on the pair operation and failure. The following illustration shows the HA pair status transitions: If you resynchronize a pair specifying the P-VOL, I/O continues on the P-VOL. If you resynchronize a pair specifying the S-VOL, data flow switches from the S-VOL to the P-VOL, and then I/O continues on the new P-VOL. If you suspend a pair specifying the P-VOL, I/O continues to the P-VOL. If you suspend a pair specifying the S-VOL, I/O continues to the S-VOL. Monitoring HA status 17
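For scripting status checks, the HA status summary above can be captured in a small lookup, as in the following minimal Python sketch; the statuses and attribute values come from the preceding table, while the data structure and function name are illustrative only and not part of the product.

```python
# Illustrative mapping of HA statuses to the attributes listed in the table above.
HA_STATUS = {
    "Simplex":   {"redundant": False, "updated": None,              "latest": None},
    "Mirroring": {"redundant": False, "updated": "P-VOL and S-VOL", "latest": "P-VOL"},
    "Mirrored":  {"redundant": True,  "updated": "P-VOL and S-VOL", "latest": "P-VOL and S-VOL"},
    "Suspended": {"redundant": False, "updated": "P-VOL or S-VOL",  "latest": "P-VOL or S-VOL"},
    "Blocked":   {"redundant": False, "updated": None,              "latest": "P-VOL and S-VOL"},
}

def is_data_redundant(ha_status: str) -> bool:
    """Return True only when both pair volumes are being updated (Mirrored)."""
    return HA_STATUS[ha_status]["redundant"]

print(is_data_redundant("Mirrored"))      # True
print(HA_STATUS["Suspended"]["latest"])   # P-VOL or S-VOL
```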

18 Pair status The following table lists and describes the pair statuses, which indicate the current state of an HA pair. As shown in the table, the pair statuses displayed by RAID Manager and Remote Web Console are slightly different.

RAID Manager | RWC | Description
SMPL | SMPL | The volume is not paired.
COPY | INIT/COPY | The initial copy or pair resynchronization is in progress (including creation of an HA pair that does not perform data copy). A quorum disk is being prepared.
COPY | COPY | The initial copy is in progress; data is being copied from the P-VOL to the S-VOL (including creation of an HA pair that does not perform data copy).
PAIR | PAIR | The pair is synchronized.
PSUS | PSUS | The pair was suspended by the user. This status appears on the P-VOL.
PSUE | PSUE | The pair was suspended due to a failure.
SSUS | SSUS | The pair was suspended by the user, and update of the S-VOL is interrupted. This status appears on the S-VOL.
SSWS | SSWS | The pair was suspended either by the user or due to a failure, and update of the P-VOL is interrupted. This status appears on the S-VOL.

Related topics Monitoring HA status (page 16) I/O modes The following table lists and describes the three types of HA I/O modes, which represent the I/O actions on the P-VOL and the S-VOL of an HA pair. As shown in the following table, the I/O modes displayed by RAID Manager and Remote Web Console are slightly different.

I/O mode | RAID Manager 1 | RWC | Read processing | Write processing
Mirror (RL) | L/M | Mirror (Read Local) | Sends data from the storage system that received a read request to the server. | Writes data to the P-VOL and then the S-VOL.
Local | L/L | Local | Sends data from the storage system that received a read request to the server. | Writes data to the volume on the storage system that received a write request.
Block 2 | B/B | Block | Rejected (Replies to illegal requests). | Rejected (Replies to illegal requests).

Notes: 1. In RAID Manager, the I/O mode is displayed as <read processing>/<write processing> in which L indicates Local, M indicates Mirror, and B indicates Block (for example, L/L indicates Local read processing and Local write processing). 2. For volumes whose I/O mode is Block, a response indicating that the LU is undefined is returned to the Report LUN and Inquiry commands. Therefore, servers cannot identify a volume whose I/O mode is Block, or the path of this volume is blocked. 18 Overview of High Availability
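The RAID Manager notation described in note 1 can also be decoded programmatically when parsing pair displays; the following minimal Python sketch is illustrative only (the function name and structure are assumptions, not part of RAID Manager) and simply expands an I/O mode string such as L/M into its read and write processing values.

```python
# Decode the RAID Manager I/O mode notation from note 1 above:
# <read processing>/<write processing>, where L = Local, M = Mirror, B = Block.
CODES = {"L": "Local", "M": "Mirror", "B": "Block"}

def decode_io_mode(mode: str) -> tuple[str, str]:
    """Return (read_processing, write_processing) for a string such as 'L/M'."""
    read_code, write_code = mode.strip().upper().split("/")
    return CODES[read_code], CODES[write_code]

# Example: a Mirror (RL) volume shows L/M -- Local read processing, Mirror write processing.
print(decode_io_mode("L/M"))  # ('Local', 'Mirror')
print(decode_io_mode("B/B"))  # ('Block', 'Block')
```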

19 Related topics Monitoring HA status (page 16) Relationship between HA status, pair status, and I/O mode The following table lists the HA statuses and describes the relationship between the HA status, pair status, and I/O mode. "N" indicates that pair status or I/O mode cannot be identified due to a failure in the storage system.

HA status | When to suspend | P-VOL pair status | P-VOL I/O mode | S-VOL pair status | S-VOL I/O mode | Volume that has the latest data
Simplex | Not applicable | SMPL | Not applicable | SMPL | Not applicable | Not applicable
Mirroring | Not applicable | INIT | Mirror (RL) | INIT | Block | P-VOL
Mirroring | Not applicable | COPY | Mirror (RL) | COPY | Block | P-VOL
Mirrored | Not applicable | PAIR | Mirror (RL) | PAIR | Mirror (RL) | P-VOL and S-VOL
Suspended | Pair operation | PSUS | Local | SSUS | Block | P-VOL
Suspended | Failure | PSUE* | Local | PSUE | Block | P-VOL
Suspended | Failure | PSUE* | Local | SMPL | -- | P-VOL
Suspended | Failure | PSUE* | Local | N | N | P-VOL
Suspended | Pair operation | PSUS | Block | SSWS | Local | S-VOL
Suspended | Failure | PSUE | Block | SSWS* | Local | S-VOL
Suspended | Failure | SMPL | -- | SSWS* | Local | S-VOL
Suspended | Failure | N | N | SSWS* | Local | S-VOL
Blocked | Not applicable | PSUE | Block | PSUE | Block | P-VOL and S-VOL
Blocked | Not applicable | PSUE | Block | N | N | P-VOL and S-VOL
Blocked | Not applicable | N | N | PSUE | Block | P-VOL and S-VOL

*If the server does not issue the write I/O, the pair status might be PAIR, depending on the failure location. Related topics Monitoring HA status (page 16) High Availability and server I/O (page 19) High Availability and server I/O I/O requests from the server to an HA pair volume are managed according to the volume's I/O mode. The HA status determines the I/O mode of the P-VOL and S-VOL of a pair. This topic provides a detailed description of read and write I/O processing for each HA status. Related topics Server I/O when the HA status is Mirrored (page 20) Server I/O when the HA status is Mirroring (page 20) Server I/O when the HA status is Suspended (page 21) High Availability and server I/O 19

20 Server I/O when the HA status is Blocked (page 22) Monitoring HA status (page 16) Server I/O when the HA status is Mirrored When the HA status is Mirrored, the I/O mode of the P-VOL and S-VOL is Mirror (RL). As shown in the following figure, a write request sent to an HA volume is written to both pair volumes, and then a write-completed response is returned to the host. Read requests are read from the volume connected to the server and then sent to the server. There is no communication between the primary and secondary storage systems. Related topics Monitoring HA status (page 16) High Availability and server I/O (page 19) Server I/O when the HA status is Mirroring When the HA status is Mirroring, the I/O mode for the P-VOL is Mirror (RL), and the I/O mode for the S-VOL is Block. Write requests are written to both pair volumes, and then the write-completed response is returned to the server. Because the S-VOL's I/O mode is Block, it does not accept I/O from the server, but the data written to the P-VOL is also written to the S-VOL by the primary system, as shown in the following figure. 20 Overview of High Availability

21 Read requests are read by the P-VOL and then sent to the host. There is no communication between the primary and secondary systems. Related topics Monitoring HA status (page 16) High Availability and server I/O (page 19) Server I/O when the HA status is Suspended When the HA status is Suspended and the latest data is on the P-VOL, the I/O mode is as follows: P-VOL: Local S-VOL: Block When the latest data is on the S-VOL, the I/O mode is as follows: P-VOL: Block S-VOL: Local When the latest data is on the P-VOL, write requests are written to the P-VOL, and then the write-completed response is returned to the host, as shown in the following figure. The S-VOL's I/O mode is Block, so it does not accept I/O from the server, and the P-VOL's I/O mode is Local, so the data written to the P-VOL is not written to the S-VOL. High Availability and server I/O 21

22 Read requests are read by the P-VOL and then sent to the host. There is no communication between the primary and secondary systems. Related topics Monitoring HA status (page 16) High Availability and server I/O (page 19) Server I/O when the HA status is Blocked When the HA status is Blocked, the I/O mode of the P-VOL and S-VOL is Block. Neither volume accepts read/write processing. Related topics Monitoring HA status (page 16) High Availability and server I/O (page 19) Quorum disk and server I/O The quorum disk is a volume virtualized from an external storage system. The quorum disk is used to determine the storage system on which server I/O should continue when a path or storage system failure occurs. The primary and secondary systems check the quorum disk every 500 ms for the physical path statuses. When the primary and secondary systems cannot communicate, the storage systems take the following actions: 22 Overview of High Availability

23 1. The primary system cannot communicate over the data path and writes this status to the quorum disk.
2. When the secondary system detects from the quorum disk that a path failure has occurred, it stops accepting read/write.
3. The secondary system communicates to the quorum disk that it cannot accept read/write.
4. When the primary system detects that the secondary system cannot accept read/write, the primary system suspends the pair. Read/write continues to the primary storage system.
If the primary system cannot detect from the quorum disk that the secondary system cannot accept I/O within five seconds of a communication stoppage, the primary system suspends the pair and I/O continues. If both systems simultaneously write to the quorum disk that communication has stopped, the communication stoppage is treated as having been written by the system with the smaller serial number. Related topics I/O stoppage detected in the counterpart system (page 23) I/O stoppage not detected in the counterpart system (page 24) I/O stoppage detected in the counterpart system When a stoppage is detected within 5 seconds in the counterpart system, the pair volume that will continue to receive read/write after the stoppage is determined based on the pair status:
When the pair status is PAIR, read/write continues to the volume that wrote the communication stoppage to the quorum disk.
When the pair status is INIT/COPY, read/write continues to the P-VOL. Read/write to the S-VOL remains stopped.
When the pair status is PSUS, PSUE, SSWS, or SSUS, read/write continues to the volume whose I/O mode is Local. Read/write is stopped to the volume whose I/O mode is Block. Quorum disk and server I/O 23
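As a conceptual aid, the arbitration rules above (for the case where the counterpart's I/O stoppage is detected within 5 seconds) can be modeled in a few lines of Python; this is a minimal sketch of the documented behavior, not the product's internal implementation, and the function name and status strings are illustrative.

```python
# Model of the "I/O stoppage detected in the counterpart system" rules above.
# Returns which volume keeps accepting read/write after a data-path failure.
def surviving_volume(pair_status, wrote_stoppage_first, local_io_mode_volume=None):
    """
    pair_status: RAID Manager pair status (PAIR, INIT/COPY, PSUS, PSUE, SSWS, SSUS).
    wrote_stoppage_first: 'P-VOL' or 'S-VOL' -- the side whose system wrote the
        communication stoppage to the quorum disk (ties are attributed to the system
        with the smaller serial number, per the text above).
    local_io_mode_volume: for suspended statuses, the volume whose I/O mode is Local.
    """
    if pair_status == "PAIR":
        return wrote_stoppage_first
    if pair_status == "INIT/COPY":
        return "P-VOL"               # read/write to the S-VOL remains stopped
    if pair_status in ("PSUS", "PSUE", "SSWS", "SSUS"):
        return local_io_mode_volume  # the Block-mode volume stays stopped
    raise ValueError(f"unexpected pair status: {pair_status}")

print(surviving_volume("PAIR", wrote_stoppage_first="P-VOL"))            # P-VOL
print(surviving_volume("PSUE", "P-VOL", local_io_mode_volume="S-VOL"))   # S-VOL
```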

24 Related topics Quorum disk and server I/O (page 22) I/O stoppage not detected in the counterpart system When a stoppage is not detected within 5 seconds in the counterpart system, the pair volume whose system wrote the communication stoppage to the quorum disk will continue to receive read/write after the stoppage. Read/write processing depends on the pair status and I/O mode of the volume that did not detect the write.
When the pair status is PAIR, read/write continues.
When the pair status is INIT/COPY, read/write continues to the P-VOL. Read/write to the S-VOL remains stopped.
When the pair status is PSUS, PSUE, SSWS, or SSUS, read/write continues to the volume whose I/O mode is Local. Read/write is stopped to the volume whose I/O mode is Block.
In addition, server I/O does not continue to the volume that should have notified the quorum disk, but did not, that it cannot accept I/O, because either a storage system failure occurred or the quorum disk is no longer accessible. Related topics Quorum disk and server I/O (page 22) HA pair operations The HA pair operations are:
Pair creation: copies the data in a volume in the primary system to a volume in the secondary system. Before a pair is created, the HA reservation attribute must be applied to the volume that will become the S-VOL. WARNING! Pair creation is a destructive operation. When a pair is created, the data in the S-VOL is overwritten by the data in the P-VOL. Before you create a pair, you are responsible for backing up the data in a volume that will become an S-VOL as needed.
Pair suspension: stops write data from being copied to the S-VOL. When you suspend a pair, you can specify the volume (P-VOL or S-VOL) that will receive update data from the host while the pair is suspended. If you specify the S-VOL, the data written to the S-VOL while the pair is suspended will be copied to the P-VOL when the pair is resynchronized.
Pair resynchronization: updates the S-VOL (or P-VOL) by copying the differential data accumulated since the pair was suspended. The volume that was not receiving update data while the pair was suspended is resynchronized with the volume that was receiving update data. When resynchronization completes, the host can read from and write directly to the P-VOL or the S-VOL.
Pair deletion: deletes the pair relationship between the P-VOL and the S-VOL. The data in each volume is not affected. When you delete a pair, you can specify the volume (P-VOL or S-VOL) 24 Overview of High Availability

25 that will receive update data from the host after the pair is deleted. The virtual LDEV ID of the unspecified volume is deleted, and the HA reservation attribute is set for the specified volume. The following table specifies the required conditions for the volume that will continue to receive update data from the host after pair deletion.

Volume to receive I/O after pair deletion | Required conditions
P-VOL | Pair status: PSUS or PSUE; I/O mode: Local
S-VOL | Pair status: SSWS; I/O mode: Local

Initial copy and differential copy There are two types of HA copy operations that synchronize the data on the P-VOL and S-VOL of a pair:
Initial copy: all data in the P-VOL is copied to the S-VOL, which ensures that the data in the two volumes is consistent. The initial copy is executed when the HA status changes from Simplex to Mirrored.
Differential copy: only the differential data between the P-VOL and the S-VOL is copied. Differential copy is used when the HA status changes from Suspended to Mirrored. When an HA pair is suspended, the storage systems record the update locations and manage the differential data.
The following figure shows the differential copy operation for a pair in which the P-VOL received server I/O while the pair was suspended. If the S-VOL receives server I/O while a pair is suspended, the differential data is copied from the S-VOL to the P-VOL. HA components The following illustration shows the components of a typical High Availability system. Initial copy and differential copy 25

26 Storage systems An HP XP7 is required at the primary site and at the secondary site. An external storage system for the quorum disk, which is connected to the primary and secondary storage systems using External Storage, is also required. Paired volumes A High Availability pair consists of a P-VOL in the primary system and an S-VOL in the secondary system. Quorum disk The quorum disk, required for High Availability, is used to determine the storage system on which server I/O should continue when a storage system or path failure occurs. The quorum disk is virtualized from an external storage system that is connected to both the primary and secondary storage systems. Virtual storage machine A virtual storage system is configured in the secondary system with the same model and serial number as the (actual) primary system. The servers treat the virtual storage machine and the storage system at the primary site as one virtual storage machine. Paths and ports HA operations are carried out between hosts and primary and secondary storage systems connected by Fibre-Channel data paths composed of one or more Fibre-Channel physical links. The data path, also referred to as the remote connection, connects ports on the primary system to ports on the secondary system. The ports are assigned attributes that allow them to send and receive 26 Overview of High Availability

27 data. One data path connection is required, but two or more independent connections are recommended for hardware redundancy. Alternate path software Alternate path software is used to set redundant paths from servers to volumes and to distribute host workload evenly across the data paths. Alternate path software is required for the single-server and cross-path HA system configurations. Cluster software Cluster software is used to configure a system with multiple servers and to switch operations to another server when a server failure occurs. Cluster software is required when two servers are in an HA server-cluster system configuration. User interfaces for High Availability operations High Availability operations are performed using the GUI software and the CLI software for the HP XP7 Storage: HP XP7 Command View Advanced Edition RAID Manager HP XP7 Command View Advanced Edition The HP XP7 Command View Advanced Edition (HP XP7 CVAE) GUI software enables you to configure and manage HA pairs and monitor and manage your High Availability environment. When one Command View AE Device Manager server manages both High Availability storage systems, you can access all required functions for your HA setup from the HP XP7 CVAE High Availability window. Disaster recovery procedures are performed using RAID Manager and cannot be performed using HP XP7 Command View Advanced Edition. RAID Manager The RAID Manager command-line interface (CLI) software can be used to configure the High Availability environment and create and manage HA pairs. Disaster recovery procedures are performed using RAID Manager and cannot be performed using HP XP7 Command View Advanced Edition. Configuration workflow for High Availability The following table lists the High Availability configuration tasks and indicates the location of the GUI and CLI instructions for the tasks. For instructions on using HP XP7 CVAE, see the HP XP7 Command View Advanced Edition User Guide. The references to HP XP7 Remote Web Console information not covered in this guide (shown below as "Section on...") are found in the HP XP7 Remote Web Console User Guide. Configuration task Operation target RAID Manager HP XP7 CVAE Installing High Availability Primary, secondary systems Not available. Section on installing a software application Creating command devices Primary, secondary systems Creating the command devices (page 79) Section on configuring pair management servers User interfaces for High Availability operations 27

28 Configuration task Operation target RAID Manager HP XP7 CVAE Creating and executing RAID Manager configuration definition files Server. (With HP XP7 CVAE, this is the pair management server.) Creating the configuration definition files (page 80) Section on monitoring and managing High Availability pairs Connecting primary and secondary systems Changing port attributes Adding remote connections Primary, secondary systems Primary, secondary systems Setting the port attributes (page 82) Adding remote connections (page 83) Section on setting up a High Availability environment Creating the quorum disk Changing the port attribute to External Primary, secondary systems Creating the quorum disk (page 85) > Setting the port attributes for connecting the external storage system (page 85) Mapping the external volume Primary, secondary systems Creating external volume groups (page 87) Setting the quorum disk Primary, secondary systems Setting external volumes as quorum disks (page 90) Setting up the secondary system Creating a VSM Secondary system Setting up the secondary system (page 93) Setting the reservation attribute Secondary system Setting the reservation attribute on the S-VOL (page 99) Section on allocating volumes Adding an LU path to the S-VOL Secondary system Adding an LU path to the S-VOL (page 105) Updating RAID Manager configuration definition files Server Editing RAID Manager configuration definition files (page 107) Section on monitoring and managing High Availability pairs Creating HA pair Primary system Creating the HA pair (page 108) Section on allocating High Availability pairs Adding alternate path to the S-VOL Server Adding an alternate path to the S-VOL (page 111) Section on optimizing HBA configurations 28 Overview of High Availability

29 2 Planning for High Availability Abstract This chapter provides information for planning the primary and secondary systems, pair volumes, the quorum disk, and data paths. Storage system preparation The following provides requirements, recommendations, and restrictions for HA storage systems. The primary and secondary systems must be HP XP7. No other storage model can be used. All RAID Manager setup must be complete. For more information, see HP XP7 RAID Manager Installation and Configuration User Guide. Remote Web Console must be connected to the primary and secondary storage systems. For more information, see HP XP7 Remote Web Console User Guide. When determining the amount of cache required for HA, make sure to consider the amount of the Cache Residency data that will also be stored in the cache. Make sure that the primary system is configured to report sense information to the host. The secondary system should also be attached to a host server to report sense information in the event of a problem with an S-VOL or the secondary system itself. If power sequence control cables are used, set the power source selection switch for the cluster to "Local" to prevent the server from cutting the power supply to the primary system. In addition, make sure that the secondary system is not powered off during HA operations. Establish physical paths between the primary and secondary systems. Switches and channel extenders can be used. Related topics Cache and additional shared memory (page 59) System option modes (page 59) Cache and additional shared memory Additional shared memory must be installed and configured in both primary and secondary systems. Make sure that cache in both systems works normally. Pairs cannot be created if cache requires maintenance. Configure secondary system cache so that it can adequately support remote copy workloads and all local workload activity. If an HA pair is in COPY status, you cannot install or uninstall cache and shared memory. When either of these tasks is to be performed, split the pairs in COPY status, perform and complete the cache or shared memory operation, then resynchronize the pairs. Related topics Storage system preparation (page 59) System option modes You can customize HP XP7 storage systems to enable options that were not set at the factory. System option modes are preset to default values at installation, but you can have them changed by your HP representative. System option modes related to HA are shown in the following table. Storage system preparation 29

30 Note that the system option modes for HA are the same as the system option modes for Continuous Access Synchronous. Mode Description Allows you to suppress initial copy operations when the write-pending level to the MP blade of the S-VOL is 60% or higher. ON: The initial copy is suppressed. OFF: The initial copy is not suppressed. Allows you to reduce RIO MIH time to five seconds. As a result, after a remote path error, less time elapses until the operation is retried on an alternate path. (Both RIO MIH time and the Abort Sequence timeout value are combined for this retry time.) ON: Reduces the RIO MIH time to five seconds. Combined with the Abort Sequence timeout value, the total amount of time that elapses before the operation is retried on another path is a maximum of 10-seconds. OFF: The RIO MIH time that you specified when the secondary system was registered is used with the specified Abort Sequence timeout value. The default is 15 seconds. If the RIO timeout time and the ABTS timeout time elapse, an attempt is made to retry RIO in the alternative path. Related topics Storage system preparation (page 59) Planning system performance Remote copy operations can affect I/O performance of host servers and the primary and secondary storage systems. You can minimize the effects of remote copy operations and maximize efficiency and speed by changing your remote connection options and remote replica options. HP technical support can help you analyze your operation's write-workload and optimize copy operations. Using workload data (MB/s and IOPS), you determine the amount of bandwidth, the number of physical paths, and number of ports your HA system requires. When these are properly determined and sized, the data path operates free of bottlenecks under all workload levels. Related topics Hitachi Dynamic Link Manager (page 60) Hitachi Dynamic Link Manager Hitachi Dynamic Link Manager (HDLM) allows you to specify alternative paths to be used for normal High Availability operations. Other paths are used when failures occur in all paths (including alternative paths) that should be used for normal operations. Host mode option 78, the non-preferred path option, must be configured to specify non-preferred paths, which are used when failures occur. For example, if servers and storage systems are connected in a cross-path configuration, I/O response is prolonged because the primary site server is distant from the secondary system, and the secondary site server is distant from the primary system. Normally in this case you use paths between the primary server and primary system and paths between the secondary server and secondary system. If a failure occurs in a path used in normal circumstances, you will use the paths between the primary server and secondary system, and paths between the secondary server and primary system. 30 Planning for High Availability

31 After you incorporate the HP XP7 settings into HDLM, the attribute of an HDLM path for which host mode option 78 is set changes to non-owner path. If host mode option 78 is not set for a path, the HDLM path attribute changes to owner path. Related topics Planning system performance (page 60) Planning physical paths Bandwidth When configuring physical paths to connect storage systems in the primary and secondary sites, make sure that the physical paths can handle all the data that could be transferred to the primary and secondary volumes under all circumstances. Related topics Bandwidth (page 61) Fibre-Channel connections (page 61) Connection types (page 62) The amount of bandwidth you have must be able to handle data transfers at all workload levels. The amount of necessary bandwidth depends on the amount of I/O to be sent from servers to primary volumes. To identify the required bandwidth, you must measure the write workload of the system. Use performance-monitoring software to collect the workload data. Planning physical paths 31
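As a simple illustration of sizing from measured write workload, the following Python sketch converts a peak write workload into a minimum data-path bandwidth; the bits-per-byte conversion and the safety-margin parameter are assumptions made for the example, not values taken from this guide.

```python
# Rough bandwidth sizing from measured peak write workload (illustrative only).
def required_bandwidth_mbps(peak_write_mb_per_s: float, margin: float = 1.5) -> float:
    """
    peak_write_mb_per_s: highest write workload measured with performance-monitoring
        software (MB/s across all volumes to be paired).
    margin: headroom factor so the path is not saturated at peak (assumed value).
    Returns the minimum data-path bandwidth in megabits per second.
    """
    return peak_write_mb_per_s * 8 * margin  # 8 bits per byte

# Example: 120 MB/s of peak writes needs roughly 1440 Mbit/s of path bandwidth.
print(required_bandwidth_mbps(120.0))
```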

32 Related topics Planning physical paths (page 61) Fibre Channel connections Use shortwave (optical multi-mode) or longwave (optical single-mode) optical fibre cables to connect storage systems in the primary and secondary sites. The required cables and network relay devices differ depending on the distance between the primary and secondary systems, as explained below.

Distance between storage systems | Cable type | Network relay device
Up to 1.5 kilometers | Shortwave (optical multi-mode) | One or two switches are required if the distance is 0.5 to 1.5 kilometers.
1.5 to 10 kilometers | Longwave (optical single-mode) | Not required.
10 to 30 kilometers | Longwave (optical single-mode) | Up to two switches can be used.
30 kilometers or longer | Communication line | An authorized third-party channel extender is required.

No special settings are required for HP XP7 if switches are used in a Fibre Channel environment. Longwave (optical single-mode) cables can be used for direct connection at a maximum distance of 10 kilometers. The maximum distance that might result in the best performance differs depending on the link speed, as shown in the following table:

Link speed | Maximum distance for best performance
1 Gbps | 10 kilometers
2 Gbps | 6 kilometers
4 Gbps | 3 kilometers
8 Gbps | 2 kilometers

Related topics Planning physical paths (page 61) Connection types HA supports three types of connections: direct, switch, and channel extenders. Use LUN Manager to configure ports and topologies. Establish physical path connections in both directions, from the primary to the secondary system and from the secondary to the primary system. Related topics Planning physical paths (page 61) Direct connection (page 62) Connection using switches (page 63) Connection using channel extenders (page 64) Direct connection With a direct connection, the two storage systems are directly connected to each other. 32 Planning for High Availability

33 Set the ports and topology to Fabric OFF and FC-AL. You can use the following host mode options to improve response time of host I/O by improving response time between the storage systems for long distance direct connections (up to 10 kilometers LongWave) when the 16FC8 package is used. Host mode option 49 (BB Credit Set Up Option1) Host mode option 50 (BB Credit Set Up Option2) Host mode option 51 (Round Trip Set Up Option) When you set these host mode options, set the topology of the Initiator port and the RCU Target port to Fabric OFF and Point-to-Point. Related topics Connection types (page 62) Connection using switches With a switch connection, up to three optical fibre cables can be connected. A maximum of two switches can be used. Planning physical paths 33

34 Specify the topology as follows: NL_port: Fabric ON and FC-AL N_port: Fabric ON and Point-to-Point Switches from some vendors (for example, McData ED5000) require F_port. You can use the following host mode options to improve response time of host I/O by improving response time between the storage systems when switches are used for long distance connections (up to 100 kilometers) and the 16FC8 package is used. Host mode option 49 (BB Credit Set Up Option1) Host mode option 50 (BB Credit Set Up Option2) Host mode option 51 (Round Trip Set Up Option) When you set these host mode options, set the topology of the Initiator port and the RCU Target port to Fabric ON and Point-to-Point. Related topics Connection types (page 62) Connection using channel extenders Channel extenders and switches should be used for long distance connections. Specify the topology as follows: NL/FL_port: Fabric ON and FC-AL F_port: Fabric ON and Point-to-Point Related topics Connection types (page 62) Planning ports Data is transferred from Initiator ports in one storage system to RCU Target ports in the other system. 34 Planning for High Availability

35 Port attributes The amount of data sent to and from these ports is limited. That is why it is necessary to measure the amount of write workload your system will generate. When you identify peak write workload, which is the amount of data transferred during peak periods, you can determine the amount of bandwidth and number of Initiator and RCU Target ports required for your system. Related topics Port attributes (page 65) Ports on HP XP7 can have four attributes, as shown below. These port attributes are necessary for ports in the primary and secondary systems. Initiator ports, which send HA commands and data to the paired storage system. One initiator port can be connected to a maximum of 64 RCU Target ports. CAUTION: For the Fibre Channel interface, do not use the LUN Manager function for defining SCSI paths at the same time that you are adding or removing remote connections or adding remote paths. RCU Target ports, which receive HA commands and data. One RCU Target port can be connected to a maximum of 16 Initiator ports. The number of remote paths that can be specified does not depend on the number of ports. The number of remote paths can be specified for each remote connection. Target ports, which connect storage systems and servers. If a server issues a write request, the request is sent via a Target port on the storage system to an HP XP7 volume. External ports, which are configured and used by External Storage. HA uses these ports when connecting to external storage systems for quorum disks. Related topics Planning ports (page 64) Planning the quorum disk An external storage system must be prepared for the HA quorum disk. Related topics Installation of external storage system for quorum disks (page 65) Relationship between the quorum disk and remote connection (page 66) Response time from the external storage system for quorum disks (page 70) Installation of external storage system for quorum disks The external storage system can be installed in the following two locations: In a three-site configuration, the external storage system is installed in a third site away from the primary and secondary sites. I/O from servers continues if any failure occurs in the primary site, the secondary site, or the site where the external storage system is installed. Planning the quorum disk 35

36 In a two-site configuration, the external storage system is installed in the primary site. If a failure occurs in the secondary site, I/O from servers continues. However, if a failure occurs in the primary site, I/O from servers stops. You cannot install an external storage system for quorum disks in the secondary site. Related topics Planning the quorum disk (page 65) Relationship between the quorum disk and remote connection When you use multiple remote connections, we recommend that you prepare as many quorum disks as remote connections, so that a failure in a single remote connection does not suspend the HA pairs that are using the other, normal remote connections. At the same time, you must combine each quorum disk with one remote connection from the primary storage system to the secondary storage system and one remote connection from the secondary storage system to the primary storage system. 36 Planning for High Availability

37 TIP: If you manage a large number of HA pairs with a single quorum disk and more than 8 physical paths are required for the remote connection, you can configure the system so that one quorum disk is used for two or more remote connections. When all paths used in a remote connection are blocked, the HA pairs are suspended in units of quorum disks. Therefore, in a configuration like the one in the following figure, the HA pairs that are using remote connection 1 are suspended even if the failure occurred on remote connection 2. Likewise, when a failure occurs on the path from the volume in the primary site or the secondary site to the quorum disk, all HA pairs that are using the same quorum disk are suspended. Planning the quorum disk 37

38 Related topics Planning the quorum disk (page 65) Notes for the response time from the external storage system for quorum disks In the environment such as the response time from the external storage system for quorum disks is delayed for more than one second, HA pairs may be suspended by some failures. Monitor regularly the response time of the quorum disks using Performance Monitor from the primary storage system or secondary storage system. As a result of specifying External storage > Logical device > Response time (ms) on the monitoring objects, if the response time exceeds 100 ms, review the configuration from the following view points. Lower the I/O load, if the load of I/O of the volumes other than quorum disk is high in the external storage system. Remove factors of the high cache load, if the cache load is high in the external storage system. Lower the I/O load of the entire external storage system, when you do the maintenance of the external storage system. Alternatively, do the maintenance of the external storage system with the settings which will minimize the influences to the I/O in reference to the documents of the external storage system. Related topics Planning the quorum disk (page 65) Planning HA pairs and pair volumes This section describes how to calculate the maximum number of HA pairs, and the requirements of volumes which is used as a primary volume and a secondary volume depending on the HA configuration. Related topics Maximum number of HA pairs (page 70) When the S-VOL's resource group and storage system have the same serial number and model (page 72) Maximum number of HA pairs The maximum number of HA pairs in a storage system is 63,231. This number is calculated by subtracting the number of quorum disks (at least one) from the maximum number of virtual volumes (total number of THP V-VOLs plus external volumes: 63,232) that can be defined in a storage system. If RAID Manager is used in the In-band method, and if one virtual volume is used as a command device, the maximum number of HA pairs is 63,230. Note, however, that the maximum number of pairs in HP XP7 is subject to restrictions, such as the number of cylinders used in volumes or the number of bitmap areas used in volumes. In the calculation formulas below, "ceiling" is a function that rounds the value inside the parentheses up to the next integer. "floor" is a function that rounds the value inside the parentheses down to the next integer. Related topics Planning HA pairs and pair volumes (page 70) Calculating the number of cylinders (page 71) Calculating the number of bitmap areas (page 71) 38 Planning for High Availability

39 Calculating the number of available bitmap areas (page 71) Calculating the maximum number of pairs (page 72) Calculating the number of cylinders To calculate the number of cylinders, start by calculating the number of logical blocks, which indicates volume capacity measured in blocks. number-of-logical-blocks = volume-capacity (in bytes) / 512 Then use the following formula to calculate the number of cylinders: number-of-cylinders = ceiling(ceiling(number-of-logical-blocks / 512) / 15) Related topics Maximum number of HA pairs (page 70) Calculating the number of bitmap areas (page 71) Calculating the number of bitmap areas Calculate the number of bitmap areas using the number of cylinders. number-of-bitmap-areas = ceiling((number-of-cylinders * 15) / 122,752) 122,752 is the differential quantity per bitmap area. The unit is bits. NOTE: You must calculate the number of required bitmap areas for each volume. If you calculate the total number of cylinders in multiple volumes and then use this number to calculate the number of required bitmap areas, the calculation results might be incorrect. The following are examples of correct and incorrect calculations, assuming that one volume has 10,017 cylinders and another volume has 32,760 cylinders. Correct: ceiling((10,017 * 15) / 122,752) = 2 ceiling((32,760 * 15) / 122,752) = 5 The calculation result is seven bitmap areas in total. Incorrect: 10, ,760 = 42,777 cylinders ceiling((42,777 * 15) / 122,752) = 6 The calculation result is six bitmap areas in total. Related topics Maximum number of HA pairs (page 70) Calculating the number of cylinders (page 71) Calculating the number of available bitmap areas (page 71) Calculating the number of available bitmap areas The total number of bitmap areas available in the storage system is 65,536. The number of bitmap areas is shared by Continuous Access Synchronous, Continuous Access Synchronous Z, Continuous Access Journal, and Continuous Access Journal Z. If you use these software products, subtract the number of bitmap areas required for these products from the total number of bitmap areas in the storage system (65,536), and then use the formula in the next section to calculate the maximum number of HA pairs. For details on the methods for calculating Planning HA pairs and pair volumes 39

40 the number of bitmap areas required for these program products, refer to the appropriate user guide. Calculating the maximum number of pairs Use the following values to calculate the maximum number of pairs: The number of bitmap areas required for pair creation The total number of bitmap areas available in the storage system (that is, 65,536), or the number of available bitmap areas calculated in the previous section Calculate the maximum number of pairs using the following formula with the total number of bitmap areas in the storage system (or the number of available bitmap areas) and the number of required bitmap areas, as follows: maximum-number-of-pairs-that-can-be-created = floor(total-number-of-bitmap-areas-in-storage-system / number-of-required-bitmap-areas) Related topics Maximum number of HA pairs (page 70) Calculating the number of available bitmap areas (page 71) Notes for creation of HA pairs when the volume specified as an S-VOL is in a resource group whose serial number and model are same as the storage system You can create HA pairs specifying the volume in a resource group that has the same serial number and the same model as the storage system for an S-VOL. In this case, you must specify the volume in a resource group (virtual storage machine) whose serial number and model are same as the storage system in which the S-VOL resides for a P-VOL. When you create HA pairs, the virtual LDEV ID of the P-VOL is copied to the virtual LDEV ID of the S-VOL. In the following figure, the copied virtual LDEV ID of the P-VOL is equal to the original virtual LDEV ID of the S-VOL. The volume in a resource group that has the same serial number and the same model as the storage system and whose original LDEV ID is equal to the virtual LDEV ID will be treated as a normal volume but as a virtualized volume by the function of multi-array virtualization. As a result of copying a virtual information from the P-VOL to the S-VOL, when the requirement as a normal volume becomes not satisfied like the following examples, you cannot create HA pairs. The copied virtual SSID of the P-VOL is not corresponding to the original SSID of the S-VOL. The copied virtual emulation type of the P-VOL is not corresponding to the original emulation type of the S-VOL. The virtual emulation type includes the virtual CVS attribute (-CVS). Because the HP XP7 does not support the LUSE, the LUSE configuration (*n) volumes are not able to be specified as a P-VOL. 40 Planning for High Availability

41 3 System requirements

Abstract

This chapter provides the system requirements for High Availability (HA) operations.

Requirements and restrictions

The following table lists the requirements and restrictions for High Availability operations.

Item: Storage systems at the primary and secondary sites
Requirements and restrictions:
Model. HP XP7 Storage is required at the primary and secondary sites.
Microcode. Microcode (DKCMAIN) version x-00/00 or later is required on the primary and secondary systems.
High Availability. The HA feature must be installed and enabled on the primary and secondary systems.
Controller emulation type. The controller emulation type of the primary and secondary systems must be the same.
Shared memory. Additional shared memory is required in the primary and secondary storage systems.

Item: External storage systems (for quorum disk)
Requirements and restrictions:
The storage system must be supported for attachment to the HP XP7 using External Storage. For details, see the HP XP7 External Storage for Open and Mainframe Systems User Guide.
The maximum distance between the external storage system used for the quorum disk and the primary site and secondary site is 1,500 km.

Item: Licensed capacity
Requirements and restrictions:
The licensed capacity is restricted as follows: The page size assigned to the virtual volume is counted as licensed capacity for HA. If the actual licensed capacity exceeds the available licensed capacity, HA can be used as usual for 30 days. After 30 days, only pair split and pair delete operations are allowed.

Item: Host server platforms
Requirements and restrictions:
For supported OS versions, see the HP SPOCK website: storage/spock

Requirements and restrictions 41

42 Item: Physical paths connecting the primary and secondary storage systems
Requirements and restrictions:
A maximum of 8 physical paths is supported.
The ports that connect the storage systems at the primary site and the secondary site are Initiator ports and RCU Target ports.
The maximum distance between the primary and secondary storage systems is 1,000 km.
A Fibre-Channel interface is required.

Item: Remote paths and path groups
Requirements and restrictions:
A maximum of 8 remote paths can be registered in a path group.
A maximum of 64 path groups can be set in a storage system (sum of the path groups used by Continuous Access Synchronous, Continuous Access Journal, and Continuous Access Journal Z).
The range of values for the path group ID is
The path group is specified during the create pair operation and cannot be changed by resynchronization.
The remote path must be set for each path group of the storage systems at the primary site and the secondary site. You can also use multiple path groups with the same combination of storage systems at the primary and secondary sites.
It is recommended that you specify different paths and path groups for Continuous Access Synchronous, Continuous Access Journal, and Continuous Access Journal Z secondary systems when using CU Free.

Item: SCSI command
Requirements and restrictions:
The SCSI-2 Reserve command, the SCSI-3 Persistent Reserve command, and the VAAI command are supported. The reservation information is duplicated when the Reserve command or the Persistent Reserve command is received, or when the initial copy or resync copy starts.
The ALUA command is not supported.

Item: Virtual storage machine
Requirements and restrictions:
A maximum of 8 virtual storage machines can be configured in one storage system.
A maximum of 65,280 volumes can be configured in one virtual storage machine.
When a P-VOL is registered to a virtual storage machine, you must create a virtual storage machine in the secondary storage system that has the same model and serial number as the virtual storage machine to which the P-VOL is registered.

Item: Number of pairs
Requirements and restrictions:
A maximum of 63,231 HA pairs can exist in a virtual storage machine. This is the same maximum number of HA pairs allowed per HP XP7.

42 System requirements

43 Item: Pair volumes
Requirements and restrictions:
Both the primary volume and the secondary volume must be a virtual volume for Thin Provisioning.
The primary volume or the secondary volume must be in the virtual storage machine which has the same model and the same serial number as HP XP7.
The emulation type of the primary volume and secondary volume must be OPEN-V.
The primary volume and the secondary volume must be equal in size.
The maximum volume size is 4,194,304 MB (8,589,934,592 blocks).

Item: Quorum disk
Requirements and restrictions:
A maximum of 32 quorum disks per storage system can be configured at the primary or secondary site.
A maximum of 65,280 HA pairs can use one quorum disk.
The emulation type of the quorum disk must be OPEN-V.
The minimum size of a quorum disk is 12,292 MB (25,174,016 blocks).
The maximum size of a quorum disk is the maximum limit for an external volume supported by External Storage: 4 TB.
One external volume group must be mapped to one external volume.
Do not use an HA quorum disk as a quorum disk for the External Storage Access Manager function of the P9500 or XP24000/XP20000 Disk Array.

Item: Alternate path software
Requirements and restrictions:
Alternate path software is required for the single-server HA configuration and the cross-path HA configuration (two servers). For supported OS versions, see the HP SPOCK website: storage/spock

Item: Cluster software
Requirements and restrictions:
Cluster software is required for the server-cluster and cross-path HA configurations. For supported OS versions, see the HP SPOCK website: storage/spock

Item: User interface
Requirements and restrictions:
HP XP7 CVAE: version or later
RAID Manager: version or later
The RAID Manager command device is required on the primary and secondary storage systems.

Relationship between resource group and availability of HA pair creation

Whether an HA pair can be created depends on the combination of the resource group to which the primary volume belongs and the resource group to which the secondary volume belongs. Either the primary volume or the secondary volume must be assigned to the resource group which has the same model and the same serial number as the storage system in which the volume resides. Relationship between resource group and availability of HA pair creation 43

44 A volume in a resource group that was created when data was migrated from a model other than HP XP7 by Online Migration cannot be used as an HA pair volume.

Whether an HA pair can be created depends on the following combinations of the resource group in which the primary volume resides and the resource group in which the secondary volume resides.

Primary volume in the resource group which has the same model and the same serial number as the storage system at the primary site:
Secondary volume in the resource group which has the same model and the same serial number as the storage system at the secondary site: Not available
Secondary volume in a resource group (virtual storage machine) which has a different model and serial number from the storage system at the secondary site, where the model is HP XP7: Available
Secondary volume in a resource group (virtual storage machine) which has a different model and serial number from the storage system at the secondary site, where the model is other than HP XP7 (see note 1): Not available

Primary volume in a resource group (virtual storage machine) which has a different model and serial number from the storage system at the primary site, where the model is HP XP7:
Secondary volume in the resource group which has the same model and the same serial number as the storage system at the secondary site: Available (see note 2)
Secondary volume in a virtual storage machine whose model is HP XP7: Not available
Secondary volume in a virtual storage machine whose model is other than HP XP7 (see note 1): Not available

Primary volume in a resource group (virtual storage machine) which has a different model and serial number from the storage system at the primary site, where the model is other than HP XP7 (see note 1): Not available for any secondary volume.

1. The resource group (virtual storage machine) is made when the data is migrated from a storage system whose model was other than HP XP7 by Online Migration.
2. You cannot create an HA pair when the same virtual LDEV ID as the primary volume exists in the resource group of the storage system at the secondary site (virtual storage machine). You must delete the virtual LDEV ID if the volume is not created and only the LDEV ID exists.

Interoperability requirements

This section describes the interoperability of High Availability (HA) and other HP XP7 Storage features.

Volume types that can be used for HA

The following table lists the volume types on the HP XP7 Storage and specifies whether the volume can be used for HA operations. Volume type Used as HA P-VOL? Used as HA S-VOL? Used as quorum disk? Thin Provisioning / Smart Tiers Virtual volume No Pool volume No No No Business Copy / Fast Snap P-VOL No S-VOL No No No Continuous Access Synchronous P-VOL No No No S-VOL No No No Continuous Access Journal P-VOL No No No S-VOL No No No 44 System requirements

45 Volume type Used as HA P-VOL? Used as HA S-VOL? Used as quorum disk? Journal volume No No No External Storage External volume No No Data Retention Volume with access attribute 1 No Auto LUN Source volume No No No Target volume No No No Cache Residency The volume on which Cache Residency is set No No No Virtual LUN Virtual LUN volume No No 2 LUN Manager The volume on which paths are defined No Volume on which paths are not defined No No RAID Manager command device Command device No No No Remote command device No No No DKA Encryption Volume whose parity groups have been encrypted You can use an encrypted volume in the external storage system as a quorum disk. 3 Notes: 1. If you set the S-VOL Disable attribute of Data Retention to the HA S-VOL, HA pair operations using RAID Manager are restricted. Release the S-VOL Disable attribute on the HA S-VOL, and then perform the HA pair operations. 2. Quorum disks can be set only on external volumes that have been configured so that one external volume group is mapped to one external volume. 3. You cannot encrypt a nonencrypted quorum disk in the external storage system from the HP XP7 at the primary or secondary site. Related topics Thin Provisioning / Smart Tiers (page 46) Use cases for pairing HA volumes with BC or FS (page 54) Business Copy (page 46) Fast Snap (page 50) Data Retention (page 54) LUN Manager (page 55) Cache Partition (page 56) Interoperability requirements 45

46 Volume Shredder (page 56) Performance Monitor (page 56) Thin Provisioning / Smart Tiers Thin Provisioning and Smart Tiers virtual volumes (THP V-VOLs) can be used as HA pair volumes. Note the following: Only allocated page capacity is counted as HA license capacity. Page capacity or license capacity counted toward the P-VOL and for the S-VOL might differ because page capacity for the volumes changes according to the operation, for example, tier relocation or reclaiming zero pages. You cannot add capacity to a THP V-VOL that is used as an HA pair volume. To do so, delete the pair, add the capacity to the THP V-VOL, then recreate the pair. Related topics Business Copy Volume types that can be used for HA (page 44) You can use the HA P-VOL and S-VOL as a Business Copy P-VOL. You can create up to three Business Copy pairs respectively on the HA primary and secondary systems. Because the server recognizes an HA pair as one volume, it sees the volume as paired with six Business Copy volumes. 46 System requirements

47 You can create three additional, cascaded BC pairs using the BC S-VOLs. This means that up to nine BC pairs can be created with the HA P-VOL, and nine BC pairs can be created with the HA S-VOL. NOTE: Pairs in a BC consistency group must reside in the same storage system. Because of this, the BC pairs that are associated with both the HA P-VOL and the S-VOL cannot be registered to the same consistency group. When you use HA pair volumes to create a BC pair, you must specify the physical LDEV ID, not the virtual LDEV ID. Related topics Limitations when sharing HA and Business Copy volumes (page 47) BC operations and HA pair status (page 47) HA operations and BC pair status (page 49) Limitations when sharing HA and Business Copy volumes When an HA pair is deleted with the P-VOL specified, the virtual LDEV ID of the S-VOL is deleted. If you delete the pair with the S-VOL specified, the virtual LDEV ID of the P-VOL is deleted. When the virtual LDEV ID is deleted, the server does not recognize the volume. Any operation that deletes the virtual LDEV ID of a volume used as a Business Copy volume cannot be performed. BC operations and HA pair status The ability to perform a Business Copy pair operation depends on the BC pair status and HA pair status. The following tables show BC pair operations and whether they can be performed (, No) with the listed HA status. The information assumes the required BC status for the operation. The Virtual LDEV ID column shows whether the volume has a virtual LDEV ID or not (, No). Interoperability requirements 47

48 Table 1 BC operations when HA status is Simplex HA pair status Virtual LDEV ID I/O Read Write Business Copy pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy SMPL No No No No, but has S-VOL reservation No No No No No No Table 2 BC operations when HA status is Mirroring HA pair status I/O mode Pair location I/O Read Write Business Copy pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy INIT/COPY Mirror (RL) Primary No 1 Block Secondary No No No 2 No 2 No 1, 3 COPY Mirror (RL) Primary No 1 Block Secondary No No No 2 No 2 No 1, 3 Note: 1. Cannot be used because HA pairs are not suspended. 2. Cannot be used because S-VOL data is not fixed. 3. Cannot be used because the volume at the HA copy destination is the same as the volume at the Business Copy copy destination. Table 3 Business Copy operations when HA status is Mirrored HA pair status I/O mode Pair location I/O Read Write Business Copy pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy PAIR Mirror (RL) Primary No* Secondary No* * Cannot be used because HA pairs are not suspended, and also because the volume at the HA copy destination is the same as the volume at the Business Copy copy destination. Table 4 Business Copy operations when HA status is Suspended HA pair status I/O mode Pair location I/O Read Write Business Copy pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy PSUS Local Primary * Block Primary No No No PSUE Local Primary * Block Primary No No No Secondary No No No 48 System requirements

49 Table 4 Business Copy operations when HA status is Suspended (continued) HA pair status I/O mode Pair location I/O Read Write Business Copy pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy SSUS Block Secondary No No No SSWS Local Secondary * * Quick Restore cannot be executed. Table 5 Business Copy operations when HA status is Blocked HA pair status I/O mode Pair location I/O Read Write BC pair operations Create pairs Split pairs Resync pairs Restore pairs Delete pairs/suspend copy PSUE Block Primary No No No Secondary No No No HA operations and BC pair status The ability to perform an HA pair operation depends on HA pair status and BC pair status. The following tables show HA operations and whether they can be performed (, No) with the listed BC status. The information assumes the required HA status for the operation. Table 6 HA operations and BC pair statuses, when HA P-VOL is shared BC pair status HA pair operations Create pairs Suspend pairs Delete pairs Resync pairs P-VOL selected S-VOL selected P-VOL selected 1 S-VOL selected 2 Forced deletion P-VOL selected S-VOL selected SMPL(PD) No 3 COPY No 3 PAIR No 3 COPY(SP) No 3 PSUS(SP) No 3 PSUS No 3 COPY(RS) No 3 COPY(RS-R) No 4 impossible impossible No 3 No 4 No 4 PSUE No 3 Notes: 1. You can delete an HA pair by specifying the P-VOL, only when the I/O mode is Local and the HA pair status of the P-VOL is PSUS or PSUE. 2. You can delete an HA pair by specifying the S-VOL, only when the I/O mode is Local and the HA pair status of the S-VOL is SSWS. 3. Cannot be used because, when you delete an HA pair specifying the S-VOL, the P-VOL's virtual LDEV ID is also deleted, which makes it unusable as the BC P-VOL. 4. To continue BC restore copy, the HA pairs must be suspended. Interoperability requirements 49

50 Table 7 HA operations and BC pair statuses, when HA S-VOL is shared BC pair status HA pair operations Create pairs Suspend pairs Delete pairs Resync pairs P-VOL selected S-VOL selected P-VOL selected 1 S-VOL selected 2 Forced deletion P-VOL selected S-VOL selected SMPL(PD) No 3 No 4 COPY No 3 No 4 PAIR No 3 No 4 COPY(SP) No 3 No 4 PSUS(SP) No 3 No 4 PSUS No 3 No 4 COPY(RS) No 3 No 4 COPY(RS-R) No 3, 5 impossible impossible No 4 No 5, 6 No 6 PSUE No 3 No 4 Notes: 1. You can delete an HA pair by specifying the P-VOL, only when the I/O mode is Local and the HA pair status of the P-VOL is PSUS or PSUE. 2. You can delete an HA pair by specifying the S-VOL, only when the I/O mode is Local and the HA pair status of the S-VOL is SSWS. 3. The reservation attribute is set and the virtual LDEV ID is deleted for a volume to be the HA S-VOL, making it unusable as a BC volume. 4. Cannot be used because, when you delete an HA pair specifying the S-VOL, the P-VOL's virtual LDEV ID is also deleted, which makes it unusable as the BC P-VOL. 5. Cannot be used because the volume at the HA copy destination is the same as the volume at the Business Copy copy destination. 6. To continue Business Copy restore copy, HA pairs must be suspended. Fast Snap Related topics Business Copy (page 46) You can use an HA P-VOL or S-VOL as a Fast Snap (FS) P-VOL. You can create up to 1,024 Fast Snap pairs using an HA P-VOL, and up to 1,024 Fast Snap pairs using an HA S-VOL. 50 System requirements

51 Because the server recognizes the HA pair as one volume, it sees the volume as paired with 2,048 FS volumes. NOTE: Pairs in an FS consistency group and snapshot group must reside in the same storage system. Because of this, the FS pairs that are associated with both the HA P-VOL and S-VOL cannot be registered to the same consistency group or snapshot group. When you use HA pair volumes to create a Fast Snap pair, specify the physical LDEV ID, not the virtual LDEV ID. Related topics Limitations for using both HA and Fast Snap (page 51) Fast Snap operations and HA status (page 51) HA operations and Fast Snap pair status (page 53) Limitations for using both HA and Fast Snap When an HA pair is deleted with the P-VOL specified, the virtual S-VOL's LDEV ID is deleted. If you delete the pair with the S-VOL specified, the P-VOL's virtual LDEV ID is deleted. When the virtual LDEV ID is deleted, the server does not recognize the volume, making it unusable as a Fast Snap volume. Any operation that deletes the virtual LDEV ID of a volume used as a Fast Snap volume cannot be performed. Fast Snap operations and HA status The ability to perform a Fast Snap pair operation depends on the FS pair status and the HA pair status. The following tables show FS operations and whether they can be performed (, No) with the listed HA status. The information assumes the required FS status for the operation. The Virtual LDEV ID column shows whether the volume has a virtual LDEV ID or not (, No). Interoperability requirements 51

52 Table 8 Fast Snap operations when HA status is Simplex HA pair status Virtual LDEV ID I/O Read Write Fast Snap pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs SMPL No No No No, but has S-VOL reservation No No No No No No Table 9 Fast Snap operations when HA status is Mirroring HA pair status I/O mode Pair location I/O Read Write Fast Snap pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs INIT/COPY Mirror (RL) Primary No 1 Block Secondary No No No No 2 No 2 No 1, 3 COPY Mirror (RL) Primary No 1 Block Secondary No No No No 2 No 2 No 1, 3 Note: 1. Cannot be used because HA pairs are not suspended. 2. Cannot be used because the data is being copied and the volume data is not fixed yet. 3. Cannot be used because the volume at the HA copy destination is the same as the volume at the Fast Snap copy destination. Table 10 Fast Snap operations when HA status is Mirrored HA pair status I/O mode Pair location I/O Read Write Fast Snap pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs PAIR Mirror (RL) Primary No* Secondary No* * Cannot be used because HA pairs are not suspended, and also because the volume at the HA copy destination is the same as the volume at the Fast Snap copy destination. Table 11 Fast Snap operations when HA status is Suspended HA pair status I/O mode Pair location I/O Read Write Fast Snap pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs PSUS Local Primary Block Primary No No No PSUE Local Primary Block Primary No No No Secondary No No No SSUS Block Secondary No No No SSWS Local Secondary No 52 System requirements

53 Table 12 Fast Snap operations when HA status is Blocked HA pair status I/O mode Pair location I/O Read Write Fast Snap pair operation Create pairs Split pairs Resync pairs Restore pairs Delete pairs PSUE Block Primary No No No Secondary No No No Related topics Fast Snap (page 50) HA operations and Fast Snap pair status The ability to perform an HA pair operation depends on the HA pair status and the FS pair status. The following tables show HA operations and whether they can be performed (, No) with the listed FS status. The information assumes the required HA status for the operation. Table 13 HA operations and FS pair status, when the HA P-VOL is shared FS pair status HA pair operations Create HA Pairs Suspend Pairs P-VOL selected S-VOL selected Delete Pairs P-VOL selected 1 S-VOL selected 2 Forced deletion Resync Pairs P-VOL selected S-VOL selected SMPL(PD) No 3 COPY No 3 PAIR No 3 PSUS No 3 COPY(RS-R) No 4 impossible impossible No 3 No 4 No 4 PSUE No 3 Note: 1. You can delete an HA pair by specifying the P-VOL, only when the I/O mode is Local and the HA pair status of the P-VOL is PSUS or PSUE. 2. You can delete an HA pair by specifying the S-VOL, only when the I/O mode is Local and the HA pair status of the S-VOL is SSWS. 3. Cannot be used because, when you delete an HA pair specifying the S-VOL, the P-VOL's virtual LDEV ID is also deleted, which makes it unusable as the FS P-VOL. 4. To continue resynchronizing the FS pair, you must split the HA pair. Table 14 HA operations and FS pair status, when the HA S-VOL is shared FS pair status HA pair operations Create HA Pairs Suspend Pairs P-VOL selected S-VOL selected Delete Pairs P-VOL selected 1 S-VOL selected 2 Forced deletion Resync Pairs P-VOL selected S-VOL selected SMPL(PD) No 3 No 4 COPY No 3 No 4 PAIR No 3 No 4 PSUS No 3 No 4 Interoperability requirements 53

54 Table 14 HA operations and FS pair status, when the HA S-VOL is shared (continued) FS pair status HA pair operations Create HA Pairs Suspend Pairs P-VOL selected S-VOL selected Delete Pairs P-VOL selected 1 S-VOL selected 2 Forced deletion Resync Pairs P-VOL selected S-VOL selected COPY(RS-R) No 3, 5 No No No 4 No 5, 6 No 6 PSUE No 3 No 4
Note:
1. You can delete an HA pair by specifying the primary volume, only when the I/O mode is Local and the HA pair status of the primary volume is PSUS or PSUE.
2. You can delete an HA pair by specifying the secondary volume, only when the I/O mode is Local and the HA pair status of the secondary volume is SSWS.
3. When creating an HA pair, the reservation attribute is set on the volume to become the S-VOL. Doing this removes the virtual LDEV ID of this volume, which makes it unusable as an FS pair volume.
4. Cannot be used because, when you delete an HA pair specifying the P-VOL, the S-VOL's virtual LDEV ID is also deleted, which makes it unusable as the FS P-VOL.
5. Cannot be used because the HA pair's target volume is the same as the FS pair's target volume.
6. To continue resynchronizing the FS pair, you must split the HA pair.

Related topics Fast Snap (page 50)

Use cases for pairing HA volumes with BC or FS

Backing up HA pair volumes with Business Copy (BC) or Fast Snap (FS) provides further protection for HA data, in the following ways:
When the HA pair is resynchronized, pair status changes to COPY. While in this status, S-VOL consistency is temporarily lost. You can protect data when in COPY status by pairing the S-VOL with BC or FS before resynchronizing the HA pair.
Though data in a blocked HA pair is inconsistent, host activity can continue with the P-VOL and/or S-VOL. Therefore, before correcting the failure by forcibly deleting the pair, you should pair the volumes with BC or FS. The BC and FS pairs can then be copied, and the copies used for other purposes.

Data Retention

You can create an HA pair using volumes that have been assigned the Data Retention access attribute. When you create or resynchronize an HA pair, the access attribute set for the P-VOL is copied to the S-VOL. If you change the access attribute when HA status is Mirrored or Mirroring, make sure to set the access attribute to both the P-VOL and S-VOLs. Server I/O can be controlled, depending on HA status and the access attribute. If you set the Data Retention S-VOL Disable attribute on the HA S-VOL, HA pair operations using RAID Manager are restricted. Release the S-VOL Disable attribute from the S-VOL, then perform RAID Manager operations. 54 System requirements

55 Related topics HA status and I/O allowance by access attribute (page 55) HA status and I/O allowance by access attribute Even when the access attribute is assigned to an HA volume, the initial copy and pair resynchronization operations are not controlled. The following table shows whether server I/O is allowed or not for the listed HA status and access attribute. HA statuses Access attribute I/O P-VOL S-VOL P-VOL S-VOL Mirrored Read/Write Read/Write Ends normally Ends normally Read Only or Protect Read/Write Depends on the attribute* Ends normally Read/Write Read Only or Protect Ends normally Depends on the attribute* Read Only or Protect Read Only or Protect Depends on the attribute* Depends on the attribute* Mirroring Suspended (when the I/O mode of the primary volume is Local and the I/O mode of the secondary volume is Block) Read/Write Read Only or Protect Read/Write Read Only or Protect Read/Write Read/Write Read Only or Protect Read Only or Protect Ends normally Depends on the attribute* Ends normally Depends on the attribute* Rejected Rejected Rejected Rejected Suspended (when the I/O mode of the primary volume is Block and the I/O mode of the secondary volume is Local) Read/Write Read Only or Protect Read/Write Read Only or Protect Read/Write Read/Write Read Only or Protect Read Only or Protect Rejected Rejected Rejected Rejected Ends normally Ends normally Depends on the attribute* Depends on the attribute* Block Read/Write Read/Write Rejected Rejected Read Only or Protect Read/Write Rejected Rejected Read/Write Read Only or Protect Rejected Rejected Read Only or Protect Read Only or Protect Rejected Rejected *If the attribute is Read Only, Read is allowed but not Write. If the attribute is Protect, Read and Write are not allowed. Related topics LUN Manager Data Retention (page 54) Use the volumes for which LU paths have been set to create an HA pair. You can add LU paths to or delete LU paths from HA pair volumes. However, you cannot delete the last LU path because at least one LU path must be set for HA pair volumes. A volume for which no LU path has been set cannot be used as an HA pair volume. Interoperability requirements 55

56 Cache Partition HA pair volumes and quorum disks can migrate CLPRs. Volume Shredder HA pair volumes and quorum disks cannot use Volume Shredder to delete data. Performance Monitor Performance Monitor can be used to collect performance information about HA pair volumes and the quorum disk. However, the amount of a port's I/O that can be added to Performance Monitor depends on the type of the volume to which I/O is issued, or on the volume's I/O mode. For example, if the I/O mode of both HA volumes is Mirror (RL), when the server writes to the P-VOL one time, all of the following ports and volumes that the command goes through record the performance information: Primary system's Target port P-VOL Primary system's Initiator port Secondary system's RCU Target port S-VOL Also, when I/O mode of both HA volumes is Mirror (RL), when the server reads P-VOL data one time, only the primary system Target port and the P-VOL record the performance information. HA I/O added to Performance Monitor The amount of HA volume I/O from and to the server that are added to Performance Monitor depends on the HA status, as shown the following tables. Table 15 Number of writes to HA to be added to Performance Monitor HA status P-VOL S-VOL Mirrored The sum of the following values: Number of writes to the P-VOL Number of RIOs to the P-VOL from the S-VOL The sum of the following values: Number of reads from the S-VOL Number of RIOs to the S-VOL from the P-VOL Mirroring Suspended (when the P-VOL has the latest information) Suspended (when the S-VOL has the latest information) Blocked Number of writes to the P-VOL Number of writes to the P-VOL Not counted* Not counted* Number of RIOs to the S-VOL from the P-VOL Not counted* Number of writes to the S-VOL Not counted* * Reads and writes by a server are illegal requests and cause an error. However, they could be counted as I/O. Table 16 Number of reads from HA to be added to Performance Monitor HA status Mirrored Mirroring P-VOL Number of reads from the P-VOL Number of reads from the P-VOL S-VOL Number of reads from the S-VOL Not counted* 56 System requirements

57 Table 16 Number of reads from HA to be added to Performance Monitor (continued) HA status Suspended (when the P-VOL has the latest information) Suspended (when the S-VOL has the latest information) Blocked P-VOL Number of reads from the P-VOL Not counted* Not counted* S-VOL Not counted* Number of reads from the S-VOL Not counted* * Reads and writes from a server are illegal requests and cause an error. However, they could be counted as I/O. Table 17 Relation between amount of I/O added to Performance Monitor and amount of server I/O HA status Mirrored Mirroring Suspended (P-VOL has latest data) Suspended (S-VOL has latest data) Blocked Number of writes Approximately the same* as the number of writes to the P-VOL or S-VOL The same as the number of writes to the P-VOL The same as the number of writes to the P-VOL The same as the number of writes to the S-VOL Not counted Number of reads The same as the total number of writes to the P-VOL and S-VOL The same as the number of reads from the P-VOL The same as the number of reads from the P-VOL The same as the number of reads from the S-VOL Not counted * For writes by a server, RIOs might be divided before being issued. For this reason, this number might differ from the number of writes by a server. Number of I/Os to the port to be added to Performance Monitor The number of I/Os (reads or writes) of the port added to Performance Monitor depends on the P-VOL or S-VOL (I/O destination), or on the I/O mode of the destination volume, as shown in the following table. I/O destination volume I/O mode I/O destination volume Primary system Target port Initiator port RCU Target port Secondary system Target port Initiator port RCU Target port Mirror (RL) P-VOL Total writes and reads Number of writes Not added Not added Not added Number of writes S-VOL Not added Not added Number of writes Total writes and reads Number of writes Not added Local P-VOL Total writes and reads Not added Not added Not added Not added Not added S-VOL Not added Not added Not added Total writes and reads Not added Not added Block P-VOL Total writes and reads* Not added Not added Not added Not added Not added S-VOL Not added Not added Not added Total writes and reads* Not added Not added * Reads and writes by a server are illegal requests and cause an error. However, they might be counted as I/Os. Interoperability requirements 57
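The port-counting rules in the preceding table can be summarized in a few lines of code. The following is a minimal, illustrative Python sketch; the function name, counter labels, and data layout are assumptions for illustration and are not part of any HP XP7 interface. It returns, for server I/O issued to one HA volume, the counters that Performance Monitor records on each port according to the volume's I/O mode:

def port_io_counters(io_mode, destination, reads, writes):
    # destination: "P-VOL" or "S-VOL"; io_mode: "Mirror (RL)", "Local", or "Block".
    counters = {
        "primary Target": 0, "primary Initiator": 0, "primary RCU Target": 0,
        "secondary Target": 0, "secondary Initiator": 0, "secondary RCU Target": 0,
    }
    local_side = "primary" if destination == "P-VOL" else "secondary"
    remote_side = "secondary" if destination == "P-VOL" else "primary"
    # The Target port on the side that receives the server I/O records reads and writes.
    # (In Block mode such I/O is rejected but might still be counted.)
    counters[local_side + " Target"] = reads + writes
    if io_mode == "Mirror (RL)":
        # Writes are duplicated to the other system: the local Initiator port and the
        # remote RCU Target port record the write remote I/Os; reads are served locally.
        counters[local_side + " Initiator"] = writes
        counters[remote_side + " RCU Target"] = writes
    return counters

# Example: in Mirror (RL) mode, 100 reads and 50 writes to the P-VOL are recorded as
# 150 I/Os on the primary Target port, 50 on the primary Initiator port, and 50 on the
# secondary RCU Target port.
print(port_io_counters("Mirror (RL)", "P-VOL", reads=100, writes=50))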

58 Related topics Performance Monitor (page 56) 58 System requirements

59 4 Planning for High Availability Abstract This chapter provides planning information for configuring the primary and secondary systems, pair volumes, quorum disk, and data paths for High Availability operations. Storage system preparation The following provides requirements, recommendations, and restrictions for HA storage systems. The primary and secondary systems must be HP XP7. No other storage model can be used. All RAID Manager setup must be complete. For more information, see HP XP7 RAID Manager Installation and Configuration User Guide. Remote Web Console must be connected to the primary and secondary storage systems. For more information, see HP XP7 Remote Web Console User Guide. When determining the amount of cache required for HA, consider the amount of the Cache Residency data that will also be stored in the cache. Make sure that the primary system is configured to report sense information to the host. The secondary system should also be attached to a host server to report sense information in the event of a problem with an S-VOL or the secondary storage system. If power sequence control cables are used, set the power source selection switch for the cluster to "Local" to prevent the server from cutting the power supply to the primary system. In addition, make sure that the secondary system is not powered off during HA operations. Establish physical paths between the primary and secondary systems. Switches and channel extenders can be used. Related topics Cache and additional shared memory (page 59) System option modes (page 59) Cache and additional shared memory Additional shared memory must be installed and configured in both primary and secondary systems. Make sure that cache in both systems works normally. Pairs cannot be created if cache requires maintenance. Configure the secondary system cache so that it can adequately support remote copy workloads and all local workload activity. If an HA pair is in COPY status, you cannot install or uninstall cache and shared memory. When either of these tasks is to be performed, split the pairs in COPY status, perform and complete the cache or shared memory operation, then resynchronize the pairs. Related topics Storage system preparation (page 59) System option modes You can customize HP XP7 storage systems to enable options that were not set at the factory. System option modes are preset to default values at installation, but you can have them changed by your HP representative. System option modes related to HA are shown in the following table. Storage system preparation 59

60 Note that the system option modes for HA are the same as the system option modes for Continuous Access Synchronous. Mode Description Allows you to suppress initial copy operations when the write-pending level to the MP blade of the S-VOL is 60 % or higher. ON: The initial copy is suppressed. OFF: The initial copy is not suppressed. Allows you to reduce RIO MIH time to five seconds. As a result, after a remote path error, less time elapses until the operation is retried on an alternate path. (Both RIO MIH time and the Abort Sequence timeout value are combined for this retry time.) ON: Reduces the RIO MIH time to five seconds. Combined with the Abort Sequence timeout value, the total amount of time that elapses before the operation is retried on another path is a maximum of 10-seconds. OFF: The RIO MIH time that you specified when the secondary system was registered is used with the specified Abort Sequence timeout value. The default is 15 seconds. If the RIO timeout time and the ABTS timeout time elapse, an attempt is made to retry RIO in the alternate path. Related topics Storage system preparation (page 59) Planning system performance Remote copy operations can affect I/O performance of host servers and the primary and secondary storage systems. You can minimize the effects of remote copy operations and maximize efficiency and speed by changing your remote connection options and remote replica options. HP technical support can help you analyze your operation's write-workload and optimize copy operations. Using workload data (MB/s and IOPS), you determine the amount of bandwidth, the number of physical paths, and number of ports your HA system requires. When these are properly determined and sized, the data path operates free of bottlenecks under all workload levels. Related topics Hitachi Dynamic Link Manager (page 60) Hitachi Dynamic Link Manager Hitachi Dynamic Link Manager (HDLM) allows you to specify alternate paths to be used for normal High Availability operations. Other paths are used when failures occur in all paths (including alternate paths) that should be used for normal operations. Host mode option 78, the nonpreferred path option, must be configured to specify nonpreferred paths, which are used when failures occur. For example, if servers and storage systems are connected in a cross-path configuration, I/O response is prolonged because the primary-site server is distant from the secondary system, and the secondary-site server is distant from the primary system. Normally in this case you use paths between the primary server and primary system and paths between the secondary server and secondary system. If a failure occurs in a path used in normal circumstances, you will use the paths between the primary server and secondary system, and paths between the secondary server and primary system. 60 Planning for High Availability

61 After you incorporate the HP XP7 settings to HDLM, the attribute of HDLM path to which the host mode option 78 was set changes to the non-owner path. If the host mode option 78 is not set to the path, the HDLM path attribute changes to the owner path. Related topics Planning system performance (page 60) Planning physical paths Bandwidth When configuring physical paths to connect the storage systems at the primary and secondary sites, make sure that the paths can handle all of the data that could be transferred to the primary and secondary volumes under all circumstances. Related topics Bandwidth (page 61) Fibre-Channel connections (page 61) Connection types (page 62) The amount of bandwidth you have must be able to handle data transfers at all workload levels. The amount of necessary bandwidth depends on the amount of I/O to be sent from servers to primary volumes. To identify the required bandwidth, you must measure the write workload of the system. Use performance-monitoring software to collect the workload data. Related topics Planning physical paths (page 61) Fibre-Channel connections Use short-wave (optical multi-mode) or long-wave (optical single-mode) optical fiber cables to connect the storage systems at the primary and secondary sites. The required cables and network Planning physical paths 61

62 relay devices differ depending on the distance between the primary and secondary systems, as described in the following table.

Distance between storage systems: Up to 1.5 km
Cable type: Short wave (optical multi-mode)
Network relay device: One or two switches are required if the distance is 0.5 to 1.5 km.

Distance between storage systems: 1.5 to 10 km
Cable type: Long wave (optical single-mode)
Network relay device: Not required.

Distance between storage systems: 10 to 30 km
Cable type: Long wave (optical single-mode)
Network relay device: Up to two switches can be used.

Distance between storage systems: 30 km or longer
Cable type: Communication line
Network relay device: An authorized third-party channel extender is required.

No special settings are required for HP XP7 if switches are used in a Fibre-Channel environment. Long wave (optical single-mode) cables can be used for direct connection at a maximum distance of 10 km. The maximum distance that might result in the best performance differs depending on the link speed, as shown in the following table:

Link speed 1 Gbps: maximum distance for best performance 10 km
Link speed 2 Gbps: maximum distance for best performance 6 km
Link speed 4 Gbps: maximum distance for best performance 3 km
Link speed 8 Gbps: maximum distance for best performance 2 km

Related topics Planning physical paths (page 61)

Connection types

HA supports three types of connections: direct, switch, and channel extenders. You can use Command View Advanced Edition or RAID Manager to configure ports and topologies. Establish bidirectional physical path connections from the primary to the secondary system and from the secondary to the primary system.

Related topics Planning physical paths (page 61) Direct connection (page 62) Connection using switches (page 63) Connection using channel extenders (page 64)

Direct connection

With a direct connection, the two storage systems are directly connected to each other. 62 Planning for High Availability

63 Set the ports and topology to Fabric OFF and FC-AL. You can use the following host mode options to improve response time of host I/O by improving response time between the storage systems for long distance direct connections (up to 10 kilometers LongWave) when the 16FC8 package is used. Host mode option 49 (BB Credit Set Up Option1) Host mode option 50 (BB Credit Set Up Option2) Host mode option 51 (Round Trip Set Up Option) When you set these host mode options, set the topology of the Initiator port and the RCU Target port to Fabric OFF and Point-to-Point. Related topics Connection types (page 62) Connection using switches With a switch connection, up to three optical fiber cables can be connected. A maximum of two switches can be used. Planning physical paths 63

64 Specify the topology as follows: NL_port: Fabric ON and FC-AL N_port: Fabric ON and Point-to-Point Switches from some vendors (for example, McData ED5000) require F_port. You can use the following host mode options to improve response time of host I/O by improving response time between the storage systems when switches are used for long distance connections (up to 100 kilometers) and the 16FC8 package is used. Host mode option 49 (BB Credit Set Up Option1) Host mode option 50 (BB Credit Set Up Option2) Host mode option 51 (Round Trip Set Up Option) When you set these host mode options, set the topology of the Initiator port and the RCU Target port to Fabric ON and Point-to-Point. Related topics Connection types (page 62) Connection using channel extenders Channel extenders and switches should be used for long-distance connections. Specify the topology as follows: NL/FL_port: Fabric ON and FC-AL F_port: Fabric ON and Point-to-Point Related topics Connection types (page 62) Planning ports Data is transferred from Initiator ports in one storage system to RCU Target ports in the other system. 64 Planning for High Availability

65 Port attributes The amount of data sent to and from these ports is limited. That is why it is necessary to measure the amount of write workload your system will generate. When you identify peak write workload, which is the amount of data transferred during peak periods, you can determine the amount of bandwidth and number of Initiator and RCU Target ports required for your system. Related topics Port attributes (page 65) Ports on HP XP7 can have four attributes, as shown below. These port attributes are necessary for ports in the primary and secondary systems. Initiator ports, which send HA commands and data to the paired storage system. One initiator port can be connected to a maximum of 64 RCU Target ports. CAUTION: For Fibre-Channel interface, do not use the LUN Manager function for defining SCSI paths at the same time that you are adding or removing remote connections or adding remote paths. RCU Target ports, which receive HA commands and data. One RCU Target port can be connected to a maximum of 16 Initiator ports. The number of remote paths that can be specified does not depend on the number of ports. The number of remote paths can be specified for each remote connection. Target ports, which connect storage systems and servers. If a server issues a write request, the request is sent from a Target port on the storage system to an HP XP7 volume. External ports, which are configured and used by External Storage. HA uses these ports when connecting to external storage systems for quorum disks. Related topics Planning ports (page 64) Planning the quorum disk An external storage system must be prepared for the HA quorum disk. Related topics Installation of external storage system for quorum disks (page 65) Relationship between the quorum disk and remote connection (page 66) Response time from the external storage system for quorum disks (page 70) Installation of external storage system for quorum disks The external storage system can be installed in the following two locations: In a three-site configuration, the external storage system is installed in a third site away from the primary and secondary sites. I/O from servers continues if any failure occurs at the primary site, the secondary site, or the site where the external storage system is installed. Planning the quorum disk 65

66 In a two-site configuration, the external storage system is installed at the primary site. If a failure occurs at the secondary site, I/O from servers will continue. However, if a failure occurs at the primary site, I/O from servers will stop. At the secondary site, you cannot install any external storage system for quorum disks.

Related topics Planning the quorum disk (page 65)

Relationship between the quorum disk and remote connection

When you use multiple remote connections, prepare as many quorum disks as there are remote connections, so that a failure in a single remote connection cannot suspend the HA pairs that are using the other, normal remote connections. In addition, combine each quorum disk with one remote connection from the primary storage system to the secondary storage system and one remote connection from the secondary storage system to the primary storage system. 66 Planning for High Availability

67 TIP: When you want to manage many HA pairs with a single quorum disk and more than 8 physical paths are necessary for the remote connection, you can configure the system so that one quorum disk is shared by two or more remote connections. When all paths used in a remote connection are blocked, HA pairs are suspended in units of quorum disks. Therefore, in a configuration like the following figure, the HA pairs that use remote connection 1 are also suspended when a failure occurs in remote connection 2. Likewise, when a failure occurs in the path from a volume at the primary site or the secondary site to the quorum disk, all HA pairs that use the same quorum disk are suspended. Planning the quorum disk 67

68 Related topics Planning the quorum disk (page 65) Suspended pairs depending on failure locations (when a quorum disk is not shared) (page 68) Suspended pairs depending on failure locations (when a quorum disk is shared) (page 69)

Suspended pairs depending on failure locations (when a quorum disk is not shared)

When the same number of quorum disks as remote connections is used, only the HA pairs that use the failed remote connection, quorum disk, or path to the quorum disk are suspended. HA pairs that use a normal remote connection, quorum disk, and path to the quorum disk can remain mirrored. The following figure shows the relationship between the failure locations and the HA pairs suspended by the failure.

1. Remote connection 1 from the primary site to the secondary site: HA pair 1 Suspended, HA pair 2 Not suspended
2. Remote connection 1 from the secondary site to the primary site: HA pair 1 Suspended, HA pair 2 Not suspended
3. Remote connection 2 from the primary site to the secondary site: HA pair 1 Not suspended, HA pair 2 Suspended
4. Remote connection 2 from the secondary site to the primary site: HA pair 1 Not suspended, HA pair 2 Suspended
5. Path to the quorum disk 1: HA pair 1 Suspended, HA pair 2 Not suspended
6. Quorum disk 1: HA pair 1 Suspended, HA pair 2 Not suspended

68 Planning for High Availability

69 Failure locations (continued from the previous page):
7. Path to the quorum disk 2: HA pair 1 Not suspended, HA pair 2 Suspended
8. Quorum disk 2: HA pair 1 Not suspended, HA pair 2 Suspended

Related topics Relationship between the quorum disk and remote connection (page 66)

Suspended pairs depending on failure locations (when a quorum disk is shared)

When a quorum disk is shared by more than one remote connection, all HA pairs that share the quorum disk are suspended, regardless of the failure location, as shown below.

1. Remote connection 1 from the primary site to the secondary site: HA pair 1 Suspended, HA pair 2 Suspended
2. Remote connection 1 from the secondary site to the primary site: HA pair 1 Suspended, HA pair 2 Suspended
3. Remote connection 2 from the primary site to the secondary site: HA pair 1 Suspended, HA pair 2 Suspended
4. Remote connection 2 from the secondary site to the primary site: HA pair 1 Suspended, HA pair 2 Suspended
5. Path to the quorum disk 1: HA pair 1 Suspended, HA pair 2 Suspended
6. Quorum disk 1: HA pair 1 Suspended, HA pair 2 Suspended

Planning the quorum disk 69
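The suspension behavior shown in the two tables above follows one rule: HA pairs are suspended in units of quorum disks. The following Python sketch is illustrative only (the data layout and function name are assumptions, not a product API) and reproduces both tables from that rule:

def suspended_pairs(pairs, failed_remote_connection=None, failed_quorum_disk=None):
    # pairs: list of dicts with "name", "quorum_disk", and "remote_connections"
    # (the remote connections that the pair uses). A failed quorum disk, a failed
    # path to a quorum disk, and a failed remote connection all suspend HA pairs
    # in units of the quorum disk that is affected.
    affected_quorum_disks = set()
    for pair in pairs:
        if failed_quorum_disk == pair["quorum_disk"]:
            affected_quorum_disks.add(pair["quorum_disk"])
        if failed_remote_connection in pair["remote_connections"]:
            affected_quorum_disks.add(pair["quorum_disk"])
    return [p["name"] for p in pairs if p["quorum_disk"] in affected_quorum_disks]

# Quorum disks not shared: only the pair behind the failed remote connection suspends.
not_shared = [
    {"name": "HA pair 1", "quorum_disk": "QD1", "remote_connections": {"RC1"}},
    {"name": "HA pair 2", "quorum_disk": "QD2", "remote_connections": {"RC2"}},
]
print(suspended_pairs(not_shared, failed_remote_connection="RC2"))   # ['HA pair 2']

# One quorum disk shared by both remote connections: every pair that shares it suspends.
shared = [
    {"name": "HA pair 1", "quorum_disk": "QD1", "remote_connections": {"RC1"}},
    {"name": "HA pair 2", "quorum_disk": "QD1", "remote_connections": {"RC2"}},
]
print(suspended_pairs(shared, failed_remote_connection="RC2"))       # ['HA pair 1', 'HA pair 2']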

70 Related topics Relationship between the quorum disk and remote connection (page 66)

Response time from the external storage system for quorum disks

If the response time from the external storage system for quorum disks is delayed by more than one second, HA pairs might be suspended by some failures. Regularly monitor the response time of the quorum disks using Performance Monitor from the primary storage system or the secondary storage system. Specify External storage > Logical device > Response time (ms) as the monitoring object; if the response time exceeds 100 ms, review the configuration and consider the following actions:
If the I/O load of volumes other than the quorum disk is high in the external storage system, lower the I/O load.
If the cache load is high in the external storage system, remove the factors causing the high cache load.
When you perform maintenance on the external storage system, lower the I/O load of the entire external storage system. Alternatively, perform the maintenance with settings that minimize the effect on I/O, referring to the documentation for the external storage system.

Related topics Planning the quorum disk (page 65)

Planning HA pairs and pair volumes

This section describes how to calculate the maximum number of HA pairs, and the requirements for the volumes used as the primary volume and the secondary volume, depending on the HA configuration.

Related topics Maximum number of HA pairs (page 70) When the S-VOL's resource group and storage system have the same serial number and model (page 72)

Maximum number of HA pairs

The maximum number of HA pairs in a storage system is 63,231. This number is calculated by subtracting the number of quorum disks (at least one) from the maximum number of virtual volumes (total number of THP V-VOLs plus external volumes: 63,232) that can be defined in a storage system. If RAID Manager is used in the In-band method, and if one virtual volume is used as a command device, the maximum number of HA pairs is 63,230. Note, however, that the maximum number of pairs in HP XP7 is subject to restrictions, such as the number of cylinders used in volumes or the number of bitmap areas used in volumes. In the calculation formulas below, "ceiling" is a function that rounds the value inside the parentheses up to the next integer. "Floor" is a function that rounds the value inside the parentheses down to the next integer.

Related topics Planning HA pairs and pair volumes (page 70) Calculating the number of cylinders (page 71) Calculating the number of bitmap areas (page 71) 70 Planning for High Availability

Calculating the number of cylinders
To calculate the number of cylinders, start by calculating the number of logical blocks, which indicates volume capacity measured in blocks:
number-of-logical-blocks = volume-capacity (in bytes) / 512
Then use the following formula to calculate the number of cylinders:
number-of-cylinders = ceiling(ceiling(number-of-logical-blocks / 512) / 15)

Related topics
Maximum number of HA pairs (page 70)
Calculating the number of bitmap areas (page 71)

Calculating the number of bitmap areas
Calculate the number of bitmap areas using the number of cylinders:
number-of-bitmap-areas = ceiling((number-of-cylinders * 15) / 122,752)
122,752 is the differential quantity per bitmap area, in bits.
NOTE: You must calculate the number of required bitmap areas for each volume. If you total the number of cylinders across multiple volumes and then use that total to calculate the number of required bitmap areas, the result might be incorrect. The following examples show a correct and an incorrect calculation, assuming that one volume has 10,017 cylinders and another volume has 32,760 cylinders.
Correct:
ceiling((10,017 * 15) / 122,752) = 2
ceiling((32,760 * 15) / 122,752) = 5
The calculation result is seven bitmap areas in total.
Incorrect:
10,017 + 32,760 = 42,777 cylinders
ceiling((42,777 * 15) / 122,752) = 6
The calculation result is six bitmap areas in total.

Related topics
Maximum number of HA pairs (page 70)
Calculating the number of cylinders (page 71)
Calculating the number of available bitmap areas (page 71)

Calculating the number of available bitmap areas
The total number of bitmap areas available in the storage system is 65,536. The bitmap areas are shared by Continuous Access Synchronous, Continuous Access Synchronous Z, Continuous Access Journal, and Continuous Access Journal Z. If you use these software products, subtract the number of bitmap areas they require from the total number of bitmap areas in the storage system (65,536), and then use the formula in the next section to calculate the maximum number of HA pairs. For details on calculating the number of bitmap areas required for these program products, refer to the appropriate user guide.

Calculating the maximum number of pairs
Use the following values to calculate the maximum number of pairs:
The number of bitmap areas required for pair creation
The total number of bitmap areas available in the storage system (65,536), or the number of available bitmap areas calculated in the previous section
Calculate the maximum number of pairs using the following formula:
maximum-number-of-pairs-that-can-be-created = floor(total-number-of-bitmap-areas-in-storage-system / number-of-required-bitmap-areas)

Related topics
Maximum number of HA pairs (page 70)
Calculating the number of available bitmap areas (page 71)
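As a worked example of the formulas above, consider a hypothetical 4 TiB open-systems volume (4,398,046,511,104 bytes). The volume size is an assumption used only to illustrate the arithmetic; it is not part of this guide's sample configuration.

number-of-logical-blocks = 4,398,046,511,104 / 512 = 8,589,934,592
number-of-cylinders = ceiling(ceiling(8,589,934,592 / 512) / 15) = ceiling(16,777,216 / 15) = 1,118,482
number-of-bitmap-areas = ceiling((1,118,482 * 15) / 122,752) = ceiling(16,777,230 / 122,752) = 137

If every pair volume were this size and no bitmap areas were consumed by other program products:

maximum-number-of-pairs-that-can-be-created = floor(65,536 / 137) = 478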

When the S-VOL's resource group and storage system have the same serial number and model
You can create an HA pair whose S-VOL is a volume in a resource group that has the same serial number and model as the storage system on which that volume resides. In this case, you must use as the P-VOL a volume in a resource group (virtual storage machine) whose serial number and model are the same as those of the storage system in which the S-VOL resides. When you create HA pairs, the virtual LDEV ID of the P-VOL is copied to the virtual LDEV ID of the S-VOL, so in this configuration the copied virtual LDEV ID of the P-VOL is equal to the original virtual LDEV ID of the S-VOL.
A volume in a resource group that has the same serial number and model as the storage system, and whose virtual LDEV ID is equal to its original LDEV ID, is treated by the multi-array virtualization function as a normal volume, not as a virtualized volume. If copying the virtual information from the P-VOL to the S-VOL would leave the S-VOL unable to satisfy the requirements for a normal volume, as in the following examples, you cannot create the HA pair:
The copied virtual SSID of the P-VOL does not match the original SSID of the S-VOL.
The copied virtual emulation type of the P-VOL does not match the original emulation type of the S-VOL. The virtual emulation type includes the virtual CVS attribute (-CVS).
Because HP XP7 does not support LUSE, volumes in a LUSE configuration (*n) cannot be specified as a P-VOL.

5 Configuration and pair management using RAID Manager
Abstract
This chapter describes and provides instructions for using RAID Manager commands to configure a High Availability system and manage HA pairs.

High Availability system configuration
The following illustration shows a completed HA system configuration, which includes the following key components:
Host server connected to the primary and secondary storage systems, with management software (alternate path software, cluster software, or both, depending on the system configuration).
Primary and secondary sites with HP XP7 systems at each site. The following system components are configured on the storage systems:
Thin Provisioning volumes that will become the primary and secondary volumes of HA pairs
The HA feature installed on the primary and secondary storage systems
A virtual storage machine on the secondary storage system
A resource group on the secondary storage system
An external volume group on each storage system for the quorum disk
Remote paths between the storage systems
A RAID Manager command device on each storage system
External storage system with the quorum disk, connected to both storage systems using External Storage

Primary storage system settings
The primary storage system components used in the procedures and examples in this chapter have the following settings.

Primary storage system
Model: HP XP7
Serial number: 11111

Primary volume
Actual LDEV ID: 22:22
Port attribute: Target
Port name: CL1-A
LU number: 0

Ports for remote connections
CL3-A: Initiator
CL4-A: RCU Target

External volume for the quorum disk (primary storage system)
Actual LDEV ID: 99:99
Port attribute: External
Port name: CL5-A
External volume group number: 1-1
Path group ID: 1
LU number: 0
Quorum disk ID: 0

Secondary storage system settings
The secondary storage system components used in the procedures and examples in this chapter have the following settings.

Secondary storage system
Model: HP XP7
Serial number: 22222

Secondary volume
Actual LDEV ID: 44:44
Port attribute: Target
Port name: CL1-C
LU number: 0

Ports for remote connections
CL3-C: RCU Target
CL4-C: Initiator

External volume for the quorum disk (secondary storage system)
Actual LDEV ID: 88:88
Port attribute: External
Port name: CL5-C
External volume group number: 1-2
Path group ID: 1
LU number: 0
Quorum disk ID: 0

Resource group
Resource group name: HAGroup1
Virtual storage machine model: HP XP7
Virtual storage machine serial number: 11111

Host group
Host group ID: CL1-C-0
Host group name: 1C-G00
Usage: For the S-VOL

Pool
Pool ID: 0
Pool name: HA_POOL
Pool volume: 77:77

RAID Manager server configuration
The RAID Manager server configuration used in the procedures and examples in this chapter has the following settings.

RAID Manager instances and configuration definition files
Instance 0, horcm0.conf: For the operation of the primary storage system
Instance 1, horcm1.conf: For the operation of the secondary storage system
Instance 100, horcm100.conf: For the operation of the primary storage system from the viewpoint of the virtual storage machine (serial number: 11111)
Instance 101, horcm101.conf: For the operation of the secondary storage system from the viewpoint of the virtual storage machine (serial number: 11111)

For operations involving virtual storage machines, the parameters specified in raidcom commands and the objects displayed by those commands are based on the virtual IDs. In the procedures and examples in this chapter, no virtual storage machine is defined in the primary storage system, but you can operate the primary storage system as if there were a virtual storage machine with the same serial number and model as the primary storage system.

External storage system settings
The external storage system used in the procedures and examples in this chapter has the following settings.

External storage system
Model: HP XP7
Serial number:
WWN for the storage system at the primary site: 50060e
WWN for the storage system at the secondary site: 50060e

Workflow for creating an HA environment
1. Initial state (page 77)
2. Adding the external system for the quorum disk (page 78)
3. Verifying the physical data paths (page 78)
4. Creating the command devices (page 79)
5. Creating the configuration definition files (page 80)
6. Starting RAID Manager (page 81)
7. Connecting the primary and secondary storage systems (page 82)
8. Creating the quorum disk (page 85)

9. Setting up the secondary system (page 93)
10. Updating the RAID Manager configuration definition files (page 106)
11. Creating the HA pair (page 108)
12. Adding an alternate path to the S-VOL (page 111)

NOTE: This chapter provides RAID Manager examples and instructions using the in-band method of issuing RAID Manager commands. You can also issue HA commands using the out-of-band method. For details about the in-band and out-of-band methods, see the HP XP7 RAID Manager User Guide.

Initial state
The initial state before HA configuration consists of one host, one primary storage system, and one secondary storage system.
Primary and secondary storage systems:
Additional shared memory for HA is installed in both storage systems.
The HA feature is installed on both storage systems.
Resource group 0 exists by default in both storage systems.
Thin Provisioning virtual volumes (THP V-VOLs) are configured and have LU paths defined. These volumes will become the primary volumes of HA pairs.
Host:
The required management software for your configuration (alternate path and/or cluster software) is installed.
The RAID Manager software is installed.
NOTE: The creation of HA pairs is not affected by the presence or absence of server I/O to the THP V-VOLs.

Adding the external system for the quorum disk
Install an external storage system for the quorum disk. The storage system must be supported by External Storage for connection to the HP XP7 as external storage.
Related topics
Requirements and restrictions (page 41)
Planning the quorum disk (page 65)

Verifying the physical data paths
Make sure that the following physical data paths are connected and configured:
From the primary system to the secondary system: two or more paths
From the secondary system to the primary system: two or more paths
From the primary system to the external system: two or more paths
From the secondary system to the external system: two or more paths
From the host to the primary system: two or more paths
From the host to the secondary system: two or more paths
The following figure shows the physical data paths (redundant paths not shown). Although only one path is required for each location, it is strongly recommended that you connect the storage systems using at least two physical paths. If you connect nodes using only one physical path, an unexpected failover might occur in the server, or the HA pairs might be suspended, even though only a single path or hardware failure has occurred. When maintenance is performed on the physical paths between storage systems, the HA pairs must be suspended.

Creating the command devices
A command device (CMD) is required on each storage system for communication between RAID Manager and the storage system. The command device must be created in resource group 0 on both the primary system and the secondary system. After the command devices have been created, they must be set up for host recognition.

1. Using Command View Advanced Edition, allocate a command device in Resource Group 0 in both storage systems and enable user authentication. For details about creating a command device, see the HP XP7 Provisioning for Open Systems User Guide.
2. If necessary, change the topology and fabric settings for the ports defined to the command devices.
3. Define the volume to the port connected to the host.

Creating the configuration definition files
You must create four HORCM configuration definition files on the host for your HA environment (see the HP XP7 RAID Manager User Guide for the latest command format and syntax):
One that describes the primary storage system and P-VOLs
One that describes the secondary storage system and S-VOLs
One for operating the virtual storage machine (SN: 11111) on the primary storage system
One for operating the virtual storage machine (SN: 11111) on the secondary storage system
The configuration definition files for the examples in this chapter are shown below. For details about creating RAID Manager configuration definition files, see the HP XP7 RAID Manager Installation and Configuration User Guide. The examples below show files on a Windows host.
NOTE: When specifying the serial number for HP XP7 using RAID Manager, add a "3" at the beginning of the serial number. For example, for serial number 11111, enter 311111.

HORCM file for the primary storage system: horcm0.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive0

HORCM file for the secondary storage system: horcm1.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive1

HORCM file for the virtual storage machine (SN: 11111) on the primary storage system: horcm100.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive0
HORCM_VCMD
# redefine Virtual DKC Serial# as unitids
311111

HORCM file for the virtual storage machine (SN: 11111) on the secondary storage system: horcm101.conf
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive1
HORCM_VCMD
# redefine Virtual DKC Serial# as unitids
311111

Creating the command devices (page 79)
Starting RAID Manager (page 81)

Starting RAID Manager
After creating the RAID Manager configuration definition files, you can start the RAID Manager software. Because you are not yet operating the virtual storage machine, you only need to start instances 0 and 1. You do not yet need to start instances 100 and 101 for the virtual storage machine.
Command example (for Windows)
1. Start RAID Manager instances 0 and 1.
horcmstart 0 1
starting HORCM inst 0
HORCM inst 0 starts successfully.
starting HORCM inst 1
HORCM inst 1 starts successfully.
2. Enter the user name and password, and perform user authentication.
raidcom -login <username> <password> -IH0
raidcom -login <username> <password> -IH1

82 The -IH option in this example is used for each command to specify an instance. You can also perform the operation using a shell for each instance. To start the shell, specify an instance number to the environment variable HORCMINST, and then execute the command. Creating the configuration definition files (page 80) Connecting the primary and secondary storage systems (page 82) Connecting the primary and secondary storage systems To connect the primary and secondary storage systems, you will first set the port attributes on both storage systems, physically connect the storage systems, and then add the remote paths between the storage systems. Setting the port attributes (page 82) Adding remote connections (page 83) Setting the port attributes The Initiator and RCU Target port attributes must be set on the primary and secondary system ports for HA command and data transfer. Commands and data are sent from the Initiator ports to the RCU Target ports. Initiator ports and RCU Target ports are required on both the primary and secondary storage systems. Command example 1. Change the attribute of port (CL3-A) on the primary storage system to Initiator. raidcom modify port -port CL3-A -port_attribute MCU -IH0 2. Change the attribute of port (CL4-A) on the primary storage system to RCU Target. 82 Configuration and pair management using RAID Manager

83 raidcom modify port -port CL4-A -port_attribute RCU -IH0 3. Change the attribute of port (CL3-C) on the secondary storage system to RCU Target. raidcom modify port -port CL3-C -port_attribute RCU -IH1 4. Change the attribute of port (CL4-C) on the secondary storage system to Initiator. raidcom modify port -port CL4-C -port_attribute MCU -IH1 Use the same procedure to change the port attributes for the alternate paths. The alternate paths are not shown in the illustration. Check command and output examples 1. Display the port information for the primary system. raidcom get port -IH0 PORT TYPE ATTR SPD LPID FAB CONN SSW SL Serial# WWN PHY_PORT (snip) CL3-A FIBRE MCU AUT E8 N FCAL N e80072b (snip) CL4-A FIBRE RCU AUT 97 N FCAL N e80072b (snip) 2. Display the port information for the secondary system. Confirm that the port attributes have been changed as intended. raidcom get port -IH1 PORT TYPE ATTR SPD LPID FAB CONN SSW SL Serial# WWN PHY_PORT (snip) CL3-C FIBRE RCU AUT D6 N FCAL N e800756ce22 - (snip) CL4-C FIBRE MCU AUT 7C N FCAL N e800756ce32 - (snip) Adding remote connections Add bidirectional remote connections between the primary and secondary storage systems. Specify the same path group ID to the bidirectional remote connections. Connecting the primary and secondary storage systems 83

NOTE: When specifying the serial number for HP XP7 using RAID Manager, add a "3" at the beginning of the serial number. For example, for serial number 11111, enter 311111.
Command example
1. Add a remote connection with path group ID 0 from primary system port (CL3-A) to secondary system port (CL3-C).
raidcom add rcu -cu_free 322222 R800 0 -mcu_port CL3-A -rcu_port CL3-C -IH0
2. Confirm that asynchronous command processing has completed.
raidcom get command_status -IH0
HANDLE SSB1 SSB2 ERR_CNT Serial# Description
00c
3. Add a remote connection with path group ID 0 from secondary system port (CL4-C) to primary system port (CL4-A).
raidcom add rcu -cu_free 311111 R800 0 -mcu_port CL4-C -rcu_port CL4-A -IH1
4. Confirm that asynchronous command processing has completed.
raidcom get command_status -IH1
HANDLE SSB1 SSB2 ERR_CNT Serial# Description
00c
5. Add the alternate paths between the storage systems using the raidcom add rcu_path command (a hedged sketch follows these steps). These alternate paths are not shown in the illustration.
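The following is a sketch of what step 5 might look like in this chapter's example configuration. It is an illustration only: the ports shown for the alternate paths (CL3-B, CL3-D, CL4-B, and CL4-D) are assumptions that do not appear elsewhere in this guide, the -cu_free arguments follow the serial-number convention described in the note above, and you should confirm the raidcom add rcu_path syntax in the HP XP7 RAID Manager User Guide before using it.

raidcom add rcu_path -cu_free 322222 R800 0 -mcu_port CL3-B -rcu_port CL3-D -IH0
raidcom get command_status -IH0
raidcom add rcu_path -cu_free 311111 R800 0 -mcu_port CL4-D -rcu_port CL4-B -IH1
raidcom get command_status -IH1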

85 Check command and output examples 1. On the primary storage system, display remote connection information. raidcom get rcu -cu_free R IH0 Serial# ID PID MCU RCU M/R T PNO MPORT RPORT STS_CD SSIDs R RCU F 0 CL3-A CL3-C NML_01-2. On the secondary storage system, display the remote connection information, and confirm that the serial number, model, and port name of the storage system are correct and that the path status is normal. raidcom get rcu -cu_free R IH1 Serial# ID PID MCU RCU M/R T PNO MPORT RPORT STS_CD SSIDs R RCU F 0 CL4-C CL4-A NML_01 - Creating the quorum disk When a failure occurs, the quorum disk is used by the primary and secondary systems to determine which pair volume contained the latest data when the failure occurred. This section provides instructions for setting up the quorum disk. You will map the disk on the external storage system to the primary and secondary systems. Make sure the external volume is formatted before proceeding. You should be familiar with External Storage to set up the quorum disk. An external volume for a quorum disk must be mapped to one external volume group. Setting the port attributes for connecting the external storage system (page 85) Creating external volume groups (page 87) Creating external volumes (page 88) Setting external volumes as quorum disks (page 90) Setting the port attributes for connecting the external storage system This section provides instructions for setting the ports on the primary and secondary systems to the "External" attribute in preparation for connecting to the external storage system. Creating the quorum disk 85

86 Command example 1. Change the attribute of the port (CL5-A) on the primary storage system to External. raidcom modify port -port CL5-A -port_attribute ELUN -IH0 2. Change the attribute of the port (CL5-C) on the secondary storage system to External. raidcom modify port -port CL5-C -port_attribute ELUN -IH1 Check command and output examples 1. Display port information for the primary system. raidcom get port -IH0 PORT TYPE ATTR SPD LPID FAB CONN SSW SL Serial# WWN PHY_PORT (snip) CL5-A FIBRE ELUN AUT E4 N FCAL N e80072b (snip) 2. Display the port information for the secondary system. Confirm that the port attributes have been changed as intended. raidcom get port -IH1 PORT TYPE ATTR SPD LPID FAB CONN SSW SL Serial# WWN PHY_PORT (snip) CL5-C FIBRE ELUN AUT D5 N FCAL N e800756ce42 - (snip) 86 Configuration and pair management using RAID Manager

Creating external volume groups
Create external volume groups for the quorum disk to map the disk on the external storage system to the primary and secondary storage systems. Verify that the volumes in the external storage system are formatted. Use the raidcom discover lun -port command to verify that the same E_VOL_ID_C value (the volume identifier included in the SCSI Inquiry command of the external volume) is displayed for the primary and secondary storage systems.
For details about creating external volume groups, see the HP XP7 External Storage for Open and Mainframe Systems User Guide.
Command example
1. Search for information about the external system port that is connected to primary system port (CL5-A).
raidcom discover external_storage -port CL5-A -IH0
PORT WWN PM USED Serial# VENDOR_ID PRODUCT_ID
CL5-A 50060e M NO HP XP7
2. Display the LU that is defined to the external storage system port (50060e) that is connected to primary system port (CL5-A). Check the LU number, and note the value shown in the E_VOL_ID_C field.
raidcom discover lun -port CL5-A -external_wwn 50060e -IH0
PORT WWN LUN VOL_Cap(BLK) PRODUCT_ID E_VOL_ID_C
CL5-A 50060e OPEN-V HP AAAA

88 3. Map the LU (0) that is defined to the external storage system port (50060e ) that is connected to the primary system port (CL5-A). Specify 1 for the path group ID, and specify 1-1 for the external volume group number. raidcom add external_grp -path_grp 1 -external_grp_id 1-1 -port CL5-A -external_wwn 50060e lun_id 0 -IH0 4. Confirm that asynchronous command processing has completed. raidcom get command_status -IH0 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Search for information about the external system port that is connected to the secondary system port (CL5-C). raidcom discover external_storage -port CL5-C -IH1 PORT WWN PM USED Serial# VENDOR_ID PRODUCT_ID CL5-C 50060e M NO HP XP7 6. Display the LU that is defined to external storage system port (50060e ) that is connected to secondary system port (CL5-C). Check the LU number, and confirm that the E_VOL_ID_C field displays the same value as in step 2. raidcom discover lun -port CL5-C -external_wwn 50060e IH1 PORT WWN LUN VOL_Cap(BLK) PRODUCT_ID E_VOL_ID_C CL5-C 50060e OPEN-V HP AAAA 7. Map the LU (0) that is defined to external storage system port (50060e ) that is connected to secondary system port (CL5-C). Specify 1 for the path group ID, and specify 1-2 for the external volume group number. raidcom add external_grp -path_grp 1 -external_grp_id 1-2 -port CL5-C -external_wwn 50060e lun_id 0 -IH1 8. Confirm that asynchronous command processing has completed. raidcom get command_status -IH1 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Check command and output examples 1. On the primary storage system, display information about the external path to the volume in the external storage system. raidcom get path -path_grp 1 -IH0 PHG GROUP STS CM IF MP# PORT WWN PR LUN PHS Serial# PRODUCT_ID LB PM NML E D 0 CL5-A 50060e NML XP7 N M 2. On the secondary storage system, display information about the external path to the volume in the external storage system. Confirm that external system information is correct, including serial number, model, and WWN, and confirm that the path status and volume status are normal. raidcom get path -path_grp 1 -IH1 PHG GROUP STS CM IF MP# PORT WWN PR LUN PHS Serial# PRODUCT_ID LB PM NML E D 0 CL5-C 50060e NML XP7 N M Creating external volumes Using capacity in the external system, you will create virtual external volumes on the primary and secondary systems that will be mapped to the quorum disk. 88 Configuration and pair management using RAID Manager

89 Command example 1. Specify external volume group (1-1) assigned to the primary storage system to create an external volume whose LDEV ID is 0x9999. Allocate all capacity in the external volume group. raidcom add ldev -external_grp_id 1-1 -ldev_id 0x9999 -capacity all -IH0 2. Confirm that asynchronous command processing has completed. raidcom get command_status -IH0 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Specify external volume group (1-2) assigned to the secondary storage system to create an external volume whose LDEV ID is 0x8888. Allocate all free space in the external volume group. raidcom add ldev -external_grp_id 1-2 -ldev_id 0x8888 -capacity all -IH1 4. Confirm that asynchronous command processing has completed. raidcom get command_status -IH1 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Creating the quorum disk 89

90 Check command and output examples 1. Display information about the volume (LDEV ID: 0x9999). raidcom get ldev -ldev_id 0x9999 -fx -IH0 Serial# : LDEV : 9999 SL : 0 CL : 0 VOL_TYPE : OPEN-V-CVS VOL_Capacity(BLK) : NUM_PORT : 0 PORTs : F_POOLID : NONE VOL_ATTR : CVS : ELUN E_VendorID : HP E_ProductID : OPEN-V E_VOLID : E_VOLID_C : HP AAAA... NUM_E_PORT : 1 E_PORTs : CL5-A e LDEV_NAMING : STS : NML OPE_TYPE : NONE OPE_RATE : 100 MP# : 0 SSID : 0007 RSGID : 0 2. Display the information about the volume (LDEV ID: 0x8888). Confirm that the information about the external volume is correct. raidcom get ldev -ldev_id 0x8888 -fx -IH1 Serial# : LDEV : 8888 SL : 0 CL : 0 VOL_TYPE : OPEN-V-CVS VOL_Capacity(BLK) : NUM_PORT : 0 PORTs : F_POOLID : NONE VOL_ATTR : CVS : ELUN E_VendorID : HP E_ProductID : OPEN-V E_VOLID : E_VOLID_C : HP AAAA... NUM_E_PORT : 1 E_PORTs : CL5-C e LDEV_NAMING : STS : NML OPE_TYPE : NONE OPE_RATE : 100 MP# : 0 SSID : 0005 RSGID : 0 Setting external volumes as quorum disks This section provides instructions for setting the virtualized external volumes in the primary and secondary systems as quorum disks. The same quorum disk ID must be set to the primary and secondary storage systems. 90 Configuration and pair management using RAID Manager

The serial number and model of the paired storage system are specified for the -quorum_enable option of the raidcom modify ldev command.
NOTE: When specifying the serial number for HP XP7 using RAID Manager, add a "3" at the beginning of the serial number. For example, for serial number 11111, enter 311111.
Command example
1. Set the volume (0x9999) in the primary storage system as a quorum disk with quorum disk ID 0. For the -quorum_enable option, specify the serial number and model (HP XP7) of the paired secondary storage system (serial number 22222, entered as 322222).
raidcom modify ldev -ldev_id 0x9999 -quorum_enable 322222 R800 -quorum_id 0 -IH0
2. Confirm that asynchronous command processing has completed.
raidcom get command_status -IH0
HANDLE SSB1 SSB2 ERR_CNT Serial# Description
00c
3. Set the volume (0x8888) in the secondary storage system as a quorum disk with quorum disk ID 0. For the -quorum_enable option, specify the serial number and model (HP XP7) of the paired primary storage system (serial number 11111, entered as 311111).
raidcom modify ldev -ldev_id 0x8888 -quorum_enable 311111 R800 -quorum_id 0 -IH1
4. Confirm that asynchronous command processing has completed.
raidcom get command_status -IH1
HANDLE SSB1 SSB2 ERR_CNT Serial# Description
00c

92 Check command and output examples 1. Display the information about the volume (LDEV ID: 0x9999). raidcom get ldev -ldev_id 0x9999 -fx -IH0 Serial# : LDEV : 9999 SL : 0 CL : 0 VOL_TYPE : OPEN-V-CVS VOL_Capacity(BLK) : NUM_PORT : 0 PORTs : F_POOLID : NONE VOL_ATTR : CVS : ELUN : QRD E_VendorID : HP E_ProductID : OPEN-V E_VOLID : E_VOLID_C : HP AAAA... NUM_E_PORT : 1 E_PORTs : CL5-A e80072b6750 LDEV_NAMING : STS : NML OPE_TYPE : NONE OPE_RATE : 100 MP# : 0 SSID : 0007 QRDID : 0 QRP_Serial# : QRP_ID : R8 RSGID : 0 92 Configuration and pair management using RAID Manager

2. Display the information about the volume (LDEV ID: 0x8888), and confirm that the following values are correct:
QRDID (quorum disk ID)
QRP_Serial# (serial number of the storage system that forms the HA pair)
QRP_ID (model of the storage system that forms the HA pair)
raidcom get ldev -ldev_id 0x8888 -fx -IH1
Serial# : LDEV : 8888 SL : 0 CL : 0
VOL_TYPE : OPEN-V-CVS
VOL_Capacity(BLK) :
NUM_PORT : 0
PORTs :
F_POOLID : NONE
VOL_ATTR : CVS : ELUN : QRD
E_VendorID : HP
E_ProductID : OPEN-V
E_VOLID :
E_VOLID_C : HP AAAA...
NUM_E_PORT : 1
E_PORTs : CL5-C e80072b6760
LDEV_NAMING :
STS : NML
OPE_TYPE : NONE
OPE_RATE : 100
MP# : 0
SSID : 0005
QRDID : 0
QRP_Serial# :
QRP_ID : R8
RSGID : 0

Creating external volumes (page 88)
Setting up the secondary system (page 93)

Setting up the secondary system
This section provides instructions for creating a virtual storage machine (VSM) in the secondary storage system and configuring it for HA pair operations. To create a virtual storage machine, you add resources such as host group IDs and LDEV IDs to a resource group that is created for the virtual storage machine. Adding a host group ID or LDEV ID to the resource group also reserves that ID; you then create the actual host group and volume by specifying the reserved IDs so that they can be used for an HA pair.
The following procedures describe how to create an HA environment. If appropriate, you can use existing storage system resources, for example, Thin Provisioning pools and THP V-VOLs that have already been created.

94 Related topics Creating a resource group (page 94) Reserving a host group ID (page 95) Deleting the virtual LDEV ID of the S-VOL (page 97) Reserving an LDEV ID for the S-VOL (page 98) Setting the reservation attribute on the S-VOL (page 99) Creating additional host groups in a VSM (page 101) Creating a pool (page 102) Creating the S-VOL (page 103) Adding an LU path to the S-VOL (page 105) Creating a resource group When HA setup is complete, the host sees the P-VOL and S-VOL of each pair as a single volume in a single storage system. Resource groups are created in the secondary storage systems using the primary system's serial number and model as virtual information, so that the P-VOL and S-VOL of each pair share the same virtual storage machine information. A virtual storage machine is composed of multiple resource groups that have the same virtual information. When you create a resource group and specify the virtual serial number and model, the resource group is registered in the virtual storage machine. If the virtual storage machine does not already exist in the storage system, it is created automatically when the resource group is created. The following illustration shows the creation of a resource group when the P-VOL is not already registered to a virtual storage machine. 94 Configuration and pair management using RAID Manager

NOTE: When specifying the serial number for HP XP7 using RAID Manager, add a "3" at the beginning of the serial number. For example, for serial number 11111, enter 311111.
Command example
Specify the primary system's serial number and model for the virtual storage machine you are creating on the secondary system.
raidcom add resource -resource_name HAGroup1 -virtual_type 311111 R800 -IH1
Check command and output examples
Display the information about the resource groups of the secondary storage system. Information about all resource groups is displayed. Confirm the resource group name, resource group number, virtual serial number, and virtual model.
raidcom get resource -key opt -IH1
RS_GROUP RGID V_Serial# V_ID V_IF Serial#
meta_resource R8 Y
HAGroup R8 Y
NOTE: If you need to delete the virtual information set for the resource group, you must delete the resource group:
raidcom delete resource -resource_name HAGroup1 -IH1

Setting up the secondary system (page 93)
Reserving a host group ID (page 95)

Reserving a host group ID
In the secondary storage system's resource group, you will reserve a host group ID to be used by the S-VOL.

96 Command example Reserve a host group ID (CL1-C-0) in resource group (HAGroup1). raidcom add resource -resource_name HAGroup1 -port CL1-C-0 -IH1 Check command and output examples Display information about the host group that is set to port (CL1-C). Confirm that the port name, host group ID, and host group name are correct. raidcom get host_grp -port CL1-C -resource 1 -IH1 PORT GID GROUP_NAME Serial# HMD HMO_BITs CL1-C 0 1C-G WIN 96 Configuration and pair management using RAID Manager

NOTE: If you reserve a host group for which no actual volume is defined in the resource group, specifying the -key host_grp option for the check command allows you to display the reserved host group. The following example shows the result of executing the check command.
raidcom get host_grp -port CL1-C -key host_grp -resource 1 -IH1
PORT GID GROUP_NAME Serial# HMD HMO_BITs
CL1-C 0 1C-G WIN
CL1-C 1 HAVol WIN
CL1-C
CL1-C
CL1-C
CL1-C
As shown in this example, the host groups with host group IDs 0 to 5 are reserved in resource group 1. Actual volumes are defined for the host groups with host group IDs 0 and 1. The host groups with host group IDs 2 to 5 are reserved in the resource group, but no actual volumes are defined for them. The host groups with host group IDs 6 to 254 are not displayed, because they are not reserved in resource group 1.

Creating a resource group (page 94)
Deleting the virtual LDEV ID of the S-VOL (page 97)
Creating additional host groups in a VSM (page 101)

Deleting the virtual LDEV ID of the S-VOL
Temporarily delete the virtual LDEV ID of the volume to be added to the virtual storage machine.

98 Command example Delete the virtual LDEV ID of the volume (0x4444). raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id 0x4444 -IH1 Check command and output examples Display information about the volume (LDEV ID: 0x4444). For the volume whose virtual LDEV ID was deleted, fffe is displayed for VIR_LDEV (virtual LDEV ID). raidcom get ldev -ldev_id 0x4444 -fx -IH1 Serial# : LDEV : 4444 VIR_LDEV : fffe SL : - CL : - VOL_TYPE : NOT DEFINED SSID : - RSGID : 0 NOTE: If you need to reconfigure a deleted virtual LDEV ID, use the raidcom map resource command (example: raidcom map resource -ldev_id 0x4444 -virtual_ldev_id 0x4444 -IH1). The default virtual LDEV ID is the same as the actual LDEV ID. After reconfiguring the virtual LDEV ID, use the check command to confirm that the virtual LDEV ID is the same as the actual LDEV ID. Reserving a host group ID (page 95) Reserving an LDEV ID for the S-VOL (page 98) Reserving an LDEV ID for the S-VOL In the newly created resource group, you will reserve an LDEV ID so that the volume is available to become the target volume of an HA pair. 98 Configuration and pair management using RAID Manager

99 Command example Reserve the LDEV ID (0x4444) in the resource group (HAGroup1). raidcom add resource -resource_name HAGroup1 -ldev_id 0x4444 -IH1 Check command and output examples Display the information about volume (LDEV ID: 0x4444). Confirm that the number of the resource group in which the LDEV ID was reserved is displayed for RSGID. raidcom get ldev -ldev_id 0x4444 -fx -IH1 Serial# : LDEV : 4444 VIR_LDEV : fffe SL : - CL : - VOL_TYPE : NOT DEFINED SSID : - RSGID : 1 Setting the reservation attribute on the S-VOL When you create an HA pair, the P-VOL's LDEV ID is set as the virtual LDEV ID of the S-VOL. Before the pair can be created, the HA reservation attribute must be set to the volume that will become the S-VOL, so that the virtual LDEV ID can be set to the volume. Setting up the secondary system 99

Command example
Set the HA reservation attribute for the LDEV ID (0x4444).
raidcom map resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1
Check command and output examples
Display the information about the volume (LDEV ID: 0x4444). For the LDEV ID to which the reservation attribute was set, ffff is displayed for VIR_LDEV (virtual LDEV ID).
raidcom get ldev -ldev_id 0x4444 -fx -IH1
Serial# : LDEV : 4444 VIR_LDEV : ffff
SL : - CL : -
VOL_TYPE : NOT DEFINED
SSID : -
RSGID : 1
NOTE: If you need to release the reservation attribute, use the raidcom unmap resource command (example: raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1). After releasing the reservation attribute, use the check command to confirm that fffe is displayed for VIR_LDEV (virtual LDEV ID).

Reserving an LDEV ID for the S-VOL (page 98)
Creating additional host groups in a VSM (page 101)
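As a quick recap of the S-VOL preparation steps so far, the VIR_LDEV value reported by the same check command moves through three states. The command below is the one already used above; the annotations only summarize what this guide has described for each value.

raidcom get ldev -ldev_id 0x4444 -fx -IH1

VIR_LDEV equal to the actual LDEV ID (the default state): the volume still carries its own virtual LDEV ID.
VIR_LDEV : fffe : the virtual LDEV ID has been deleted with raidcom unmap resource.
VIR_LDEV : ffff : the HA reservation attribute has been set with raidcom map resource ... -virtual_ldev_id reserve, so the P-VOL's LDEV ID can be copied to this volume when the pair is created.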

101 Creating additional host groups in a VSM Create a host group to be used by the S-VOL of the HA pair. If necessary, set the host mode for the host group. NOTE: Host group 0 exists by default. You need to create a host group only if you are creating additional host groups with host group ID 1 or higher. If you create a new host group but do not reserve the new host group ID in the resource group, add the new host group ID to the resource group as described in Reserving a host group ID (page 95). Command example (using CL1-C-1) 1. To port (CL1-C), create a host group (HAVol) with host group ID 1. raidcom add host_grp -port CL1-C-1 -host_grp_name HAVol -IH1 2. If necessary, set the host mode for the new host group (Windows shown). raidcom modify host_grp -port CL1-C-1 -host_mode WIN -IH1 3. Reserve host group (CL1-C-1) to resource group 1. For instructions, see Reserving a host group ID (page 95). Check command and output examples Display the information about the host group that is set for port (CL1-C). Confirm that the port name, host group ID, and host group name are correct. Setting up the secondary system 101

102 raidcom get host_grp -port CL1-C -IH1 PORT GID GROUP_NAME Serial# HMD HMO_BITs CL1-C 0 1C-G WIN CL1-C 1 HAVol WIN Related topics Reserving a host group ID (page 95) Creating a pool (page 102) Creating a pool After creating host groups, you need to create a pool volume, format the volume, and create a Thin Provisioning pool. Command example 1. Specify a parity group (13-4) to create a volume (pool volume) whose LDEV ID is 0x7777. The capacity is 100 GB. raidcom add ldev -ldev_id 0x7777 -parity_grp_id capacity 100G -IH1 2. Confirm that asynchronous command processing has completed. raidcom get command_status -IH1 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Perform Quick Format to the volume (0x7777). raidcom initialize ldev -operation qfmt -ldev_id 0x7777 -IH1 4. Confirm that asynchronous command processing has completed. 102 Configuration and pair management using RAID Manager

103 raidcom get command_status -IH1 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Specify 0x7777 to the pool volume to create a pool for Thin Provisioning whose pool ID is 0 and whose pool name is HA_POOL. raidcom add thp_pool -pool_id 0 -pool_name HA_POOL -ldev_id 0x7777 -IH1 6. Confirm that asynchronous command processing has completed. raidcom get command_status -IH1 HANDLE SSB1 SSB2 ERR_CNT Serial# Description 00c Check command and output examples 1. Confirm that volume (LDEV ID: 0x7777) is set for the pool volume of pool (pool ID: 0). raidcom get ldev -ldev_id 0x7777 -fx -IH1 Serial# : LDEV : 7777 SL : 0 CL : 0 VOL_TYPE : OPEN-V-CVS VOL_Capacity(BLK) : NUM_LDEV : 1 LDEVs : 7777 NUM_PORT : 0 PORTs : F_POOLID : 0 VOL_ATTR : CVS : POOL RAID_LEVEL : RAID1 RAID_TYPE : 2D+2D NUM_GROUP : 1 RAID_GROUPs : DRIVE_TYPE : DKR5E-J1R2SS DRIVE_Capa : LDEV_NAMING : STS : NML OPE_TYPE : NONE OPE_RATE : 100 MP# : 0 SSID : 0007 RSGID : 0 2. Check the pool capacity. raidcom get thp_pool -IH1 PID POLS U(%) AV_CAP(MB) TP_CAP(MB) W(%) H(%) Num LDEV# LCNT TL_CAP(MB) 000 POLN Check the pool name. raidcom get pool -key opt -IH1 PID POLS U(%) POOL_NAME Seq# Num LDEV# H(%) VCAP(%) TYPE PM 000 POLN 0 HA_POOL OPEN N Creating additional host groups in a VSM (page 101) Creating the S-VOL (page 103) Creating the S-VOL Specify the volume that will become the S-VOL using the reservation attribute and LDEV ID mapped earlier. The S-VOL must be the same size as the P-VOL. Setting up the secondary system 103

Command example
1. Check the capacity of the P-VOL.
raidcom get ldev -ldev_id 0x2222 -fx -IH0
Serial# : LDEV : 2222 SL : 0 CL : 0
VOL_TYPE : OPEN-V-CVS
VOL_Capacity(BLK) :
NUM_PORT : 0
PORTs :
F_POOLID : NONE
VOL_ATTR : CVS : THP
B_POOLID : 0
LDEV_NAMING :
STS : NML
OPE_TYPE : NONE
OPE_RATE : 100
MP# : 0
SSID : 0005
Used_Block(BLK) : 0
RSGID : 0
2. In the Thin Provisioning pool with pool ID 0, create a virtual volume (THP V-VOL) with a capacity of 1,024,000 blocks and LDEV ID 0x4444.
raidcom add ldev -pool 0 -ldev_id 0x4444 -capacity 1024000 -IH1

3. Confirm that asynchronous command processing has completed.
raidcom get command_status -IH1
HANDLE SSB1 SSB2 ERR_CNT Serial# Description
00c
Check command and output examples
Display the information for the volume (LDEV ID: 0x4444), and confirm that the new volume satisfies the following requirements:
The reservation attribute is set.
The volume has the same capacity as the P-VOL.
The volume is a THP V-VOL.
raidcom get ldev -ldev_id 0x4444 -fx -IH1
Serial# : LDEV : 4444 VIR_LDEV : ffff SL : 0 CL : 0
VOL_TYPE : OPEN-V-CVS
VOL_Capacity(BLK) :
NUM_PORT : 0
PORTs :
F_POOLID : NONE
VOL_ATTR : CVS : THP
B_POOLID : 0
LDEV_NAMING :
STS : NML
OPE_TYPE : NONE
OPE_RATE : 100
MP# : 0
SSID : 0009
Used_Block(BLK) : 0
RSGID : 1

Creating a pool (page 102)
Adding an LU path to the S-VOL (page 105)

Adding an LU path to the S-VOL
Add an LU path between the port connected to the server and the S-VOL. At this point the host does not recognize the S-VOL, because the virtual LDEV ID has not yet been defined for the volume.

106 Command example Specify host group (CL1-C-0) and LU (0) to add an LU path to S-VOL (0x4444). raidcom add lun -port CL1-C-0 -lun_id 0 -ldev_id 0x4444 -IH1 Check command and output examples Display the information about the LU paths that are defined in host group (CL1-C-0). raidcom get lun -port CL1-C-0 -fx -IH1 PORT GID HMD LUN NUM LDEV CM Serial# HMO_BITs CL1-C 0 WIN Creating the S-VOL (page 103) Updating the RAID Manager configuration definition files (page 106) Updating the RAID Manager configuration definition files Before creating the HA pair, you must update the RAID Manager configuration definition files on the primary and secondary systems to add the information for the volumes that will become the P-VOL and S-VOL. Shutting down RAID Manager (page 106) Editing RAID Manager configuration definition files (page 107) Restarting RAID Manager (page 108) Shutting down RAID Manager You must shut down both RAID Manager instances before editing the configuration definition files. 106 Configuration and pair management using RAID Manager

Command example (Windows shown)
Shut down instance 0 and instance 1.
horcmshutdown 0 1
inst 0: HORCM Shutdown inst 0!!!
inst 1: HORCM Shutdown inst 1!!!

Updating the RAID Manager configuration definition files (page 106)
Editing RAID Manager configuration definition files (page 107)

Editing RAID Manager configuration definition files
The following examples show the configuration definition files for a Windows host. Make sure to specify the actual LDEV IDs for the HA pair volumes, not the virtual LDEV IDs.
NOTE: When specifying the serial number for HP XP7 using RAID Manager, add a "3" at the beginning of the serial number. For example, for serial number 11111, enter 311111.

Example of primary HORCM file, horcm0.conf
The added HORCM_LDEV and HORCM_INST entries show the updates for the volumes in the sample configuration in this chapter. Make sure to enter the information for your own system in your configuration definition files.
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive0
HORCM_LDEV
#GRP DEV SERIAL LDEV# MU#
oraha dev1 311111 22:22 0
HORCM_INST
#GRP IP ADR PORT#
oraha localhost

Example of secondary HORCM file, horcm1.conf
The added HORCM_LDEV and HORCM_INST entries show the updates for the volumes in the sample configuration in this chapter. Make sure to enter the information for your own system in your configuration definition files.
HORCM_MON
#ip_address service poll(10ms) timeout(10ms)
localhost
HORCM_CMD
\\.\PhysicalDrive1
HORCM_LDEV
#GRP DEV SERIAL LDEV# MU#
oraha dev1 322222 44:44 0
HORCM_INST
#GRP IP ADR PORT#
oraha localhost

Updating the RAID Manager configuration definition files (page 106)
Shutting down RAID Manager (page 106)
Restarting RAID Manager (page 108)
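If you later define additional HA pairs in the same RAID Manager group, each pair simply becomes another line in the HORCM_LDEV section of both files. The fragment below is a hypothetical sketch only: the device name dev2 and the LDEV IDs 23:23 and 45:45 do not exist in this chapter's example configuration, and the serial numbers follow the "add a 3" convention described above.

Fragment of horcm0.conf (primary):
HORCM_LDEV
#GRP DEV SERIAL LDEV# MU#
oraha dev1 311111 22:22 0
oraha dev2 311111 23:23 0

Fragment of horcm1.conf (secondary):
HORCM_LDEV
#GRP DEV SERIAL LDEV# MU#
oraha dev1 322222 44:44 0
oraha dev2 322222 45:45 0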

108 Restarting RAID Manager After editing the configuration definition files, restart both RAID Manager instances. Command example (Windows shown) Start instances 0 and 1. horcmstart 0 1 starting HORCM inst 0 HORCM inst 0 starts successfully. starting HORCM inst 1 HORCM inst 1 starts successfully. Updating the RAID Manager configuration definition files (page 106) Editing RAID Manager configuration definition files (page 107) Verifying the virtual LDEV ID in the virtual storage machine of the secondary site (page 108) Creating the HA pair After verifying that the same virtual LDEV ID as that of the primary volume does not exist in the virtual storage machine of the secondary site, you can create the HA pair. Verifying the virtual LDEV ID in the virtual storage machine of the secondary site (page 108) Creating a High Availability pair (page 110) Verifying the virtual LDEV ID in the virtual storage machine of the secondary site Before creating an HA pair, check that the same virtual LDEV ID as that of the primary volume does not exist in the virtual storage machine of the secondary site, which has the same serial number and model as the primary storage system. If the same virtual LDEV ID as the primary volume exists, you cannot create the HA pair. Operate the virtual storage machine to check that the virtual LDEV ID does not exist. Specify the virtual storage machine for HORCM_VCMD of the configuration definition file, and then start RAID Manager. Command example (Windows shown) 1. Start instances (100 and 101) for confirming the virtual LDEV IDs. horcmstart starting HORCM inst 100 HORCM inst 100 starts successfully. starting HORCM inst 101 HORCM inst 101 starts successfully. 2. Confirm the P-VOL's virtual LDEV ID. raidcom get ldev -ldev_id 0x2222 -key front_end -cnt 1 -fx -IH100 Serial# LDEV# SL CL VOL_TYPE VOL_Cap(BLK) PID ATTRIBUTE Ports PORT_No:LU#:GRPNAME OPEN-V-CVS CVS THP Configuration and pair management using RAID Manager

3. Check that the same virtual LDEV ID as that of the primary volume does not exist in the virtual storage machine of the secondary site. After you execute this command, if virtual LDEV ID 0x2222 is not displayed, the same virtual LDEV ID (0x2222) as that of the primary volume does not exist in the virtual storage machine of the secondary site.
raidcom get ldev -ldev_id 0x2222 -key front_end -cnt 1 -fx -IH101
When you specify the virtual storage machine for HORCM_VCMD in the configuration definition file and execute the raidcom get ldev command with the -cnt option, the virtual LDEV IDs in the range specified by the -cnt option are displayed.
TIP: To display the volume information as a list for each volume, use the -key front_end option for the raidcom get ldev command.

Revising the virtual LDEV ID in the virtual storage machine of the secondary site
If the same virtual LDEV ID as that of the primary volume is displayed for the virtual storage machine of the secondary site, the HA system design might contain an error. Review and revise the system configuration.
The following example shows the case in which the same virtual LDEV ID as that of the P-VOL (0x2222) is assigned to a volume (LDEV ID: 0xfefe) in the virtual storage machine of the secondary site.
Command example
1. Check whether the same virtual LDEV ID as that of the primary volume is assigned to the virtual storage machine of the secondary site.
raidcom get ldev -ldev_id 0x2222 -key front_end -cnt 1 -fx -IH101
Serial# LDEV# SL CL VOL_TYPE VOL_Cap(BLK) PID ATTRIBUTE Ports PORT_No:LU#:GRPNAME
NOT DEFINED NOT DEFINED
In this output, the virtual LDEV ID (0x2222) is assigned to the virtual storage machine of the secondary site.
2. Confirm the actual LDEV ID of the volume whose virtual LDEV ID is 0x2222.
raidcom get ldev -ldev_id 0x2222 -fx -IH101
Serial# : PHY_Serial# :
LDEV : 2222 PHY_LDEV : fefe
SL : - CL : -
VOL_TYPE : NOT DEFINED
SSID : -
RSGID : 1
In this example, the virtual LDEV ID (0x2222) is assigned to the volume whose actual LDEV ID is 0xfefe.
3. To use the virtual LDEV ID (0x2222) for an HA pair volume, use the raidcom unmap resource command to remove the assignment of the virtual LDEV ID (0x2222) from the volume whose LDEV ID is 0xfefe.
raidcom unmap resource -ldev_id 0xfefe -virtual_ldev_id 0x2222 -IH1

4. Confirm that the assignment of the virtual LDEV ID (0x2222) has been removed from the volume whose LDEV ID is 0xfefe.
raidcom get ldev -ldev_id 0x2222 -key front_end -cnt 1 -fx -IH101
When you specify the virtual storage machine for HORCM_VCMD in the configuration definition file and execute the raidcom get ldev command with the -cnt option, the virtual LDEV IDs in the range specified by the -cnt option are displayed. After you execute the above command, if virtual LDEV ID 0x2222 is not displayed, the same virtual LDEV ID (0x2222) as that of the primary volume no longer exists in the virtual storage machine of the secondary site.
NOTE: After releasing the virtual LDEV ID assignment, if you execute the raidcom get ldev command without specifying the -cnt option, the following error code and message are output:
raidcom: [EX_EGPERM] Permission denied with the Resource Group
In the example above, the virtual LDEV ID (0x2222) is no longer defined after you release the virtual LDEV ID assignment, so the user of the virtual storage machine does not have access authority for it. When a command is executed specifying a virtual storage machine (that is, using HORCM_VCMD), both the actual ID and the virtual ID of the specified resource must be assigned to the user. When the virtual storage machine is not specified (that is, using HORCM_CMD), the user can execute the command only if the actual ID of the specified resource is assigned to the user.

About the virtual ID (page 15)
Verifying the virtual LDEV ID in the virtual storage machine of the secondary site (page 108)

Creating a High Availability pair
When HA configuration is complete, you can start creating HA pairs. When a pair is created, the P-VOL's LDEV ID is set as the S-VOL's virtual LDEV ID. When the paircreate operation completes, the pair status becomes PAIR, and the P-VOL and S-VOL can accept I/O from the host. When a pair is deleted, the S-VOL's virtual LDEV ID is deleted, and the HA reservation attribute remains set on the S-VOL.
NOTE: When you create an HA pair, make sure that the Thin Provisioning pool capacity still available below the warning threshold is larger than the capacity of the secondary volume. If you create an HA pair at the secondary storage system when the available pool capacity below the warning threshold is smaller than the capacity of the secondary volume, SIM reference code 720XXX (where XXX is the pool ID) is issued to report that the used capacity has exceeded the warning threshold.
NOTE: You cannot create an HA pair by using instances 100 and 101, which are for confirming the virtual LDEV IDs. To create an HA pair, use instances 0 and 1, which operate the storage systems directly.

Command example
Specify 0 for the quorum disk ID to create an HA pair.
paircreate -g oraha -f never -vl -jq 0 -IH0
Check command and output examples
Confirm that an HA pair has been created.
pairdisplay -g oraha -fxce -IH0
Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W
oraha dev1(L) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M
oraha dev1(R) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M
Confirm that the copy progress reaches 100%.

Updating the RAID Manager configuration definition files (page 106)
Adding an alternate path to the S-VOL (page 111)
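If you prefer to wait for the initial copy to finish from a script rather than re-running pairdisplay by hand, the RAID Manager pairevtwait command can block until the pair reaches the PAIR status. The following is a sketch only: the timeout value (and its unit) is an assumption, so check the pairevtwait options in the HP XP7 RAID Manager User Guide before relying on it.

pairevtwait -g oraha -s pair -t 3600 -IH0
pairdisplay -g oraha -fxce -IH0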

Adding an alternate path to the S-VOL
Add an alternate path to the S-VOL on the host using the alternate path software. With some alternate path software, the alternate path is added automatically. Make sure that the host has correctly recognized the HA secondary volume.
CAUTION: If Hitachi Dynamic Link Manager (HDLM) is installed on the server and host mode option 78 is set for the host group of the HP XP7, add the alternate path and then execute the dlnkmgr refresh -gad command to incorporate the HP XP7 settings into HDLM. For details about HDLM, see the HDLM user documentation.

Hitachi Dynamic Link Manager (page 60)
Creating the HA pair (page 108)
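After the alternate path has been added (and, for HDLM, after dlnkmgr refresh -gad has been executed as described in the caution above), it is worth confirming from the host that all paths to the HA volume are online. The HDLM command below is shown only as an illustration; the exact operands and output columns depend on the HDLM version, so verify them against the HDLM user documentation.

dlnkmgr view -path

Confirm that every path to the HA volume is reported as Online before you begin any failover or failback testing.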

113 6 Disaster recovery of High Availability Abstract This chapter describes the High Availability (HA) failure locations, the SIMs issued for HA failures, and the recovery procedures for HA failures. Failure locations The following figure and table describe the locations where HA failures can occur, the SIMs that are issued, and whether the P-VOL and S-VOL are accessible. All HA-related SIMs are described in SIMs related to HA (page 116). Figure 1 Failure locations # Failure location SIM reference codes HA volume accessible? 1 Primary system Secondary system P-VOL S-VOL 1 Server None (normal) None (normal) 2 Path between the server and the storage system Path between the server and the primary storage system None (normal) None (normal) No 2 Failure locations 113

114 # Failure location SIM reference codes HA volume accessible? 1 Primary system Secondary system P-VOL S-VOL 3 Path between the server and the secondary storage system None (normal) None (normal) 3 No 4 HA pair volume P-VOL 3A0XXX DD1XYY DFAXXX DFBXXX EF9XXX DD1XYY No 2 5 S-VOL DD1XYY 3A0XXX DD1XYY DFAXXX DFBXXX EF9XXX 3 No 6 Pool for HA pair 4 Pool for P-VOL 622XXX DD1XYY DD1XYY No 2 7 Pool for S-VOL DD1XYY 622XXX DD1XYY 3 No 8 Path between storage systems Remote path from the primary to secondary system 2180XX DD0XYY DD3XYY 3 No 9 Remote path from the secondary to primary system DD3XYY 2180XX DD0XYY No 2 10 Storage system Primary system Depends on the failure type XX DD0XYY DD3XYY No 2 11 Secondary system 2180XX DD0XYY DD3XYY Depends on the failure type 5 3 No 12 Quorum disk Path between the primary system and quorum disk 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY DD2XYY 3 No 13 Path between the secondary system and quorum disk DD2XYY 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY 3 No 114 Disaster recovery of High Availability

115 # Failure location SIM reference codes HA volume accessible? 1 Primary system Secondary system P-VOL S-VOL 14 Quorum disk 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY 3 No 15 External storage system 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY 21D0XX 21D2XX DD2XYY DEF0ZZ EF5XYY EFD000 FF5XYY 3 No Notes: 1. Pairs are not suspended and do not become inaccessible for: Failure in hardware used for redundancy in the HP XP7, such as HDD, cache, CHA, DKA, and MPB Failure in redundant physical paths 2. The volume is not accessible if a failure occurs while the S-VOL pair status is COPY, SSUS, or PSUE. 3. The volume is not accessible if a failure occurs while the P-VOL pair status is PSUS or PSUE and the I/O mode is BLOCK. 4. A failure occurs due to a full pool for an HA pair. 5. The SIM might not be viewable, depending on the failure (for example, all cache failure, all MP failure, storage system failure). Related topics SIMs related to HA (page 116) Pair condition and recovery: server failures (page 117) Pair condition and recovery: path failure between the server and the storage system (page 117) Pair condition and recovery: P-VOL failure (LDEV blockade) (page 121) Pair condition and recovery: S-VOL failure (LDEV blockade) (page 124) Pair condition and recovery: full pool for the P-VOL (page 127) Pair condition and recovery: full pool for the S-VOL (page 130) Pair condition and recovery: path failure from the primary to the secondary system (page 132) Pair condition and recovery: path failure from the secondary to the primary system (page 134) Pair condition and recovery: primary system failure (page 136) Pair condition and recovery: secondary system failure (page 138) Pair condition and recovery: path failure from the primary to the external system (page 139) Pair condition and recovery: path failure from the secondary to the external system (page 142) Failure locations 115

116 Pair condition and recovery: quorum disk failure (page 145) Pair condition and recovery: external system failure (page 153) Pair condition and recovery: other failures (page 154) SIMs related to HA The following table shows SIMs related to High Availability operations. All SIMs in the following table are reported to the service processor (SVP) of the storage system. SIM reference code 2180XX 21D0XX 21D2XX 3A0XYY 622XXX DD0XYY DD1XYY DD2XYY DD3XYY DEE0ZZ DEF0XX DFAXXX DFBXXX EF5XYY EF9XXX EFD000 FF5XYY Description Logical path(s) on the remote copy connections was logically blocked (Due to an error conditions) External storage system connection path blocking Threshold over by external storage system connection path response time-out LDEV Blockade (Effect of micro code error) The THP POOL FULL HA for this volume was suspended (Due to an unrecoverable failure on the remote copy connections) HA for this volume was suspended (Due to a failure on the volume) HA for this volume was suspended (Due to an internal error condition detected) Status of the P-VOL was not consistent with the S-VOL Quorum Disk Restore Quorum Disk Blocked LDEV blockade(drive path: Boundary 0/Effect of Drive port blockade) LDEV blockade(drive path: Boundary 1/Effect of Drive port blockade) Abnormal end of Write processing in External storage system LDEV blockade (Effect of drive blockade) External storage system connection device blockade Abnormal end of Read processing in External storage system Related topics Failure locations (page 113) Resolving failures in multiple locations (page 158) Pair condition before failure The pair status and I/O mode of an HA pair, the accessibility of the server, and the storage location of the latest data vary depending on the status before a failure occurs. 116 Disaster recovery of High Availability

117 The following table shows pair status and I/O mode, the volumes accessible from the server, and the location of the latest data before a failure occurs. You can compare this information with the changes that take place after a failure occurs, as described in the following topics. Pair status and I/O mode P-VOL S-VOL Volume accessible from the server P-VOL S-VOL Volume with latest data PAIR (Mirror (RL)) PAIR (Mirror (RL)) OK OK Both P-VOL and S-VOL COPY (Mirror (RL)) COPY (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL Pair condition and recovery: server failures The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use a server. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL* P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PAIR (Mirror (RL)) PAIR (Mirror (RL)) OK OK Both P-VOL and S-VOL COPY (Mirror (RL)) COPY (Block) COPY (Mirror (RL)) COPY (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL *If failures occur in all servers that access the P-VOL or S-VOL, then you cannot access either volume. SIM Primary system: None Secondary system: None Recovery procedure 1. Recover the server. 2. Recover the path from the server to the pair volumes. Related topics Pair condition before failure (page 116) Pair condition and recovery: path failure between the server and the storage system If a server cannot access a pair volume whose status is PAIR, though no SIM has been issued, a failure might have occurred between the server and the storage system. The following topics provide procedures for recovering of the physical path between the server and the storage systems. Pair condition and recovery: server failures 117

118 The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use a physical path between the server and a storage system.
Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL* P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PAIR (Mirror (RL)) PAIR (Mirror (RL)) OK OK Both P-VOL and S-VOL COPY (Mirror (RL)) COPY (Block) COPY (Mirror (RL)) COPY (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL
*If failures occur in all servers that access the P-VOL or S-VOL, then you cannot access either volume.
SIM Primary system: None Secondary system: None
Recovery procedure 1. Recover the path between the server and the storage system. 2. Recover the path from the server to the pair volume.
Related topics Pair condition before failure (page 116)
Recovering from a path failure between the server and the primary system The following figure shows the failure area and recovery when the path between the server and primary storage system fails. 118 Disaster recovery of High Availability

119 Steps for recovery 1. Recover the path. 1. Using the alternate path software and other tools, identify the path that cannot be accessed from the server. 2. Using the SAN management software, identify the failure location; for example, a host bus adapter, FC cable, switch, or other location. 3. Remove the cause of failure and recover the path. 2. Using the alternate path software, resume I/O from the server to the recovered path (I/O may resume automatically). Related topics Failure locations (page 113) SIMs related to HA (page 116) Recovering from a path failure between the server and the secondary system The following figure shows the failure area and recovery when the path between the server and secondary storage system fails. Pair condition and recovery: path failure between the server and the storage system 119

120 Steps for recovery 1. Recover the path. 1. Using the alternate path software or other tools, identify the path that cannot be accessed from the server. 2. Using SAN management software, identify the failure location; for example, a host bus adapter, FC cable, switch, or other location. 3. Remove the cause of failure and recover the path. 2. Using the alternate path software, resume I/O from the server to the recovered path (I/O may resume automatically). Related topics Failure locations (page 113) SIMs related to HA (page 116) 120 Disaster recovery of High Availability
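After the failed physical path between the server and the storage system is repaired, it can be useful to confirm from the server that every path to the pair volume is online before resuming production I/O. The following is a minimal sketch, assuming a Linux host that uses native device-mapper multipathing and a hypothetical multipath alias ha_vol; if your environment uses other alternate path software, use that software's own path-listing command instead.
# List all paths of the multipath device and confirm that each path is reported as active and ready.
multipath -ll ha_vol
# Issue a small direct read through the device to confirm that I/O completes on the recovered path.
dd if=/dev/mapper/ha_vol of=/dev/null bs=4k count=1 iflag=direct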

121 Pair condition and recovery: P-VOL failure (LDEV blockade) The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the P-VOL due to LDEV blockade. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Block) SSWS (Local) NG OK S-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) NG NG None 1 PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) NG NG None 2 PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL Notes: 1. Recover the data from Business Copy, Fast Snap, or other backup data. 2. Recover the data using the S-VOL data that is not the latest, Business Copy, Fast Snap, or other backup data. SIM Primary system: 3A0XYY, DD1XYY, DFAXXX, DFBXXX, EF9XXX Secondary system: DD1XYY Recovery procedure 1. Recover the P-VOL. 2. Recreate the pair. Related topics Pair condition before failure (page 116) Recovering the P-VOL (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to a P-VOL failure. Pair condition and recovery: P-VOL failure (LDEV blockade) 121

122 Steps for recovery 1. Delete the alternate path (logical path) to the volume that cannot be accessed from the server. 1. Using the alternate path software, identify the volume that cannot be accessed. 2. Confirm whether the volume (P-VOL) is blocked, and the pool ID (B_POOLID) of the pool to which the P-VOL is associated. Command example: raidcom get ldev -ldev_id 0x2222 -IH0 (snip) B_POOLID : 0 (snip) STS : BLK (snip) 3. Display the status of the volumes configuring the pool (pool volume) to identify the blocked volume. Command example: raidcom get ldev -ldev_list pool -pool_id 0 -IH0 (snip) LDEV : (snip) STS : BLK (snip) For the blocked volume, BLK is indicated in the STS column. 4. Using the alternate path software, delete the alternate path to the volume that cannot be accessed from the server. Go to the next step even if the alternate path cannot be deleted. 2. Delete the pair. 122 Disaster recovery of High Availability

123 1. From the secondary storage system, delete the pair specifying the actual LDEV ID of the S-VOL.
Command example: pairsplit -g oraha -R -d dev1 -IH1
NOTE: To delete the pair specifying the S-VOL, use the -R option of the pairsplit command. Specify the actual LDEV ID (device name) of the S-VOL in the -d option.
2. Confirm that the pair is deleted.
Command example: pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU), Seq#, LDEV#.P/S,Status,Fence, %, P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) SMPL , /- oraha dev1(r) (CL1-A-0, 0, 0) SMPL , /-
3. Remove the failure. The following procedure is an example of recovering from a pool volume failure.
1. Recover the blocked pool volume of the pool used by the P-VOL (THP V-VOL).
2. Display the status of the pool volumes to confirm that the pool volume has been recovered.
Command example: raidcom get ldev -ldev_list pool -pool_id 0 -IH0 (snip) LDEV : (snip) STS : NML (snip)
For a normal volume, NML is indicated in the STS column.
4. If the volume cannot be recovered, follow the procedure below to re-create the P-VOL:
1. At the primary storage system, delete the LU path to the P-VOL.
2. Delete the P-VOL.
3. Create a new volume.
4. Set an LU path to the new volume.
5. Recreate the pair.
1. If you created a volume in step 4, set the HA reservation attribute to the created volume.
Command example: raidcom map resource -ldev_id 0x2222 -virtual_ldev_id reserve -IH0
2. From the secondary storage system, create the pair specifying the S-VOL's actual LDEV ID.
Command example: paircreate -g oraha -f never -vl -jq 0 -d dev1 -IH1
NOTE: To create the pair specifying the S-VOL, specify the actual LDEV ID (device name) of the S-VOL in the -d option of the paircreate command.
The volume of the primary storage system changes to an S-VOL, and the volume of the secondary storage system changes to a P-VOL. Pair condition and recovery: P-VOL failure (LDEV blockade) 123

124 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 6. Using the alternate path software, add an alternate path from the server to the S-VOL (P-VOL before the failure). 7. Using the alternate path software, resume I/O from the server to the S-VOL (P-VOL before the failure). Note that I/O from the server might resume automatically. 8. Reverse the P-VOL and the S-VOL if necessary. Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Reversing the P-VOL and S-VOL (page 158) Pair condition and recovery: S-VOL failure (LDEV blockade) The following table shows the transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the S-VOL due to LDEV blockade. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG NG None* *Recover data using the P-VOL data that is not the latest, Business Copy, Fast Snap, or other backup data. 124 Disaster recovery of High Availability

125 SIM Primary system: DD1XYY Secondary system: 3A0XYY, DD1XYY, DFAXXX, DFBXXX, EF9XXX Recovery procedure 1. Recover the S-VOL. 2. Recreate the pair. Related topics Pair condition before failure (page 116) Recovering the S-VOL (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to an S-VOL failure. Pair condition and recovery: S-VOL failure (LDEV blockade) 125

126 Steps for recovery
1. Delete the alternate path (logical path) to the volume that cannot be accessed from the server.
1. Using the alternate path software, identify the volume that cannot be accessed.
2. Confirm whether the volume (S-VOL) is blocked, and the pool ID (B_POOLID) of the pool to which the S-VOL is associated.
Command example: raidcom get ldev -ldev_id 0x4444 -IH1 (snip) B_POOLID : 0 (snip) STS : BLK (snip)
3. Display the status of the volumes configuring the pool (pool volume) to identify the blocked volume.
Command example: raidcom get ldev -ldev_list pool -pool_id 0 -IH1 (snip) LDEV : (snip) STS : BLK (snip)
For the blocked volume, BLK is indicated in the STS column.
4. Using the alternate path software, delete the alternate path to the volume. Go to the next step even if the alternate path cannot be deleted.
2. Delete the pair.
1. From the primary storage system, delete the pair specifying the P-VOL's actual LDEV ID.
Command example: pairsplit -g oraha -S -d dev1 -IH0
NOTE: To delete the pair specifying the P-VOL, use the -S option of the pairsplit command. Specify the actual LDEV ID (device name) of the P-VOL in the -d option.
2. Confirm that the pair is deleted.
Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU), Seq#, LDEV#.P/S,Status,Fence, %, P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) SMPL , /- oraha dev1(r) (CL1-C-0, 0, 0) SMPL , /-
3. Remove the failure. The following procedure is an example of recovering from a pool volume failure.
1. Recover the blocked pool volume of the pool used by the S-VOL (THP V-VOL).
2. Display the status of the pool volumes to confirm that the pool volume has been recovered.
Command example: raidcom get ldev -ldev_list pool -pool_id 0 -IH1 (snip) LDEV : (snip) STS : NML (snip)
For a normal volume, NML is indicated in the STS column. 126 Disaster recovery of High Availability

127 4. If the volume cannot be recovered, follow the procedure below to re-create the S-VOL:
1. At the secondary storage system, delete the LU path to the S-VOL.
2. Delete the S-VOL.
3. Create a new volume.
4. Set an LU path to the new volume.
5. Recreate the pair.
1. If you created a volume in step 4, set the HA reservation attribute to the created volume.
Command example: raidcom map resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1
2. From the primary storage system, create the pair specifying the P-VOL's actual LDEV ID.
Command example: paircreate -g oraha -f never -vl -jq 0 -d dev1 -IH0
NOTE: To create the pair specifying the P-VOL, specify the actual LDEV ID (device name) of the P-VOL in the -d option of the paircreate command.
3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)).
Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M
pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M
6. Using the alternate path software, add an alternate path from the server to the S-VOL.
7. Using the alternate path software, resume I/O from the server to the S-VOL. Note that I/O from the server might resume automatically.
Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116)
Pair condition and recovery: full pool for the P-VOL When the pool used by the P-VOL becomes full, the HA pair is suspended. Pair condition and recovery: full pool for the P-VOL 127

128 The following table shows the transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the P-VOL due to full pool. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Block) SSWS (Local) NG OK S-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) NG NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) NG NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: 662XXX, DD1XYY Secondary system: DD1XYY Recovery procedure 1. Increase an available pool capacity to the P-VOL. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering a full pool for the P-VOL (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to a full pool of the P-VOL. 128 Disaster recovery of High Availability

129 Steps for recovery 1. Increase an available capacity to the pool on which the full pool was detected. For details on how to increase an available pool capacity, see the HP XP7 Provisioning for Open Systems User Guide. 2. Resynchronize an HA pair. 1. Confirm that the I/O mode of the S-VOL is Local. Command example: pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL SSWS NEVER, L/L oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, B/B 2. At the secondary storage system, resynchronize the pair. Command example: pairresync -g oraha -swaps -IH1 The volume of the primary storage system changes to an S-VOL, and the volume of the secondary storage system changes to a P-VOL. Pair condition and recovery: full pool for the P-VOL 129

130 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)).
Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) P-VOL PAIR NEVER, L/M
pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) S-VOL PAIR NEVER, L/M
3. Using the alternate path software, resume I/O to the S-VOL that was a P-VOL before the failure (I/O might resume automatically).
4. Reverse the P-VOL and the S-VOL if necessary.
Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Reversing the P-VOL and S-VOL (page 158)
Pair condition and recovery: full pool for the S-VOL When the pool used by the S-VOL becomes full, the HA pair is suspended. The following table shows the transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the S-VOL due to a full pool.
Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG NG S-VOL
SIM Primary system: DD1XYY Secondary system: 662XXX, DD1XYY 130 Disaster recovery of High Availability

131 Recovery procedure 1. Increase an available pool capacity to the S-VOL. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering a full pool for the S-VOL (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to a full pool of the S-VOL. Steps for recovery 1. Increase an available capacity to the pool on which the full pool was detected. For details on how to increase an available pool capacity, see the HP XP7 Provisioning for Open Systems User Guide. 2. Resynchronize an HA pair. 1. Confirm that the I/O mode of the P-VOL is Local. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B 2. At the primary storage system, resynchronize the pair. Command example: pairresync -g oraha -IH0 Pair condition and recovery: full pool for the S-VOL 131

132 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/O to the S-VOL (I/O might resume automatically). Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Pair condition and recovery: path failure from the primary to the secondary system If the statuses of storage systems in both the primary and secondary sites are normal, a failure might have occurred in a physical path or switch between the storage systems. The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use any physical path from the primary system to the secondary system. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: DD0XYY, 2180XX Secondary system: DD3XYY 132 Disaster recovery of High Availability

133 Recovery procedure 1. Recover the paths from primary to secondary storage systems. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering paths from the primary to the secondary system (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to path failure from the primary system to the secondary system. Steps for recovery 1. Reconnect the physical path or reconfigure the SAN to recover the path failure. When the path is recovered, the remote path is automatically recovered. If you recover the physical path but the remote path does not recover, contact HP Technical Support. 2. Resynchronize the pair. 1. Confirm that the P-VOL I/O mode is Local. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B 2. At the primary system, resynchronize the pair. Command example: pairresync -g oraha -IH0 Pair condition and recovery: path failure from the primary to the secondary system 133

134 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/O to the volume that could not be accessed from the server (I/O might resume automatically). Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Pair condition and recovery: path failure from the secondary to the primary system If the statuses of storage systems in both the primary and secondary sites are normal, a failure might have occurred in a physical path or switch between the storage systems. The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use any physical path from the secondary system to the primary system. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Block) SSWS (Local) NG OK S-VOL COPY (Mirror (RL)) COPY (Block) COPY (Mirror (RL)) COPY (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: DD3XYY Secondary system: DD0XYY, 2180XX 134 Disaster recovery of High Availability

135 Recovery procedure 1. Recover the paths from secondary to primary storage systems. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering paths from the secondary to the primary system (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to path failure from the secondary system to primary system. Steps for recovery 1. Reconnect the physical path or reconfigure the SAN to recover the path from the secondary system to the primary system. After the path is recovered, the remote path is automatically recovered. If you recover the physical path but the remote path does not recover, contact HP Technical Support. 2. Resynchronize the pair. 1. Confirm that the S-VOL I/O mode is Local. Command example: pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL SSWS NEVER, L/L oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, B/B Pair condition and recovery: path failure from the secondary to the primary system 135

136 2. At the secondary system, resynchronize the pair. Command example: pairresync -g oraha -swaps -IH1 The volume on the primary system changes to an S-VOL, and the volume on the secondary system changes to a P-VOL. 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) P-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) S-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/O to the S-VOL (P-VOL before the failure). I/O from the server might resume automatically. 4. Reverse the P-VOL and the S-VOL if necessary. Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Reversing the P-VOL and S-VOL (page 158) Pair condition and recovery: primary system failure The following table shows transitions for pair status and I/O mode, the volumes accessible from the server, and location of the latest data when you can no longer use the primary system due to failure. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL 1 S-VOL P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Block) SSWS (Local) 2 NG OK S-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) NG NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) NG NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL Notes: 136 Disaster recovery of High Availability

137 Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL 1 S-VOL P-VOL S-VOL 1. If shared memory in the primary system volatilizes, P-VOL status changes to SMPL, and the HA reservation attribute is set for the volume. 2. If the server does not issue write I/O, the pair status might be PAIR (Mirror (RL)). SIM Primary system: SIM varies depending on the failure type Secondary system: 2180XX, DD0XYY, DD3XYY Recovery procedure 1. When the primary storage system is powered off, delete an alternate path (logical path) to the P-VOL, and then turn on the power. 1. Using the alternate path software, distinguish the volumes which are not able to be accessed from the server. 2. Using the alternate path software, delete the alternate paths to the P-VOL. If you cannot delete the alternate paths, detach all channel port connection paths (physical paths) which are connected to the server at the primary site. 2. Turn on the primary storage system. 3. Recover the primary storage system. For details, contact HP Technical Support. 4. Recover the physical path between the primary and secondary systems. 5. If S-VOL pair status is PAIR, suspend the pair specifying the S-VOL. 6. Resynchronize or recreate a pair using the procedure in the following table whose pair status and I/O mode match your pair's status and I/O mode. Pair status I/O mode Procedure P-VOL S-VOL P-VOL S-VOL PSUS/PSUE SSWS Block Local Resynchronize the pair specifying the S-VOL. SMPL SSWS Not applicable Local 1. Delete the pair specifying the S-VOL. 2. When the virtual LDEV ID is set to the P-VOL, delete the virtual LDEV ID, and then set the reservation attribute to the P-VOL. 3. Recreate the pair specifying the S-VOL. PSUS/PSUE SSUS/PSUE Local Block Resynchronize the pair specifying the P-VOL. SMPL SSUS/PSUE Not applicable Block 1. Delete the pair forcibly from the secondary system, specifying Disable in the Volume Access field (Delete Pairs window). 2. Release the reservation attribute of the P-VOL, and then set the same virtual LDEV ID that was used before the pair was deleted. 3. Recreate the pair specifying the P-VOL. Pair condition and recovery: primary system failure 137

138 7. If the alternate path to the P-VOL has been deleted, add the alternate path. 1. If you have detached the channel port connection paths of the primary site, restore all channel port connection paths to their original status, and then add the alternate path. 2. Using the alternate path software, add the alternate path deleted at step 1 to the P-VOL. Related topics Pair condition before failure (page 116) Pair condition and recovery: secondary system failure The following table shows transitions for pair status and I/O mode, the volumes accessible from the server, and location of the latest data when you can no longer use the secondary system due to failure. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) 2 PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG NG S-VOL Notes: 1. If shared memory in the secondary system volatilizes, the S-VOL pair status changes to SMPL, and the reservation attribute is set for the volume. 2. If the server does not issue write I/O, the pair status might be PAIR (Mirror (RL)). SIM Primary system: 2180XX, DD0XYY, DD3XYY Secondary system: SIM varies depending on the failure type Recovery procedure 1. When the secondary storage system is powered off, delete an alternate path (logical path) to the S-VOL, and then turn on the power. 1. Using the alternate path software, distinguish the volumes which are not able to be accessed from the server. 2. Using the alternate path software, delete the alternate paths to the S-VOL. If you cannot delete the alternate paths, detach all channel port connection paths (physical paths) which are connected to the server at the secondary site. 2. Turn on the secondary storage system. 3. Recover the secondary system. For details, contact HP Technical Support. 4. Recover the physical path between the primary and secondary systems. 5. If P-VOL pair status is PAIR, suspend the pair specifying the P-VOL. 138 Disaster recovery of High Availability

139 6. Resynchronize or recreate the pair using the procedure in the following table whose pair status and I/O mode match your pair's status and I/O mode. Pair status I/O mode Procedure P-VOL S-VOL P-VOL S-VOL PSUS/PSUE PSUS/PSUE Local Block Resynchronize the pair specifying the P-VOL. PSUS/PSUE SMPL Local Not applicable 1. Delete the pair specifying the P-VOL. 2. When the virtual LDEV ID is set to the S-VOL, delete the virtual LDEV ID, and then set the reservation attribute to the S-VOL. 3. Recreate the pair specifying the P-VOL. PSUS/PSUE SSWS Block Local Resynchronize the pair specifying the S-VOL. PSUS/PSUE SMPL Block Not applicable 1. Delete the pair forcibly from the primary system, specifying Disable in the Volume Access field (Delete Pairs window). 2. Release the reservation attribute from the S-VOL, and then set the same virtual LDEV ID as was used before the pair was deleted. 3. Recreate the pair specifying the S-VOL. 7. If the alternate path to the S-VOL has been deleted, add the alternate path. 1. If you have detached the channel port connection paths of the secondary site, restore all channel port connection paths to their original status, and then add the alternate path. 2. Using the alternate path software, add the alternate path deleted at step 1 to the S-VOL. Related topics Pair condition before failure (page 116) Pair condition and recovery: path failure from the primary to the external system If the status of external system is normal, a failure might have occurred in a physical path from primary or secondary system to the external system, or a switch. Recover from the failure that occurred in the physical path or switch. The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use any physical path from the primary system to the quorum disk's external system. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL Pair condition and recovery: path failure from the primary to the external system 139

140 SIM Primary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY Secondary system: DD2XYY Recovery procedure 1. Recover the paths to the external system. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering the path from the primary system to the external system (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to path failure from the primary system to the external system. When pair status is PAIR and the pair suspends due to a failure of the external storage system or of a path to an external system, the P-VOL I/O mode is Local, and the server continues I/O to the P-VOL. 140 Disaster recovery of High Availability

141 Steps for recovery 1. Recover the path to the external storage system. 1. Reconnect the physical path or reconfigure the SAN to recover the path to the external system. After the path is recovered, the remote path is automatically recovered. 2. Confirm that the external storage system is connected correctly. Command example: raidcom get path -path_grp 1 -IH0 PHG GROUP STS CM IF MP# PORT WWN PR LUN PHS Serial# PRODUCT_ID LB PM NML E D 0 CL5-A 50060e NML XP7 N M 3. Confirm the LDEV ID of the quorum disk by obtaining the information of the external volume from the primary storage system. Command example: raidcom get external_grp -external_grp_id 1-1 -IH0 T GROUP P_NO LDEV# STS LOC_LBA SIZE_LBA Serial# E NML 0x x000003c Confirm that the primary storage system recognizes the external volume as a quorum disk by specifying the LDEV ID of the quorum disk. Command example: raidcom get ldev -ldev_id 0x9999 -fx -IH0 (snip) QRDID : 0 QRP_Serial# : QRP_ID : R8 (snip) 2. Wait for more than 5 minutes after completing step 1, and then resynchronize the pair. 1. Confirm that the I/O mode of the P-VOL is Local. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B 2. At the primary storage system, resynchronize the pair. Command example: pairresync -g oraha -IH0 Pair condition and recovery: path failure from the primary to the external system 141

142 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/O to the volume that could not be accessed from the server (I/O might resume automatically). Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Pair condition and recovery: path failure from the secondary to the external system If the status of external system is normal, a failure might have occurred in a physical path from primary or secondary system to the external system, or a switch. Recover from the failure that occurred in the physical path or switch. The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use any physical path from the secondary system to the quorum disk's external system. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: DD2XYY Secondary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY 142 Disaster recovery of High Availability

143 Recovery procedure 1. Recover the paths to the external system. 2. Resynchronize the pair. Related topics Pair condition before failure (page 116) Recovering the path from the secondary system to the external system (pair status: PAIR) The following figure shows the failure area and recovery when the pair suspends due to path failure from the secondary system to the external system. When pair status is PAIR and the pair suspends due to a failure of the external storage system or of a path to an external system, the P-VOL I/O mode is Local, and the server continues I/O to the P-VOL. Pair condition and recovery: path failure from the secondary to the external system 143

144 Steps for recovery 1. Recover the path to the external storage system. 1. Reconnect the physical path or reconfigure the SAN to recover the path to the external system. After the path is recovered, the remote path is automatically recovered. 2. Confirm that the external system is connected correctly. Command example: raidcom get path -path_grp 1 -IH1 PHG GROUP STS CM IF MP# PORT WWN PR LUN PHS Serial# PRODUCT_ID LB PM NML E D 0 CL5-C 50060e NML XP7 N M 3. Confirm the LDEV ID of the quorum disk by obtaining the information of the external volume from the secondary system. Command example: raidcom get external_grp -external_grp_id 1-2 -IH1 T GROUP P_NO LDEV# STS LOC_LBA SIZE_LBA Serial# E NML 0x x000003c Confirm that the secondary system recognizes the external volume as a quorum disk by specifying the LDEV ID of the quorum disk. Command example: raidcom get ldev -ldev_id 0x8888 -fx -IH1 (snip) QRDID : 0 QRP_Serial# : QRP_ID : R8 (snip) 2. Wait for more than 5 minutes after completing step 1, and then resynchronize the pair. 1. Confirm that the P-VOL I/O mode is Local. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B 2. At the primary system, resynchronize the pair. Command example: pairresync -g oraha -IH0 144 Disaster recovery of High Availability

145 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/O to the S-VOL (I/O might resume automatically). Related topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Pair condition and recovery: quorum disk failure The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the quorum disk volume. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY Secondary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY Recovering from a failure of a storage system or a physical path between storage systems The recovery procedure for when an HA pair was suspended due to a failure of a storage system or a physical path between storage systems is explained below. Pair condition and recovery: quorum disk failure 145

146 Check the state of storage systems in the primary and secondary sites. If the status of a storage system in the primary or secondary site is not normal, contact HP Technical Support or contact us. If the statuses of storage systems in both the primary and secondary sites are normal, a failure might have occurred in a physical path or switch between the storage systems. Recover the physical path or switch from the failure. Recovering from a failure of a physical path from the primary storage system to the secondary storage system The recovery procedure for when an HA pair was suspended due to a failure of a physical path from the primary storage system to the secondary storage system is explained below. Overview of failure recovery Steps for recovery from the failure 1. Reconnect the physical path or reconfigure the SAN to recover from the failure of the path from the primary storage system to the secondary storage system. After the physical path between the storage systems is recovered, the remote path is automatically recovered. If you recover the physical path but the recovery from the failure is impossible, contact us. 2. Resynchronize an HA pair. 1. Confirm that the I/O mode of the primary volume is Local. Command example: pairdisplay -g oraha -fcxe -IH0 2. At the primary storage system, resynchronize the pair. Command example: pairresync -g oraha -IH0 146 Disaster recovery of High Availability

147 3. Confirm that the pair statuses of the primary and secondary volumes of the HA pair have changed to PAIR(Mirror(RL)). Command example: pairdisplay -g oraha -fcxe -IH0 pairdisplay -g oraha -fcxe -IH1 3. Using an alternative path software, resume I/Os to the volume that could not be accessed from the server. Note that I/Os from the server might be resumed automatically. Related Topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Recovering from a failure of a physical path from the secondary storage system to the primary storage system The recovery procedure for when an HA pair was suspended due to a failure of a physical path from the secondary storage system to the primary storage system is explained below. Overview of failure recovery Pair condition and recovery: quorum disk failure 147

148 Steps for recovery from the failure 1. Reconnect the physical path or reconfigure the SAN to recover from the failure of the path from the secondary storage system to the primary storage system. After the physical path between the storage systems is recovered, the remote path is automatically recovered. If you recover the physical path but the recovery from the failure is impossible, contact us. 2. Resynchronize an HA pair. 1. Confirm that the I/O mode of the secondary volume is Local. Command example: pairdisplay -g oraha -fcxe -IH1 2. At the secondary storage system, resynchronize the pair. Command example: pairresync -g oraha -swaps -IH1 The volume of the primary storage system changes to a secondary volume, and the volume of the secondary storage system changes to a primary volume. 3. Confirm that the pair statuses of the primary and secondary volumes of the HA pair have changed to PAIR (Mirror(RL)). Command example: pairdisplay -g oraha -fcxe -IH0 pairdisplay -g oraha -fcxe -IH1 3. Using an alternative path software, resume I/Os to the secondary volume (primary volume before the failure). Note that I/Os from the server might be resumed automatically. 4. Reverse the primary volume and the secondary volume as necessary. Related Topics I/O modes (page 18) Failure locations (page 113) SIMs related to HA (page 116) Reversing the P-VOL and S-VOL (page 158) Recovery of the quorum disk (pair status: PAIR) When the pair status is PAIR and the pair suspends due to quorum disk failure, the P-VOL I/O mode is Local and the server continues I/O to the P-VOL. The following figure shows the failure area and recovery when the pair suspends due to quorum disk failure. 148 Disaster recovery of High Availability

149 Steps for recovery
NOTE: The following procedure is also used for recreating the quorum disk when it has been mistakenly reformatted.
NOTE: Steps 1 and 2 below describe the recovery procedure for an external storage system made by HP, such as a P9500. If you use another vendor's external storage system, follow that storage system's own recovery procedure instead, and when it is complete, proceed to step 3.
Pair condition and recovery: quorum disk failure 149

150 1. On the external storage system, recover the quorum disk. 1. Block the quorum disk. 2. Format the quorum disk. If the quorum disk recovers after formatting, proceed to step h. If the quorum disk is not recovered from the failure, proceed to step c. 3. Confirm the following information about the quorum disk. - Vendor - Machine name - Volume identifier 1 - Volume identifier 2 (if the information is valid) - Serial number - SSID - Product ID - LBA capacity (the capacity must be larger than the quorum disk before the failure occurred) - CVS attribute See HP XP7 External Storage for Open and Mainframe Systems User Guide about the details of information above and how to confirm them. Details about how to confirm the CVS attribute, refer to the Table 18 (page 152). 4. Delete the LU path to the quorum disk. 5. Delete the volume that is used as the quorum disk. 6. Create a new volume. For the LDEV ID, set the same value as the LDEV ID of the quorum disk that has been used since before the failure occurred. If you cannot set the same value, proceed to step 3. Also set the same value for the following information as the value was used before the failure occurred. If you cannot set the same value, proceed to step 3. - Vendor - Machine name - Volume identifier 1 - Volume identifier 2 (if the information is valid) - Serial Number - SSID - Product ID - LBA capacity (the capacity must be larger than the quorum disk before the failure occurred) - CVS attribute See HP XP7 External Storage for Open and Mainframe Systems User Guide about the details of information above and how to confirm them. How to confirm the CVS attribute, refer to the Table 18 (page 152) and Table 19 (page 152). 7. Set an LU path to the new volume. For the LU number, set the same value as the LU number of the quorum disk that was used since before the failure occurred. 150 Disaster recovery of High Availability

151 If you cannot set the same value, proceed to step Reconnect the external storage system or the quorum disk to the primary and secondary storage systems. 2. Wait for more than 5 minutes after completing step 1, and then resynchronize the pair. 1. Confirm that the P-VOL I/O mode is Local. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B 2. On the primary storage system, resynchronize the pair. Command example: pairresync -g oraha -IH0 3. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)). Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status,Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M Proceed to step Recreate the pairs. 1. On the primary storage system, delete all pairs that use the quorum disk where the failure occurred. Command example: pairsplit -g oraha -S -d 0x2222 -IH0 2. Confirm that the pairs were deleted. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU), Seq#, LDEV#.P/S,Status,Fence, %, P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) SMPL , /- oraha dev1(r) (CL1-C-0, 0, 0) SMPL , /- 3. On the primary and secondary storage systems, delete the quorum disk. 4. On the primary and secondary storage systems, add a quorum disk. 5. On the primary storage system, create the pairs. Command example: paircreate -g oraha -f never -vl -jq 0 -d 0x2222 -IH0 Pair condition and recovery: quorum disk failure 151

152 6. Confirm that the P-VOL and S-VOL pair statuses change to PAIR (Mirror (RL)).
Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M
pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M
4. Using the alternate path software, resume I/O to the S-VOL (I/O might resume automatically).
NOTE: When the external storage system is installed at the primary site, if a failure occurs in both the primary system and the external system, forcibly delete the pair on the secondary system, then recreate the pair. For details, see Recovering the storage systems at the primary site from a failure (external storage system at the primary site) (page 156).
Table 18 How to confirm from the external storage system whether the CVS attribute is present or absent
Remote Web Console: Open the Logical device window in Remote Web Console on the external storage system, and then confirm whether the CVS attribute is displayed in the Emulation type column of the LDEV that is used as the quorum disk.
RAID Manager: Run the raidcom get ldev command against the LDEV that is used as the quorum disk on the external storage system, and then confirm whether the CVS attribute is output for VOL_TYPE. For details about the raidcom get ldev command, see the HP XP7 RAID Manager Reference Guide.
Web Console*: Confirm whether the CVS attribute is shown in the CVS column of the LUN Management window.
* Ask the maintenance personnel to operate the Web Console.
Table 19 Conditions for grant of CVS attribute when the volume is created in the external storage system
Interface Remote Web Console RAID Manager Condition Internal volume or external volume THP-VOL HP XP7 or later CVS attribute granted granted P9500 or earlier Create LDEV at maximum size Other than above not granted granted Web Console* The LDEV is created during the operation of the installation of Define Config & Install or ECC/LDEV, which remains the initial value of the Number of LDEVs on the Device Emulation Type Define window. Other than above not granted granted
* Ask the maintenance personnel to operate the Web Console. 152 Disaster recovery of High Availability
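As an alternative to checking the CVS attribute from the external storage system's GUI, the RAID Manager method in Table 18 can be run from a command line. The following is a minimal sketch, assuming a hypothetical RAID Manager instance (IH9) configured for the external storage system and a hypothetical LDEV ID (0x0100) for the volume that is mapped as the quorum disk; substitute the values for your configuration.
# Display the volume that is used as the quorum disk and check whether the CVS attribute appears in VOL_TYPE.
raidcom get ldev -ldev_id 0x0100 -fx -IH9 | grep VOL_TYPE
If the volume was created with the CVS attribute, the attribute is included in the VOL_TYPE output, as described in Table 18.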

153 Related topics I/O modes (page 18) Creating the quorum disk (page 85) Failure locations (page 113) SIMs related to HA (page 116) Adding the quorum disk (page 194) Removing quorum disks (page 208) Pair condition and recovery: external system failure The following table shows transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when you can no longer use the external system. Before failure After failure Pair status and I/O mode Pair status and I/O mode Volume accessible from the server Volume with latest data P-VOL S-VOL P-VOL S-VOL 1 P-VOL S-VOL PAIR (Mirror (RL)) PAIR (Mirror (RL)) PSUE (Local) PSUE (Block) OK NG P-VOL COPY (Mirror (RL)) COPY (Block) PSUE (Local) PSUE (Block) OK NG P-VOL PSUS/PSUE (Local) SSUS/PSUE (Block) PSUS/PSUE (Local) SSUS/PSUE (Block) OK NG P-VOL PSUS/PSUE (Block) SSWS (Local) PSUS/PSUE (Block) SSWS (Local) NG OK S-VOL SIM Primary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY Secondary system: 21D0XY, 21D2XX, DD2XYY, DEF0ZZ, EF5XYY, EFD000, FF5XYY Recovery procedure 1. Recover the external system. For details, contact the vendor. 2. Recreate or resynchronize the pair. 3. Reverse the primary volume and the secondary volume as necessary. Pair condition and recovery: external system failure 153

154 Related topics
Pair condition before failure (page 116)
Pair condition and recovery: other failures
The following table shows the transitions for pair status and I/O mode, the volumes that are accessible from the server, and the location of the latest data when a failure other than those explained above occurs.
Pair status and I/O mode before failure (P-VOL / S-VOL) → after failure (P-VOL / S-VOL); volume accessible from the server (P-VOL / S-VOL); volume with latest data:
PAIR (Mirror (RL)) / PAIR (Mirror (RL)) → PSUE (Local) / PSUE (Block); OK / NG; P-VOL
PAIR (Mirror (RL)) / PAIR (Mirror (RL)) → PSUE (Block) / SSWS (Local); NG / OK; S-VOL
COPY (Mirror (RL)) / COPY (Block) → PSUE (Local) / PSUE (Block); OK or NG (see note 1) / NG; P-VOL
PSUS/PSUE (Local) / SSUS/PSUE (Block) → PSUS/PSUE (Local) / SSUS/PSUE (Block); OK or NG (see note 1) / NG; P-VOL
PSUS/PSUE (Block) / SSWS (Local) → PSUS/PSUE (Block) / SSWS (Local); NG / OK or NG (see note 2); S-VOL
Notes:
1. Depending on the failure factor, if you cannot access the P-VOL, you cannot access either the P-VOL or the S-VOL.
2. Depending on the failure factor, if you cannot access the S-VOL, you cannot access either the P-VOL or the S-VOL.
SIM
Primary system: SIM varies depending on the failure type
Secondary system: SIM varies depending on the failure type
Recovery procedure
1. Recover the system.
2. Resynchronize the pair.
Related topics
Pair condition before failure (page 116)
Recovery procedure when an HA pair is suspended due to other failures
An HA pair might be suspended due to failures other than those explained above. An example workflow for recovering from such a failure is shown below.
1. Recover from the failure.
   1. Check information such as the SIMs issued by the primary or secondary storage systems to determine whether a failure that can suspend an HA pair has occurred.
   2. If a failure has occurred, perform troubleshooting according to the failure type to remove the cause of the failure.
2. Resynchronize an HA pair.
154 Disaster recovery of High Availability

155 1. Check the I/O mode of the P-VOL and the S-VOL of HA pair. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PSUE NEVER, B/B oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PSUE NEVER, L/L 2. If the I/O mode of the P-VOL is Local, resynchronize the HA pair at the primary storage system. Command example: pairresync -g oraha -IH0 3. If the I/O mode of the S-VOL is Local, resynchronize the HA pair at the secondary storage system. Command example: pairresync -g oraha -swaps -IH1 The volume of the primary storage system changes to an S-VOL, and the volume of the secondary storage system changes to a P-VOL. 4. Confirm that the pair statuses of the P-VOL and the S-VOL of the HA pair have changed to PAIR (Mirror (RL)) Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 3. Using the alternate path software, resume I/Os to the S-VOL. 4. Reverse the P-VOL and the S-VOL if necessary. If the HA volumes are not restored in the above procedure, contact HP Technical Support. Related topics I/O modes (page 18) Reversing the P-VOL and S-VOL (page 158) Pair condition and recovery: other failures 155

156 Recovering the storage systems at the primary site from a failure (external storage system at the primary site)
If a failure occurs at the primary site in a configuration with the external storage system for the quorum disk located at the primary site, the failure might affect the primary storage system and the external storage system simultaneously. In this case, the HA pair is suspended, and access to the HA volumes stops.
Failure occurred at the primary site (when containing an external storage system at the primary site)
Failure location: Both the primary storage system and the external storage system for the quorum disk
Reference codes of SIMs that might be issued by the primary storage system: Depends on the failure type (see note 2)
Reference codes of SIMs that might be issued by the secondary storage system: DD0XYY, DD2XYY, DD3XYY, XX, 21D0XX, 21D2XX, EF5XYY, EFD000, FF5XYY, DEF0ZZ
Can the server access the HA volumes? (see note 1): P-VOL: No; S-VOL: No (see note 3)
Note:
156 Disaster recovery of High Availability

157 1. Hardware such as HDD, cache, CHA, DKA, and MPB is redundant in the HP XP7 configuration. Even if a failure occurs in part of the redundant hardware, the failure does not cause an HA pair to be suspended or an HA volume to become inaccessible. Likewise, a failure in part of the hardware does not cause the HA pair to be suspended or an HA volume to become inaccessible if the following physical paths are redundant:
Between the server and the storage systems at the primary and secondary sites
Between the external storage system and the storage systems at the primary and secondary sites
Between the storage systems at the primary and secondary sites
2. A SIM that corresponds to the failure type is issued. Depending on the failure type, you might not be able to view the SIMs.
3. You can access the S-VOL if the pair status of the S-VOL is SSWS, even if a failure occurs.
Steps for recovery from the failure
1. Using the alternate path software, delete the alternate path to the HA P-VOL.
2. At the secondary storage system, delete the HA pair forcibly. Select Enable for Volume Access in the Delete Pairs window of Remote Web Console.
3. Confirm that the HA pair is deleted.
4. Using the alternate path software, resume I/Os from the server to the HA S-VOL.
5. Restore the primary storage system from the failure.
6. At the primary storage system, delete the HA pair forcibly. Select Disable for Volume Access in the Delete Pairs window of Remote Web Console. Depending on the failure type of the primary storage system, after the primary storage system is restored from the failure, the pair status of the P-VOL might change to SMPL and the HA reservation attribute might be set. In this case, you do not need to delete the HA pair forcibly.
7. Confirm that the HA pair is deleted.
8. Restore the external storage system from the failure.
9. From the primary and secondary storage systems, delete the quorum disk. Depending on the failure type of the external storage system, the quorum disk might already be deleted after the external storage system is restored from the failure. In this case, you do not need to delete the quorum disk.
10. From the primary and secondary storage systems, add a quorum disk.
11. From the secondary storage system, recreate an HA pair (see the command sketch after the related topics below).
12. Using the alternate path software, add a path to the HA P-VOL, and then resume I/Os.
13. Reverse the P-VOL and the S-VOL if necessary.
Related topics
I/O modes (page 18)
Failure locations (page 113)
Reversing the P-VOL and S-VOL (page 158)
Adding the quorum disk (page 194)
Forcibly deleting HA pairs (for paired volumes) (page 201)
Removing quorum disks (page 208)
Recovering the storage systems at the primary site from a failure (external storage system at the primary site) 157
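For steps 3, 7, and 11 above, a hedged RAID Manager sketch follows. The forced deletion itself is described in Forcibly deleting HA pairs (for paired volumes) (page 201). The group name, fence level, and quorum disk ID (0) shown here are assumptions taken from this guide's other examples; see the HP XP7 RAID Manager User Guide for the exact pair-creation options for HA.
pairdisplay -g oraha -fxce -IH1 (steps 3 and 7: a deleted pair is displayed with the status SMPL)
paircreate -g oraha -f never -vl -jq 0 -IH1 (step 11: recreate the HA pair from the secondary storage system; with -vl the local volume becomes the P-VOL)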

158 Reversing the P-VOL and S-VOL During disaster recovery operations, P-VOLs are changed to S-VOLs and S-VOLs to P-VOLs to reverse the flow of data from the secondary site to the primary site to restore the primary site. When normal operations are resumed at the primary site, the direction of copy is changed again so that the original P-VOLs become primary volumes again and the original S-VOLs become secondary volumes again with data flowing from the primary site to the secondary site. Steps to reverse data flow 1. Using the alternate path software, stop I/O from the server to P-VOLs in the secondary storage system. Continue to the next step even if the alternate path cannot be deleted. 2. Confirm that the P-VOL and the S-VOL have been reversed. Command example: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) P-VOL PAIR NEVER, L/M 3. At the primary storage system, change the pair statuses of the S-VOLs to SSWS to suspend the pairs (swap suspension). Command example: pairsplit -g oraha -d devx -RS -IH0 4. At the secondary storage system, reverse the P-VOL and the S-VOL, and then resynchronize the pairs (swap resync). Command example: pairresync -g oraha -d devx -swaps -IH0 5. Confirm that the P-VOL and the S-VOL pair statuses change to PAIR (Mirror (RL)). For example, you can use the following command: pairdisplay -g oraha -fxce -IH0 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M pairdisplay -g oraha -fxce -IH1 Group PairVol(L/R) (Port#,TID, LU),Seq#,LDEV#.P/S,Status, Fence, %,P-LDEV# M CTG JID AP EM E-Seq# E-LDEV# R/W oraha dev1(l) (CL1-C-0, 0, 0) S-VOL PAIR NEVER, L/M oraha dev1(r) (CL1-A-0, 0, 0) P-VOL PAIR NEVER, L/M 6. Using the alternate path software, restart I/Os from the server to S-VOLs in the secondary storage system. Related topics I/O modes (page 18) Resolving failures in multiple locations If failures occur in multiple locations, use the following recovery procedure: 158 Disaster recovery of High Availability

159 1. Identify the failure locations from the SIMs issued by the storage systems and by using SAN management software, and then recover from the failures.
2. If data is lost from both volumes, recover from the backup data by using Business Copy or Fast Snap volumes, or backup software.
3. If I/O is stopped, resume I/O from the server.
4. If HA pairs are suspended, resynchronize the pairs. If the pairs cannot be resynchronized, delete the pairs and then recreate them (see the command sketch after the related topics below).
Related topics
SIMs related to HA (page 116)
Resolving failures in multiple locations 159
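A hedged command sketch for step 4 follows, using the group name from this guide's other examples. The pair-creation options (fence level never and quorum disk ID 0) are assumptions; check the HP XP7 RAID Manager User Guide before using them.
pairresync -g oraha -IH0 (resynchronize the suspended pairs)
pairsplit -g oraha -S -IH0 (if the pairs cannot be resynchronized, delete them)
paircreate -g oraha -f never -vl -jq 0 -IH0 (and then recreate them)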

160 7 Planned outage of High Availability storage systems Abstract This chapter describes and provides instructions for performing planned outages of High Availability (HA) storage systems. Planned power off/on of the primary storage system Powering off the primary storage system 1. Direct server I/O to the storage system at the secondary site. Using the alternate path software, stop I/O from servers to the storage system at the primary site. 2. On the storage system at the secondary site, suspend the HA pairs to change the pair status of the S-VOLs to SSWS (swap suspension). pairsplit -g oraha -RS -IH1 3. Verify that the pair status of P-VOLs of the HA pairs has changed to PSUS(Block) and that the pair status of the S-VOLs has changed to SSWS(Local). pairdisplay -g oraha -fcxe -IH1 4. Power off the storage system at the primary site. I/O modes (page 18) Powering on the primary storage system 1. Power on the storage system at the primary site. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disk do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. Confirm that the pair status of the HA P-VOLs is PSUS(Block) and that the pair status of the S-VOLs is SSWS(Local). pairdisplay -g oraha -fcxe -IH1 5. On the storage system at the secondary site, resynchronize the HA pairs by reversing the primary and secondary volumes (swap resync). pairresync -g oraha -swaps -IH1 6. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH1 7. Using the alternate path software, resume I/O from the servers to the storage system at the primary site. 8. If necessary, reverse the primary and secondary volumes. I/O modes (page 18) Reversing the P-VOL and S-VOL (page 158) 160 Planned outage of High Availability storage systems

161 Planned power off/on of the secondary storage system Powering off the secondary storage system 1. Direct server I/O to the storage system at the primary site. Using the alternate path software, stop I/O from servers to the storage system at the secondary site. 2. On the storage system at the primary site, suspend the HA pairs by specifying the primary volume. pairsplit -g oraha -r -IH0 3. Confirm that the pair status of P-VOLs of HA pairs has changed to PSUS(Local) and the pair status of the S-VOLs has changed to SSUS(Block). pairdisplay -g oraha -fcxe -IH0 4. Power off the storage system at the secondary site. I/O modes (page 18) Powering on the secondary storage system 1. Power on the storage system at the secondary site. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disk do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Local) and that the pair status of the S-VOLs is SSUS(Block). pairdisplay -g oraha -fcxe -IH0 5. On the storage system at the primary site, resynchronize the HA pairs by specifying the primary volume. pairresync -g oraha -IH0 6. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH0 7. Using the alternate path software, resume I/O from the servers to the storage system at the secondary site. I/O modes (page 18) Planned power off/on of the external storage system (I/O continues at the primary site) Powering off the external storage system for the quorum disks (I/O continues at the primary site) 1. Direct server I/O to the storage system at the primary site. Using the alternate path software, stop I/O from the servers to the storage system at the secondary site. 2. On the storage system at the primary site, suspend the HA pairs by specifying the primary volume. pairsplit -g oraha -r -IH0 Planned power off/on of the secondary storage system 161

162 3. Confirm that the pair status of P-VOLs of the HA pairs has changed to PSUS(Local) and that the pair status of the S-VOLs has changed to SSUS(Block). pairdisplay -g oraha -fcxe -IH0 4. On the primary and secondary storage systems, disconnect the quorum disks. raidcom disconnect external_grp -ldev_id 0x9999 -IH0 NOTE: When you disconnect a quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, delete the SIM after powering on the storage system and reconnecting the quorum disk. *ZZ: quorum disk ID 5. On the primary and secondary storage systems, confirm that the quorum disk has been disconnected. raidcom get path -path_grp 1 -IH0 6. Power off the external storage system. I/O modes (page 18) Powering on the external storage system for the quorum disk (I/O continues at the primary site) 1. Power on the external storage system. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disk do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. On the primary and secondary storage systems, establish the connections to the quorum disks. raidcom check_ext_storage external_grp -ldev_id 0x9999 -IH0 5. On the primary and secondary storage systems, confirm that the connections to the quorum disks have been established. raidcom get external_grp -external_grp_id 1-1 -IH0 6. Confirm that the external volumes of the primary and secondary storage systems are recognized as quorum disks. raidcom get ldev -ldev_id 0x9999 -IH0 7. Check for SIMs about quorum disk blockade, and delete the SIMs. 8. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Local) and that the pair status of the S-VOLs is SSUS(Block). pairdisplay -g oraha -fcxe -IH0 9. Wait for more than 5 minutes after completing step 4, and then resynchronize the HA pairs on the primary storage system by specifying the primary volume. pairresync -g oraha -IH0 10. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH0 162 Planned outage of High Availability storage systems

163 11. Using the alternate path software, resume server I/O to the storage system at the secondary site. I/O modes (page 18) Planned power off/on of the external storage system (I/O continues at the secondary site) Powering off the external storage system for the quorum disk (I/O continues at the secondary site) 1. Direct server I/O to the storage system at the secondary site. Using the alternate path software, stop I/O from the servers to the storage system at the primary site. 2. On the secondary storage system, suspend the HA pairs to change the pair status of the S-VOLs to SSWS (swap suspension). pairsplit -g oraha -RS -IH1 3. Verify that the pair status of the P-VOLs of the HA pairs has changed to PSUS(Block) and that the pair status of the S-VOLs has changed to SSWS(Local). pairdisplay -g oraha -fcxe -IH1 4. On the primary and secondary storage systems, disconnect the quorum disks. raidcom disconnect external_grp -ldev_id 0x8888 -IH1 NOTE: When you disconnect a quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, delete the SIM after powering on the storage system and reconnecting the quorum disk. *ZZ: quorum disk ID 5. On the primary and secondary storage systems, confirm that the quorum disks have been disconnected. raidcom get path -path_grp 1 -IH1 6. Power off the external storage system. I/O modes (page 18) Powering on the external storage system for the quorum disk (I/O continues at the secondary site) 1. Power on the external storage system. 2. Confirm that the primary and secondary storage systems and the external storage systems for the quorum disks do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. On the primary and secondary storage systems, establish connections to the quorum disks. raidcom check_ext_storage external_grp -ldev_id 0x8888 -IH1 5. On the primary and secondary storage systems, confirm that the connections to the quorum disks have been established. raidcom get external_grp -external_grp_id 1-2 -IH1 6. Confirm that the external volumes of the primary and secondary storage systems are recognized as quorum disks. raidcom get ldev -ldev_id 0x8888 -IH1 Planned power off/on of the external storage system (I/O continues at the secondary site) 163

164 7. Check for SIMs about quorum disk blockade, and delete the SIMs. 8. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Block) and that the pair status of the S-VOLs is SSWS(Local). pairdisplay -g oraha -fcxe -IH1 9. Wait for more than 5 minutes after completing step 4, and then resynchronize the HA pairs from the secondary storage system by reversing the primary and secondary volumes (swap resync). pairresync -g oraha -swaps -IH1 10. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH1 11. Using the alternate path software, resume I/O from the servers to the storage system at the primary site. 12. If necessary, reverse the primary and secondary volumes. I/O modes (page 18) Reversing the P-VOL and S-VOL (page 158) Planned power off/on of the primary and secondary storage systems Powering off the primary and secondary storage systems 1. Stop server I/O to the primary and secondary storage systems. 2. On the primary storage system, suspend the HA pairs by specifying the primary volume. pairsplit -g oraha -r -IH0 3. Confirm that the pair status of the P-VOLs of the HA pairs has changed to PSUS(Local) and that the pair status of the S-VOLs has changed to SSUS(Block). pairdisplay -g oraha -fcxe -IH0 4. Power off the primary and secondary storage systems. I/O modes (page 18) Powering on the primary and secondary storage systems 1. Power on the primary and secondary storage systems. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disks do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Local) and that the pair status of the S-VOLs is SSUS(Block). pairdisplay -g oraha -fcxe -IH0 5. On the primary storage system, resynchronize the HA pairs by specifying the primary volume. pairresync -g oraha -IH0 6. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH0 164 Planned outage of High Availability storage systems

165 7. Resume I/O from the servers to the primary and secondary storage systems. I/O modes (page 18) Planned power off/on of the primary and external storage systems Powering off the primary and external storage systems 1. Direct server I/O to the secondary storage system. Using the alternate path software, stop I/O from the servers to the primary storage system. 2. On the secondary storage system, swap and suspend the HA pairs (swap suspension). pairsplit -g oraha -RS -IH1 3. Confirm that the pair status of the P-VOLs of the HA pairs has changed to PSUS(Block) and that the pair status of the S-VOLs has changed to SSWS(Local). pairdisplay -g oraha -fcxe -IH1 4. On the primary and secondary storage systems, disconnect the quorum disks. raidcom disconnect external_grp -ldev_id 0x8888 -IH1 NOTE: When you disconnect a quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, delete the SIM after powering on the storage system and reconnecting the quorum disk. *ZZ: quorum disk ID 5. On the primary and secondary storage systems, verify that the quorum disks have been disconnected. raidcom get path -path_grp 1 -IH1 6. Power off the storage system at the primary site and the external storage system. I/O modes (page 18) Powering on the primary and external storage systems 1. Power on the storage system at the primary site and the external storage system. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disks do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. On the primary and secondary storage systems, establish connections to the quorum disks. raidcom check_ext_storage external_grp -ldev_id 0x8888 -IH1 5. On the primary and secondary storage systems, verify that the connections to the quorum disks have been established. raidcom get external_grp -external_grp_id 1-2 -IH1 6. Confirm that external volumes of the primary and secondary storage systems are recognized as quorum disks. raidcom get ldev -ldev_id 0x8888 -IH1 7. Check for SIMs about quorum disk blockade, and delete the SIMs. 8. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Block) and that the pair status of the S-VOLs is SSWS(Local). pairdisplay -g oraha -fcxe -IH1 9. Wait for more than 5 minutes after completing step 4, and then resynchronize HA pairs from the secondary storage system by reversing the primary and secondary volumes (swap resync). Planned power off/on of the primary and external storage systems 165

166 pairresync -g oraha -swaps -IH1 10. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcx -IH1 11. Using the alternate path software, resume I/O from the servers to the storage system at the primary site. 12. If necessary, reverse the primary and secondary volumes. I/O modes (page 18) Reversing the P-VOL and S-VOL (page 158) Planned power off/on of the secondary and external storage systems Powering off the secondary and external storage systems 1. Direct server I/O to the storage system at the primary site. Using the alternate path software, stop I/O from the servers to the secondary storage system. 2. On the primary storage system, suspend the HA pairs by specifying the primary volume. pairsplit -g oraha -r -IH0 3. Confirm that the pair status of the P-VOLs of the HA pairs has changed to PSUS(Local) and that the pair status of the S-VOLs has changed to SSUS(Block). pairdisplay -g oraha -fcxe -IH0 4. On the primary and secondary storage systems, disconnect the quorum disks. raidcom disconnect external_grp -ldev_id 0x9999 -IH0 NOTE: When you disconnect a quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, delete the SIM after powering on the storage system and reconnecting the quorum disk. *ZZ: quorum disk ID 5. On the primary and secondary storage systems, verify that the quorum disks have been disconnected. raidcom get path -path_grp 1 -IH0 6. Power off the storage system at the secondary site and the external storage system. I/O modes (page 18) Powering on the secondary and external storage systems 1. Power on the storage system at the secondary site and the external storage system. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disks do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. On the primary and secondary storage systems, establish connections to the quorum disks. raidcom check_ext_storage external_grp -ldev_id 0x9999 -IH0 5. On the primary and secondary storage systems, verify that the connections to the quorum disks have been established. raidcom get external_grp -external_grp_id 1-1 -IH0 6. Confirm that the external volumes of the primary and secondary storage systems are recognized as quorum disks. 166 Planned outage of High Availability storage systems

167 raidcom get ldev -ldev_id 0x9999 -IH0 7. Check for SIMs about quorum disk blockade, and delete the SIMs. 8. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Local) and that the pair status of the S-VOLs is SSUS(Block). pairdisplay -g oraha -fcxe -IH0 9. Wait for more than 5 minutes after completing step 4, and then resynchronize the HA pairs from the primary storage system by specifying the primary volume. pairresync -g oraha -IH0 10. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH0 11. Using the alternate path software, resume I/O from the servers to the storage system at the secondary site. I/O modes (page 18) Planned power off/on of all HA storage systems Powering off the primary, secondary, and external storage systems 1. Using the alternate path software, stop server I/O to the primary and secondary storage systems. 2. On the primary storage system, suspend the HA pairs. pairsplit -g oraha -r -IH0 3. Confirm that the pair status of the P-VOLs of the HA pairs has changed to PSUS(Local) and that the pair status of the S-VOLs has changed to SSUS(Block). pairdisplay -g oraha -fcxe -IH0 4. On the primary and secondary storage systems, disconnect the quorum disks. raidcom disconnect external_grp -ldev_id 0x9999 -IH0 NOTE: When you disconnect a quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, delete the SIM after powering on the storage system and reconnecting the quorum disk. *ZZ: quorum disk ID 5. On the primary and secondary storage systems, verify that the quorum disks have been disconnected. raidcom get path -path_grp 1 -IH0 6. Power off the primary and secondary storage systems and the external storage system. I/O modes (page 18) Powering on the primary, secondary, and external storage systems 1. Power on the primary and secondary storage systems and the external storage system. 2. Confirm that the primary and secondary storage systems and the external storage system for the quorum disks do not have any blocked parts. If any parts are blocked, recover them. 3. Check for SIMs about path blockage, and delete the SIMs. 4. On the primary and secondary storage systems, establish connections to the quorum disks. raidcom check_ext_storage external_grp -ldev_id 0x9999 -IH0 Planned power off/on of all HA storage systems 167

168 5. On the primary and secondary storage systems, verify that the connections to the quorum disks have been established. raidcom get external_grp -external_grp_id 1-1 -IH0 6. Confirm that the external volumes of the primary and secondary storage systems are recognized as quorum disks. raidcom get ldev -ldev_id 0x9999 -IH0 7. Check for SIMs about quorum disk blockade, and delete the SIMs. 8. Confirm that the pair status of the P-VOLs of the HA pairs is PSUS(Local) and that the pair status of the S-VOLs is SSUS(Block). pairdisplay -g oraha -fcxe -IH0 9. Wait for more than 5 minutes after completing step 4, and then resynchronize the HA pairs from the primary storage system by specifying the primary volume. pairresync -g oraha -IH0 10. Confirm that the pair status of the P-VOLs and S-VOLs of the HA pairs has changed to PAIR (Mirror (RL)). pairdisplay -g oraha -fcxe -IH0 11. Using the alternate path software, resume I/O from the servers to the storage systems at the primary and secondary sites. I/O modes (page 18) 168 Planned outage of High Availability storage systems

169 8 Data migration using High Availability
Abstract
This chapter describes and provides instructions for performing nondisruptive data migration using High Availability (HA) and discontinuing HA operations after the migration is complete.
Workflow for data migration
The High Availability (HA) feature of the HP XP7 Storage enables you to perform data migration without interrupting business operations. The following figure shows the system configuration for data migration using High Availability. The storage system that is the migration source (primary site) must be an HP XP7, and the storage system that is the migration target (secondary site) must be an HP XP7 (or later).
1. Create HA pairs between the primary and secondary storage systems. The data on the volumes is duplicated, and the server issues I/O operations to volumes in both storage systems of the HA pairs.
2. Monitor the status of the HA pairs, and make sure that the pair status of all pairs is PAIR before continuing.
3. On the server, stop I/O to the primary volumes at the primary site. At this time, do not stop I/O to the secondary volumes at the secondary site.
4. At the secondary site, suspend the HA pairs by specifying the S-VOLs. When you suspend an HA pair by specifying the S-VOL, the pair status and I/O mode of the P-VOL and S-VOL change as follows: The pair status of the S-VOL changes to SSWS, and the I/O mode of the S-VOL changes to Local. The pair status of the P-VOL changes to PSUS, and the I/O mode of the P-VOL changes to Block.
5. At the secondary site, delete the HA pairs by specifying the S-VOLs. When you delete an HA pair by specifying the S-VOL, the HA reservation attribute is applied to the volume that was the P-VOL. The volume that was the S-VOL keeps the virtual LDEV ID and continues to receive I/O from the server. (A command sketch of steps 1 through 5 appears below; the workflow continues with step 6.)
Workflow for data migration 169
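The following hedged RAID Manager sketch illustrates steps 1 through 5 above; the workflow continues with step 6. The group name (oraha), fence level, and quorum disk ID (0) follow this guide's other examples and are assumptions for your environment, and the exact option for deleting a pair by specifying the S-VOL (step 5) is described in the HP XP7 RAID Manager User Guide.
paircreate -g oraha -f never -vl -jq 0 -IH0 (step 1: create the HA pairs from the migration-source storage system)
pairdisplay -g oraha -fxce -IH0 (step 2: confirm that the status of all pairs is PAIR)
pairsplit -g oraha -RS -IH1 (step 4: suspend the pairs by specifying the S-VOLs; the S-VOL status changes to SSWS (Local))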

170 6. After you have deleted the HA pairs, at the primary site delete the LU paths to the volumes that were the P-VOLs. If desired, you can now delete the volumes at the primary site, as they have been nondisruptively migrated to the secondary site.
7. On the primary and secondary storage systems, release the quorum disk settings for the external volume that was the quorum disk.
8. On the primary and secondary storage systems, disconnect the external volume that was the quorum disk. NOTE: When you disconnect the quorum disk, SIM (DEF0ZZ*) (quorum disk blockade) might be issued. If this SIM is issued, you can delete it. *ZZ: quorum disk ID.
9. On the primary and secondary storage systems, delete the remote connections between the storage systems.
10. If necessary, uninstall the HA license.
11. Remove the physical paths between the primary and secondary storage systems.
12. Stop and remove the storage system at the primary site.
Reusing volumes after data migration
This topic provides instructions for reusing volumes that were the P-VOLs and S-VOLs of HA pairs that have been deleted.
Reusing a volume that was an S-VOL
When you delete an HA pair by specifying the P-VOL, the HA reservation attribute remains set for the volume that was the S-VOL. When you delete an HA pair by specifying the S-VOL, the HA reservation attribute is applied to the volume that was the P-VOL. When you execute the raidcom get ldev command for a volume that has the reservation attribute, the VIR_LDEV (virtual LDEV ID) is displayed as ffff.
1. Delete the LU path to the volume that has the reservation attribute (see the command sketch after this procedure).
2. Remove the reservation attribute. Example for removing the reservation attribute for LDEV ID (0x4444): raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id reserve The volume from which the reservation attribute was removed changes to a volume whose virtual LDEV ID was deleted. If you execute the raidcom get ldev command for a volume whose virtual LDEV ID was deleted, fffe is displayed for VIR_LDEV (virtual LDEV ID).
3. Reserve an LDEV ID for the resource group that will use the volume. Example for reserving LDEV ID (0x4444) for resource group (#0): raidcom add resource -resource_name meta_resource -ldev_id 0x4444
4. Set a virtual LDEV ID for the volume. NOTE: You must set a virtual LDEV ID that is unique within the storage system that uses the volume. If the same virtual LDEV ID is used in other storage systems or virtual storage machines with the same serial number and model, identification of multiple volumes with the same virtual LDEV ID might cause problems. At worst, the server might detect an inconsistency. Example for setting virtual LDEV ID (0x5555) for volume (0x4444): raidcom map resource -ldev_id 0x4444 -virtual_ldev_id 0x5555
5. Specify a new port and host group for the volume, and set an LU path (see the command sketch after this procedure).
170 Data migration using High Availability
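For steps 1 and 5 of the procedure above, hedged raidcom examples follow. The port names, LU number, and instance number are illustrative assumptions only; specify the values that match your configuration.
raidcom delete lun -port CL1-A-0 -ldev_id 0x4444 -IH1 (step 1: delete the existing LU path to the volume)
raidcom add lun -port CL2-A-1 -ldev_id 0x4444 -lun_id 0 -IH1 (step 5: set an LU path through the new port and host group)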

171 Reusing a volume that was a P-VOL
After you delete an HA pair by specifying the P-VOL, you can continue to use the volume that was the P-VOL of the pair. When you execute the raidcom get ldev command for a volume that continues to be available after pair deletion, a value other than ffff or fffe is displayed for the VIR_LDEV (virtual LDEV ID), or the VIR_LDEV is not displayed. Use the following procedure to move the volume to another resource group (virtual storage machine) so that the server recognizes it as a different volume and it can be used.
1. Delete the LU path to the volume.
2. Delete the virtual LDEV ID. Command example for deleting virtual LDEV ID (0x5555) for LDEV ID (0x4444): raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id 0x5555 When you delete the virtual LDEV ID, the volume changes to a volume whose virtual LDEV ID has been deleted. If you execute the raidcom get ldev command for a volume whose virtual LDEV ID has been deleted, fffe is displayed for the VIR_LDEV (virtual LDEV ID).
3. Reserve an LDEV ID for a resource group to be used for a different purpose. Command example for reserving LDEV ID (0x4444) for resource group (AnotherGroup) to which the volume is registered: raidcom add resource -resource_name AnotherGroup -ldev_id 0x4444
4. Set a virtual LDEV ID for the volume. NOTE: You must set a virtual LDEV ID that is unique within the storage system that uses the volume. If the same virtual LDEV ID is used in other storage systems or virtual storage machines with the same serial number and model, identification of multiple volumes with the same virtual LDEV ID might cause problems. At worst, the server might detect an inconsistency. Command example for setting virtual LDEV ID (0xe000) for volume (0x4444): raidcom map resource -ldev_id 0x4444 -virtual_ldev_id 0xe000
5. Specify a new port and host group for the volume, and set an LU path.
Reusing volumes after data migration 171

172 9 Troubleshooting
Abstract
This chapter provides troubleshooting information for High Availability operations and instructions for contacting HP customer support.
General troubleshooting
Problem: The Remote Web Console computer stops, or HA does not operate properly.
Recommended action: Verify that there are no problems with the Remote Web Console computer, with the Ethernet connection, or with the program products, and then restart the Remote Web Console computer. Restarting the Remote Web Console computer does not affect the HA operations that are currently running. Confirm that all requirements and restrictions of HA (such as whether the LU types are the same) are met. Confirm that the storage systems at the primary site and the secondary site are turned on and that their functions are fully enabled. Check all the values and parameters that were entered to confirm that the correct information (such as the serial number and ID of the remote storage system, the path parameters, and the IDs of the primary volume and secondary volume) was entered in the Remote Web Console computer.
Problem: The LED on the HP XP7 control panel that indicates that the channel of the initiator is available is off or blinking.
Recommended action: Contact HP Technical Support.
Problem: HA error messages are displayed on the Remote Web Console computer.
Recommended action: Correct the error, and then re-execute the HA operation.
Problem: The status of a path to the remote storage system is abnormal.
Recommended action: Check the status of the paths in the Remote Connections window, and make the required corrections (a RAID Manager alternative is sketched after this table).
Problem: A timeout error occurred while creating a pair or resynchronizing a pair.
Recommended action: If the timeout occurred due to a hardware error, a SIM is generated. Contact HP Technical Support, and after solving the problem, re-execute the HA operation. Large workload: If a SIM is not generated, wait for 5 to 6 minutes, and then check the status of the pair you want to create or resynchronize. If the pair status changed correctly, the failed operation completed after the timeout error message was displayed. If the pair status did not change as anticipated, the HA operation cannot complete due to the large workload. In this case, re-execute the HA operation when the workload of the storage system is smaller. If a communication error between Remote Web Console and the SVP occurred, see the HP XP7 Remote Web Console User Guide.
Problem: An HA volume has pinned tracks.
Recommended action: Recover the pinned track volume.
Problem: The monitoring switch is enabled, but the monitoring data is not updated.
Recommended action: The monitoring data might not be updated because the time setting of the SVP was changed. Disable the monitoring switch, and then enable it again. For details about the monitoring switch, see the HP XP7 Performance for Open and Mainframe Systems User Guide. Verify that the settings for the target being monitored are correct.
172 Troubleshooting
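As a hedged alternative to the Remote Connections window, you can also list the registered remote connections and their path information with RAID Manager. The instance number is an assumption, and the displayed columns depend on the RAID Manager version; see the HP XP7 RAID Manager Reference Guide for details.
raidcom get rcu -IH0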

173 Related topics
Troubleshooting related to remote path statuses (page 173)
Procedure for recovering pinned track of an HA volume (page 184)
Troubleshooting related to remote path statuses
Remote path status: Normal (Normal)
Status description: This remote path is correctly set, and the path can be used for HA copy operations.
Recommended action: The remote path status is normal. Recovery is not required.
Remote path status: Initialization Failed (Initialization error)
Status description: A physical connection between the local storage system and the remote storage system, or a connection between the local storage system and the switch, does not exist. Therefore, an error occurred when the connection to the remote storage system was initialized.
Recommended action: Check the following, and correct them if they are not correct:
The cable between the ports of the local storage system and the remote storage system, or between the ports of the local storage system and the switch of the local storage system, is properly connected.
The serial number (S/N) and system ID of the remote storage system, the port number of the local storage system, and the port number of the remote storage system are correct.
The topology (Fabric, FC-AL, Point-to-point) of the ports of the local storage system and remote storage system is correctly set.
Remote path status: Communication Time Out (Communication timeout)
Status description: A timeout occurred in a communication between the local storage system and the remote storage system.
Recommended action: Check the following, and correct them if they are not correct:
The remote storage system is powered on, and the remote storage system can be used normally.
The following network relaying devices are correctly configured and can be properly used: connectors; cables; switches (zoning settings); channel extenders (if channel extenders are connected); lines and systems connected between the channel extenders (if channel extenders are connected).
Troubleshooting related to remote path statuses 173

174 Remote path status: Port Rejected (Insufficient resources)
Status description: All resources of the local storage system or the remote storage system are being used for other connections. Therefore, the local storage system or the remote storage system rejected the connection control function that sets remote paths.
Recommended action: In the Remove Remote Paths window, remove all remote paths that are not currently used. In the Remove Remote Connections window, remove all remote storage systems that are not currently used. Confirm that the port attribute of the local storage system is Initiator and that the port attribute of the remote storage system is set to RCU Target. If these settings are incorrect, change them to the correct port attributes.
Remote path status: Serial Number Mismatch (Mismatched serial number)
Status description: The serial number of the remote storage system does not match the specified serial number.
Recommended action: Check the following, and correct them if they are not correct:
The serial number (S/N) and system ID of the remote storage system, the port number of the local storage system, and the port number of the remote storage system are correct.
The topology (Fabric, FC-AL, Point-to-point) of the ports of the local storage system and remote storage system is correctly set.
The following network relaying devices are correctly configured and can be properly used: connectors; cables; switches (zoning settings); channel extenders (if channel extenders are connected); lines and systems connected between the channel extenders (if channel extenders are connected).
Remote path status: Invalid Port (Invalid port)
Status description: The specified port of the local storage system is in one of the following statuses: the port is not mounted; the port attribute is not Initiator; the remote path already exists.
Recommended action: Check the following, and correct them if they are not correct:
The port of the local storage system is mounted, or the Initiator attribute is set for the port.
No remote path with the same configuration (the same port number of the local storage system and the same port number of the remote storage system) exists.
The topology (Fabric, FC-AL, Point-to-point) of the ports of the local storage system and remote storage system is correctly set.
The following network relaying devices are correctly configured and can be properly used: connectors; cables; switches (zoning settings);
174 Troubleshooting

175 channel extenders (if channel extenders are connected); lines and systems connected between the channel extenders (if channel extenders are connected).
The serial number (S/N) and system ID of the remote storage system, the port number of the local storage system, and the port number of the remote storage system are correct.
Remote path status: Pair-Port Number Mismatch (Incorrect port number of the remote storage system)
Status description: The specified port of the remote storage system is not physically connected to the local storage system.
Recommended action: Check the following, and correct them if they are not correct:
The port number of the remote storage system is correct.
The cable between the ports of the local storage system and the remote storage system, or between the ports of the local storage system and the switch of the local storage system, is properly connected.
The topology (Fabric, FC-AL, Point-to-point) of the ports of the local storage system and remote storage system is correctly set.
Remote path status: Pair-Port Type Mismatch (Incorrect port type of the remote storage system)
Status description: The attribute of the specified port of the remote storage system is not set to RCU Target.
Recommended action: Set the attribute of the specified port of the remote storage system to RCU Target.
Remote path status: Communication Failed (Communication error)
Status description: The local storage system is correctly connected to the remote storage system, but a logical communication timeout occurred.
Recommended action: Check the following, and correct them if they are not correct:
The port of the remote storage system and the network relaying devices are correctly set.
The following network relaying devices are correctly configured and can be properly used: connectors; cables; switches (zoning settings); channel extenders (if channel extenders are connected); lines and systems connected between the channel extenders (if channel extenders are connected).
Remote path status: Path Blockade (Logical blockade)
Status description: The path is blocked because path errors or link errors repeatedly occurred. Causes and corresponding actions:
The port of the local storage system is out of order: Repair the port of the local storage system. Then, recover the remote path.*
The port of the remote storage system is out of order: Repair the port of the remote storage system. Then, recover the remote path.*
Troubleshooting related to remote path statuses 175

176 (Path Blockade, continued) Causes and corresponding actions:
A relaying device is out of order: Repair the relaying device. Then, recover the remote path.*
The cable is damaged: Replace the cable. Then, recover the remote path.*
Remote path status: Program Error (Program error)
Status description: A program error was detected.
Recommended action: Recover the remote path.*
Remote path status: In Progress (In progress)
Status description: A remote path is being created, the remote path is being deleted, or the port attribute is being changed.
Recommended action: Wait until the processing ends.
* Recover the remote path by either of the following methods:
To use Remote Web Console (either of the following): Remove the remote connection in the Remove Remote Connections window, and then register the remote connection again in the Add Remote Connection window. Remove the remote path in the Remove Remote Paths window, and then create a remote path again in the Add Remote Paths window.
To use RAID Manager: Use the raidcom delete rcu_path command to remove the remote path, and then use the raidcom add rcu_path command to recreate the remote path.
If the remote path is still not recovered after these operations, contact HP Technical Support.
Error codes and messages
If an error occurs during a High Availability operation, HA displays an error message that describes the error and includes an error code. Make sure to record the error codes so that you can report them if you need to contact technical support. For details about Remote Web Console error codes, see HP XP7 Remote Web Console Messages.
Troubleshooting for RAID Manager
If an error occurred while operating an HA pair by using RAID Manager, you might be able to determine the cause of the error by viewing the logs that are output in the RAID Manager window or the RAID Manager operation logs (an illustrative example appears at the end of this section). In this log output, the alphanumerics after "SSB=" indicate an error code: the last four digits of the value before the comma (,) are SSB1 (example: B9E1), and the last four digits of the value after the comma are SSB2 (example: B901). For details about the error codes produced by the configuration definition command (raidcom) for RAID Manager, see the HP XP7 RAID Manager User Guide. If the problem persists, send the contents of the /HORCM/log* folder to HP Technical Support.
176 Troubleshooting
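The log line below is an illustrative sketch only; the message text, sense values, and serial number are assumptions, and only the SSB values (taken from the example above) are meaningful here.
It was rejected due to SKEY=0x05, ASC=0x26, SSB=0xB9E1,0xB901 on Serial#(64577)
In this illustrative line, SSB1 is B9E1 and SSB2 is B901.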

177 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004)
Error code (SSB2): Details
9100: The command cannot be executed because user authentication has not been performed.
B952: The specified LU is not defined. A configuration of the storage system might have been changed. Restart RAID Manager.
B9A2: You cannot create the HA pair because the specified volume is a command device.
B9A3: You cannot create the HA pair because the specified volume is a mainframe volume.
B9A4: You cannot create the HA pair because no SCSI path is defined on the specified volume.
B9A5: You cannot create the HA pair or perform a swap resync for the pair because one of the following remote paths is not set: a bidirectional remote path between the storage systems at the primary site and secondary site; a remote path from the storage system at the primary site to the storage system at the secondary site; a remote path from the storage system at the secondary site to the storage system at the primary site.
B9BD: A configuration of the LDEV in the storage system might have been changed while RAID Manager was running. Restart RAID Manager.
B9C0: There are no free resources in the command device. Use LUN Manager to turn off and then turn on the command device.
DB89: You cannot change the status of the HA pair even though a request to suspend or to delete the pair has been received. This is because the volume paired with the specified volume is in an unusable status.
DB8A: You cannot change the status of the HA pair even though a request to suspend or to delete the pair has been received. This is because the volume paired with the specified volume is blocked.
DB8B: You cannot change the status of the HA pair even though a request to suspend or to delete the pair has been received. This is because the volume paired with the specified volume is in an unusable status.
DB8D: You cannot change the status of the HA pair even though a request to suspend or to delete the pair has been received. This is because the number of remote paths from the storage systems at the primary site to the storage systems at the secondary site is less than the minimum number of remote paths.
FA00: You cannot create the HA pair because the capacity of the volume that has been specified as the primary volume is larger than the maximum capacity of an HA pair that can be created.
FA01: You cannot create the HA pair because the volume that has been specified as the primary volume is being used by Online Migration.
FA02: You cannot create the HA pair because the storage system cache at the primary site is in one of the following statuses: one side is blocked or is transitioning to being blocked; one side is recovering; recovering.
FA03: You cannot create the HA pair because the remote paths from the storage systems at the primary site to the storage systems at the secondary site are in either of the following statuses: the number of remote paths is 0 (unspecified); the number of remote paths is less than the minimum number.
FA04: You cannot create the HA pair because the emulation type of the volume that has been specified as the primary volume is not OPEN-V.
Troubleshooting for RAID Manager 177

178 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued)
FA05: You cannot create the HA pair because the volume that has been specified as the primary volume is a migration volume of a product of another company.
FA07: The pair status of the volume that has been specified as the primary volume is not SMPL.
FA08: The pair status of the volume that has been specified as the primary volume is not PSUS or PSUE.
FA09: There is a pinned track on the volume that has been specified as the primary volume.
FA0A: You cannot create the HA pair because the volume that has been specified as the primary volume is blocked.
FA0B: You cannot create the HA pair because the volume that has been specified as the primary volume is in one of the following statuses: blocked; being formatted; read only.
FA0C: You cannot create the HA pair because the volume that has been specified as the primary volume is a mainframe volume.
FA0D: You cannot create the HA pair because the virtual emulation type of the device that has been specified as the primary volume is none of the following: OPEN-K, 3, 8, 9, E, L, or V.
FA0E: You cannot create the HA pair because the volume that has been specified as the primary volume is not a virtual volume of Thin Provisioning or Smart Tiers.
FA0F: The device type of the volume that has been specified as the primary volume is not supported.
FA10: You cannot create the HA pair because the secondary volume is in an unusable status.
FA12: You cannot create the pair because the HA reservation attribute has been set for the volume that has been specified as the primary volume.
FA13: The specified volume is being used by Continuous Access Synchronous.
FA14: The specified volume is being used by Continuous Access Journal.
FA15: You cannot create the pair because of one of the following reasons: the volume that has been specified as the primary volume of HA is a primary volume of Fast Snap which is being restored; the volume that has been specified as the secondary volume of HA is a primary volume of Fast Snap; the specified volume is a secondary volume of Fast Snap.
FA16: The specified volume is a secondary volume of Business Copy.
FA17: The specified volume is being used by Auto LUN.
FA18: The specified volume is a volume of a Business Copy pair that is in the process of reverse copying.
FA1B: You cannot create the HA pair because the information about the virtual storage machine at the primary site disagrees with the one at the secondary site.
FA1C: The access attribute set by Data Retention for the primary volume cannot be transferred to the secondary volume because Data Retention is not installed in the storage system at the secondary site.
FA1D: You cannot create the HA pair by using the specified secondary volume because of either one of the following two reasons: the specified secondary volume is already used for the other HA pair; the information about the HA pair still remains only in the secondary volume.
178 Troubleshooting

179 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued)
FA1E: You cannot create the HA pair because the primary volume is a command device.
FA1F: You cannot create the HA pair because the secondary volume is a command device.
FA29: You cannot create the HA pair because the volume that has been specified as the secondary volume is not installed or is a command device.
FA2A: You cannot create the HA pair because the volume that has been specified as the secondary volume is in the intervention-required condition.
FA2B: You cannot create the HA pair because the volume that has been specified as the secondary volume is blocked.
FA2C: The secondary volume is in an unusable status.
FA30: The pair status of the volume that has been specified as the secondary volume is not PSUS or PSUE.
FA31: The pair status of the volume that has been specified as the secondary volume is not SMPL.
FA32: There is a pinned track on the volume that has been specified as the secondary volume.
FA33: You cannot create the HA pair because the volume that has been specified as the secondary volume is in one of the following statuses: blocked; being formatted; read only.
FA35: You cannot create the HA pair because the volume that has been specified as the secondary volume is blocked.
FA37: You cannot create the HA pair because the volume that has been specified as the secondary volume is a migration volume of a product of another company.
FA38: You cannot create the HA pair because the volume that has been specified as the secondary volume is not OPEN-V.
FA3A: You cannot create the HA pair because the capacity of the volume that has been specified as the secondary volume is larger than the maximum capacity of an HA pair that can be created.
FA3B: You cannot create the HA pair because the volume that has been specified as the secondary volume is being used by Online Migration.
FA3C: The device type of the volume that has been specified as the secondary volume is not supported.
FA3E: The DKC emulation types of the storage systems at the primary site and the secondary site are inconsistent.
FA3F: No program product of High Availability is installed in the storage systems at the secondary site.
FA40: The shared memory that is required to create an HA pair is not installed on the storage system at the secondary site.
FA41: The volume that has been specified as the secondary volume is not installed.
FA42: You cannot create the HA pair because the storage system cache at the secondary site is in one of the following statuses: one side is blocked or is transitioning to being blocked; one side is recovering; recovering.
Troubleshooting for RAID Manager 179

180 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued) Error code (SSB2) FA43 FA44 FA46 FA49 FA4B FA4C FA4D FA4E FA4F FA50 FA5B FA60 FA62 FB80 FBB0 FBB1 FBE0 FBE1 FBE8 FBE9 FBEA Details You cannot create the HA pair because the remote path from the storage system at the secondary site to the storage system at the primary site is in either of the following status: The number of remote paths is 0 (unspecified). The number of remote paths is less than the minimum number. You cannot create the HA pair because the volume that has been specified as the secondary volume is a mainframe volume. You cannot create the HA pair because the volume that has been specified as the secondary volume is not a virtual volume of Thin Provisioning or Smart Tiers. You cannot create the pair because the serial numbers of the storage systems on the primary volume and the secondary volume are the same. You cannot create the pair because the HA reservation attribute is not set for the secondary volume. You cannot create the pair because no virtual LDEV ID is set for the secondary volume. No LU path to the specified secondary volume is defined. You cannot create the HA pair because the capacities of the primary volume and the secondary volume are different. No LU path to the specified secondary volume is defined. One of the following is incorrect: A primary volume parameter (port name, host group ID, LUN ID) A secondary volume parameter (port name, host group ID, LUN ID) You cannot create the HA pair because the remote paths from the storage system at the primary site to the storage system at the secondary site are in one of the following states: The number of remote paths is 0 (unspecified). The requirement for the minimum number of paths is not met. You cannot create a pair because the remote storage system product or its microcode version does not support the High Availability functionality. You cannot create the HA pair because no virtual LDEV ID is set for the volume specified as the primary volume. The command operating on the HA pair was rejected because the -fg option was specified for the paircreate or pairresync command. A request to delete the HA pair was received, but the pair cannot be deleted because the volume paired with the specified volume is shared with Business Copy. A request to delete the HA pair was received, but the pair cannot be deleted because the volume paired with the specified volume is shared with Fast Snap. The command operating on the HA pair was rejected because the -f data or -f status option was specified for the paircreate or pairresync command. The command operating on the HA pair was rejected because the -SM block or -SM unblock option was specified for the paircreate or pairresync command. The command operating on the HA pair was rejected because the -P option was specified for the pairsplit command. The command operating on the HA pair was rejected because the -rw option was specified for the pairsplit command. The command operating on the HA pair was rejected because the -RB option was specified for the pairsplit command. 180 Troubleshooting

181 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued) Error code (SSB2) FBEB FC10 FC12 FC13 FC20 FC21 FC22 FC23 FC24 FC25 Details The command operating on the HA pair was rejected because the -SM block or -SM unblock option was specified for the pairsplit command. The command was rejected because the storage system of the specified volume is in one of the following states: The storage system includes microcode that does not support HA. No HA program products are installed. No shared memory for HA has been added. If none of the above applies, contact HP Technical Support. The same operation or a different operation is already being executed. An operation to suspend the pair is being processed because an error was detected. You cannot create a pair for one of the following reasons: The differential bit area of the storage system of the specified volume is depleted. No shared memory is installed in the storage system of the specified volume. No Resource Partition license is installed in the storage system of the specified volume. The HA license capacity in the storage system of the specified volume is insufficient. You cannot create a pair for one of the following reasons: The differential bit area of the storage system of the volume to be paired with the specified volume is depleted. No shared memory is installed for the differential bit area of the storage system of the volume to be paired with the specified volume. No Resource Partition license is installed in the storage system of the volume to be paired with the specified volume. The HA license capacity in the storage system of the volume to be paired with the specified volume is insufficient. You cannot create a pair for one of the following reasons: The pair status of the specified volume is not SMPL. The specified volume is a single volume or is not the primary volume in the HA pair. You cannot create a pair for one of the following reasons: The pair status of the volume to be paired with the specified volume is not SMPL or COPY. The volume to be paired with the specified volume is a single volume or is not the secondary volume in the HA pair. You cannot create a pair at the primary site for either one of the following two reasons: The capacity of the specified volume is being expanded. The pool containing the specified volume is being initialized. You cannot create a pair at the secondary site for one of the following reasons: The capacity of the volume to be paired with the specified volume is being expanded. The pool of the volume paired with the specified volume is being initialized. The virtual LDEV ID of the volume to be paired with the specified volume is duplicated in the virtual storage machine. You specified the virtual LDEV ID at the primary site the same as the actual LDEV ID at the secondary site from the volume to be paired with the specified volume. However, the actual information of Troubleshooting for RAID Manager 181

182 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued) Error code (SSB2) Details the virtual emulation type (including the settings for CVS and LUSE) or the virtual SSID is different from the virtual information. The virtual LDEV ID of the volume to be paired with the specified volume is already in use. FC26 FC27 FC28 FC29 FC30 FC31 FC38 You cannot create a pair because verification of the remote path between storage systems failed in the storage system of the specified volume. You cannot create a pair because verification of the remote path between storage systems failed in the storage system of the volume to be paired with the specified volume. You cannot create a pair for one of the following reasons: The mirror count for a single volume is depleted for the specified volume. The specified volume is already being used by another HA pair. The pair management table of the specified volume is depleted. You cannot create a pair for one of the following reasons: The mirror count for a single volume is depleted for the volume to be paired with the specified volume. The volume to be paired with the specified volume is already being used by another HA pair. The pair management table of the volume to be paired with the specified volume is depleted. The pair resynchronization or swap resync was rejected for one of the following reasons: The pair status of the volume specified for the pair resynchronization is not PSUS or PSUE. The volume specified for the pair resynchronization is not the primary volume of the HA pair. The I/O mode of the volume specified for the pair resynchronization is Block. The pair status of the volume paired with the volume specified for the swap resync is not PSUS or PSUE. The I/O mode of the volume paired with the volume specified for the swap resync is not Block. The pair resynchronization or swap resync was rejected for one of the following reasons: The pair status of the volume specified for the swap resync is not SSWS. The volume specified for the swap resync is not the secondary volume of the HA pair. The I/O mode of the volume specified for the swap resync is Block. The pair status of the volume paired with the volume specified for the pair resynchronization is not SSUS or PSUE. The I/O mode of the volume paired with the volume specified for the pair resynchronization is not Block. A request to suspend a pair was received, but the pair cannot be suspended because the specified volume meets one of the following conditions: An instruction specifying that the primary volume be suspended is directed at the secondary volume. An instruction specifying that the secondary volume be suspended is directed at the primary volume. The pair status is not PAIR or COPY. 182 Troubleshooting

183 Table 20 Error codes and details when operating RAID Manager (when SSB1 is 2E31, B901, B90A, B90B, B912 or D004) (continued) Error code (SSB2) FC39 FC40 FC41 FC7E Details A request to suspend a pair was received, but the pair cannot be suspended because the volume paired with the specified volume meets one of the following conditions: The paired volume is the primary volume, but an instruction specifies that the primary volume be suspended. The paired volume is the secondary volume, but an instruction specifies that the secondary volume be suspended. The pair status is not PAIR or COPY. A request to delete a pair was received, but the pair cannot be deleted because the specified volume meets one of the following conditions: An instruction specifying that the primary volume be deleted is directed at the secondary volume. An instruction specifying that the secondary volume be deleted is directed at the primary volume. The pair status is not PSUS, SSUS, SSWS, or PSUE. The I/O mode is not Local. A request to delete a pair was received, but the pair cannot be deleted because the volume paired with the specified volume meets one of the following conditions: The paired volume is the primary volume, but an instruction specifies that the primary volume be deleted. The paired volume is the secondary volume, but an instruction specifies that the secondary volume be deleted. The pair status is not PSUS, SSUS, SSWS, or PSUE. The I/O mode is not Block. A request to create a pair, resynchronize a pair, or perform a swap resync was received, but the request was rejected because the status of the quorum disk meets one of the following conditions: The ID of the specified quorum disk is out of range. The quorum disk has not been created. The specified remote storage system is not the same as when the quorum disk was created. The same quorum disk ID is allocated to separate external volumes in the storage systems at the primary and secondary sites. The quorum disk is blocked. An error occurred on the external path between the storage systems at the primary and secondary sites and the external storage system for the quorum disk. Recovery from a failure at a quorum disk or the external path for a quorum disk, or from the maintenance operation, is in progress. The recovery processing requires 5 minutes after your operation. The quorum disk was used to cancel the pair. SIM reports of HA operations If a storage system requires maintenance, a SIM is issued and displayed in the Alerts window of Remote Web Console. A SIM is also issued when the pair status of a primary or secondary HA volume changes. SIMs are categorized into service, moderate, serious, and acute according to their severity. The HA operation history appears in the Histories window. If SNMP is installed on the storage systems, SIMs trigger an SNMP trap that is sent to the corresponding server. For details about SNMP operations, see the HP XP7 Remote Web Console User Guide or the HP XP7 SNMP Agent User Guide. SIM reports of HA operations 183

184 Related topics SIMs related to HA (page 116) Procedure for recovering pinned track of an HA volume To recover the pinned track and ensure the complete data integrity of the pair at the same time, follow this procedure: 1. Connect to the storage system of the primary site for an HA pair that contains the pinned track volume, and then select the correct CU. 2. Remove the HA pair that contains the pinned track volume. 3. Perform the normal procedure for recovering data from the pinned track: use the pinned track recovery procedure of the OS that is being used, or contact HP Technical Support. 4. Use the Create HA Pairs window to create the pair. Make sure to select Entire for Initial Copy Type. 184 Troubleshooting
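If you manage HA pairs with RAID Manager rather than the GUI, the delete-and-recreate steps above correspond roughly to the following commands. This is a minimal sketch: the group name oraHA, quorum disk ID 2, and HORCM instance numbers 0 and 1 are placeholders for your configuration, and the exact options should be confirmed in the HP XP7 RAID Manager User Guide.

# Step 2: delete the HA pair that contains the pinned track volume
# (issued from the primary-site instance, specifying the P-VOL)
pairsplit -g oraHA -S -IH0
# Step 4: after recovering the pinned track, recreate the pair.
# Omitting -nocopy performs an entire (full) initial copy.
paircreate -g oraHA -f never -vl -jq 2 -IH0
# Wait until the pair reaches PAIR status
pairevtwait -g oraHA -s pair -t 3600 -IH0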

185 10 Support and other resources Contacting HP For worldwide technical support information, see the HP Support Center: Before contacting HP, collect the following information: Product model names and numbers Technical support registration number (if applicable) Product serial numbers Error messages Operating system type and revision level Detailed questions Related information Websites The following documents and websites provide related information: HP XP7 Continuous Access Journal for Mainframe Systems User Guide HP XP7 Continuous Access Synchronous for Mainframe Systems User Guide HP XP7 Continuous Access Synchronous User Guide HP XP7 External Storage for Open and Mainframe Systems User Guide HP XP7 for Compatible FlashCopy Mirroring User Guide HP XP7 Provisioning for Mainframe Systems User Guide HP XP7 RAID Manager User Guide HP XP7 Remote Web Console Messages HP XP7 Remote Web Console User Guide You can find these documents at: HP Business Support Center website (Manuals page): Click Storage > Disk Storage Systems > XP Storage, and then select your Storage System. HP Enterprise Information Library website: Under Products and Solutions, click HP XP Storage. Then, click XP7 Storage under HP XP Storage. HP Event Monitoring Service and HA Monitors Software: HP Technical Support website: Contacting HP 185

186 Single Point of Connectivity Knowledge (SPOCK) website: White papers and Analyst reports: Typographic conventions Table 21 Document conventions Convention Element Blue text: Table 21 (page 186) Cross-reference links and addresses A cross reference to the glossary definition of the term in blue text Blue, underlined text: Website addresses Bold text Keys that are pressed Text typed into a GUI element, such as a box GUI elements that are clicked or selected, such as menu and list items, buttons, tabs, and check boxes Italic text Text emphasis Monospace text File and directory names System output Code Commands, their arguments, and argument values Monospace, italic text Code variables Command variables Monospace, bold text Emphasized monospace text WARNING! Indicates that failure to follow directions could result in bodily harm or death. CAUTION: IMPORTANT: Indicates that failure to follow directions could result in damage to equipment or data. Provides clarifying information or specific instructions. NOTE: Provides additional information. TIP: Provides helpful hints and shortcuts. Customer self repair HP customer self repair (CSR) programs allow you to repair your StorageWorks product. If a CSR part needs replacing, HP ships the part directly to you so that you can install it at your convenience. Some parts do not qualify for CSR. Your HP-authorized service provider will determine whether a repair can be accomplished by CSR. For more information about CSR, contact your local service provider, or see the CSR website: Support and other resources

187 11 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback. Include the document title and part number, version number, or the URL when submitting your feedback. 187

188 A Correspondence between GUI operations and CLI commands Abstract This appendix describes the correspondence between the Remote Web Console GUI operations and the RAID Manager commands. Almost all HA operations can be performed using either the GUI or the CLI, whichever you prefer, but a few operations can be performed using only the GUI (for example, forcibly deleting a pair) or only the CLI (for example, creating a virtual storage machine). Correspondence between Remote Web Console operations and RAID Manager commands The following tables show the correspondence between RAID Manager commands and Remote Web Console operations. Table 22 Correspondence between RWC operations and RAID Manager commands (configuration operations) Operation Remote Web Console RAID Manager Possible? Possible? Command Edit Ports raidcom modify port Add Remote Connection raidcom add rcu Select external path groups raidcom add external_grp Create external volumes raidcom add ldev Add Quorum Disks raidcom modify ldev Create virtual storage machines (resource groups) No raidcom add resource Reserve host group IDs No raidcom add resource Delete virtual LDEV IDs raidcom unmap resource Reserve LDEV IDs raidcom add resource Assign HA Reserves raidcom map resource Create Host Groups raidcom add host_grp Create Pools raidcom add thp_pool Create virtual volumes raidcom add ldev Add LU Paths raidcom add lun Table 23 Correspondence between RWC operations and RAID Manager commands (pair operations) Operation Parameter Remote Web Console RAID Manager Possible? Possible? Command Option Create HA Pairs Fence Level No* paircreate -f never Copy Pace paircreate -c <size> No initial copy paircreate -nocopy Suspend Pairs P-VOL specification pairsplit -r 188 Correspondence between GUI operations and CLI commands

189 Table 23 Correspondence between RWC operations and RAID Manager commands (pair operations) (continued) Operation Parameter Remote Web Console RAID Manager Possible? Possible? Command Option S-VOL specification pairsplit -RS Resync Pairs P-VOL specification pairresync S-VOL specification pairresync -swaps Copy Pace pairresync -c <size> Delete Pairs Normal (P-VOL specification) pairsplit -S Normal (S-VOL specification) pairsplit -R Force (Enable is specified for Volume Access) No None Force (Disable is specified for Volume Access) No None *When you create an HA pair with Remote Web Console, you do not need specify the fence level. The fence level is set to Never automatically. N: The operation is impossible. S-VOL: Secondary volume Table 24 Correspondence between RWC operations and RAID Manager commands (displaying status) Operation Parameter Remote Web Console RAID Manager Possible? Possible? Command Option View Pair Properties I/O mode pairdisplay -fxce or -fxde Status pairdisplay -fxc or -fxce View Pair Synchronous Rate pairdisplay -fxc View Remote Connection Properties raidcom get rcu View virtual storage machines raidcom get resource -key opt View quorum disks raidcom get ldev Check the status of volumes Existence of a virtual LDEV ID raidcom get ldev Existence of the reservation attribute for HA raidcom get ldev Correspondence between Remote Web Console operations and RAID Manager commands 189
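As a quick example of the status-display commands in Table 24, the following RAID Manager calls show the pair status, I/O mode, and synchronization rate for a pair group. The group name oraHA and instance number 0 are placeholders for your configuration.

# Display pair status and I/O mode for each pair in the group
pairdisplay -g oraHA -fxce -IH0
# Display the synchronization rate
pairdisplay -g oraHA -fxc -IH0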

190 Table 25 Correspondence between RWC operations and RAID Manager commands (changing settings) Operation Parameter Remote Web Console RAID Manager Possible? Possible? Command Option Edit Remote Replica Options No None Edit Virtualization Management Settings Virtual LDEV ID raidcom map resource raidcom unmap resource -ldev_id <ldev#> Virtual emulation type (including CVS and LUSE settings) raidcom map resource -emulation <emulation type> Virtual SSID raidcom map resource -ssid <ssid> Remove Quorum Disks raidcom modify ldev Release HA Reserved raidcom unmap resource Force Delete Pairs No None Edit Remote Connection Options RIO MIH Time raidcom modify rcu -rcu_option <mpth> <rto> <rtt> Add Remote Paths raidcom add rcu_path Remove Remote Paths raidcom delete rcu_path Remove Remote Connections raidcom delete rcu 190 Correspondence between GUI operations and CLI commands
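As one example from Table 25, the RIO MIH time of an existing remote connection can be changed with raidcom modify rcu and the -rcu_option parameter shown above. In the sketch below, the serial number 422222, the model identifier R800, the path group ID 0, and the use of -cu_free to select the connection are assumptions for illustration only; verify the exact syntax in the HP XP7 RAID Manager User Guide.

# Set minimum number of paths = 1, RIO MIH time = 20, round trip time = 1
raidcom modify rcu -cu_free 422222 R800 0 -rcu_option 1 20 1 -IH0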

191 B Performing configuration operations using Remote Web Console Abstract Some HA configuration operations can be performed using Remote Web Console (RWC). This appendix provides instructions for these operations. The RWC GUI displays "Local Storage System" for the system you accessed on the RWC server. As a result, when you access the RWC server at the secondary site, the GUI displays information for the pair's secondary system under "Local Storage System". Likewise, the GUI identifies the storage system connected to the system you accessed (in this case, the primary system) as the "Remote Storage System". Defining the attribute for a Fibre-Channel port To transfer HA data, you must define the Initiator ports on the storage systems at the primary site and the RCU Target ports on the storage systems at the secondary sites. CAUTION: To avoid an invalid disconnection, make sure that the number of servers connected to a single Target port is 128 or fewer. If more than 128 servers are connected to a Target port, the server connection might be disconnected after the port type is changed from Target to RCU Target. Prerequisite information Storage Administrator (System Resource Management) role is required. Before changing a Target port to an Initiator port, prepare the following: Check that the port is offline from the server. Disconnect the port from the server. Delete the path to the port. Before changing a Fibre-Channel port from Initiator to Target or to RCU Target, prepare the following: Delete all paths from the Initiator port to the remote storage system. Disconnect the connection from the local storage system to the remote storage system. Procedure 1. In the Storage Systems tree, select Ports / Host Groups. 2. Select the Ports tab. 3. Select the port whose attribute you want to change. 4. Display the Edit Ports window in one of the following methods: Click Edit Ports. From the Action menu, select Ports / Host Groups, and then Edit Ports. 5. Select the Port Attribute check box, and then select the port attribute (Initiator or RCU Target). For settings other than Port Attribute, see the HP XP7 Provisioning for Open Systems User Guide. 6. Click Finish. 7. In the Confirm window, check the settings you made, and then enter the task name in Task Name. Defining the attribute for a Fibre-Channel port 191

192 8. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Related topics Port Editing Wizard for HP XP7 Provisioning for Open Systems User Guide Adding a remote connection Add a remote connection to register the storage system at the secondary site on the storage system at the primary site. Also add a remote connection from the storage system at the secondary site to the storage system at the primary site. When a remote connection is added, both storage systems are ready to perform operations on HA. You can also set a remote path between storage systems when the remote connection is added. NOTE: Remote path operations cannot be performed during microcode exchange processing. Before performing remote path operations, make sure that microcode exchange processing is complete. Remote path operations cannot be performed when microcode exchange processing has been interrupted (for example, due to user cancellation or error). Before performing remote path operations, make sure that microcode exchange processing has completed normally. Prerequisite information Storage Administrator (Remote Copy) role is required. Physical paths are set. The port attributes of local and remote storage systems are defined for HA. You know the remote storage system model, serial number, and path group ID. Procedure 1. In the Storage Systems tree, select Replication, and then Remote Connections. 2. Select the Connections (To) tab. 3. Display the Add Remote Connection window in one of the following methods: Click Add Remote Connection. From the Action menu, select Remote Connections, and then Add Remote Connection. 4. In Connection Type, select System. 5. Set each item in Remote Storage System. Model: Select the remote storage system model (XP7). Serial Number: Enter the serial number for the remote storage system. Note: When using a volume in a virtual storage machine, specify the serial number of the physical HP XP7 storage system. The serial number of the virtual storage machine cannot be specified. 6. Set each item in Remote Paths. Path Group ID: Select the ID for the path group. Minimum Number of Paths: Specify the minimum number of paths that are required for each remote storage system connected to the current local storage system. When the number of normal paths becomes fewer than the value specified in Minimum Number of Paths, the local storage system suspends all HA pairs that would be affected, which prevents server performance from being degraded by an insufficient number of paths. Select the ports to be used by the local storage system and the remote storage system. Click Add Path to add a path. 192 Performing configuration operations using Remote Web Console

193 7. Enter the RIO MIH time, if necessary. You can enter a value between 10 and 100. The default setting is 15. The RIO MIH (Remote I/O Missing Interrupt Handler) time is the wait time until data copy between the storage systems completes. 8. Enter Round Trip Time. The round trip time is a time limit for data to travel from the P-VOL to the S-VOL. This value is a reference value used to automatically control the copy pace of the initial copy and to lessen the impact of remote I/O for update I/O on response time. For details on the round trip time, see Determining the Round Trip Time (page 193). 9. Click Finish. 10. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 11. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Related topics Add Remote Connection wizard (page 241) Determining the Round Trip Time You specify a time limit for data to be transferred from the P-VOL to the S-VOL when you set up the HA association between the primary and secondary systems. This value is a reference value used to automatically control the initial copy pace and to lessen the impact of remote I/O for update I/O on host response time. The default value of the Round Trip Time is 1 millisecond, and the range of the Round Trip Time is 1 ms to 500 ms. If the distance between the primary and secondary storage systems is long, or if a delay is caused by the line equipment, you need to change the Round Trip Time to a value that is appropriate for your environment. If the initial copy operation is performed with the default Round Trip Time value of 1 ms instead of a more appropriate value, it might take an unexpectedly long time to complete initial copy operations. Note the following Round Trip Time considerations: If the difference between the Round Trip Time and the remote I/O response time is significant, for example, 1 ms for Round Trip Time and 500 ms for remote I/O response time, the storage system slows or even interrupts the initial copy operation so that update copy can continue. If the difference between the Round Trip Time and the remote I/O response time is insignificant, for example, 1 ms for Round Trip Time and 5 ms for remote I/O response time, the storage system allows initial copying to run at the specified pace. To determine the Round Trip Time value: Round Trip Time = (round trip time between the primary and secondary systems × 2 (*)) + initial copy response time (ms) (*) When host mode option (HMO) 51 (Round Trip Set Up Option) is OFF (default), you must double the round trip time, because each data transfer between the primary and secondary systems involves two response sequences for each command issued. When HMO 51 is ON, you do not need to double the value of the round trip time, because the sequence is one response for each command issued. For the "round trip time" in the formula, ask HP Technical Support, or use the ping command. If you do not use channel extenders between the primary and secondary systems, specify "1". The "initial copy response time" in the formula is the response time required for multiple initial copy operations. Use the following formula to determine the initial copy response time based Adding a remote connection 193

194 on the initial copy pace, the number of maximum initial copy VOLs, and the bandwidth of the channel-extender communication lines between the primary and secondary systems. Initial copy response time (ms) = (1 MB / data path speed between the primary and secondary systems [MB/ms] (1)) × (initial copy pace (2) / 4) × (number of maximum initial copy VOLs (3) / number of data paths between the primary and secondary systems (4)) 1. When you connect the primary and secondary systems without extenders, set the value of the "data path speed between the primary and secondary systems" according to the link speed as follows: 2 Gbps: 0.17 MB/ms 4 Gbps: 0.34 MB/ms 8 Gbps: 0.68 MB/ms 2. For "initial copy pace" in the preceding formula, see the following table. 3. For "number of maximum initial copy volumes", use the value set up per storage system. 4. Even if "number of maximum initial copy VOLs" / "number of data paths between the primary and secondary systems" is 16 or more, specify "number of maximum initial copy VOLs" / "number of data paths between the primary and secondary systems" as 16. The following table shows the initial copy pace used for the response time calculation. For both RAID Manager and Remote Web Console: when only the initial copy is in progress, the pace is the user-specified value at the time of pair creation if the specified pace is 1 to 4, and 4 if the specified pace is 5 to 15; when the initial copy and update copy are both in progress, the pace is the user-specified value at the time of pair creation if the specified pace is 1 to 2, and 2 if the specified pace is 3 to 15. The following table shows example settings (columns: round trip time [ms], data path speed between the primary and secondary systems [MB/ms], number of data paths between the primary and secondary systems, initial copy pace, number of maximum initial copy VOLs, and the round trip time to specify [ms]). Adding the quorum disk Add the quorum disk on the primary and secondary storage systems. Prerequisite information Storage Administrator (Provisioning) role is required. The mapping of volumes of the external storage system for the quorum disk has been completed. 194 Performing configuration operations using Remote Web Console

195 Procedure 1. In the Storage Systems tree, select Replication, and then Remote Connections. 2. Select the Quorum Disks tab. 3. Display the Add Quorum Disks window in one of the following methods: Click Add Quorum Disks. From the Action menu, select Remote Connections, and then Add Quorum Disks. 4. Select Quorum Disk ID. 5. In the Available LDEVs table, select the volume you want to use as the quorum disk. 6. Select Remote Storage Systems. 7. Click Add. To remove the selected quorum disks from the Selected Quorum Disks table, select the quorum disk, and then click Remove. 8. Click Finish. 9. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 10. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Related topics Add Quorum Disks wizard (page 244) Assigning the HA reservation attribute This section describes how to assign the HA reservation attribute. Prerequisite information Security Administrator (View & Modify) role is required. Procedure 1. In the Administration tree, select Resource Groups. 2. Select the resource group to which the volume to be assigned the HA reservation attribute belongs. 3. In the LDEVs tab, select a volume to which the HA reservation attribute is to be assigned. You can also select multiple volumes. 4. Use either of the following methods to display the Assign HA Reserves window. Click More Actions, and then Assign HA Reserves. In the Action menu, select Remote Replication, and then Assign HA Reserves. 5. In the Selected LDEVs table, check the target volume. 6. Enter the task name in Task Name. 7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Related topics Assign HA Reserves window (page 248) Assigning the HA reservation attribute 195
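If you use RAID Manager, the reserve operation above corresponds to raidcom map resource (see Table 22). In the sketch below, the LDEV ID 0x4444 and instance number 1 are placeholders, and the -virtual_ldev_id reserve argument is an assumption for the HA reserve setting; confirm the exact syntax in the HP XP7 RAID Manager User Guide.

# Assign the HA reservation attribute to the LDEV that will become the S-VOL
raidcom map resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1
# Check the volume; the virtual LDEV ID field indicates the reservation
raidcom get ldev -ldev_id 0x4444 -fx -IH1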

196 C Performing pair operations using Remote Web Console Abstract Pair operations for High Availability (HA) can be performed on Remote Web Console. This appendix provides instructions for these operations. The RWC GUI displays "Local Storage System" for the system you accessed on the RWC server. As a result, when you access the RWC server at the secondary site, the GUI displays information for the pair's secondary system under "Local Storage System". Likewise, the GUI identifies the storage system connected to the system you accessed (in this case, the primary system) as the "Remote Storage System". Types of pair operations The HA pair operations are: Create pairs Suspend pairs Resync pairs Delete pairs CAUTION: Pair operations cannot be performed on volumes that do not have an LU path. Before performing pair operations, make sure that the volumes to be assigned to pairs have at least one LU path defined. Pair operations cannot be performed during microcode exchange processing. Before performing pair operations, make sure that microcode exchange processing is complete. Pair operations cannot be performed when microcode exchange processing has been interrupted (for example, due to user cancellation or error). Make sure that microcode exchange processing has completed normally before performing pair operations. CAUTION: If any of the following conditions persists while an HA pair is being mirrored, the HA pair might be suspended so that update I/O is given priority over mirroring of the HA pair. The availability ratio of the processor in the MP blade to which the primary volume belongs is 70% or higher on the storage system at the primary site. There is a large amount of incoming update I/O to the primary volumes on the storage system at the primary site. The Write Pending rate of the MP blade to which the secondary volume belongs is 65% or higher on the storage system at the secondary site. When you create or resynchronize an HA pair, consider the above load status of the storage system at each site. Creating HA pairs You must create HA pairs on the primary storage system. NOTE: To avoid excess traffic on your TCP/IP network, you should consider stopping Performance Monitor monitoring operations before creating multiple pairs. For details, see the HP XP7 Performance for Open and Mainframe Systems User Guide. Prerequisite information Storage Administrator (Provisioning) role is required. The secondary volume must be offline from all servers. 196 Performing pair operations using Remote Web Console

197 You must know the port ID, host group ID, and LUN ID of the primary and secondary volumes. The logical units (LUs) at the primary and secondary sites must be defined and initialized. The remote path between the primary and secondary storage systems must be installed and configured. Procedure 1. Display the Create HA Pairs window in one of the following methods: From General Tasks, select Create HA Pairs. In the Storage Systems tree, select Replication, and then Remote Replication. In the HA Pairs tab, click Create HA Pairs. In the Storage Systems tree, select Replication, and then Remote Replication. From the Actions menu, select Remote Replication, and then Create HA Pairs. 2. Specify the remote storage system. Model / Serial Number: Select the model and the serial number. Path Group ID: Select the ID for the path group. 3. In Select Primary Volumes, from LU Selection, select the port name and the host group name of the local storage system. The volumes that can be used as primary volumes are displayed in the Available LDEVs table. 4. In Select Primary Volumes, from the Available LDEVs table, select the primary volume. Note: When specifying a volume in a virtual storage machine, specify the actual LDEV ID. The virtual LDEV ID cannot be specified. NOTE: Volumes of Online Migration are not displayed in the Available LDEVs table. 5. In Secondary Volume Selection, from Base Secondary Volume, specify the information related to the base secondary volume. Port ID: Select the port name. Host Group ID: Select the ID for the host group. LUN ID: Select the LUN ID. Selection Type: Select Interval or Relative Primary Volume. 6. Select Mirror ID. 7. Select Quorum Disks. 8. Click Options if necessary. 9. Select Initial Copy Type. 10. In Copy Pace, specify the maximum number of tracks to be copied in a single remote I/O. 11. Click Add. The created pair is added to the Selected Pairs table. If you want to remove a pair from the Selected Pairs table, select the pair and click Remove. If you select a pair and click Change Settings, the Change Settings window appears and you can change the pair settings. 12. Click Finish. 13. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 14. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Creating HA pairs 197
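When pairs are created with RAID Manager instead of the GUI, the same operation is typically a single paircreate command issued from the primary storage system. The sketch below assumes a group named oraHA defined in the HORCM configuration files, quorum disk ID 2, copy pace 15, and instance number 0; adjust these for your environment and confirm the options in the HP XP7 RAID Manager User Guide.

# Create the HA pair: -f never is the fence level (the GUI sets this automatically),
# -jq specifies the quorum disk ID, -c the copy pace (tracks per remote I/O)
paircreate -g oraHA -f never -vl -jq 2 -c 15 -IH0
# Monitor until the initial copy completes and the pair reaches PAIR status
pairevtwait -g oraHA -s pair -t 3600 -IH0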

198 Related topics Create HA Pairs wizard (page 249) Suspending HA pairs HA pairs can be suspended. Prerequisite information Storage Administrator (Remote Copy) role is required. The pair status must be INIT/COPY, COPY or PAIR. Procedure 1. In the Storage Systems tree, select Replication, and then Remote Replication. 2. In the HA Pairs tab, select the pair you want to suspend, and display the Suspend Pairs window in one of the following methods: Click Suspend Pairs. From the Actions menu, select Remote Replication, and then Suspend Pairs. 3. Check that the pair you want to suspend is displayed in the Selected Pairs table. 4. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 5. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. After suspending a pair, in the Remote Replication window, check that the pair status of the HA pair is PSUS (status when the pair is suspended from the storage system at the primary site) or SSWS (status when the pair is suspended from the storage system at the secondary site). To check the status of a pair that is being suspended, click the Refresh button at the top right corner in the main window of Remote Web Console, or display detailed information in the View Pair Properties window. Related topics Suspend Pairs window (page 257) Resynchronizing HA pairs While the HA pairs are being suspended, the storage system on the primary site does not execute update copy operation towards the secondary volume. When the pairs are resynchronized, only the accumulated differential data after suspension is updated to the secondary volume, and the secondary volume would have the same data as the primary volume again. After that, the update copy operation starts again towards the secondary volume. Pair resynchronization can be executed on a storage system with primary volumes. Prerequisite information Storage Administrator (Remote Copy) role is required. The pair status must be PSUS, PSUE, or SSWS. If the pair status is PSUS or PSUE, the pair can be resynchronized only when the I/O mode is Local. If the pair status is SSWS, the primary volume and the secondary volume are reversed and then the pair is resynchronized (swap resync). 198 Performing pair operations using Remote Web Console
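If you use RAID Manager rather than the GUI, the suspend and resync operations described in this appendix map to the pairsplit and pairresync options listed in Table 23. A minimal sketch with placeholder group and instance names follows; the GUI resynchronization procedure continues below.

# Suspend the pair, specifying the P-VOL (primary-site instance)
pairsplit -g oraHA -r -IH0
# Resynchronize the pair from the primary storage system
pairresync -g oraHA -IH0
# Swap resync from the secondary storage system when the S-VOL is in SSWS
pairresync -g oraHA -swaps -IH1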

199 Procedure 1. In the Storage Systems tree, select Replication, and then Remote Replication. 2. In the HA Pairs tab, select the pair you want to resync, and display the Resync Pairs window in one of the following methods: Click Resync Pairs. From the Actions menu, select Remote Replication, and then Resync Pairs. 3. Check that the pair you want to resync is displayed in the Selected Pairs table. 4. In Copy Pace, specify the maximum number of tracks to be copied in a single remote I/O. 5. Click Finish. 6. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. After resynchronizing a pair, in the Remote Replication window, check that the HA pair is correctly displayed (in PAIR status). To check the status of a pair that is being resynchronized, click the Refresh button at the top right corner in the main window of Remote Web Console, or display detailed information in the View Pair Properties window. Related topics Resync Pairs wizard (page 258) Deleting HA pairs If an HA pair no longer needs to be maintained, delete it. When an HA pair is deleted, all copy operations for that pair stop and the primary and secondary volumes become single volumes; the volumes and their data are not deleted. When you delete an HA pair by specifying the primary volume, the virtual LDEV ID of the secondary volume is deleted. If you continue business on the primary volume after deleting the HA pair, delete the HA pair by specifying the primary volume. You can delete an HA pair by specifying the primary volume only when the pair status of the primary volume is PSUS or PSUE, and the I/O mode is Local. When you delete an HA pair by specifying the secondary volume, the virtual LDEV ID of the primary volume is deleted. If you continue business on the secondary volume after deleting the HA pair, delete the HA pair by specifying the secondary volume. You can delete an HA pair by specifying the secondary volume when the pair status of the secondary volume is SSWS and the I/O mode is Local. There are three methods for deleting a pair: Normal deletion: To delete HA pairs normally, delete the HA pairs from the storage system at the primary site. Forced deletion for paired volumes: Used when the I/O mode of both the primary and secondary volumes is Blocked. Forced deletion for nonpaired volumes: Used when the volumes are not paired but pair information remains in the volumes. Deleting HA pairs 199

200 NOTE: If the HA pair is deleted, the data on the P-VOL and S-VOL is not synchronized. To prevent the server from viewing duplicated volumes with the same virtual LDEV ID but unsynchronized data, the virtual LDEV ID of the LDEV that does not continue I/O is deleted. When the virtual LDEV ID is deleted and the HA reservation attribute is assigned to the volume, the server cannot recognize the volume. If you want to recreate an HA pair using a volume that was deleted from a pair, recreate the HA pair from the storage system with the volume that was specified when you deleted the HA pair. For example, if you delete an HA pair by specifying the P-VOL, recreate the HA pair from the primary storage system. If you delete an HA pair by specifying the S-VOL, recreate the HA pair from the secondary storage system. Related topics Deleting HA pairs (Normal deletion) (page 200) Forcibly deleting HA pairs (for paired volumes) (page 201) Forcibly deleting HA pairs (for nonpaired volumes) (page 202) Deleting HA pairs (Normal deletion) The procedure for deleting the HA pairs is described below. CAUTION: When deleting the HA pairs from the storage system on the secondary site, note that the secondary and the primary volumes are the same (for example, they have the same volume label), and be careful not to cause system problems due to volume duplication. Prerequisite information Storage Administrator (Provisioning) role is required. One of the following conditions for the pair status and the I/O mode is satisfied: The pair status is PSUS and the I/O mode is Local. The pair status is PSUE and the I/O mode is Local. The pair status is SSWS. Procedure 1. In the Storage Systems tree, select Replication, and then Remote Replication. 2. In the HA Pairs tab, select the pair you want to delete, and display the Delete Pairs window in one of the following methods: In More Actions, click Delete Pairs. From the Actions menu, select Remote Replication, and then Delete Pairs. 3. Check that the pair you want to delete is displayed in the Selected Pairs table. 4. Select Normal in Delete Mode. 5. Click Finish. 6. In the Confirm window, check the settings you made, and then enter the task name in Task Name. 7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. After deleting a pair, in the Remote Replication window, check that the HA pair is not displayed. To check the status of a pair that is being deleted, click the Refresh button at the top right corner of the main window of Remote Web Console, or display detailed information in the View Pair Properties window. 200 Performing pair operations using Remote Web Console
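In RAID Manager, a normal deletion corresponds to pairsplit -S (primary volume specified) or pairsplit -R (secondary volume specified), as listed in Table 23. A minimal sketch with placeholder group and instance names:

# Check the pair status and I/O mode before deleting
pairdisplay -g oraHA -fxce -IH0
# Delete the pair by specifying the primary volume
pairsplit -g oraHA -S -IH0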

201 Related topics Delete Pairs wizard (page 260) Forcibly deleting HA pairs (for paired volumes) The procedure for forcibly deleting the HA pairs is described below. CAUTION: When deleting the HA pairs from the storage system on the secondary site, note that the secondary and the primary volumes are the same (for example, they have the same volume label), and be careful not to cause system problems due to volume duplication. CAUTION: Specify Force in Delete Mode only when the I/O mode of both primary and secondary volumes is Block. If you want to specify Force when the I/O mode is not Block, contact HP Technical Support. CAUTION: If you specify Force in Delete Mode, you need to delete an HA pair from the storage systems at both primary and secondary sites. When the server can access both volumes, if you specify Enable for Volume Access and forcibly delete the pair, a data failure might occur because the contents of the two volumes are not consistent. Therefore, delete a pair according to the following procedure. 1. Stop access from the server to one volume. 2. For the volume to which the access from the server has been stopped, specify Disable for Volume Access and forcibly delete the pair. The virtual LDEV ID is deleted from the volume that was forcibly deleted by specifying Disable, and then the reservation attribute, which indicates that the volume is reserved for a secondary volume, is set for the volume. When the reservation attribute is set, the server cannot access the volume. 3. For the volume to which the access from the server continues, specify Enable for Volume Access and forcibly delete the pair. Prerequisite information Storage Administrator (Provisioning) role is required. One of the following conditions for the pair status and the I/O mode is satisfied: The pair status is PSUS and the I/O mode is Local. The pair status is PSUE and the I/O mode is Local. The pair status is SSWS. Procedure 1. In the Storage Systems tree, select Replication, and then Remote Replication. 2. In the HA Pairs tab, select the pair you want to delete, and display the Delete Pairs window in one of the following methods: In More Actions, click Delete Pairs. From the Actions menu, select Remote Replication, and then Delete Pairs. 3. Check that the pair you want to delete is displayed in the Selected Pairs table. 4. Select Force in Delete Mode. 5. In Volume Access, select whether to permit access from the server after the pair is deleted. Select Enable to retain the virtual LDEV ID of the volume in the local storage system and permit access from the server. Select Disable to delete the virtual LDEV ID of the volume in the local storage system and deny access from the server. 6. Click Finish. 7. In the Confirm window, check the settings you made, and then enter the task name in Task Name. Deleting HA pairs 201

202 8. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. After deleting a pair, in the Remote Replication window, check that the HA pair is not displayed. To check the status of a pair that is being deleted, click the Refresh button at the top right corner of the main window of Remote Web Console, or display detailed information in the View Pair Properties window. Related topics Delete Pairs wizard (page 260) Forcibly deleting HA pairs (for nonpaired volumes) Use the Force Delete Pairs (HA Pairs) window to forcibly delete the HA pairs when: The volume is no longer part of a pair, but pair information remains in the volume and the volume cannot be used for another pair. You cannot connect to the remote storage system due to a communication error. NOTE: If the volume is a pair, forcibly delete the pair according to the procedure in Forcibly deleting HA pairs (for paired volumes) (page 201) even if a connection to the remote storage system is impossible due to a communication error. If you cannot connect to the remote storage system due to a communication error, forcibly delete the pair in the remote storage system as well. Prerequisite information Storage Administrator (Provisioning) role is required. The volume is a nonpaired volume. Procedure 1. In the Storage Systems tree, select Logical Device. 2. In the LDEVs tab, select the volume whose pair information you want to forcibly delete. 3. Display the Force Delete Pairs (HA Pairs) window in one of the following methods: In More Actions, click Force Delete Pairs (HA Pairs). From the Action menu, select Remote Replication, and then Force Delete Pairs (HA Pairs). 4. Check that the volume whose pair information you want to delete is displayed in the Selected LDEVs table. 5. Enter the task name in Task Name. 6. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears. Related topics Force Delete Pairs (HA Pairs) window (page 267) 202 Performing pair operations using Remote Web Console

203 D Performing monitoring operations using Remote Web Console Abstract Some monitoring operations can be performed using Remote Web Console (RWC). This appendix provides instructions for these operations. You can check the status of HA pairs using RWC. Make sure to click the refresh button as needed to display the latest pair status. The pair status is displayed in the following format: RWC pair status/raid Manager pair status. If the two pair statuses are the same, the RAID Manager pair status is not displayed. Virtual storage machines displayed in Remote Web Console Information on the virtualized resources of a virtual storage machine appears in Remote Web Console with associated physical storage information. If the information on these resources is not displayed by default, you can change the column settings of the table option. For details about Column Settings, see the HP XP7 Remote Web Console User Guide. For more information about how to display the information about the virtualized resources, see the HP XP7 RAID Manager User Guide. The following information on the virtualized resources can also be displayed in Remote Web Console. Term LDEVs for which virtualization management is disabled LDEVs for which virtualization management is enabled Description LDEVs that satisfy both of these conditions: The model and serial number of the virtual storage machine assigned to the resource group to which an LDEV belongs is the same as the storage system managed by Remote Web Console. The values of the virtual LDEV ID and the LDEV ID are the same. LDEVs that satisfy one of these conditions: The model or serial number of the virtual storage machine assigned to the resource group to which an LDEV belongs is different from the storage system managed by Remote Web Console. The model and serial number of the virtual storage machine assigned to the resource group to which an LDEV belongs is the same as the storage system managed by Remote Web Console, but virtual LDEV ID and LDEV ID are different. Checking the status of an HA pair Check the status of a pair before performing a pair operation. Some pair operations can be performed only when the pair is in the specific status. Procedure 1. In the Storage System tree, select Replication and then Remote Replication. 2. In the HA Pairs tab, check Status of the HA pair whose status you want to know. Related topics Remote Replication window (page 215) Checking the detailed status of an HA pair To check the detailed information of the pair: Virtual storage machines displayed in Remote Web Console 203

204 Procedure 1. In the Storage System tree, select Replication and then Remote Replication. 2. In the HA Pairs tab, select the HA pair whose status you want to check. 3. Use either of the following methods to display the View Pair Properties window. Click More Actions and then View Pair Properties. From the Actions menu, select Remote Replication and then View Pair Properties. Related topics View Pair Properties window (page 234) Checking the synchronous rate of an HA pair This section explains how to check the pair synchronous rate. Procedure 1. In the Storage System tree, select Replication and then Remote Replication. 2. In the HA Pairs tab, select the pair whose synchronous rate you want to check. 3. Use either of the following methods to display the View Pair Synchronous Rate window. Click More Actions and then View Pair Synchronous Rate. From the Actions menu, select Remote Replication and then View Pair Synchronous Rate. Related topics View Pair Synchronous Rate window (page 232) Checking the operation history of HA pairs You can check the operation history of HA pairs. The time-series entries in the history list might not be displayed in descending order. The maximum number of history information entries is 524,288. If you operate more than 1,000 pairs at once, or the HA pair is suspended because of a failure, a part of histories might not be recorded. Even if you operate HA pairs on RAID Manager by specifying the virtual LDEV IDs, the actual LDEV_IDs are displayed for LDEV ID. Procedure 1. In the Storage System tree, select Replication. 2. Use either of the following methods to display the Histories window. Click View Histories and then Remote Replication. From the Actions menu, select Remote Replication and then View Histories. 3. In Copy Type, select HA. The operation history of HA pairs is displayed. Related topics Histories window (page 238) Messages displayed in Description of the Histories window (page 205) 204 Performing monitoring operations using Remote Web Console

205 Messages displayed in Description of the Histories window The following table describes the messages that are displayed in Description of the Histories window. Description code Message displayed in Description Pair Create Start. Pair Resync. Start. Copy Start. Copy Complete. Pair Suspend (Operation). Pair Suspend (Operation). Pair Suspend (Failure). Pair Delete. Pair Delete(Force). Quorum Disk Create. Quorum Disk Delete. Description Creation of a pair starts. Resynchronization of a pair starts. Copy processing starts. Copy processing is complete. A pair was suspended. A pair was suspended. (swap suspension) A pair was split because a failure occurred. A pair was deleted. A pair was forcibly deleted. A quorum disk was created. A quorum disk was deleted. Checking the licensed capacity You can check the licensed capacity in the Replication window. Procedure In the Storage System tree, select Replication. Related topics Replication window (page 212) Monitoring copy operation and I/O statistics You can monitor copy operation or I/O statistics. For details, see HP XP7 Performance for Open and Mainframe Systems User Guide. Checking the remote connection status You can check the status of remote connections. Procedure 1. In the Storage System tree, select Replication and then Remote Connections. 2. Check Status of the remote connection whose status you want to know. Related topics Remote Connections window (page 227) Checking the detailed status of remote connections and paths You can check the detailed status of a remote connection and remote path. Checking the licensed capacity 205

206 Procedure 1. In the Storage System tree, select Replication and then Remote Connections. 2. Select the remote connection whose status you want to check. 3. Use either of the following methods to display the View Remote Connection Properties window. Click View Remote Connection Properties. From the Actions menu, select Remote Connection and then View Remote Connection Properties. Related topics View Remote Connection Properties window (page 236) Troubleshooting related to remote path statuses (page 173) 206 Performing monitoring operations using Remote Web Console
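The remote connection status can also be checked from RAID Manager with raidcom get rcu, as listed in Table 24. A minimal sketch with a placeholder instance number; options for narrowing the output to a specific connection are described in the HP XP7 RAID Manager User Guide.

# List the registered remote connections and their status
raidcom get rcu -IH0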

E Changing settings using Remote Web Console

Abstract
Some High Availability (HA) settings can be changed using Remote Web Console. This appendix provides instructions for performing these operations.

Editing remote replica options
You can use the Edit Remote Replica Options window to change the number of volumes that can be copied at the same time during a single initial copy operation.

Prerequisite information
The Storage Administrator (Remote Copy) role is required.

Procedure
1. In the Storage Systems tree, select Replication.
2. Display the Edit Remote Replica Options window by using one of the following methods:
   From Edit Options, select Remote Replication.
   From the Actions menu, select Remote Replication, and then Edit Remote Replica Options.
3. In Copy Type, select HA.
4. In Maximum Initial Copy Activities, enter the number of volumes that can be copied concurrently in a single initial copy operation. The number of HA initial copy operations might affect the performance of the local storage system, depending on the amount of I/O processing and the number of pairs registered at the same time. If this value is too large, the number of pending processes in the remote storage system increases, which might affect the remote I/O response time for update I/O.
5. Click Finish.
6. In the Confirm window, check the settings you made, and then enter the task name in Task Name.
7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
Edit Remote Replica Options wizard (page 262)
Maximum initial copy activities (page 207)

Maximum initial copy activities
The maximum initial copy activities value is the maximum number of pairs that can perform an initial copy simultaneously in the same storage system. You can specify a value from 1 to 512 for the total number of initial copy activities for HA, Continuous Access Synchronous, and Continuous Access Synchronous Z. If an initial copy or differential copy for HA, Continuous Access Synchronous, or Continuous Access Synchronous Z is already in progress in the same storage system when you start a copy operation for HA pairs, the number of copy activities that can start for the HA pairs is the maximum value minus the number of HA and Continuous Access Synchronous copy activities already running. For example, if you set the maximum initial copy activities to 512 and create 400 HA pairs while the initial copy operation for 400 Continuous Access Synchronous pairs is in progress, 112 HA pairs (512 minus 400, the number of Continuous Access Synchronous copy activities in progress) start copying first. After that, the number of running HA copy activities gradually increases as the Continuous Access Synchronous initial copy operations progress, without exceeding the maximum of 512 initial copy activities. A minimal arithmetic sketch of this calculation follows.
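The sketch below restates the calculation from the example above using shell arithmetic. The variable names are illustrative only; the values 512 and 400 come from the example.

# Available initial copy activities for new HA pairs (values from the example above).
max_initial_copy_activities=512   # configured Maximum Initial Copy Activities
running_copy_activities=400       # HA + Continuous Access Synchronous copies already in progress
echo $(( max_initial_copy_activities - running_copy_activities ))   # prints 112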

Removing quorum disks
When HA pairs no longer use a quorum disk, remove the quorum disk from both the primary and secondary storage systems.

NOTE: After you remove a quorum disk, the Remote Web Console error message ( ) or the RAID Manager error message (SSB1:2E10, SSB2:A007) might be displayed. In this case, the external volume is not displayed on the Quorum Disks tab of the Remote Connections window because the quorum disk was deleted successfully, but the quorum disk management information remains in the external volume. If this happens, format the external volume (see the sketch after this procedure).

Prerequisite information
The Storage Administrator (Provisioning) role is required.

Procedure
1. In the Storage Systems tree, select Replication, and then Remote Connections.
2. Select the Quorum Disks tab.
3. Select the quorum disk you want to delete.
4. Display the Remove Quorum Disks window by using one of the following methods:
   Click Remove Quorum Disks.
   From the Actions menu, select Remote Connections, and then Remove Quorum Disks.
5. In the Selected Quorum Disks table, check that the quorum disk you want to delete is displayed.
6. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.
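If leftover quorum disk management information must be cleared, the former quorum-disk external volume can be formatted from Remote Web Console or, alternatively, from RAID Manager. The following is a minimal sketch only: the LDEV ID 0x2345 and the HORCM instance number are placeholders, and the exact raidcom syntax should be verified against the RAID Manager reference for your microcode version before use.

# Minimal sketch (assumed LDEV ID 0x2345, assumed HORCM instance 0; verify
# the subcommand and options in the RAID Manager reference before use).
# Formats the former quorum-disk external volume so that any leftover
# quorum disk management information is cleared.
raidcom initialize ldev -operation fmt -ldev_id 0x2345 -IH0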

Related topics
Remove Quorum Disks window (page 266)

Editing remote connection options
Use the Edit Remote Connection Options window to change the following option settings:
Minimum number of paths
RIO MIH Time (the waiting time until a data copy from the local storage system to the remote storage system completes)

Prerequisite information
The Storage Administrator (Remote Copy) role is required.

Procedure
1. In the Storage Systems tree, select Replication, and then Remote Connections.
2. In the Connections (To) tab, select the remote connection whose options you want to change.
3. Display the Edit Remote Connection Options window by using one of the following methods:
   Click Edit Remote Connection Options.
   From the Actions menu, select Remote Connection, and then Edit Remote Connection Options.
4. Select the check boxes of the options you want to change.
5. Select the value for Minimum Number of Paths.
6. Enter the RIO MIH Time.
7. Click Finish.
8. In the Confirm window, check the settings you made, and then enter the task name in Task Name.
9. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
Edit Remote Connection Options wizard (page 268)

Adding remote paths
Remote paths from the local storage system to the remote storage system can be added as required. The maximum number of paths that can be added is 8.

Prerequisite information
The Storage Administrator (Remote Copy) role is required.

Procedure
1. In the Storage Systems tree, select Replication, and then Remote Connections.
2. In the Connections (To) tab, select the remote connection to which you want to add a remote path.
3. Display the Add Remote Paths window by using one of the following methods:
   In More Actions, click Add Remote Paths.
   From the Actions menu, select Remote Connection, and then Add Remote Paths.
4. Select the ports to be used between the local storage system and the remote storage system. If you want to add two or more paths, click Add Path.
5. Click Finish.

6. In the Confirm window, check the settings you made, and then enter the task name in Task Name.
7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
Add Remote Paths wizard (page 270)

Removing remote paths
Remote paths between the local storage system and the remote storage system can be removed. Perform the removal from the local storage system.

CAUTION: Confirm that the number of paths is greater than the value specified for Minimum Number of Paths in the Add Remote Connection window. If it is not, the remote path removal operation fails.

Prerequisite information
The Storage Administrator (Remote Copy) role is required.

Procedure
1. In the Storage Systems tree, select Replication, and then Remote Connections.
2. In the Connections (To) tab, select the remote connection whose remote paths you want to remove.
3. Display the Remove Remote Paths window by using one of the following methods:
   In More Actions, click Remove Remote Paths.
   From the Actions menu, select Remote Connection, and then Remove Remote Paths.
4. Select the Remove check box for each remote path you want to remove. If removing remote paths would reduce the number of paths below the minimum number of paths, the check boxes cannot be selected.
5. Click Finish.
6. In the Confirm window, check the settings you made, and then enter the task name in Task Name.
7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
Remove Remote Paths wizard (page 272)

Removing remote connections
Remote connections from the local storage system to remote storage systems can be removed. When a remote connection is removed, all remote paths to the selected remote storage system are removed from the current local storage system. Removing a remote connection does not affect HA operations between other local storage systems and their remote storage systems. Even after removing a remote connection, you can configure a remote connection again and add a different remote storage system to the local storage system. You can also remove a remote connection, change the Initiator port back to an RCU Target port, and then use that port of the local storage system for a server channel.

Prerequisite information
The Storage Administrator (Remote Copy) role is required.
All HA pairs between the local storage system and the remote storage system must already be removed.

Procedure
1. In the Storage Systems tree, select Replication, and then Remote Connections.
2. In the Connections (To) tab, select the remote connections that you want to remove. (You can select multiple remote connections.)
3. Display the Remove Remote Connections window by using one of the following methods:
   In More Actions, click Remove Remote Connections.
   From the Actions menu, select Remote Connection, and then Remove Remote Connections.
4. In the Selected Remote Connections table, confirm the remote connections to be removed. You can check the details of a remote connection in the View Remote Connection Properties window, which is displayed by selecting the remote connection and then clicking Detail.
5. Enter the task name in Task Name.
6. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
View Remote Connection Properties window (page 236)
Remove Remote Connections window (page 274)

Releasing the HA reservation attribute
This section describes how to release the HA reservation attribute using Remote Web Console. A RAID Manager sketch follows the procedure.

Prerequisite information
The Security Administrator (View & Modify) role is required.

Procedure
1. In the Administration tree, select Resource Group.
2. Select the resource group that contains the volume whose HA reservation attribute you want to release.
3. In the LDEVs tab, select the volume whose HA reservation attribute you want to release. You can also select multiple volumes.
4. Display the Release HA Reserved window by using one of the following methods:
   Click More Actions, and then Release HA Reserved.
   From the Actions menu, select Remote Replication, and then Release HA Reserved.
5. In the Selected LDEVs table, check the target volumes.
6. Enter the task name in Task Name.
7. Click Apply. The task is registered, and if the Go to tasks window for status check box is selected, the Tasks window appears.

Related topics
Release HA Reserved window (page 276)
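The reservation attribute can also be released from the RAID Manager command line. The following is a minimal sketch under the assumption that the HA reserve was assigned with raidcom map resource; the LDEV ID 0x4444 and the HORCM instance number are placeholders, and the exact subcommand and options should be verified against the RAID Manager reference for your microcode version before use.

# Minimal sketch (assumed LDEV ID 0x4444, assumed HORCM instance 1 on the
# storage system holding the reserved volume; verify the subcommand in the
# RAID Manager reference before use).
# Releases the HA reservation attribute assigned to the volume.
raidcom unmap resource -ldev_id 0x4444 -virtual_ldev_id reserve -IH1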

212 F Remote Web Console GUI reference for HA Abstract This appendix describes the Remote Web Console windows for High Availability (HA) operations. Replication window Summary Replica LDEVs Tab Summary Button Item View Histories - Local Replication View Histories - Remote Replication Edit Options - Local Replication Edit Options - Remote Replication Edit Options - SCP Time Description Displays the Histories window for local replication. Displays the Histories window for remote replication. Displays the Edit Local Replica Options window. Displays the Edit Remote Replica Options window. Displays the Edit SCP Time window. 212 Remote Web Console GUI reference for HA

213 Table Item Licensed Capacity Number of Replica LDEVs Number of FC Z/FCSE Relationships Number of Differential Tables Description Displays the used capacity and the licensed capacity for each program product. Displays the number of LDEV used for replication. Displays the number of relationships of Compatible FlashCopy and Compatible FlashCopy SE. Displays the number of differential tables in use and the maximum number. The Fast Snap pair does not use differential tables, therefore the number of the differential tables is not affected by the Fast Snap pair operations. In addition, the number of differential tables is not affected by the relationship operations to Compatible FlashCopy and FC Z either. Replica LDEVs Tab Displays only pairs assigned to the primary volume (source volume for FC Z relationship or FCSE relationship) and/or the secondary volume (target volume for FC Z relationship or FCSE relationship) to users. Button Item Export Description Opens a window for outputting the table information by clicking this button. Table Item LDEV ID LDEV Name Emulation Type Capacity Copy Type Description Displays the LDEV ID. If you click the LDEV ID, the LDEV Properties window appears. Displays the LDEV name. Displays the emulation type. Displays the LDEV capacity. Displays the types of the copy and the volume that use the LDEV. Copy types BC-L1: L1 pair of Business Copy BC-L2: L2 pair of Business Copy FS: Fast Snap pair BC Z: Business Copy for Mainframe pair FC Z: Compatible FlashCopy relationship FCSE: Compatible FlashCopy SE relationship Cnt Ac-S: Continuous Access Synchronous pair Cnt Ac-J: Continuous Access Journal pair Cnt Ac-S Z: Continuous Access Synchronous Z pair Replication window 213

214 Item Description Cnt Ac-J Z: Continuous Access Journal Z pair HA: High Availability pair Volume types(bc, FS, BC Z, Cnt Ac-S, Cnt Ac-J, Cnt Ac-S Z, Cnt Ac-J Z, HA) Primary:Primary volume Secondary:Secondary volume Volume types(fc Z, FCSE) "S" indicates a source volume, and "T" indicates a target volume. S-Normal: Normal source volume T-Normal: Normal target volume ST-Normal: Normal volume that is set to both a source volume and a target volume S-Failed, S-Full, S-Full & Failed :Failed source volume T-Failed, T-Full, T-Full & Failed: Failed target volume ST-Failed, ST-Full, ST-Full & Failed: Failed volume that is set to both a source volume and a target volume If no pair is set, a hyphen is displayed. Virtual Storage Machine* Displays the information about the virtual storage machine to which the LDEV belongs. Model / Serial Number: Displays the virtual storage machine model and serial number in the volume. LDEV ID: Displays the virtual LDEV ID of the volume. If no virtual LDEV ID is assigned, a blank is displayed. Device Name: Displays the virtual device name of the volume. A virtual device name is displayed in a format that combines the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute. Of the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute, only the specified items are displayed. If these are not specified, a blank is displayed. If the virtual CVS attribute is specified, "CVS" is added at the end. SSID: Displays the virtual SSID of the volume. If no virtual SSID is set, a blank is displayed. *: This item is not displayed in the initial status. To display items, change settings of the table option in the Column Settings window. For details about the Column Settings window, see HP XP7 Remote Web Console User Guide. 214 Remote Web Console GUI reference for HA

215 Related topics Checking the licensed capacity (page 205) Remote Replication window Summary Cnt Ac-S Pairs Tab Cnt Ac-J Pairs Tab Mirrors Tab HA Pairs Tab Summary Item Number of Pairs Number of Mirrors Description Displays the number of pairs for each program product. Total displays the total number of pairs. Displays the number of mirrors. Open: Displays the number of mirrors for open systems. Mainframe: Displays the number of mirrors for mainframe systems. Total: Displays the total number of mirrors. Remote Replication window 215

216 Cnt Ac-S Pairs Tab Displays only pairs which a volume of a local storage system is assigned to users. Button Item Create Cnt Ac-S Pairs Split Pairs Resync Pairs View Pair Synchronous Rate* View Pair Properties* View Remote Connection Properties* Edit Pair Options* Delete Pairs* Export* Description Displays the Create Cnt Ac-S Pairs window. Displays the Split Pairs window. Displays the Resync Pairs window. Displays the View Pair Synchronous Rate window. Displays the View Pair Properties window. Displays the View Remote Connection Properties window. The window appears only if Pair Position is Primary. Displays the Edit Pair Options window. Displays the Delete Pairs window. Opens a window for outputting the table information by clicking this button. Table *: Displayed by clicking More Actions. Item Local Storage System Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. If you click the LDEV ID, the LDEV Properties window appears. LDEV Name: Displays the LDEV name of the volume. Port ID: Displays the port name of the volume. For a Continuous Access Synchronous Z pair, a hyphen is displayed. Host Group Name: Displays the host group name of the volume. For a Continuous Access Synchronous Z pair, a hyphen is displayed. LUN ID: Displays the LUN ID of the volume. For a Continuous Access Synchronous Z pair, a hyphen is displayed. Pair Position: Displays whether the volume is the primary volume of the pair or secondary volume. Provisioning Type*: Displays the volume type. Emulation Type*: Displays the emulation type of the volume. Capacity*: Displays the capacity of the volume. CLPR*: Displays the CLPR number of the volume. Encryption*: Displays the information of the encryption. Enabled: The encryption of the parity group to which the LDEV belongs is enabled. Disabled: The encryption of the parity group to which the LDEV belongs is disabled. 216 Remote Web Console GUI reference for HA

217 Item Description A hyphen is displayed if the LDEV is a THP V-VOL, an external volume, or a migration volume. Virtual Storage Machine*: Displays the virtual storage machine model and serial number in the volume. Virtual LDEV ID*: Displays the virtual LDEV ID of the volume. If the virtual LDEV ID is not assigned, a blank is displayed. Virtual Device Name*: Displays the virtual device name of the volume. A virtual device name is displayed in a format that combines the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute. Of the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute, only the specified items are displayed. If these are not specified, a blank is displayed. If the virtual CVS attribute is specified, "CVS" is added at the end. Virtual SSID*: Displays the virtual SSID of the volume. If no virtual SSID is set, a blank is displayed. Copy Type Displays the copy type. Cnt Ac-S: Continuous Access Synchronous pair Cnt Ac-S Z: Continuous Access Synchronous Z pair Status Remote Storage System Displays the pair status. In the Remote Web Console window, the pair status is displayed in the format of "pair status in Remote Web Console / pair status in RAID Manager or in Business Continuity Manager". If the pair status in Remote Web Console and the pair status in RAID Manager or Business Continuity Manager are the same, the pair status in RAID Manager or Business Continuity Manager is not displayed. Displays the information about the volume in the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. SSID: Displays the SSID of the remote storage system. LDEV ID: Displays the LDEV ID of the volume. Port ID: Displays the port name of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Synchronous Z pair, a hyphen is displayed. Host Group ID: Displays the host group ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Synchronous Z pair, a hyphen is displayed. LUN ID: Displays the LUN ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Synchronous Z pair, a hyphen is displayed. Virtual Storage Machine*: Displays the virtual storage machine model and serial number in the volume. Virtual LDEV ID*: Displays the virtual LDEV ID of the volume. Path Group ID Displays the path group ID. Remote Replication window 217

218 Item Update Type* Description Displays the update type. Sync: A Continuous Access Synchronous pair or Continuous Access Synchronous Z pair that is not assigned to any consistency group Sync(Specified CTG): A Continuous Access Synchronous pair or Continuous Access Synchronous Z pair that was created by specifying a consistency group CTG ID* CTG Utilization* Displays the consistency group ID. Displays whether multiple local storage systems and remote storage systems are sharing the consistency group. Single: The consistency group consists of one pair of storage systems. Multi: The consistency group consists of multiple pairs of storage systems. Preserve Mirror Status* Displays the preserve mirror status. - : Indicates either the normal preserve mirror status or that this is not a preserve mirror pair. Withdrawn: Indicates that the pair volume data does not match because copying Compatible FlashCopy was interrupted. Fence Level* Host I/O Time Stamp Transfer* Displays the fence level. Displays whether to transfer the time stamps of the host to the secondary volume. *: This item is not displayed in the initial status. To display items, change settings of the table option in the Column Settings window. For details about the Column Settings window, see HP XP7 Remote Web Console User Guide. 218 Remote Web Console GUI reference for HA

219 Cnt Ac-J Pairs Tab Displays only pairs which a volume of a local storage system is assigned to users. Button Item Create Cnt Ac-J Pairs Split Pairs Resync Pairs View Pair Synchronous Rate* View Pair Properties* View Remote Connection Properties* Edit Pair Options* Delete Pairs* Split Mirrors* Resync Mirrors* Delete Mirrors* Export* Description Displays the Create Cnt Ac-J Pairs window. Displays the Split Pairs window. Displays the Resync Pairs window. Displays the View Pair Synchronous Rate window. Displays the View Pair Properties window. Displays the View Remote Connection Properties window. The window appears only if Pair Position is Primary. Displays the Edit Pair Options window. Displays the Delete Pairs window. Displays the Split Mirrors window. Displays the Resync Mirrors window. Displays the Delete Mirrors window. Opens a window for outputting the table information by clicking this button. *: This item is displayed by clicking More Actions. Remote Replication window 219

220 Table Item Local Storage System Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. If you click the LDEV ID, the LDEV Properties window appears. LDEV Name: Displays the LDEV name of the volume. Port ID: Displays the port name of the volume. For a Continuous Access Journal Z pair, a hyphen is displayed. Host Group Name: Displays the host group name of the volume. For a Continuous Access Journal Z pair, a hyphen is displayed. LUN ID: Displays the LUN ID of the volume. For a Continuous Access Journal Z pair, a hyphen is displayed. Pair Position: Displays whether the volume is the primary volume or secondary volume of the pair. Journal ID: Displays the journal ID. Mirror ID: Displays the mirror ID. Provisioning Type*: Displays the volume type. Emulation Type*: Displays the emulation type of the volume. Capacity*: Displays the capacity of the volume. CLPR*: Displays the CLPR number of the volume. Encryption*: Displays the information of the encryption. Enabled: The encryption of the parity group to which the LDEV belongs is enabled. Disabled: The encryption of the parity group to which the LDEV belongs is disabled. A hyphen is displayed if the LDEV is a THP V-VOL, an external volume, or a migration volume. Virtual Storage Machine*: Displays the virtual storage machine model and serial number in the volume. Virtual LDEV ID*: Displays the virtual LDEV ID of the volume. If the virtual LDEV ID is not assigned, a blank is displayed. Virtual Device Name*: Displays the virtual device name of the volume. A virtual device name is displayed in a format that combines the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute. Of the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS 220 Remote Web Console GUI reference for HA

221 Item Description attribute, only the specified items are displayed. If these are not specified, a blank is displayed. If the virtual CVS attribute is specified, "CVS" is added at the end. Virtual SSID*: Displays the virtual SSID of the volume. If no virtual SSID is set, a blank is displayed. Copy Type Displays the copy type. Cnt Ac-J: Continuous Access Journal pair Cnt Ac-J Z: Continuous Access Journal Z pair Status Remote Storage System Displays the pair status. For details about the pair status, see HP XP7 Continuous Access Journal User Guide or HP XP7 Continuous Access Journal for Mainframe Systems User Guide. Displays the information about the volume in the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. LDEV ID: Displays the LDEV ID of the volume. Port ID: Displays the port name of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Journal Z pair, a hyphen is displayed. Host Group ID: Displays the host group ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Journal Z pair, a hyphen is displayed. LUN ID: Displays the LUN ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. For a Continuous Access Journal Z pair, a hyphen is displayed. Journal ID: Displays the journal ID. Virtual Storage Machine*: Displays the virtual storage machine model and serial number in the volume. Virtual LDEV ID*: Displays the virtual LDEV ID of the volume. Path Group ID CTG ID* Error Level* Displays the path group ID. Displays the consistency group ID. Displays the error level. *: This item is not displayed in the initial status. To display items, change settings of the table option in the Column Settings window. For details about the Column Settings window, see HP XP7 Remote Web Console User Guide. Remote Replication window 221

222 Mirrors Tab Displays only mirrors which all journal volumes are assigned to each user. Button Item Split Mirrors Resync Mirrors Create Cnt Ac-J Pairs Edit Mirror Options* View Remote Connection Properties* Delete Mirrors* Assign Remote Command Devices* Release Remote Command Devices* Export* Description Displays the Split Mirrors window. Displays the Resync Mirrors window. Displays the Create Cnt Ac-J Pairs window. Displays the Edit Mirror Options window. Displays the View Remote Connection Properties window. The window appears only if Attribute is Master. Displays the Delete Mirrors window. Displays the Assign Remote Command Devices window. Displays the Release Remote Command Devices window. Opens a window for outputting the table information by clicking this button. *: This item is displayed by clicking More Actions. Table Item Journal ID Mirror ID Journal Type Attribute Status Remote Storage System Description Displays the journal ID. If you click the journal ID, a separate window for the journal appears. Displays the mirror ID. Displays the copy type and the journal type option. If the journal type option is Standard, only the copy type is displayed. Displays the journal attribute. Displays the mirror status. For details about mirror statuses, see HP XP7 Continuous Access Journal User Guide or HP XP7 Continuous Access Journal for Mainframe Systems User Guide. Displays the information about the volume in the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. Journal ID: Displays the journal ID. Path Group ID Number of Data VOLs Data Capacity Displays the path group ID. Displays the number of data volumes. Displays the data capacity. 222 Remote Web Console GUI reference for HA

Remote Command Device: Displays whether a remote command device is assigned to the mirror. If a remote command device is assigned to the mirror, the LDEV ID is displayed. If a remote command device is not assigned to the mirror, a blank is displayed. If a remote command device cannot be assigned to the mirror, a hyphen is displayed.
CTG ID*: Displays the consistency group ID.
CTG Utilization*: Displays whether multiple local storage systems and remote storage systems are sharing the consistency group. Single: The consistency group consists of one pair of storage systems. Multi: The consistency group consists of multiple pairs of storage systems.
EXCTG Setting*: If the journal belongs to an extended consistency group, the following information is displayed. If the journal does not belong to an extended consistency group, a hyphen is displayed. EXCTG ID: Displays the extended consistency group ID. Super DKC: Displays the device name of the super DKC and, after a slash (/), the serial number.
Path Watch Time*: Displays the path watch time.
Path Watch Time Transfer*: Displays whether to transfer the path watch time of the master journal to the secondary mirror. If transferred, the path watch times of the primary mirror and the secondary mirror match. Yes: Transfers the path watch time to the secondary mirror. No: Does not transfer the path watch time to the secondary mirror.
Copy Pace*: Displays the speed for the initial copy per volume. Slower, Middle, or Faster is displayed. If the journal is a restore journal, a hyphen is displayed.
Transfer Speed*: Displays the line speed for data transfer. The unit is Mbps (megabits per second). 256, 100, or 10 is displayed.
Delta Resync Failure*: Displays the processing to be performed if delta resync could not be executed. Entire Copy: If delta resync could not be executed, the entire primary volume data is copied to the secondary volume. No Copy: If delta resync could not be executed, nothing is executed. Therefore, the secondary volume is not updated.
*: This item is not displayed in the initial status. To display items, change the settings of the table option in the Column Settings window. For details about the Column Settings window, see the HP XP7 Remote Web Console User Guide.

HA Pairs Tab
Only pairs for which a volume in the local storage system is assigned to the user are displayed.

Button
Create HA Pairs: Displays the Create HA Pairs window.
Suspend Pairs: Displays the Suspend Pairs window.
Resync Pairs: Displays the Resync Pairs window.
View Pair Synchronous Rate*: Displays the View Pair Synchronous Rate window.
View Pair Properties*: Displays the View Pair Properties window.
View Remote Connection Properties*: Displays the View Remote Connection Properties window. The window appears only if Pair Position is Primary.
Delete Pairs*: Displays the Delete Pairs window.
Export*: Opens a window for outputting the table information.
*: This item is displayed by clicking More Actions.

225 Table Item Local Storage System Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. If you click the LDEV ID, the LDEV Properties window appears. LDEV Name: Displays the LDEV name of the volume. Port ID: Displays the port name of the volume. Host Group Name: Displays the host group name of the volume. LUN ID: Displays the LUN ID of the volume. Pair Position: Displays whether the volume is the primary volume or secondary volume of the pair. Capacity*: Displays the capacity of the volume. CLPR*: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Status Failure Factor* Remote Storage System Displays the pair status. Displays the failure factor. For failure factors displayed in the Failure Factor column and meanings, see Table 26 (page 226). Displays the information about the volume in the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. LDEV ID: Displays the LDEV ID of the volume. Port ID: Displays the port name of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore, this information is not updated even if the path setting is changed at the connected destination. Host Group ID: Displays the host group ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore, this information is not updated even if the path setting is changed at the connected destination. LUN ID: Displays the LUN ID of the volume. This information is only for identifying the LDEV ID when creating a pair. Therefore, this information is not updated even if the path setting is changed at the connected destination. Path Group ID Mirror ID Quorum Disk ID Virtual Storage Machine Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Displays the information about the virtual storage machine to which the LDEV belongs. Model / Serial Number: Displays the model and serial number of the virtual storage machine to which the volume belongs. LDEV ID: Displays the virtual LDEV ID of the volume. If the virtual LDEV ID is not assigned, a blank is displayed. Device Name: Displays the virtual device name of the volume. A virtual device name is displayed in a format with the combination of the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute. Remote Replication window 225

Among these items, only the items which are specified are displayed. If these are not specified, a blank is displayed. If the virtual CVS attribute is specified, "CVS" is added at the end.
SSID: Displays the virtual SSID of the volume. If no virtual SSID is set, a blank is displayed.
*: This item is not displayed in the initial status. To display items, change the settings of the table option in the Column Settings window. For details about the Column Settings window, see the HP XP7 Remote Web Console User Guide.

Table 26 Failure factors displayed in the Failure Factor column and meanings
Local Volume Failure: A failure is detected on a volume in the local storage system.
Remote Path Failure: A failure is detected on the remote path.
Quorum Disk Failure: A failure is detected on the quorum disk.
Internal Error: An internal error is detected.
Not Failure: The failure is not detected. However, the pair is suspended when the local storage system is turned on.
Remote Volume Failure: A failure is detected on a volume in the remote storage system.
Remote Side Unidentified Failure: A failure due to an indefinite factor is detected on a volume in the remote storage system.
Blank cell: A failure is not detected.

Related topics
Checking the status of an HA pair (page 203)
HP XP7 Continuous Access Synchronous User Guide
HP XP7 Continuous Access Synchronous for Mainframe Systems User Guide
HP XP7 Continuous Access Journal User Guide
HP XP7 Continuous Access Journal for Mainframe Systems User Guide

227 Remote Connections window Summary Connections (To) Tab Connections (From) Tab Quorum Disks Tab Summary Button Item View Port Condition Description Displays the View Port Condition window. Table Item Connections (To) Description System: Displays the number of connections from a local storage system to a remote storage system per system. CU: Displays the number of connections from a local storage system to a remote storage system per CU. Remote Storage System Displays the number of the storage systems that are connected to the local storage system. If you click the numerical value, a balloon that shows the model and serial number of the remote storage system appears. Remote Connections window 227

228 Item Connections (From) Description System: Displays the number of connections from a remote storage system to a local storage system per system. CU: Displays the number of connections from a remote storage system to a local storage system per CU. The number of the remote connections that are used by the Continuous Access Synchronous pair and the Continuous Access Synchronous Z pair is only displayed as the number of connections. Quorum Disks Displays the number of quorum disks. Connections (To) Tab Displays the information about the remote storage system (RCU). Button Item Add Remote Connection Edit Remote Connection Options View Remote Connection Properties Add Remote Paths* Remove Remote Paths* Add SSIDs* Remove SSIDs* Remove Remote Connections* Export* Description Displays the Add Remote Connection window. Displays the Edit Remote Connection Options window. Displays the View Remote Connection Properties window. Displays the Add Remote Paths window. Displays the Remove Remote Paths window. Displays the Add SSIDs window. Displays the Remove SSIDs window. Displays the Remove Remote Connections window. Opens a window for outputting the table information by clicking this button. *: This item is displayed by clicking More Actions. Table Item Connection Type Description System: Local storage systems are connected to remote storage systems in units of systems. CU: Local storage systems are connected to remote storage systems in units of CUs. Local CU Remote Storage System Displays the CU number of the local storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Path Group ID Displays the path group ID. 228 Remote Web Console GUI reference for HA

229 Item Status Description Displays the remote connection status. Normal: All remote paths of the remote connection are normal. Failed: All remote paths of the remote connection failed. Warning: Some remote paths of the remote connection failed. Number of Remote Paths Minimum Number of Paths* RIO MIH Time (sec.)* Round Trip Time (msec.)* FREEZE Option* Displays the number of remote paths. Displays the minimum number of paths. Displays the RIO MIH time (seconds). Displays the round trip time (milliseconds). Displays the FREEZE option. *: This item is not displayed in the initial status. To display items, change settings of the table option in the Column Settings window. For details about the Column Settings window, see HP XP7 Remote Web Console User Guide. Connections (From) Tab Remote Connections window 229

230 Only when the remote connection is used by the Continuous Access Synchronous pair and the Continuous Access Synchronous Z pair, displays the information about the local storage system (MCU). Button Item Export Description Opens a window for outputting the table information by clicking this button. Table Item Connection Type Description System: Remote storage systems are connected to local storage systems in units of systems. CU: Remote storage systems are connected to local storage systems in units of CUs. Local CU Remote Storage System Displays the CU number of the local storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Path Group ID Displays the path group ID. 230 Remote Web Console GUI reference for HA

231 Quorum Disks Tab Only quorum disks that are assigned to each user are displayed. Button Item Add Quorum Disks Remove Quorum Disks Export Description Displays the Add Quorum Disks window. Displays the Remove Quorum Disks window. Opens a window for outputting the table information by clicking this button. Table Item Quorum Disk ID Quorum Disk Description Displays the quorum disk ID. Displays the information about the quorum disk. LDEV ID: Displays the LDEV ID of the volume. If you click the LDEV ID, the LDEV Properties window appears. LDEV Name: Displays the LDEV name of the volume. Remote Connections window 231

232 Item Description Status: Displays the status of the volume. Normal: Normal status. Blocked: Host cannot access a blocked volume. Warning: Problem occurs in the volume. Formatting: Volume is being formatted. Preparing Quick Format: Volume is being prepared for quick formatting. Quick Formatting: Volume is being quick-formatted. Correction Access: Access attribute is being corrected. Copying: Data in the volume is being copied. Read Only: Data cannot be written on a read-only volume. Shredding: Volume is being shredded. Hyphen (-): Any status other than the above. CLPR: Displays the CLPR number of the volume. Capacity: Displays the capacity of the volume. Remote Storage System Displays the model and serial number of the remote storage system. Related topics Checking the remote connection status (page 205) View Pair Synchronous Rate window 232 Remote Web Console GUI reference for HA

233 Pairs Table Table Item Local Storage System Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays whether the volume is the primary volume or secondary volume of the pair. CLPR: Displays the CLPR number of the volume. Copy Type Status Synchronous Rate (%) Remote Storage System Displays the copy type. Displays the pair status. Displays the synchronous rate of the volume of the local storage system and the volume of the remote storage system. If the target volume is queuing, (Queuing) is displayed. Displays the information about the volume in the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. LDEV ID: Displays the LDEV ID of the volume. Path Group ID Quorum Disk ID Virtual Storage Machine Displays the path group ID. Displays the quorum disk ID. Displays the information about the volume in the virtual storage machine. Model / Serial Number: Displays the model and serial number of the virtual storage machine. LDEV ID: Displays the LDEV ID of the volume. Device Name: Displays the device name of the volume. SSID: Displays the SSID. Button Item Refresh Description Refreshes the Pairs table information. View Pair Synchronous Rate window 233

234 Related topics Checking the synchronous rate of an HA pair (page 204) View Pair Properties window Pair Properties Item Local Storage System Description Displays the information about the local storage system. LDEV ID(LDEV Name): Displays the LDEV ID and the LDEV name of the volume in the local storage system. If the LDEV name is long and abbreviated by using "...", place the cursor on the LDEV name to display a tooltip that shows the LDEV name. Number of Paths: Displays the number of paths. If you click the link, a path list appears. Capacity: Displays the capacity. Model / Serial Number, CLPR: Displays the model, serial number, and CLPR number of the local storage system. Copy Type Status Path Group Displays the copy type. Displays the pair status. Displays the path group ID of the pair. If the primary volume is in the local storage system and you click the path group ID, a remote path list appears. 234 Remote Web Console GUI reference for HA

235 Item Mirror ID Remote Storage System Description Displays the mirror ID. Displays the information about the remote storage system. LDEV ID: Displays the LDEV ID of the volume in the remote storage system. Port ID / Host Group ID / LUN ID: Displays the port name, the host group ID, and the LUN ID of the volume in the remote storage system. This information is only for identifying the LDEV ID when creating a pair. Therefore this information is not updated even if the path setting is changed at the connection destination. Capacity: Displays the capacity. Model / Serial Number: Displays the model and serial number of the remote storage system. Pair Detail Table Item Status Failure Factor Quorum Disk ID (LDEV ID) Copy Pace Paired Time Last Update Time Pair Copy Time Local Volume I/O Mode Virtual Storage Machine Description Displays the status. Displays the failure factor. Displays the quorum disk ID and the LDEV ID. Displays the copy speed. Displays the time when the pair was created. Displays the time of the last update. Displays the pair copy time. Displays the I/O mode of the volume in the local storage system. Displays the information about the virtual storage machine. Model / Serial Number: Displays the model and serial number of the virtual storage machine. LDEV ID: Displays the virtual LDEV ID of the volume. If the virtual LDEV ID is not assigned, a blank is displayed. Device Name: Displays the virtual device name of the volume. A virtual device name is displayed in a format that combines the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute. Of the virtual emulation type, the number of virtual LUSE volumes, and the virtual CVS attribute, only the specified items are displayed. If these are not specified, a blank is displayed. If the virtual CVS attribute is specified, "CVS" is added at the end. SSID: Displays the virtual SSID of the volume. If no virtual SSID is set, a blank is displayed. [Number of pages (current / number of selections)] Displays "current pair information / number of selected pairs". View Pair Properties window 235

236 Related topics Checking the status of an HA pair (page 203) View Remote Connection Properties window Remote Connection Properties Table Item Connection Type Local CU Description Displays the connection type. Displays the CU number of the local storage system. 236 Remote Web Console GUI reference for HA

237 Item Remote Storage System Path Group ID Channel Type Status Minimum Number of Paths RIO MIH Time Round Trip Time FREEZE Option Registered Time Last Update Time Number of Remote Paths Description Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Displays the path group ID. Displays the channel type. Displays the remote connection status: Normal: All remote paths of the remote connection are normal. Failed: All remote paths of the remote connection failed. Warning: Some remote paths of the remote connection failed. Cnt Ac-S/HA: Displays the minimum number of paths for Continuous Access Synchronous and HA. Cnt Ac-J: Displays the minimum number of paths for Continuous Access Journal. Displays the RIO MIH time. Displays the round trip time. Displays whether the FREEZE option is enabled. Displays the time of the registration. Displays the time of the last update. Displays the number of remote paths. Remote Paths Table Item Local Port ID Remote Port ID Status Description Displays the local port ID for the remote path. Displays the remote port ID for the remote path. Displays the remote path status. For details, see Troubleshooting related to remote path statuses (page 173). View Remote Connection Properties window 237

238 Related topics Checking the detailed status of remote connections and paths (page 205) Histories window Setting Fields Item Copy Type Last Updated Page Number Description Select a copy type. Cnt Ac-S Cnt Ac-S Z Cnt Ac-J Cnt Ac-J Z HA Displays the latest updated date and time. It you do not select the Copy Type, the date and time are not displayed. Displays the page number. If you click the button, you will go to the next/previous page. If you do not select the Copy Type, both the current page number (text box) and the total page number are not displayed. History table (when you select Cnt Ac-S or Cnt Ac-S Z ) Table Item Date and Time Local Storage System Description Displays the date and time that the operation was run. Displays the following information about the volumes in the local storage system. LDEV ID: LDEV identifier of the volume. Provisioning Type: Provisioning type of the volume. Remote Storage System Displays the following information about the volumes in the remote storage system. LDEV ID: LDEV identifier of the volume. Provisioning Type: Provisioning type of the volume. 238 Remote Web Console GUI reference for HA

Description: Displays the description of the operation. For the description of each operation, see the manuals of each program product.
Copy Time: Displays the expiration time to copy. If the Description is other than Pair Add Complete or Pair Resync. Complete, a hyphen is displayed.
Started: Displays the start time of the operation. If the Description is other than Pair Add Complete or Pair Resync. Complete, a hyphen is displayed.

Button
Export: Opens a window for outputting the table information.

History table (when you select Cnt Ac-J or Cnt Ac-J Z)

Table
Date and Time: Displays the date and time that the operation was run.
Local Storage System: Displays the following information about the volumes in the local storage system. LDEV ID: LDEV identifier of the volume. Provisioning Type: Provisioning type of the volume. Journal ID: Journal identifier. Mirror ID: Mirror identifier.
Remote Storage System: Displays the following information about the volumes in the remote storage system. LDEV ID: LDEV identifier of the volume. Provisioning Type: Provisioning type of the volume.
EXCTG ID: Displays the EXCTG identifier. This item is displayed only when Cnt Ac-J Z is selected.
Description: Displays the status of the operation. For the description of each operation, see the manuals of each program product.
Copy Time: Displays the expiration time to copy. A hyphen is displayed if an operation status other than the following is displayed: Paircreate Complete, Pairresync Complete, Add Pair Complete, Resume Pair Complete.

240 Button Item Export Description Opens a window for outputting the table information by clicking this button. History table (when you select HA.) Table Item Date and Time Local Storage System Description Displays the date and time that the operation was run. Displays the following information about the volumes in the local storage system. LDEV ID: LDEV identifier of the volume. Pair Position: Displays whether the volume is the primary volume or secondary volume of the pair. Remote Storage System Displays the following information about the volumes in the remote storage system. Model / Serial Number: The model and serial number of the remote storage system. LDEV ID: LDEV identifier of the volume. Mirror ID Quorum Disk ID Virtual Storage Machine Displays a mirror identifier. Displays a quorum disk ID. Displays the following information about the volumes in the virtual storage machine. Model / Serial Number: The model and serial number of the virtual storage system. LDEV ID: LDEV ID of the volume. Description Code Description Copy Time Displays the description code. Displays the status of the operation. See Messages displayed in Description of the Histories window (page 205). Displays the expiration time to copy. A hyphen is displayed if other than Copy Complete is displayed Button Item Export Description Opens a window for outputting the table information by clicking this button. 240 Remote Web Console GUI reference for HA

241 Related topics Checking the operation history of HA pairs (page 204) Add Remote Connection wizard Related topics Adding a remote connection (page 192) Add Remote Connection window Setting Fields Item Connection Type Description Select a connection type. The default is System. System: Local storage systems are connected to remote storage systems in units of systems. Select this to create a Continuous Access Synchronous pair, Continuous Access Journal pair, Continuous Access Journal Z pair, or HA pair. CU: Local storage systems are connected to remote storage systems in units of CUs. Select this to create a Continuous Access Synchronous Z pair. Local Storage System Item Model Serial Number Local CU Description Displays the model of the local storage system. Displays the serial number of the local storage system. Select a CU number for the local storage system between 00 and FE. Only CUs in which mainframe system volumes exist are displayed. Add Remote Connection wizard 241

242 Item Description This is displayed only if CU was selected for Connection Type. If System was selected for Connection Type, a hyphen is displayed. Remote Storage System Item Model Serial Number Remote CU SSID Add SSID Description Specify a model for the remote storage system. XP7(7) P9500(6) XP24000/XP20000(5) If a numerical value other than above is specified, a storage system to be supported in the future is assumed. In this case, the Remote Connections window shows the model in a format which a numerical value is surrounded by parenthesis (such as (255)). Specify a serial number for the remote storage system. The value you can specify varies depending on the specified model. XP7, P9500 or XP24000/XP20000: 1 to Storage system to be supported in the future: 0 to Select a CU number for the remote storage system. You can select this only if you selected CU for Connection Type. Specify an SSID for the remote storage system between 0004 and FFFE (hexadecimal). You can select this only if you selected CU for Connection Type. If two or more valid SSIDs exist, the - button appears. If you click the - button, the SSID text box is deleted. If you click the button, an SSID is added to a remote storage system. Maximum of four SSIDs can be added. This is not displayed if four SSIDs were already added. Remote Paths Item Path Group ID Minimum Number of Paths Local Port ID Remote Port ID Add Path Description Select an ID for the path group between 00 and FF. Maximum of 64 path group IDs can be registered per storage system. You can select this only if you selected System for Connection Type. Select the minimum number of paths between 1 and 8. The default setting is 1. For Continuous Access Journal or Continuous Access Journal Z, the minimum number of paths is 1 regardless of the specified number. Select a local port name. Select a remote port name. If the number of valid paths is greater than the minimum number of paths, the - button appears. If you click the - button, the local port and remote port text boxes are deleted. If you click the button, a path is added. Maximum of eight paths can be added. 242 Remote Web Console GUI reference for HA

243 Options Item RIO MIH Time Round Trip Time FREEZE Option Description Specify the RIO MIH time (the wait time for completion of data copy between storage systems) between 10 and 100. The default setting is 15. Specify the round trip time between 1 and 500. The default setting is 1. The specified time is enabled only when using a Continuous Access Synchronous pair, Continuous Access Synchronous Z pair, and HA pair. Select whether to enable or disable the support of the CGROUP (FREEZE/RUN) PPRC TSO command. Enable: The local storage system accepts and executes the CGROUP command. Disable: The local storage system rejects the CGROUP command. The FREEZE option is enabled only when using a Continuous Access Synchronous Z pair. You can select this only if you selected CU for Connection Type. Confirm window Selected Remote Connections Table Item Connection Type Local CU Description Displays the connection type. Displays the CU number of the local storage system. Add Remote Connection wizard 243

244 Item Remote Storage System Path Group ID Number of Remote Paths Minimum Number of Paths RIO MIH Time (sec.) Round Trip Time (msec.) FREEZE Option Description Displays the information about the remote storage system. Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system' Displays the path group ID. Displays the number of remote paths. Displays the minimum number of paths. Displays the RIO MIH time. Displays the round trip time. Displays whether to enable the FREEZE option. Selected Remote Paths Table Item Local Port ID Remote Port ID Description Displays the local port name. Displays the remote port name. Add Quorum Disks wizard Related topics Adding the quorum disk (page 194) Add Quorum Disks window 244 Remote Web Console GUI reference for HA

245 Setting Fields Item Quorum Disk ID Description Select a quorum disk ID between 00 and 1F. Available LDEVs Table Item LDEV ID LDEV Name CLPR Capacity Description Displays the LDEV ID of the volume. Displays the LDEV name of the volume. Displays the CLPR number of the volume. Displays the capacity of the volume. Remote Storage System Select a model and serial number for the remote storage system. Add Adds the quorum disks that were specified in the left area to the Selected Quorum Disks table. Add Quorum Disks wizard 245

246 Selected Quorum Disks Table Table Item Quorum Disk ID Quorum Disk Description Displays the quorum disk ID. Displays the information about the quorum disk. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. CLPR: Displays the CLPR number of the volume. Capacity: Displays the capacity of the volume. Remote Storage System Displays the model and serial number of the remote storage system. Button Item Remove Description Remove the selected quorum disks. 246 Remote Web Console GUI reference for HA

247 Confirm window Selected Quorum Disks Table Item Quorum Disk ID Quorum Disk Remote Storage System Description Displays the quorum disk ID. Displays the information about the quorum disk. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. CLPR: Displays the CLPR number of the volume. Capacity: Displays the capacity of the volume. Displays the model and serial number of the remote storage system. Add Quorum Disks wizard 247
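The quorum disk can also be set from RAID Manager. The sketch below assumes that the external volume intended as the quorum disk is already mapped as an LDEV on the local storage system; the LDEV ID, serial number, model ID, and quorum disk ID are placeholders, and the option names may differ by RAID Manager version, so verify them in the RAID Manager reference.

    # Sketch: assign LDEV 0x9999 as quorum disk ID 0, shared with the remote storage system.
    raidcom modify ldev -ldev_id 0x9999 -quorum_enable <serial#> <model ID> -quorum_id 0x00
    # Check the quorum disk setting on the LDEV.
    raidcom get ldev -ldev_id 0x9999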

248 Assign HA Reserves window Selected LDEVs table Item LDEV ID Virtual Storage Machine Description Displays the LDEV identifier of the volume. Displays the model and serial number of the virtual storage machine to which the volume belongs. 248 Remote Web Console GUI reference for HA
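In RAID Manager, the corresponding operation reserves the virtual LDEV ID of the volume that will become the HA secondary volume. The command below is a sketch only; the LDEV ID is a placeholder, and depending on how virtual storage machines and resource groups are set up, additional steps from the RAID Manager reference may be required.

    # Sketch: set the HA reservation attribute on LDEV 0x5555 in the remote storage system
    # so that it can be used as an HA secondary volume.
    raidcom map resource -ldev_id 0x5555 -virtual_ldev_id reserve
    # Confirm that the virtual LDEV ID of the volume is now reserved.
    raidcom get ldev -ldev_id 0x5555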

249 Related topics Assigning the HA reservation attribute (page 195) Create HA Pairs wizard Related topics Creating HA pairs (page 196) Create HA Pairs window Create HA Pairs wizard 249

250 Setting Fields Item Remote Storage System Description Select a remote storage system. Model / Serial Number: Select a model and a serial number. Path Group ID: Select a path group ID. Primary Volume Selection Item LU Selection Description Select an LU in the local storage system. Port ID: Select a port number. Host Group Name: Select a host group name. If you choose Any, the Available LDEVs table displays all LUNs in the specified port. Available LDEVs Table Item Port ID Host Group Name Description Displays the port name. Displays the host group name. 250 Remote Web Console GUI reference for HA

251 Item LUN ID LDEV ID LDEV Name Capacity CLPR Description Displays the LUN ID. Displays the LDEV ID. Displays the LDEV name. Displays the volume capacity. Displays the CLPR number. Secondary Volume Selection Item Base Secondary Volume Selection Type Description Displays the information about the base secondary volume. Port ID: Select a port name. Host Group ID: Select a host group ID. LUN ID: Select a LUN ID. Select a selection type. The default is Interval. Interval: Select an interval for allocating the secondary volume. Relative Primary Volume: Calculates the difference between the LUNs of neighboring primary volumes and uses the result to choose the secondary volume LUNs. For example, assume that the LUNs of three primary volumes are 1, 5, and 6. In this case, if you specify 2 for LUN ID in Base Secondary Volume, the LUNs of the three secondary volumes will be 2, 6, and 7 respectively. Mirror ID Select a mirror ID to be assigned to the pair. Quorum Disks Select a quorum disk ID to be assigned to the pair. Options Item Initial Copy Type Copy Pace Description Select an initial copy type. The default is Entire. Entire: Creates a pair, and copies data from the primary volume to the secondary volume. None: Creates a pair, but data is not copied from the primary volume to the secondary volume. To select None, make sure that the primary volume and the secondary volume are identical. Specify the maximum number of tracks to be copied in a single remote I/O. The default is 15. A pace of 1 through 5 is slow and reduces the impact on host I/O. A pace of 5 through 10 is medium. A pace of 11 through 15 is fast, and host I/O performance might be degraded. Add Adds the pairs that were specified in the left area to the Selected Pairs table. Create HA Pairs wizard 251
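When HA pairs are created from RAID Manager instead of this wizard, the quorum disk ID and copy pace are supplied on the paircreate command line. The sketch below assumes a device group named oraHA already defined in the HORCM configuration files on both servers; the group name, copy pace, and quorum disk ID are example values, and the exact options should be checked in the RAID Manager reference.

    # Sketch: create the HA pairs defined in device group oraHA from the primary side,
    # with fence level never, copy pace 15, and quorum disk ID 0.
    paircreate -g oraHA -f never -vl -jq 0 -c 15
    # Monitor the initial copy until the pairs reach PAIR status.
    pairdisplay -g oraHA -fcx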

252 Selected Pairs Table Table Item Local Storage System Description LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name. Port ID: Displays the port name of the volume. Host Group Name: Displays the host group name of the volume. LUN ID: Displays the LUN ID of the volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. Remote Storage System Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. Port ID: Displays the port name. 252 Remote Web Console GUI reference for HA

253 Item Description Host Group ID: Displays the host group ID. LUN ID: Displays the LUN ID. Path Group ID Mirror ID Quorum Disk ID Initial Copy Type Copy Pace Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Displays the initial copy type. Displays the maximum number of tracks to be copied in a single remote I/O. Button Item Change Settings Remove Description Displays the Change Settings window. Removes the selected pairs. Change Settings window Create HA Pairs wizard 253

254 Setting Fields Item Base Secondary Volume Description Select the check box to change the settings for the base secondary volume. Continuous Access Synchronous pair, Continuous Access Journal pair, or HA pair: Port ID: Port identifier. Host Group ID: Host group identifier. LUN ID: LUN identifier. Note: If the LUN ID is displayed as a decimal number in Remote Web Console of the local storage system, enter a decimal number. If the LUN ID is displayed as a hexadecimal number in Remote Web Console of the local storage system, enter a hexadecimal number. XP24000/XP20000 Disk Array and P9500 display the LUN ID in hexadecimal format in Remote Web Console. If the decimal format is applied to the LUN ID of the local storage system, enter the LUN ID after converting it from hexadecimal format to decimal format. For details about changing the notation of the LUN ID in Remote Web Console, see the HP XP7 Remote Web Console User Guide. When the pair is a Continuous Access Synchronous Z pair or a Continuous Access Journal Z pair: LDKC: Displays "00". This value cannot be changed. CU: For Continuous Access Synchronous Z, the CU number of the volume in the remote storage system. For Continuous Access Journal Z, the CU number of the remote storage system, between 00 and FE. LDEV: LDEV number, between 00 and FF. Interval: Select the interval. Primary Volume Fence Level Initial Copy Type Copy Pace Select the fence level. None: Write operations to the primary volume are allowed even if the pair is split. Data: Write operations to the primary volume are not allowed if the update copy fails. Status: Write operations to the primary volume are not allowed only if the storage system at the primary site cannot change the pair status of the secondary volume to PSUE (in the case of Continuous Access Synchronous) or to Suspend (in the case of Continuous Access Synchronous Z). This item is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Select the type of the pair create operation. Entire Volume: After the pair creation, data is copied from the primary volume to the secondary volume. None: A pair is created but data is not copied from the primary volume to the secondary volume. If you select None, make sure that the primary volume and the secondary volume are identical. Delta: A pair is created but the initial copy operation is not performed. The status of the created pair for the delta resync operation in Continuous Access Journal is HOLD or HOLDING. The status of the created pair for the delta resync operation in Continuous Access Journal Z is Hold or Holding. You can select this item only when you change the setting of a Continuous Access Journal pair or Continuous Access Journal Z pair. Enter the maximum number of tracks to be copied per remote I/O. For Continuous Access Synchronous, enter a number between 1 and 15.

255 Item Description For Continuous Access Synchronous Z, select 3 or 15 in the list. The default setting is 15. 1-5: Slow pace. These values are appropriate for reducing the impact on host I/O. 5-10: Medium pace. 11-15: High pace. Host I/O performance might be degraded. This item is displayed only when the pair is a Continuous Access Synchronous pair, a Continuous Access Synchronous Z pair, or an HA pair. Initial Copy Priority CFW data DFW to Secondary Volume Host I/O Time Stamp Transfer Error Level CFW Enter the priority of the pair creation operation as a decimal number from 1 to 256. This item is not displayed when the pair is an HA pair. Select one of the following options. Primary Volume Only: The cache fast write (CFW) data is not copied to the secondary volume. Secondary Volume Copy: The cache fast write (CFW) data is copied to the secondary volume. This item is displayed only when the pair is a Continuous Access Synchronous Z pair. Select one of the following options for when the storage system at the secondary site cannot copy the DFW data to the secondary volume. Not Require: The storage system at the primary site does not split the Continuous Access Synchronous Z pair. Require: The storage system at the primary site splits the Continuous Access Synchronous Z pair. This item is displayed only when the pair is a Continuous Access Synchronous Z pair. The combination of the DFW setting and the primary volume fence level can cause a permanent I/O error in the host application when you update the primary volume. For a pair whose DFW is set to Require, confirm that DFW to the secondary volume is not blocked. The IBM PPRC command does not support the DFW to Secondary Volume option. If you create the pair with the CESTPAIR TSO command, DFW to Secondary Volume is set to Not Require. Select one of the following options. The default setting is Disable. Enable: The time stamp of the host is transferred to the secondary volume. Disable: The time stamp of the host is not transferred to the secondary volume. This item is displayed only when the pair is a Continuous Access Synchronous Z pair. Select one of the following options to specify the scope of the pair split operation when a failure occurs. LU: Only the failed pair is split. You can select this option when the pair is a Continuous Access Journal pair. Mirror: All pairs in the same mirror as the failed pair are split. You can select this option when the pair is a Continuous Access Journal pair or a Continuous Access Journal Z pair. Volume: Only the failed pair is split. You can select this option when the pair is a Continuous Access Journal Z pair. This item is displayed when the pair is a Continuous Access Journal pair or a Continuous Access Journal Z pair. Select one of the following options. Primary Volume Only: The cache fast write (CFW) data is not copied to the secondary volume. Secondary Volume Copy: The cache fast write (CFW) data is copied to the secondary volume. This item is displayed only when the pair is a Continuous Access Journal Z pair. Create HA Pairs wizard 255

256 Confirm window Selected Pairs Table Item Local Storage System Remote Storage System Path Group ID Mirror ID Quorum Disk ID Initial Copy Type Copy Pace Description Displays the information about the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name. Port ID: Displays the port name of the volume. Host Group Name: Displays the host group name of the volume. LUN ID: Displays the LUN ID of the volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. Port ID: Displays the port name. Host Group ID: Displays the host group ID. LUN ID: Displays the LUN ID. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Displays the initial copy type. Displays the maximum number of tracks to be copied in a single remote I/O. 256 Remote Web Console GUI reference for HA

257 Suspend Pairs window Selected Pairs Table Item Local Storage System Copy Type Status Remote Storage System Path Group ID Mirror ID Quorum Disk ID Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays whether the volume is the primary volume or the secondary volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Displays the copy type. Displays the pair status. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. LDEV ID: Displays the LDEV ID of the volume. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Suspend Pairs window 257
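The same suspend operation can be issued from RAID Manager with pairsplit. In the sketch below, oraHA is a placeholder for a device group defined in your HORCM configuration.

    # Sketch: suspend the HA pairs in device group oraHA (issued from the primary side).
    pairsplit -g oraHA
    # Confirm the new pair status and I/O mode.
    pairdisplay -g oraHA -fcx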

258 Related topics Suspending HA pairs (page 198) Resync Pairs wizard Related topics Resynchronizing HA pairs (page 198) Resync Pairs window Selected Pairs Table Item Local Storage System Copy Type Status Remote Storage System Path Group ID Mirror ID Quorum Disk ID Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays the pair position of the volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Displays the copy type. Displays the pair status. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. LDEV ID: Displays the LDEV ID of the volume. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. 258 Remote Web Console GUI reference for HA

259 Setting Fields Item Copy Pace Description Specify the maximum number of tracks to be copied in a single remote I/O, between 1 and 15. Confirm window Selected Pairs Table Item Local Storage System Copy Type Status Copy Pace Remote Storage System Path Group ID Mirror ID Quorum Disk ID Description Displays the information about the volume in the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays the pair position of the volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Displays the copy type. Displays the pair status. Displays the maximum number of tracks to be copied in a single remote I/O. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. LDEV ID: Displays the LDEV ID of the volume. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Resync Pairs wizard 259
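From RAID Manager, the equivalent resynchronization uses pairresync, which also accepts a copy pace. The group name and pace below are example values.

    # Sketch: resynchronize the suspended HA pairs in device group oraHA
    # with a copy pace of 15 tracks per remote I/O.
    pairresync -g oraHA -c 15
    # Wait until the pairs return to PAIR status.
    pairdisplay -g oraHA -fcx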

260 Delete Pairs wizard Related topics Deleting HA pairs (page 199) Delete Pairs window Selected Pairs Table Item Local Storage System Copy Type Status Remote Storage System Path Group ID Mirror ID Quorum Disk ID Description Displays the information about the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays whether the volume is the primary volume or the secondary volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Displays the copy type. Displays the pair status. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. LDEV ID: Displays the LDEV ID of the volume in the remote storage system. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. 260 Remote Web Console GUI reference for HA
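A normal (non-forced) pair deletion corresponds in RAID Manager to pairsplit with the -S option; the group name below is a placeholder. The Force delete mode and the Volume Access setting described in the Setting Fields below are Remote Web Console operations, so use this wizard for those cases.

    # Sketch: delete (dissolve) the HA pairs in device group oraHA.
    pairsplit -g oraHA -S
    # The volumes return to simplex; confirm with pairdisplay.
    pairdisplay -g oraHA -fcx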

261 Setting Fields Item Delete Mode Volume Access Description Specify a deletion mode. Normal: The selected pair is deleted. Force: The specified pair is forcibly deleted. If you select Force, the pair is deleted even if the local storage system cannot communicate with the remote storage system. A server that is waiting for the device-end (I/O completion signal) from a local storage system that cannot communicate with the remote storage system is released, so you can continue to operate the server. Note: You can select Force only when the I/O mode of both the primary volume and the secondary volume is Block. If you want to forcibly delete the HA pair when the I/O mode is other than Block, contact HP for assistance. Note: If you select Force, the HA pair must be removed from the storage system at both the primary site and the secondary site. If the server can access both volumes and you forcibly delete the pair with the Enable option specified for Volume Access, data inconsistency might occur because the contents of the two volumes do not match. Therefore, delete the pair according to the following procedure. 1. Stop access from the server to one volume. 2. For the volume to which server access has been stopped, forcibly delete the pair with the Disable option specified for Volume Access. 3. For the volume that the server continues to access, forcibly delete the pair with the Enable option specified for Volume Access. Specify one of the following options. Enable: The virtual LDEV ID of the volume in the local storage system is retained, so the server can access the volume even after the pair is deleted. Disable: The virtual LDEV ID of the volume in the local storage system is deleted, so the server cannot access the volume after the pair is deleted. The reservation attribute is set for the secondary volume. Confirm window Delete Pairs wizard 261

262 Selected Pairs Table Item Local Storage System Copy Type Status Delete Mode Volume Access Remote Storage System Path Group ID Mirror ID Quorum Disk ID Description Displays the information about the local storage system. LDEV ID: Displays the LDEV ID of the volume. LDEV Name: Displays the LDEV name of the volume. Pair Position: Displays whether the volume is the primary volume or the secondary volume. Capacity: Displays the capacity of the volume. CLPR: Displays the CLPR number of the volume. I/O Mode: Displays the I/O mode of the volume. Displays the copy type. Displays the pair status. Displays the deletion mode. Displays the volume access. Displays the information about the remote storage system. Model / Serial Number: Displays the model and the serial number. LDEV ID: Displays the LDEV ID of the volume in the remote storage system. Displays the path group ID. Displays the mirror ID. Displays the quorum disk ID. Edit Remote Replica Options wizard Related topics Editing remote replica options (page 207) Edit Remote Replica Options window 262 Remote Web Console GUI reference for HA

263 Setting Fields Item Copy Type Description Select a copy type. Cnt Ac-S/Cnt Ac-S Z Cnt Ac-J/Cnt Ac-J Z HA Storage System Options This is not displayed if HA was specified for Copy Type. Item Maximum Initial Copy Activities Path Blockade Watch Path Blockade SIM Watch Services SIM of Remote Copy Description Enter the maximum initial copy activities. For a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair, specify a value between 1 and 512. For a Continuous Access Journal pair or Continuous Access Journal Z pair, specify a value between 1 and 128. Specify the path blockade watch between 2 and 45. This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Specify the path blockade SIM watch between 2 and 100. This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Select whether to report the services SIM of remote copy. Report No Report This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. CU Options This is not displayed if HA was specified for Copy Type. Item Maximum Initial Copy Activities Description Select whether to enable the maximum initial copy activities. Enable Disable This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. CUs Table This is not displayed if HA was specified for Copy Type. Table Item CU Maximum Initial Copy Activities Description Displays the CU number. Displays the maximum initial copy activities. If Disable was selected for Maximum Initial Copy Activities above the table, a hyphen is displayed. Edit Remote Replica Options wizard 263

264 Item Description This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. PPRC Support Services SIM Displays whether PPRC is supported. This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Displays whether to report by the services SIM of the remote copy. Button Item Change CU Options Description Displays the Change CU Options window. Maximum Initial Copy Activities Specify the maximum initial copy activities between 1 and 512. This is displayed only if HA was selected for Copy Type. Confirm window Cnt Ac-S/Cnt Ac-S Z Storage System Options table This is displayed if Cnt Ac-S/Cnt Ac-S Z was specified for Copy Type. Item Maximum Initial Copy Activities Path Blockade Watch (sec.) Path Blockade SIM Watch (sec.) Services SIM Description Displays the maximum initial copy activities. Displays the path blockade watch. Displays the path blockade SIM watch. Displays the services SIM. 264 Remote Web Console GUI reference for HA

265 Cnt Ac-J/Cnt Ac-J Z Storage System Options table This is displayed if Cnt Ac-J/Cnt Ac-J Z was specified for Copy Type. Item Maximum Initial Copy Activities Description Displays the maximum initial copy activities. HA Storage System Options table This is displayed if HA was specified for Copy Type. Item Maximum Initial Copy Activities Description Displays the maximum initial copy activities. CU Options Table This is not displayed if HA was specified for Copy Type. Item CU Maximum Initial Copy Activities PPRC Support Services SIM Description Displays the CU number. Displays the maximum initial copy activities. This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Displays the PPRC support. This is displayed only for a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Displays the services SIM. Edit Remote Replica Options wizard 265

266 Remove Quorum Disks window Selected Quorum Disks Table Item Quorum Disk ID Quorum Disk Remote Storage System Description Displays the quorum disk ID. Displays the information about the quorum disk. LDEV ID: Displays the LDEV ID of the quorum disk. LDEV Name: Displays the LDEV name of the quorum disk. CLPR: Displays the CLPR number of the quorum disk. Capacity: Displays the capacity of the quorum disk. Displays the model and serial number of the remote storage system. 266 Remote Web Console GUI reference for HA
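If the configuration is managed from RAID Manager, the corresponding operation releases the quorum disk setting on the LDEV after all HA pairs that use it have been deleted. The LDEV ID below is a placeholder, and the option name should be verified against the RAID Manager reference for your version.

    # Sketch: release the quorum disk setting from LDEV 0x9999.
    raidcom modify ldev -ldev_id 0x9999 -quorum_disable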

267 Related topics Removing quorum disks (page 208) Force Delete Pairs (HA Pairs) window Selected LDEVs Table Item LDEV ID LDEV Name Capacity CLPR Description Displays the LDEV ID of the volume. Displays the LDEV name. Displays the capacity. Displays the CLPR number. Force Delete Pairs (HA Pairs) window 267

268 Related topics Forcibly deleting HA pairs (for nonpaired volumes) (page 202) Edit Remote Connection Options wizard Related topics Editing remote connection options (page 209) Edit Remote Connection Options window Setting Fields Item Minimum Number of Paths RIO MIH Time Round Trip Time FREEZE Option Description Select the check box, and then specify the minimum number of paths. For Continuous Access Journal or Continuous Access Journal Z, the minimum number of paths is 1 regardless of the specified number. Select the check box, and then specify the RIO MIH time between 10 and 100 seconds. The default is 15. Select the check box, and then specify the round trip time between 1 and 500 milliseconds. The default is 1. The specified time is enabled only when using a Continuous Access Synchronous pair or Continuous Access Synchronous Z pair. Select whether to enable or disable the support of the CGROUP (FREEZE/RUN) PPRC TSO command. Enable: The local storage system accepts and executes the CGROUP command. Disable: The local storage system rejects the CGROUP command. The FREEZE option is available only when using a Continuous Access Synchronous Z pair. This is displayed only if Connection Type for the remote connection is CU. 268 Remote Web Console GUI reference for HA
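RAID Manager can change the same remote connection options with raidcom modify rcu. The sketch below mirrors the default values shown above; the serial number, model ID, and path group ID are placeholders, and the order and meaning of the -rcu_option arguments (minimum number of paths, RIO MIH time, round trip time) should be confirmed in the RAID Manager reference before use.

    # Sketch: set minimum number of paths 1, RIO MIH time 15 s, and round trip time 1 ms
    # for the remote connection to <serial#> in path group 0.
    raidcom modify rcu -cu_free <serial#> <model ID> 0 -rcu_option 1 15 1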

269 Confirm window Selected Remote Connection Table Item Connection Type Local CU Description Displays the connection type. Displays the CU number of the local storage system. Remote Storage System Model / Serial Number:Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Path Group ID Number of Remote Paths Minimum Number of Paths RIO MIH Time (sec.) Round Trip Time (msec.) FREEZE Option Displays the path group ID. Displays the number of remote paths. Displays the minimum number of paths. Displays the RIO MIH time. Displays the round trip time. Displays whether to enable the FREEZE option. Edit Remote Connection Options wizard 269

270 Add Remote Paths wizard Related topics Adding remote paths (page 209) Add Remote Paths window Local Storage System Item Model Serial Number Local CU Description Displays the model of the local storage system. Displays the serial number of the local storage system. Displays the CU number of the local storage system. For a system connection, a hyphen is always displayed. Remote Storage System Item Model Serial Number Remote CU SSID Description Displays the model of the remote storage system. Displays the serial number of the remote storage system. Displays the CU number of the remote storage system. For a system connection, a hyphen is always displayed. Displays the SSID of the remote storage system. For a system connection, a hyphen is always displayed. 270 Remote Web Console GUI reference for HA

271 Remote Paths Item Path Group ID Minimum Number of Paths Local Port ID Remote Port Name Add Path Description Displays the path group ID. For a CU connection, a hyphen is always displayed. Displays the minimum number of paths. Select a local port name. Select a remote port name. If the number of valid paths is greater than the minimum number of paths, the - button appears. If you click the - button, the local port and remote port text boxes are deleted. If you click the + button, a path is added. A maximum of eight paths can be added. Confirm window Selected Remote Connection Table Item Connection Type Local CU Description Displays the connection type. Displays the CU number of the local storage system. Remote Storage System Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Path Group ID Displays the path group ID. Add Remote Paths wizard 271
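An additional remote path can likewise be added from RAID Manager with raidcom add rcu_path; the serial number, model ID, path group ID, and port names below are placeholders.

    # Sketch: add a remote path from local port CL3-A to remote port CL4-A
    # to the existing remote connection in path group 0.
    raidcom add rcu_path -cu_free <serial#> <model ID> 0 -mcu_port CL3-A -rcu_port CL4-A
    # Display the remote connection to confirm the new path.
    raidcom get rcu -cu_free <serial#> <model ID> 0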

272 Item Number of Remote Paths Minimum Number of Paths Description Displays the number of remote paths. The value is the sum of the number of existing paths and the number of paths to be added. Displays the minimum number of paths. Selected Remote Paths Table Item Local Port ID Remote Port ID Description Displays the local port name. Displays the remote port name. Remove Remote Paths wizard Related topics Removing remote paths (page 210) Remove Remote Paths window Local Storage System Item Model Serial Number Local CU Description Displays the model of the local storage system. Displays the serial number of the local storage system. Displays the CU number of the local storage system. For a system connection, a hyphen is always displayed. 272 Remote Web Console GUI reference for HA

273 Remote Storage System Item Model Serial Number Remote CU SSID Description Displays the model of the remote storage system. Displays the serial number of the remote storage system. Displays the CU number of the remote storage system. For a system connection, a hyphen is always displayed. Displays the SSID of the remote storage system. For a system connection, a hyphen is always displayed. Remote Paths Item Path Group ID Minimum Number of Paths Local Port ID Remote Port ID Remove Description Displays the path group ID. For a CU connection, a hyphen is always displayed. Displays the minimum number of paths. Displays the local port name. Information about the added path is displayed. Displays the remote port name. Information about the added path is displayed. Select the check box of a path to be deleted from the remote connection. Confirm window Remove Remote Paths wizard 273

274 Selected Remote Connection Table Item Connection Type Local CU Description Displays the connection type. Displays the CU number of the local storage system. Remote Storage System Model / Serial Number: Displays the model and serial number of the remote storage system. CU: Displays the CU number of the remote storage system. SSID: Displays the SSID of the remote storage system. Path Group ID Number of Remote Paths Minimum Number of Paths Displays the path group ID. Displays the number of remote paths. The displayed value is the number of existing paths minus the number of paths to be deleted. Displays the minimum number of paths. Selected Remote Paths Table Item Local Port ID Remote Port ID Description Displays the local port name. Displays the remote port name. Remove Remote Connections window 274 Remote Web Console GUI reference for HA
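For reference, the RAID Manager counterparts of removing a remote path and removing the remote connection itself are sketched below, with the same placeholders as above. Remove the remote connection only after all HA pairs and the quorum disk that depend on it have been deleted.

    # Sketch: delete one remote path, then delete the remote connection in path group 0.
    raidcom delete rcu_path -cu_free <serial#> <model ID> 0 -mcu_port CL3-A -rcu_port CL4-A
    raidcom delete rcu -cu_free <serial#> <model ID> 0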


More information

QuickSpecs. Models ProLiant Cluster F200 for the Entry Level SAN. Overview

QuickSpecs. Models ProLiant Cluster F200 for the Entry Level SAN. Overview Overview The is designed to assist in simplifying the configuration of cluster solutions that provide high levels of data and applications availability in the Microsoft Windows Operating System environment

More information

Hitachi ShadowImage for Mainframe

Hitachi ShadowImage for Mainframe Hitachi ShadowImage for Mainframe User Guide Hitachi Virtual Storage Platform G1000 and G1500 Hitachi Virtual Storage Platform F1500 MK-92RD8020-11 March 2017 2014, 2017 Hitachi, Ltd. All rights reserved.

More information

HPE 3PAR OS MU3 Patch 97 Upgrade Instructions

HPE 3PAR OS MU3 Patch 97 Upgrade Instructions HPE 3PAR OS 3.2.2 MU3 Patch 97 Upgrade Instructions Abstract This upgrade instructions document is for installing Patch 97 on the HPE 3PAR Operating System Software. This document is for Hewlett Packard

More information

EMC VPLEX Geo with Quantum StorNext

EMC VPLEX Geo with Quantum StorNext White Paper Application Enabled Collaboration Abstract The EMC VPLEX Geo storage federation solution, together with Quantum StorNext file system, enables a global clustered File System solution where remote

More information

HP integrated Citrix XenServer Online Help

HP integrated Citrix XenServer Online Help HP integrated Citrix XenServer Online Help Part Number 486855-002 September 2008 (Second Edition) Copyright 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to

More information

G200, G400, G600, G800

G200, G400, G600, G800 Hitachi ShadowImage User Guide Hitachi Virtual Storage Platform G200, G400, G600, G800 Hitachi Virtual Storage Platform F400, F600, F800 Product Version Getting Help Contents MK-94HM8021-04 May 2016 2015,

More information

QuickSpecs. What's New. Models. Overview

QuickSpecs. What's New. Models. Overview Overview The HP Smart Array P400 is HP's first PCI-Express (PCIe) serial attached SCSI (SAS) RAID controller and provides new levels of performance and reliability for HP servers, through its support of

More information

HP Service Manager. Process Designer Tailoring Best Practices Guide (Codeless Mode)

HP Service Manager. Process Designer Tailoring Best Practices Guide (Codeless Mode) HP Service Manager Software Version: 9.41 For the supported Windows and UNIX operating systems Process Designer Tailoring Best Practices Guide (Codeless Mode) Document Release Date: September 2015 Software

More information

Configuring RAID with HP Z Turbo Drives

Configuring RAID with HP Z Turbo Drives Technical white paper Configuring RAID with HP Z Turbo Drives HP Workstations This document describes how to set up RAID on your HP Z Workstation, and the advantages of using a RAID configuration with

More information

StarWind Virtual SAN. HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2. One Stop Virtualization Shop MARCH 2018

StarWind Virtual SAN. HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2. One Stop Virtualization Shop MARCH 2018 One Stop Virtualization Shop StarWind Virtual SAN HyperConverged 2-Node Scenario with Hyper-V Cluster on Windows Server 2012R2 MARCH 2018 TECHNICAL PAPER Trademarks StarWind, StarWind Software and the

More information