HPE 3PAR Peer Motion and HPE 3PAR Online Import User Guide


Abstract

This guide provides information about HPE 3PAR products that provide unidirectional data migration to HPE 3PAR StoreServ Storage systems as well as bidirectional data mobility between HPE 3PAR StoreServ Storage systems.

Part Number: QL
Published: May 2018

Copyright 2015, 2018 Hewlett Packard Enterprise Development LP

Notices

The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.

Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not responsible for information outside the Hewlett Packard Enterprise website.

Acknowledgments

Intel, Itanium, Pentium, Intel Inside, and the Intel Inside logo are trademarks of Intel Corporation in the United States and other countries.

Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

Adobe and Acrobat are trademarks of Adobe Systems Incorporated.

Java and Oracle are registered trademarks of Oracle and/or its affiliates.

UNIX is a registered trademark of The Open Group.

Contents

What's New in this Edition
Getting Started with HPE 3PAR data migration and data mobility products
Overview of HPE 3PAR Data Migration and Data Mobility Products
Choosing the right HPE 3PAR data migration and data mobility product
Data migration types
Concurrent migration limitations
Volume compression
HPE 3PAR Peer Motion with bidirectional multi-array federated storage
Guidelines
Requirements
Restrictions and limitations
Authorization
Storage Federation Topology
3PAR Peer Motion in multi-array bidirectional configuration
Zoning
Zoning topologies
Zoning requirements for unidirectional and bidirectional data mobility
Zoning configuration
Setting up and configuring a Storage Federation
SSMC System Selector
Federation preparation for data mobility
Recommended settings for Federated Systems
Copying settings from existing systems
Resolving Import Federation activity errors
Synchronizing Federations
Resolving Sync Federation conflicts
Performing Peer Motion with the SSMC
Managing Peer Motion from the SSMC
Monitoring Peer Motion workflow in Federations
Postmigration tasks
Performing fabric topology postmigration tasks
Performing volume postmigration tasks
Performing Remote Copy postmigration tasks
Host environments for multi-array bidirectional Peer Motion
Microsoft Windows Host operating system
Microsoft failover clusters
Linux
VMware ESXi
IBM AIX
HP-UX
HPE 3PAR Peer Motion with unidirectional data mobility between HPE 3PAR StoreServ Storage systems
HPE 3PAR to 3PAR data mobility with HPE 3PAR Peer Motion
Data migration requirements
3PAR Peer Motion General Requirements and Restrictions
Requirements and restrictions in Federation environments
Network and fabric zoning requirements for 3PAR Peer Motion
Requirements for multiple source arrays
ALUA and Path STATE change detection requirements
Premigration constraints
Adding a migration source to a Federation
Using the HPE 3PAR Peer Motion Utility
Performing premigration tasks before installing the 3PAR Peer Motion utility
System requirements for installing the 3PAR Peer Motion utility
Installing the 3PAR Peer Motion Utility on a Windows system
Installing the 3PAR Peer Motion Utility on a Linux system
Adding users to groups
Verifying 3PAR Peer Motion Utility service
Launching the 3PAR Peer Motion Utility
3PAR Peer Motion Utility workflow
Performing online migration
Performing minimally disruptive migration (MDM)
Performing offline migration
Migrating virtual volume sets and host sets
Consistency Groups management
Prioritization when migrating volumes or volume sets
Using the autoresolve option to resolve LUN conflicts
Postmigration tasks
Uninstalling the 3PAR Peer Motion Utility
Host environments for unidirectional Peer Motion
Microsoft Windows Host operating system
Microsoft failover clusters
Linux
VMware ESXi
IBM AIX
HP-UX
Solaris
Symantec/Veritas Storage Foundation requirements
3PAR Online Import with unidirectional data migration from third-party storage systems to a 3PAR StoreServ Storage system
Overview of the 3PAR Online Import Utility
The third-party data migration process
Supported migrations and requirements
Migration process checklists
Phase I: Preparing for data migration
General considerations
System requirements
Migration planning
Preparing clusters for migration
Reconfiguring the host multipath solution
Configuration rules for 3PAR Online Import support
Installing and configuring the 3PAR Online Import Utility
Upgrading the 3PAR Online Import Utility from 2.0
SMI-S provider installation and configuration
Phase II: Premigration
Network and fabric zoning requirements for 3PAR Online Import
Requirements for multisource array migration
Identifying the source and destination storage systems
Zoning the source storage system to the destination 3PAR StoreServ Storage system
Zoning host(s) to the destination storage system
Required prerequisite information
The createmigration command process
Phase III: Migration
Issuing the startmigration command (online migration)
Issuing the startmigration command (MDM)
Issuing the startmigration command (offline migration)
Phase IV: Postmigration tasks
Performing online migration and MDM
Performing postmigration tasks in VMware ESXi environments
Removing an EMC Storage system from EMC SMI-S provider
Removing HDS Storage system from the HiCommand Suite
Aborting a migration
Falling back to the source array after a failed or aborted migration on Oracle RAC clusters
Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC
Data migration to legacy 3PAR and IBM XIV systems
Data migration from legacy 3PAR systems
Data migration from IBM XIV
Troubleshooting
Troubleshooting resources
3PAR Peer Motion Utility and 3PAR Online Import Utility error and resolution messages
3PAR Online Import Utility troubleshooting
Utility logs
Troubleshooting communication between the 3PAR Online Import Utility and an EMC Storage source array
Troubleshooting issues
Cleaning up and recovering from a failed data migration
SSMC temporarily displays peer volume as RAID level 0
Cannot add a source storage system
Cannot add a source or destination storage system with the 3PAR Peer Motion Utility
Cannot connect to the 3PAR Peer Motion Utility
Cannot create a migration
Cannot log in to the HPE 3PAR Online Import Utility
Cannot validate certificate for a 3PAR StoreServ Storage system with the 3PAR Peer Motion Utility
Migration from multiple EMC Storage VMAX or DMX4 systems includes unexpected LUNs and hosts
For EMC VNX and CX4 storage controllers, the HPE 3PAR Peer Port HBA initiators must be set to failovermode 4 (Active/Active)
Powering on or migrating a VMware ESX virtual machine fails after a successful migration
3PAR Online Import Utility does not open in Windows
3PAR Peer Motion Utility loses communication with the 3PAR StoreServ Storage
3PAR Peer Motion Utility cannot reach a source or destination storage system or does not load data on time
Admitting or importing a volume does not succeed
Trailing spaces in IP address return login error
VMware datastore errors following a successful migration
The adddestination command returns error OIUERRDST
The addsource command for HDS Storage fails
The createmigration command fails for HDS storage with host "HCMDxxxx not found" error
The createmigration command fails for migrated LUNs in a storage group or host group with no host
The createmigration command fails for LUN name or host name containing special characters
The createmigration is unsuccessful with the 3PAR Peer Motion Utility
The createmigration command returns error OIUERRAPP0000 or OIURSLAPP
The createmigration command returns error OIUERRDB
The createmigration command returns error OIUERRPREP
The createmigration command returns error OIUERRCS
The createmigration command with -srcvolmap returns error OIUERRAPP
The createmigration -hostset command returns error OIUERRDST
The createmigration -vvset command succeeds but the data migration job stays indefinitely in the preparing state
The 3PAR Online Import Utility stalls without an error message
The showmigration command returns error OIUERRDST
The showtarget command does not return HDS Storage details
The startmigration command fails with host name that exceeds 31 characters
The startmigration task fails without generating an appropriate error message
3PAR Peer Motion Utility or 3PAR Online Import Utility CLI returns error OIUERRMS
Preventing VMFS datastore loss in VMware ESXi
Reference
3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands
Command usage guidelines
Using read-only commands
Commands Quick Reference
Command descriptions
Data migration with 3PAR Remote Copy group
3PAR Peer Motion requirements
Volume migration in a Remote Copy Primary group
Volume migration in a Remote Copy secondary group
Performing data migration with 3PAR Peer Persistence relationship
Identifying and deleting source array LUN paths
Identifying and deleting source array LUN paths with Linux Native Device-Mapper Multipath
Identifying and deleting source array LUN paths with HP-UX 11 v3 on HDS Storage
Identifying and deleting source array LUN paths with ESX
Guidelines for rolling back to the original source array
Clearing a SCSI reservation
Clearing a SCSI reservation with 3PAR OS MU3 or later
Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage
Clearing a SCSI reservation after an incomplete migration with 3PAR OS MU1 or MU2 on HDS Storage
Data migration for an Oracle RAC cluster use case
Migrating source array 1 to 3PAR StoreServ Storage
Migrating source array 2 to 3PAR StoreServ Storage
Websites
Support and other resources
Accessing Hewlett Packard Enterprise Support
Accessing updates
Customer self repair
Remote support
Warranty information
Regulatory information
Documentation feedback

What's New in this Edition

This edition has been updated to describe HPE 3PAR Peer Motion Utility 2.2 and HPE 3PAR Online Import Utility 2.2, which provide the following enhancements:

A new showpremigration command that lists and describes both common and source storage system-specific prerequisites that must be met before starting a data migration (see the example after this list).

Enhanced validations and exception handling to provide better and more context-specific 3PAR Peer Motion Utility and 3PAR Online Import Utility error and resolution messages for failure scenarios during migration.

Improved processes for cleaning up after a failed migration, as well as for rolling back to the original source system, if needed.

Additional enhancements:

An option to specify migration priority in the createmigration command with the -volmapfile parameter.
An option to specify migration priority in the createmigration command with the -srcvolmap parameter.
An option to specify a domain in the createmigration command, so that you can migrate from a non-domain source to the intended domain.
Connectivity is allowed only between identical versions of the PMU or OIU client and server.
The WWNs of peer ports and virtual peer ports are displayed in showconnection output.
Duplicate port WWN entries are removed from the "xxxxx_src_hosts.xml" file.
Import progress no longer drops back to 0% near the end of a migration before jumping to 100%.
Excessive white space and blank lines are removed from the output of all show* commands.
The serial number or name of the array is recorded in log files.
A log entry is provided for creation of a second peer host group.
The size and number of log files have been increased.
An appropriate error message is provided when migrating mainframe volumes.
An appropriate error message is provided when migrating snapshot volumes.
An appropriate error message is provided when a provisioning type value is missing in the -srcvolmapfile parameter.
An appropriate error message is provided for an offline migration with exported volume(s).
An appropriate error message is provided for a createmigration involving volumes smaller than 256 MB.
A uid option that includes invalid character(s) is validated.
Migration is blocked when a mix of exported and unexported volumes is specified in the createmigration command.
A createmigration fails if an online migration is initiated on mixed-mode volumes.
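The following examples are an illustrative sketch only; the source UID, volume, and CPG names are placeholders rather than values taken from this guide, and the exact option syntax should be confirmed against the command descriptions in 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands.

Example (list the premigration prerequisites that apply to your source system type, new in 2.2):

showpremigration

Example (specify a per-volume migration priority as the last field of an -srcvolmap entry):

createmigration -sourceuid 2FF70002AC001DB5 -srcvolmap "[{vol1,thin,testcpg,compress,high}]" -destcpg testcpg -destprov thin -migtype online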

NOTE: Both the 3PAR Peer Motion Utility and 3PAR Online Import Utility products consist of two installable applications, a client and a server component. Starting with product version 2.2, you must use matching versions of the client and server components.

Other recent changes to this document include:

A step was added to the procedures for migrating volumes that are part of a Remote Copy group on the Remote Copy primary array. To avoid a potential issue in which Windows and ESX host I/O could stall, resulting in application outages at the host, the added step calls for unexporting the volumes being migrated. When using SSMC to migrate the Remote Copy volumes, see Performing Peer Motion on a Remote Copy Group. When using the Peer Motion Utility to migrate the Remote Copy volumes using the 3PAR Command Line Interface (CLI), see Data migration with 3PAR Remote Copy group.

Peer Motion Utility 2.1, which added support for:

Maintaining source LUN IDs on the target VVset when there is no LUN conflict. If LUN ID conflicts exist, autoresolve goes into effect (with a default parameter value of true) and LUN IDs are sequenced automatically. Or, you can set the autoresolve parameter to false, resolve the conflicts yourself, and then start the migration.

NOTE: With PMU 2.1 and later, the vvset and hostset names cannot be the same or the migration will fail. Use CLI commands or SSMC to change one of those names, if necessary (see the example after this list). The CLI command to change the name of a virtual volume set is setvvset; to change the name of a host set, the CLI command is sethostset. For more information, see the HPE 3PAR Command Line Interface Administrator Guide.

Migrating from legacy HPE 3PAR storage systems (3.1.2 or later) to HPE 3PAR StoreServ 9000 or R2 systems.

IPv6 support.

A new section has been added to describe Cleaning up and recovering from a failed data migration.

A clarification was added for the expected behavior that SSMC temporarily displays a peer volume as RAID level 0 while a migration is in progress.

Additional information regarding the WWNs of a migrated volume has been added to Performing volume postmigration tasks.

A Postmigration tasks section has been added under "Performing Peer Motion with the SSMC."
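As an illustration, the following 3PAR CLI sketch shows one way to rename a conflicting set before starting the migration. The set names are placeholders, and the -name rename option should be confirmed against the HPE 3PAR Command Line Interface Administrator Guide for your 3PAR OS version.

Example (rename the virtual volume set, or alternatively the host set, so that the two names differ):

setvvset -name app1_vvset app1
sethostset -name app1_hostset app1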

Getting Started with HPE 3PAR data migration and data mobility products

This chapter introduces the HPE 3PAR data migration and data mobility products and provides information to help you choose the right product for your needs.

Overview of HPE 3PAR Data Migration and Data Mobility Products

The primary HPE 3PAR data migration and data mobility products are:

Storage federation functionality within SSMC
Peer Motion Utility
Online Import Utility

For more information on those and other supported products, and for help in choosing the right tool for your data migration needs, see Table 1: Data mobility and migration user scenarios on page 12.

Storage federation functionality within SSMC

Storage federation is a scale-out strategy to support scalability beyond the limits of a single storage system. Through storage federation functionality available within the SSMC, you can perform both bidirectional data mobility and unidirectional data migration between HPE 3PAR StoreServ Storage systems. With SSMC 3.1 and later, you can also migrate data from legacy 3PAR (T-, F-, and V-class systems running StoreServ OS or 3.1.3) and supported IBM XIV systems to a destination 3PAR StoreServ Storage system.

Peer Motion Utility

HPE 3PAR Peer Motion is a do-it-yourself tool that provides nondisruptive, unidirectional data migration to HPE 3PAR StoreServ Storage systems and, in conjunction with the storage federation functionality available within the SSMC, provides multi-array, bidirectional data mobility between HPE 3PAR StoreServ Storage systems.

Online Import Utility

HPE 3PAR Online Import, which is built upon the 3PAR Peer Motion technology, provides data migration from a source that is a non-3PAR StoreServ Storage system to a destination 3PAR StoreServ Storage system. Using 3PAR Online Import, you can migrate volumes and host configuration information to a destination 3PAR StoreServ Storage system without changing host configurations or interrupting data access. 3PAR Online Import coordinates the movement of data from the source to the destination while servicing I/O requests from hosts. During data migration, host I/O service to the source storage system takes place through the destination 3PAR StoreServ Storage system. The host and volume presentation implemented on the source storage system is maintained on the destination 3PAR StoreServ Storage system. Orchestration for data migration from a non-3PAR StoreServ Storage system to a destination 3PAR StoreServ Storage system is provided by the HPE 3PAR Online Import Utility.

HPE 3PAR Management Console (IMC)

A supported, but not recommended, method for performing unidirectional data migration from legacy 3PAR StoreServ Storage systems.

Choosing the right HPE 3PAR data migration and data mobility product

Table 1: Data mobility and migration user scenarios on page 12 is designed to help you select the right tool for your needs and point you to the information you need to get started.

IMPORTANT: Hewlett Packard Enterprise strongly recommends using the latest version of these data migration and data mobility products. Usage of those latest versions is described in this user guide. For more information on product upgrades and the new features, fixes, and known issues for each version, see the Peer Motion Utility and Online Import Utility Release Notes, available in the Hewlett Packard Enterprise Information Library.

Table 1: Data mobility and migration user scenarios

Source system: 3PAR StoreServ Storage systems running 3PAR OS MU1 or later
Data mobility: Bidirectional (N:N)
Destination system: 3PAR StoreServ Storage systems running 3PAR OS MU1 or later
Tool: HPE 3PAR StoreServ Management Console (SSMC) 2.2 or later
To get started: See HPE 3PAR Peer Motion with bidirectional multi-array federated storage on page 19.

Source system: 3PAR StoreServ 8000 and systems running 3PAR OS or later
Data mobility: Unidirectional (1:1, N:1)
Destination system: 3PAR StoreServ Storage systems running 3PAR OS or later
Tool: HPE 3PAR StoreServ Management Console (SSMC) 2.2 or later
To get started: See HPE 3PAR Peer Motion with unidirectional data mobility between HPE 3PAR StoreServ Storage systems on page 90.

Source system: 3PAR StoreServ F-Class or T-Class system running or
Data mobility: Unidirectional (1:1, N:1)
Destination system: 3PAR StoreServ Storage systems running 3PAR OS or later
Tool: HPE 3PAR StoreServ Management Console (SSMC) 3.1 or later
To get started: See Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC on page 251.

Source system: 3PAR StoreServ Storage systems running 3PAR OS or later (see footnote 1)
Data mobility: Unidirectional (1:1, 1:N, N:1, N:N)
Destination system: 3PAR StoreServ Storage systems running 3PAR OS or later
Tool: HPE 3PAR Peer Motion Utility 2.0 or later
To get started: For information about supported environments and migration paths, see the HPE 3PAR Peer Motion Utility Support Matrix and the HPE 3PAR Peer Motion - Unidirectional Data Migration Host Support Matrix on SPOCK: storage/spock. To get started, see HPE 3PAR Peer Motion with unidirectional data mobility between HPE 3PAR StoreServ Storage systems on page 90.

Source system: 3PAR StoreServ Storage systems running 3PAR OS or later
Data mobility: Unidirectional (1:1)
Destination system: 3PAR StoreServ Storage systems running 3PAR OS or later
Tool: Peer Motion via the HPE 3PAR Management Console (IMC)
To get started: See the HPE 3PAR Peer Motion Data Migration Guide.

Source system: Non-3PAR source arrays: EMC, HDS, IBM XIV (see footnote 2)
Data mobility: Unidirectional (1:1, 1:N, N:1, N:N)
Destination system: HPE 3PAR StoreServ Storage systems
Tool: HPE 3PAR Online Import Utility 2.0 or later (see footnote 3)
To get started: For information about supported environments and migration paths, see the HPE Online Import Utility Support Matrix for Non-3PAR Storage Systems (EMC, HDS & IBM) on SPOCK: storage/spock. To get started, see 3PAR Online Import with unidirectional data migration from third-party storage systems to a 3PAR StoreServ Storage system. To migrate data from IBM XIV through SSMC, see Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC on page 251.

Source system: EVA Storage system
Data mobility: Unidirectional (1:1)
Destination system: HPE 3PAR StoreServ Storage system
Tool: HPE Command View EVA
To get started: See the HPE 3PAR Online Import for EVA Storage Host Support Matrix on SPOCK: storage/spock, and HPE 3PAR Online Import for EVA Storage, available in the Hewlett Packard Enterprise Information Library: storage/docs.

1 StoreServ F-Class and T-Class systems running 3PAR OS or
2 With SSMC 3.1 and later, IBM XIV source systems can be migrated through the SSMC interface.
3 If using HPE 3PAR Online Import Utility 1.3, see the HP 3PAR Online Import for EMC Storage Data Migration Guide or the HP 3PAR Online Import for HDS Storage Data Migration Guide.

NOTE: This guide describes the storage federation functionality found in SSMC 3.2. If using an earlier supported version, see the SSMC documentation for that version.

For information about the 3PAR Peer Motion Utility and 3PAR Online Import Utility commands, with descriptions of the commands, their parameters, and examples, see 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands.

Data migration types

Storage federation, 3PAR Peer Motion, and 3PAR Online Import support three types of data migration procedures:

Online migration on page 15
Minimally disruptive migration (MDM) on page 15
Offline migration on page 16

The appropriate type of data migration depends on the objects being migrated and the migration type. For details about OS, multipath, and cluster solutions that are supported with each of the migration types, see the SPOCK website.

Online migration

The online migration type is used when the host OS and multipath solution can handle the addition and removal of paths on the fly and when the cluster being migrated (if applicable) conforms to the requirements for online cluster migration. During online migration, all presentation relationships between hosts and the volumes being migrated are maintained, and host I/O to the data is not disrupted.

Online migration can be initiated in one of two ways: at the host level or at the volume level.

Host-level migration

Host-level migration is initiated from the Host or Host Set pane in the SSMC. With this type of migration, all virtual volumes exported to the selected host or host set are migrated concurrently. If the source storage system is running 3PAR OS or earlier, only host-level migration is supported, and the host must be unzoned from the source storage system before the migration begins.

Single-volume migration

With single-volume migration, a subset of the virtual volumes exported to a host or host set can be migrated while the remaining virtual volumes continue to be serviced on their original 3PAR StoreServ Storage system. Load distribution among the 3PAR StoreServ Storage systems can be achieved in this manner. For single-volume migration, both the source and destination storage systems must be running 3PAR OS or later, and the hosts involved must be using an ALUA-capable persona on both the source and destination 3PAR StoreServ Storage systems. The host does not need to be unzoned from the source storage system during the migration process; it can concurrently access the volumes that have not been migrated from the source storage system and the volumes that are in the process of migrating, or have already migrated, to the destination storage system. Single-volume migration should be initiated from the Virtual Volumes pane in the SSMC.

Minimally disruptive migration (MDM)

The MDM type is used when the host OS or multipath solution cannot properly distribute I/O to the 3PAR StoreServ Storage systems participating in the data migration, or when the cluster solution (if applicable) does not meet the requirements for online cluster migration. With MDM, host I/O is interrupted only during the time it takes to reconfigure the host to use the destination array instead of the original source array. The host is able to access the data through the destination storage system while data migration from the source to the destination is occurring. Because there is a brief interruption to I/O with MDM, it is supported only at the host or host-set level.

Offline migration

The offline migration type is used when migrating one or more unpresented virtual volumes. During offline migration, only the selected volumes are migrated. No hosts are migrated in this situation, so an offline migration must be initiated from the virtual volume or virtual volume set level in the SSMC.

Concurrent migration limitations

The migration utilities (SSMC, OIU, PMU) do not enforce a hard limit on the number of migration tasks that can be initiated. Once a migration task is issued, it is added to a queue on the destination array. However, only 9 migration tasks are active at any given time; the rest remain in the queue and are serviced as resources become available. For more information on concurrent migrations, see Using the HPE 3PAR Peer Motion Utility on page 116. For Online Import, see Requirements for concurrent createmigration and startmigration operations on page 158.

Volume compression

When migrating a volume, you can specify that the data be compressed on the destination system. Requirements and limitations for using volume compression during data migration include:

Supported with HPE 3PAR Peer Motion Utility 2.0 or later.
Supported with HPE 3PAR Online Import Utility 2.0 or later.
Supported with SSMC 3.1 or later.
Supported only on HPE 3PAR StoreServ 8000, StoreServ 9000, and StoreServ Storage systems running 3PAR OS or later.
Disks must be SSD, with a volume size of at least 16 GB.
The source volume must be smaller than 16 TB to be migrated as a compressed volume.
A fully provisioned volume cannot be compressed.

You can compress just selected volumes being migrated (see Compressing selected volumes in a migration (volume-based compression) on page 16) or all volumes that meet compression requirements (see Compressing all volumes in a migration (host-based migration) on page 17).

Compressing selected volumes in a migration (volume-based compression)

To compress just selected volumes in a migration, provide a compress parameter in the -srcvolmap option of the createmigration command for each of those volume entries. For a detailed description of the createmigration command, see createmigration.

Procedure

To compress a volume with thin provisioning (vol1 in this example), the srcvolmap syntax should be:

srcvolmap [{vol1,thin,testcpg,compress}]

To compress a volume with dedupe provisioning (vol2 in this example), the srcvolmap syntax should be:

srcvolmap [{vol2,dedupe,testcpg,compress}]

Example:

createmigration -sourceuid xxxxxxxxxxxxxxxx -srcvolmap "[{vol2,thin,testcpg,compress}]" -destcpg testcpg -destprov <thin/full/dedupe> -migtype <online/offline/mdm>

When setting the priority for a vvset in the srcvolmap option, the priority should be the last parameter:

-srcvolmap "[{set: set1,thin,testcpg,compress,high}, {set: set2,thin,testcpg,compress,low}]"

More information

Using the volmapfile option on page 17

Using the volmapfile option

You can also compress just selected volumes by using the compress parameter in the volmapfile option of the createmigration command. The advantage of using this option is that you can list all desired volume details in a separate text file rather than doing so within the command. For a detailed description of the createmigration command, see createmigration.

Procedure

To compress volumes with thin provisioning, issue the createmigration -sourceuid <source_uid> -volmapfile <file_name> command as shown in the following example:

Example:

createmigration -sourceuid 2FF70002AC001DB5 -volmapfile "C://Volume/volumeMap.txt"

Where the file volumeMap.txt contains the following lines:

sow_k31.0, thin, SSD_r1, compress
sow_k31.1, thin, FC_r1
sow_k31.2, thin, SSD_r1, compress

To compress volumes with dedupe provisioning, issue the createmigration -sourceuid <source_uid> -volmapfile <file_name> command as shown in the following example:

Example:

createmigration -sourceuid 2FF70002AC001DB5 -volmapfile "C://Volume/volumeMap.txt"

Where the file volumeMap.txt contains the following lines:

sow_k31.0, dedupe, SSD_r1, compress
sow_k31.1, dedupe, FC_r1
sow_k31.2, dedupe, SSD_r1, compress

Compressing all volumes in a migration (host-based migration)

For a detailed description of the createmigration command, see createmigration.

Procedure

To compress all volumes in a migration, use the -compressall option in the createmigration command:

createmigration -sourceuid <source_uid> -srchost "set:hostset1" -migtype online -destcpg TEST_CPG -destprov {thin/full/dedupe} -compressall

Example:

createmigration -sourceuid 2FF70002AC001DB5 -srchost "host_a" -migtype mdm -destcpg FC_r1 -destprov full -compressall
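After a migration has been created and started, its progress can be followed from the utility CLI. The following is an illustrative sketch only; showmigration is the documented command for displaying migration status and typically lists the migrations known to the utility and their current state, but its exact options and output depend on the utility version, so check the command description in the Reference chapter.

Example:

showmigration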

HPE 3PAR Peer Motion with bidirectional multi-array federated storage

Guidelines

Bidirectional multi-array data mobility between HPE 3PAR StoreServ Storage systems is achieved through the storage federation functionality available within the HPE 3PAR StoreServ Management Console (SSMC). The SSMC works with the HPE 3PAR Peer Motion Utility products to provide nondisruptive, multi-array, bidirectional data mobility of a host and its data from one system to another. Throughout the migration process, I/O requests from all attached hosts continue to be serviced. Virtual volumes and system configuration information can be copied to a new system with no changes to host configurations, no loss of access by a host to its data in an online migration, and minimal downtime during a minimally disruptive migration (MDM). Support for HPE 3PAR Peer Motion multi-array bidirectional data mobility begins with 3PAR OS.

Requirements

IMPORTANT: Before beginning peer-motion or online-import operations, see the SPOCK website to verify that the source and destination 3PAR OS versions on the hosts to be migrated are supported.

Storage Federation Components

The following components are required to create a multi-array bidirectional peer motion federation:

A management host running SSMC 2.2 or later. A multi-array bidirectional setup can be created only from the SSMC.
At least two 3PAR StoreServ Storage systems running 3PAR OS or later.
A valid HPE 3PAR Peer Motion Software license on all systems that are to be part of the multi-array bidirectional peer motion setup.

For bidirectional peer motion, migration source systems are not essential to create a federation. However, you can include a migration source as part of a federation setup. For information about adding a migration source to a federation, see Adding a migration source to a Federation on page 102.

Storage Federation Requirements

The management host must have network access to all storage systems that are to be part of the bidirectional peer motion setup.
Before any storage systems are added to the federation, they must first be added to the SSMC.
A storage system can be part of only one federation at a time.
All the storage systems in a multi-array bidirectional peer motion setup should be running 3PAR OS or later.
If the imported volume is intended to be a thinly provisioned volume on the destination storage system, the HPE 3PAR Thin Provisioning Software license is required on the destination system.
If the migrated hosts or volumes are intended to be part of a domain on the destination storage system, the HPE 3PAR Virtual Domains Software license is required on the destination system. For SSMC support, the migrated host or the volumes must have the same domain name on both the source and destination storage systems.

If the imported volume has retention set in the source storage system and is intended to carry the retention to the destination, the HPE 3PAR Virtual Lock Software license is required on the destination system.
Two dedicated FC ports (one port per node) are required on each 3PAR StoreServ Storage system, to be configured as peer ports. These ports will be used exclusively for inter-system communication and data transfer. They cannot be used for host I/O. The ports must be from two partner nodes but are not required to be partner ports. The SSMC will configure these ports into peer mode if they are not already in peer mode. The SSMC will also create eight virtual ports for each peer port. Peer ports cannot be used for host attachment. The use of partner ports for array-to-array communication is recommended, but not required. For more information, see the HPE technical white paper HPE 3PAR StoreServ Persistent Ports (HPE document #F4AA4-4545ENW), available at the Hewlett Packard Enterprise Information Library website.
Two FC ports are required on each 3PAR StoreServ Storage system to be configured as host ports. These ports will be used for inter-system communication and data transfer. Existing ports that are being used for host I/O can be used for this purpose. The ports must be from two partner nodes but are not required to be partner ports. The SSMC will reconfigure these ports into host mode if they are not already in host mode. If configuring a federation using Smart SAN enabled systems only, the host ports must also be Smart SAN enabled and not manually zoned.

Restrictions and limitations

General Federation restrictions

Ports are not reset after Edit or Delete actions: When you remove a system from a federation, remove the federation itself, or change ports, the SSMC will not reset the previously used ports. This is done so as not to affect other peer-motion tools.

Zones must be updated after Edit or Delete actions: You must update zones after making changes in federation topology. Failing to do so will cause removed systems and/or old ports to continue to be in the zone. Use the Recommended Zone Configuration view to see the new requirements when you update zones.

NOTE: This does not apply when using Smart SAN target zones, as they are automatically zoned.

Delay in detecting zoning changes: Any changes in fabric zoning or array interconnection take a minimum of 30 seconds for the SSMC to detect. During that interval, the SSMC cache will be in a stale state, and peer link status may be inaccurate. After the SSMC detects the zoning changes, the status will automatically be refreshed.

Unique Node WWN: The Unique Node WWN setting is not supported. Ports with this capability enabled will not be available for use in federations.

Maximum number of storage systems: You can have up to a maximum of four 3PAR StoreServ Storage systems as part of a multi-array bidirectional peer motion setup.

Data mobility with remote copy: Performing data mobility/migration with remote copy is supported by using the 3PAR CLI (see Data migration with 3PAR Remote Copy group on page 380) and SSMC 2.3 or later (see Data mobility with remote copy on page 23).

Point-to-point connections: Only point-to-point connections (also known as fabric connections) between the storage systems are supported.

3PAR Peer Motion over FC protocol: 3PAR Peer Motion is supported only over the FC protocol.

FC port speed: The speeds of the FC ports do not need to be the same. If Smart SAN ports are used, they must have a speed of at least 16 Gbps.

FCoE Host Side Support: FCoE host side adapters are supported for use with Peer Motion provided they are connected to an FC port on the array via an FCoE switch.

Direct FC and FCoE: Direct FC connections and FCoE connections are not supported for migration purposes.

iSCSI is not supported: iSCSI is not supported as either the array-to-array or host-to-array communication protocol.

Nested host sets: Nested host set configuration is not supported.

Host sets with different host personas: Host sets containing hosts with different host personas are not supported.

T10 DIF: If the host T10 DIF configuration is enabled on the hosts to be migrated, you must use the minimally disruptive migration procedure. To use an online migration, the host T10 DIF configuration must be disabled on the hosts.

For zoning requirements, see Zoning requirements for unidirectional and bidirectional data mobility on page 35.

Federations and the SSMC

The SSMC is the default orchestration tool for a multi-array bidirectional peer motion federation as well as a growing number of unidirectional migrations. A federation can be created and managed only from the SSMC. You cannot create federations using the 3PAR CLI.

A federation should be managed by only one SSMC server at a time. However, a federation created by SSMC 3.0 or later can be seen by other SSMC servers of the same or later version if the same set of arrays is registered to be managed.

All systems involved in a multi-array bidirectional peer motion setup must be managed by the SSMC.

Removing storage systems from the SSMC: You should not remove a storage system from the SSMC using the Administrator Console if the system is part of a federation or is used as a migration source. A warning will appear if you try to disconnect or remove the storage system from the SSMC.

You should first remove that system from the federation before removing it from the SSMC. After the system is disconnected or removed from the SSMC, it cannot be the destination array for further migration.

The SSMC will not modify already configured ports that are in use: During federation operations, the SSMC will list available ports that can be used. However, the SSMC will not change the configuration mode from host to peer or from peer to host if that port is in use. In some situations, this will make systems unavailable for federation. This can happen, for example, if all fabric-attached ports in a system are in host mode and none of them are offline or free.

Do not upgrade SSMC if Peer Motion workflows are in a running state.

A federation configuration might be overwritten if multiple operations are performed simultaneously on the configuration by different SSMC clients connected to the same or different SSMC servers.

If you upgrade SSMC to 3.0 (or later) while a federation array member is in the disconnected state, a manual cleanup of the federation configuration on the array might be required.

Federation does not support migration of Key Value information associated with a virtual volume.

When using a pre-3.0 version of SSMC only: If you replace the SSMC, you must restore the persistent store from the backup in order to discover the federations, or use the restore option from the Create Federation dialog to restore existing federation configurations; the new instance will not discover federations automatically. If the persistent store is lost, or the SSMC server is replaced and no backup of the persistent store is available, contact the Hewlett Packard Enterprise Support Center. With SSMC 3.0 and later, all information is stored on the array.

Data mobility with remote copy

Data mobility with remote copy can be done through SSMC 2.3 or later, with these restrictions:

To ensure successful Remote Copy group migration, the following limitations must be enforced:

Do not change the name of the vvset that is automatically created by the Remote Copy group.
Do not create a vvset name that starts with "RCP_" (not including the quotes), as this prefix is used internally by HPE 3PAR Remote Copy.

Federation only supports Remote Copy groups of the Synchronous and Periodic types. Periodic groups are supported only if both source and target are running HPE 3PAR OS or later.

When migrating a Remote Copy group, the Federation manager creates temporary snapshots with names using the .pm extension. You should not tamper with these snapshots.

When migrating a Virtual Volume set that is part of a remote copy group, perform the start peer motion operation from the Remote Copy view page.

Supported federation migration sources

3PAR OS or later
3PAR OS or later
3PAR OS or later

With SSMC 3.1 (or later), supported migration sources include legacy 3PAR and IBM XIV Storage Systems. (See Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC on page 251.) Legacy system support now includes HPE 3PAR F-Class or HPE 3PAR T-Class storage systems running 3PAR OS or. For more information, see Choosing the right HPE 3PAR data migration and data mobility product on page 12.

The migration source system must be managed by the same SSMC that is managing the federation to which you want to migrate.

For the system support matrix, see the SPOCK website.

Migration sources do not need the HPE 3PAR Peer Motion Software license.

Restrictions for editing Federations and migration sources

Peer ports cannot be changed if migration sources are present: The Edit federation action does not allow changing the peer ports of any federated system that has migration sources attached to it. If the peer ports must be changed, first remove the migration sources, change the peer ports, and then add the migration sources again to the federation.

Editing or deleting a federation is restricted when 3PAR Peer Motion is in progress: The following restrictions apply when peer motion is in progress:

The federation cannot be deleted.
Systems in peer motion cannot be removed from the federation.
Port changes are not allowed for systems in peer motion.

Federation operations are restricted when some systems are disconnected: The following restrictions apply if one of the federated systems is disconnected:

The Edit federation action will not succeed unless the disconnected system is also removed in the same action.
A disconnected federated system cannot be used as a migration source in the Add Migration Source or Edit Migration Source actions.
A disconnected migration source cannot be selected in the Edit Migration Source action.

Import configuration restrictions

All selected category objects are copied: All instances of a selected category (such as domains or users) will be copied. You cannot copy selected items within a category.

Items are copied to all systems in the federation: Selected items are copied to all systems in the federation. You cannot select individual systems as a destination.

When domains are not selected: Domains are containers for hosts and users (among other objects). Hewlett Packard Enterprise strongly recommends that you select domains to be copied. If domains are not selected, the results include:

If users are selected for copying and domains are not selected, then:

Users that have no domains associated with them will be copied, assuming there are no conflicts.
Domain users will be copied if the destination system has that domain, assuming that there are no conflicts.

If hosts are selected for copying and domains are not selected, then:

Hosts and host sets that have no domains and/or domain sets associated with them will be copied, assuming there are no conflicts.
Domain-specific hosts and host sets will be copied if the destination system has the same domain and/or domain sets, assuming there are no conflicts.

In all of the cases above, items may not be copied if there are conflicts.

Priority optimization policy: Volume sets with priority optimization enabled can be migrated, but the migrated volume set will not have priority optimization enabled. You must manually configure the volume set's priority optimization after the migration.

Priority optimization policy on domains: HPE 3PAR Priority Optimization policy settings associated with domains are not copied during the Import Configuration action. You must manually recreate the HPE 3PAR Priority Optimization settings.

Import configuration conflicts

A conflict occurs when an item with the same name as the item being copied exists on the destination and has a different property. For example, a conflict occurs if a user named user1 exists on both the source and destination systems but the user is not assigned the same roles on both systems. Table 2: Conflicts on page 25 shows which properties are checked during an Import Configuration action. All the properties must be identical in order to avoid conflicts.

Table 2: Conflicts (properties checked during an Import Configuration action)

Domain: Name, virtual volume retention time
User: Name, domain name, role
Host: Name, domain or domain set, persona
Domain set: Name, comment, list of member domains
Host set: Name, comment, domain, list of member hosts
LDAP: All LDAP properties
SNMP: SNMP manager list
NTP: Server name
System log parameters: Status, syslog host name if status is on
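Before running an Import Configuration action, it can be helpful to compare these categories on the source and destination arrays so that conflicts can be resolved in advance. The following 3PAR CLI commands are a sketch of one way to do that; they are standard display commands, but the exact set you need depends on which categories you plan to copy.

Example (run on both the source and the destination system and compare the output):

showdomain
showuser
showhost
showhostset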

Peer Motion restrictions and limitations

Each 3PAR StoreServ Storage destination system running 3PAR OS can have a maximum of four migration sources (storage systems) connected to it. Each 3PAR StoreServ Storage destination system running (or later) can have up to eight migration sources connected. Those source systems may include the other members of the federation, legacy 3PAR StoreServ Storage systems (systems running 3PAR OS 3.1.x, 3.2.x, or 3.3.x) for use with unidirectional peer motion, or a third-party system for use with online import. For example, in a four-system federated configuration running 3PAR OS 3.2.2, each member of the federation can have no more than one additional source (three federated sources + one other source). For more details and sample configurations, see Topology rules for migration sources on page 30.

General: The following restrictions apply to all source item types (virtual volumes, volume sets, hosts, and host sets):

All source items involved in a single peer motion must be from the same 3PAR StoreServ Storage.
All source items must be from the same domain or should not be in any domain.
The SSMC supports only the following export types for peer motion: individual host and host set. The export types port present and matched set are not supported.

Unsupported virtual volumes: The following virtual volumes are not supported for migration:

Volumes used as VVols
Physical copies
Volumes that have been previously migrated or that are being migrated
Volumes that are admitted for peer motion
Volumes with states other than normal
A subset of virtual volumes that are part of an exported VV set

Derived volumes: Derived virtual volumes, such as virtual copies, are not migrated with their parent volumes. Derived volumes include the following:

Virtual copies
Physical copies
Online copies (An online copy is a volume created using physical copy with the -online option so that the volume can be used immediately.)

Snapshot schedules: Snapshot schedules (scheduled jobs to create virtual copies periodically) associated with virtual volumes are not migrated. You must create new schedules in the destination systems after migration.

Unsupported virtual volume capabilities: Virtual volumes with the capabilities listed in the first column of Table 3: Restrictions on virtual volume capabilities on page 27 are not supported for peer motion. However, you can migrate them without one or more of the capabilities, as described in the Restrictions column.

Table 3: Restrictions on virtual volume capabilities

Volumes that are part of a priority optimization group: Migrated volumes will not be part of a priority optimization group. The volumes will not have any quality of service restrictions.

Volumes that are part of a flash cache-enabled group: The migrated volumes will not be part of a flash cache group. The volumes will not use flash cache.

Volumes using HPE 3PAR Adaptive Optimization: Volumes can be migrated with or without adaptive optimization by choosing destination CPGs appropriately. However, volume performance will be degraded.

Volumes that are part of a remote copy group: Virtual volumes can be migrated from the Remote Copy view page in SSMC only as part of a consistency group.

Volumes using peer persistence: See Performing data migration with 3PAR Peer Persistence relationship on page 392.

Multiple volumes in a single peer motion action: You can migrate multiple virtual volumes in a single peer motion action. The following restrictions apply:

All hosts involved must support online single-volume migration (that is, hosts must have an ALUA persona), or the volumes selected must not be exported to any hosts (that is, offline migration).
All volumes must be from the same 3PAR StoreServ Storage. You cannot select volumes from more than one 3PAR StoreServ Storage.
All volumes must be from the same domain, or none of the volumes should be a member of a domain. You cannot select volumes from two different domains, or volumes with and without domains, in a single peer motion.

Adaptive optimization: Volumes using adaptive optimization must be migrated carefully. Make sure that you choose an adaptive optimization configuration as the destination user CPG. If the destination CPG is not configured for adaptive optimization, the migrated volume will not have adaptive optimization. A migrated AO volume on the destination system will initially have degraded performance after migration. Performance will improve progressively as the volume is accessed from its new location.

Priority optimization policy: Volume sets with priority optimization enabled can be migrated, but the migrated volume set will not have priority optimization enabled. You must manually configure the volume set's priority optimization after the migration.

Priority optimization policy on domains: HPE 3PAR Priority Optimization policy settings associated with domains are not copied during the Import Configuration action. You must manually recreate the HPE 3PAR Priority Optimization settings.

Adaptive flash cache policy: Volume sets with adaptive flash policy enabled can be migrated, but migrated volumes will not have adaptive flash cache enabled. You must manually configure the volume set's adaptive flash policy after the migration.

VMware VVol storage containers: Migration of volume sets that are used as VMware storage containers is not supported.

Online peer motion: The SSMC supports online migration of volumes; that is, volumes can be migrated while being used by an application. The following restrictions apply:

Windows Server 2008/2012/2016 clusters with four or fewer nodes support online migration; clusters with five or more nodes must use MDM.
An ALUA-capable persona must be configured on the hosts to be migrated if the single-volume migration feature is to be used.
Windows Server 2003 clusters are not supported for online migration. MDM or offline migration must be used instead.

Peer motion of offline volumes: Offline volumes are volumes that have no exports defined in the 3PAR StoreServ Storage; that is, the volumes should not have any template VLUNs associated with them, either directly or indirectly by means of volume set exports.

Migration types cannot be combined in the same action: All hosts involved in a single peer motion must support the same migration types. You cannot combine, for example, online and MDM hosts in a single action. Offline volumes are permitted with other migration types.

Source volume cleanup after successful peer motion: Source volumes will be deleted after successful migration if that option was selected. The following restrictions apply:

The source volume will not be deleted if it has virtual copies.
A source volume set will be deleted only if all of its member volumes are migrated successfully.
If a source volume set is migrated as a consistency group, then none of its volumes will be deleted until all volumes are successfully migrated.

Multiple selection of resources and peer motion: Multiple selection for peer motion is supported only from Virtual Volume pages. Multiple selection is not supported for the following resource types:

Volume set
Host
Host set

The following workaround methods may be used (see the example commands after this list):

To migrate more than one volume set in a single start peer motion action, create a new volume set and add all involved volumes into it. You can now select this new volume set for peer motion.
To migrate more than one host in a single start peer motion action, create a new host set with the hosts as members. There is no need to re-export volumes or volume sets to the new host set. You can now select this host set as the source of peer motion.
To migrate more than one host set (for example, all hosts in a host cluster) in a single start peer motion action, create a new host set and include all hosts that you want to migrate as its members. There is no need to re-export volumes or volume sets to the new host set. You can now select this host set for peer motion.
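The workaround sets can be created in the SSMC or with the 3PAR CLI. The following CLI sketch is illustrative only; the set and object names are placeholders, and the command options should be confirmed against the HPE 3PAR Command Line Interface Administrator Guide.

Example (group existing volumes or hosts into a new set so they can be migrated in one peer motion action):

createvvset migration_vvset vol1 vol2 vol3
createhostset migration_hostset esx_host1 esx_host2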

NOTE: For any of the workaround methods above, the new object set created will be migrated too. You can delete it manually on the destination system after a successful migration if the volume or host set is not needed.

Domains and peer motion: All source items must be in the same domain or should not be in any domains. You cannot include domain items and non-domain items, or items from two domains, in the same start peer motion action.

Domain sets are not supported: The SSMC does not support host-level or host set-level peer motion if the host or host set is associated with a domain set. The workaround is to perform volume-level or volume set-level migration of volumes in the same domain separately for each domain.

Missing destination systems: The SSMC shows only systems connected to the source system as potential peer motion destinations. This means that, depending on your topology, not all federated systems may be available as destinations. Moreover, the SSMC also checks the capacity and capabilities for recommending destinations.

Volume sets and multiple destinations: For pre-3.0 versions of SSMC only, all volumes in a volume set must be migrated to the same destination system. You can migrate selected volumes to different destination storage systems by using host-level and host set-level peer motion.

Consistency groups during peer motion of host and host set: The SSMC will automatically migrate all exported volume sets as consistency groups. You cannot disable this feature or change the volume set selection.

User intervention may be required: The SSMC can import volumes without any user intervention for most operating system (OS) platforms. However, in some cases, user intervention may be required in order to avoid I/O errors. The SSMC will pause peer motion activity and prompt you to take the necessary step.

Other tools and utilities must be updated after peer motion: It may be necessary to change the settings of some utilities, such as the Recovery Manager or HPE Insight Control for VMware vCenter (IC4VC), after volumes have been migrated. Failing to do so may cause errors.

For pre-3.0 versions of SSMC only: A virtual volume that is a member of one or more virtual volume sets can be migrated individually as long as the following conditions are met:

None of the sets of which it is a member is exported.
The volume itself can be exported.
You choose to delete the source volume after a successful migration. You cannot start peer motion until this option is selected.

The SSMC will create volume sets in the destination system if they do not exist already.

Coexistence with the 3PAR Peer Motion Utility, the 3PAR Management Console, and the 3PAR Online Import Utility

The SSMC can be used with other peer motion tools (the 3PAR Peer Motion Utility, the 3PAR Online Import Utility, and the 3PAR Management Console (IMC)) to import data into the federation from systems that are not managed by the SSMC. However, the following restrictions apply:

Hewlett Packard Enterprise strongly recommends that the SSMC be used if the source system is managed or is capable of being managed by the SSMC.
The SSMC must be used to migrate data from a federated system. No other peer motion tools are supported for this purpose.

30 The SSMC and other peer motion tools must share the same set of peer ports from federated systems. Use of more than two peer ports per system is not supported by the SSMC. Peer motion tools must not reconfigure peer ports and host ports of federated systems. The SSMC cannot be used to monitor peer motion tasks initiated by other tools. The fan-in configuration must conform to the fan-in limits (see Topology rules for migration sources on page 30). Do not reset or make any configuration changes to the peer ports before, during, or after peer motion or online import. Do not use existing federation zones to configure peer links. Use new sets of zones for peer motion and online import. Peer ports in a federation are configured to use virtual ports. Depending on the operating system (OS) platform, you may exclude virtual ports from peer motion or online import zones if they are not required. Peer motion activity can be scheduled only if the destination array is or later. Authorization To create a federation, you must have Super permissions in ALL storage systems that are to be part of the federation. Storage Federation Topology Topology refers to how systems are interconnected in a storage federation. Hewlett Packard Enterprise supports a wide variety of topologies to handle varying data migration scenarios. Every federated system is always connected to every other federated system, so each federated system can be the source and destination of data migration to and from any other federated system within the same federation. Users have no ability to change this topology. Migration sources are not required to be connected to all federated systems. You can determine destination systems when you add a migration source. You can modify these at any time by editing migration source topology. NOTE: A migration source can be connected only to a federated system. It can never be connected to another migration source. Topology rules for migration sources Migration source systems can be connected to federated systems to support one-to-many or many-to-one data migration as long as the following rules are satisfied: Maximum Systems Rule A federation running 3PAR OS can have only up to eight systems, including federated systems and migration sources. A federation running (or later) can have up to 24 systems. Fan-In Maximum Rule A federated system can be the data migration destination of a maximum of four systems. This count includes other federated systems and migration sources connected to the federated system. 30 Authorization

Maximum Federation Systems Rule A federation can have up to four federated systems.
A migration source cannot act as the migration destination of other migration sources or federated systems.
A federated system cannot be the source of a nonfederated system.
A federation with only one storage system running 3PAR OS (a single-array federation) is supported.

Supported topologies
The following figures show sample configurations. They are only samples; other configurations are possible. In the diagrams, arrows show the direction of possible data migration.

Figure 1: One federated system and three migration sources

In the three examples that follow, there is a bidirectional link between the two federated systems. There must be a link between them; otherwise, the two federated systems become two single-array federations. In this type of configuration:
The source storage systems managed by the SSMC must run 3PAR OS or later.
A maximum of six migration sources is supported.
The total number of systems must be eight or fewer.
The same source system can be connected to two or more destination systems.

32 Figure 2: Two federated systems and six migration sources In this example, the federation consists of three federated systems with five migration sources. Migration source 4 is connected to two of the three federated systems. Figure 3: Three federated systems and five migration sources 32 Guidelines

33 Figure 4: Four federated systems and four migration sources 3PAR Peer Motion in multi-array bidirectional configuration Multi-array bidirectional peer motion is supported with 3PAR OS and later. The stages include: Initial setup of storage systems in a federation, where each system acts as a source as well as a destination (this is a one-time setup using the SSMC). Admitting volumes. Importing volumes. Completing post-migration tasks. 3PAR Peer Motion in multi-array bidirectional configuration 33

Zoning
In a federated environment, virtual volumes on storage systems participating in a federation can be moved among storage systems at will. To achieve this, the storage systems must be connected to each other to enable data mobility on an as-needed basis. With unidirectional 3PAR Peer Motion, zoning requirements are such that each peer port on the destination storage system must have its own zone with visibility only to the intended source host port, to mimic single-initiator zoning best practices. Maintaining this requirement in a federated configuration would result in 24 separate zones to enable full mobility in a four-array federated configuration (not including additional zones that would be required if peer motion is also being used). To reduce the burden on SAN administrators and to simplify federation setup, the zoning requirements for federated arrays have been relaxed. Zoning should be set up so that each storage system has dual-path visibility (one path from each of the partner nodes) to each of the other arrays in the federation.

Zoning topologies
Dual-path visibility between storage systems in the federation can be achieved by creating two zones, for example, federation zone 1 and federation zone 2. Federation zone 1 should contain a host port and a peer port (along with all its virtual ports) from one controller node in each of the storage systems. Similarly, federation zone 2 should contain a host port and a peer port (along with all the virtual ports) from the partner nodes that were used in the creation of federation zone 1 (see Figure 5: Single-fabric zoning for federation array-to-array communication on page 35). This allows all arrays in the federation to communicate with one another for data mobility. Federation zone 1 and federation zone 2 can be on the same fabric or on two separate fabrics, but a dual-fabric configuration is recommended for resiliency.

NOTE: For the zones used for array-to-array communication, zoning using Smart SAN is not supported.

The zones created for bidirectional communication must contain only FC ports from the federated systems. Other systems, migration sources, or hosts should not be added to these federation zones.

Figure 5: Single-fabric zoning for federation array-to-array communication

NOTE: The SSMC supports only port WWN-based zoning. Other forms of zoning are not supported.

Zoning requirements for unidirectional and bidirectional data mobility

Shared host ports for host and peer I/O
While it is generally recommended to have dedicated host ports and peer ports for array-to-array communication, host ports used for migration purposes can also be used for host I/O. However, this may impact both the speed of the migration and the performance of host-initiated I/O. Careful planning is needed so that the migration is performed during periods of low I/O, to minimize the impact to host I/O and maximize the efficiency of the migration.

NOTE: Only the host ports participating in array-to-array communication in a federation can be used for host I/O. Array ports configured as peer ports must be dedicated to array-to-array communication and cannot be shared.

Requirements for Smart SAN auto zoning
For auto zoning using Smart SAN, a pair of Smart SAN-enabled host ports must be available for each member of the federation.

Zoning configuration
Federation requires two zones in the same fabric or one zone in each of two fabrics. The peer ports and host ports for the peer links of the even-numbered nodes on the source and destination systems enter one zone. The other zone is composed of the peer ports and host ports located on the odd-numbered nodes. A sample switch-side configuration is sketched below.
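For illustration only, the two federation zones might be defined as follows on a Brocade-based fabric. The switch vendor is an assumption, and the WWNs shown are placeholders for the host port and peer port WWNs of each federated array; adapt the commands to your own fabric and zoning conventions.

zonecreate "federation_zone_1", "AA:AA:AA:AA:AA:AA:AA:01; AA:AA:AA:AA:AA:AA:AA:02; BB:BB:BB:BB:BB:BB:BB:01; BB:BB:BB:BB:BB:BB:BB:02"
zonecreate "federation_zone_2", "AA:AA:AA:AA:AA:AA:AA:03; AA:AA:AA:AA:AA:AA:AA:04; BB:BB:BB:BB:BB:BB:BB:03; BB:BB:BB:BB:BB:BB:BB:04"
cfgadd "fabric_cfg", "federation_zone_1; federation_zone_2"
cfgsave
cfgenable "fabric_cfg"

Each zone contains one host port WWN and one peer port WWN (plus any virtual port WWNs) from a partner node in every federated array, which provides the dual-path visibility described above.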

It is possible that some port WWNs are not listed in the fabric name server on the SAN switch. This can happen if zones have been created before the SSMC is used to create a federation. If that is the case, you can manually enter port WWNs.
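If you need to enter WWNs manually, you can read them from each array with the 3PAR CLI. For example, the following sketch assumes you simply list all ports and read the values for the ports you selected for the federation:

cli% showport

The Port_WWN column of the output lists the WWN to enter for each host port or peer port.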

37 Setting up and configuring a Storage Federation Storage systems can be added to a storage federation in one of two ways: As a federated system Federated systems can have bidirectional data migration between other federated systems in the same federation. A federated system must be running 3PAR OS or later. As a migration source Migration sources are added to a federation in order to migrate data into one or more federated systems. A migration source can never be the destination of data migration. To create and manage a storage federation, follow these steps: Procedure 1. Log in to the SSMC. Make sure that you have Super permissions on all systems that are to be included in the federation. Figure 6: SSMC login When you are logged in, the SSMC Dashboard page appears. Setting up and configuring a Storage Federation 37

38 Figure 7: The SSMC dashboard 2. To open the Federations page, click the 3PAR StoreServ menu on the upper left, then click Show all to display all features supported by the SSMC. 3. Under Federation, click Federation Configurations. 38 Setting up and configuring a Storage Federation

39 The Federations page appears, listing all the federations in the SSMC. The following figure shows a configuration with no existing federations. Figure 8: Federations resource page 4. Click the +Create Federation button on the left. The Create Federation dialog appears. Setting up and configuring a Storage Federation 39

40 Figure 9: Create Federation dialog 5. Enter a name for the federation in the Name field. A name is required, and it must be unique in the SSMC. Enter any comments in the Comments field; comments are optional. 6. Click the Add Systems button. The Add Systems dialog appears. 40 Setting up and configuring a Storage Federation

41 Figure 10: Add Systems dialog The Add Systems dialog shows only systems that comply with the requirements for a storage federation. Figure 10: Add Systems dialog on page 41 shows two systems, both available to enter a storage federation. If use Smart SAN is checked, the Add Systems dialog will only display systems that comply with the requirements for auto zoning using Smart SAN. NOTE: Systems can be unavailable for various reasons. You can select the unavailable system to display details. 7. Select the systems you want to include in the federation. Figure 11: Add Systems dialog with two federation systems selected on page 42 shows both available systems selected. Setting up and configuring a Storage Federation 41

42 Figure 11: Add Systems dialog with two federation systems selected 8. Click the Add button to select the systems and close the Add Systems dialog. The Create Federation dialog shows the systems selected. The SSMC will automatically select peer ports and host ports if it can. If no peer ports exist, the SSMC will create them. NOTE: For Smart SAN federations, both host ports used for array to array communication must be Smart SAN enabled. 42 Setting up and configuring a Storage Federation

43 Figure 12: Create Federation dialog You may change the ports selection of any system by clicking the pencil icon ( ) next to the port. This opens an Edit dialog where you can change peer port and host port selections for the federation system (see Figure 13: Editing the port selection on page 44). Setting up and configuring a Storage Federation 43

44 Figure 13: Editing the port selection Use the dropdown menu to change the default port selection (see Figure 14: Changing the default port selection on page 45). The Edit dialog lists only ports that are eligible for selection. All ports in the same fabric must be zoned together. For FC zoning configurations that are required for a federation, see Zoning on page 34. NOTE: Hewlett Packard Enterprise recommends that you create zoning before creating a federation. If zoning has not been set up, a validation warning will appear 44 Setting up and configuring a Storage Federation

45 Figure 14: Changing the default port selection 9. Click OK to save the selection and close the Edit dialog. 10. In the Create Federation dialog, click Create to create the federation. If all the required zones have been created already, the new federation will be created and the Overview screen appears (see Figure 15: Federation Overview screen on page 45). Figure 15: Federation Overview screen After zoning is configured for all the systems within federations, you can use the cursor to hover over the system. The Peer Links panel will display the ports from systems where the link is established. See Figure 16: Peer Links on page 46. Setting up and configuring a Storage Federation 45

46 Figure 16: Peer Links 11. If zone configurations do not exist or are incomplete, you have two choices to proceed: Yes, create Clicking the Yes, create button allows you to continue without first creating or modifying zone configuration. However, the federation will be in a degraded state until zone configuration has been updated. After the federation has been created, select the federation and open the Recommended Zoning screen. This will show the required zoning configuration. See Figure 17: Recommended zoning on page Setting up and configuring a Storage Federation

47 Figure 17: Recommended zoning You may change the zone configuration now, and then click the Create button (Figure 18: Create Federation dialog with FC zoning on page 48). Clicking the Cancel button closes the warning, and the main SSMC dialog shows the required zoning configurations. You may click Cancel to close the dialog without creating a federation. Clicking the Cancel button allows you to create or update the zone configuration later Setting up and configuring a Storage Federation 47

48 Figure 18: Create Federation dialog with FC zoning Figure 19: Create Federation dialog with the only Smart SAN enabled systems on page 49 shows the Create Federation dialog with the "Only Smart SAN enabled systems" option selected, in which case the zoning panel will be empty 48 Setting up and configuring a Storage Federation

Figure 19: Create Federation dialog with the only Smart SAN enabled systems

SSMC System Selector
By default, SSMC resource pages list items from all systems to which you have access. You can use the system selector to filter the view to a subset of systems, so as to focus on one or several systems (for example, when provisioning volumes). However, the system selector can affect federation operations; most federation operations, such as the Start Peer Motion action, can fail if all systems in the federation are not selected. Hewlett Packard Enterprise recommends resetting the filter by selecting All Systems.

Federation preparation for data mobility
After a federation is set up, it must be prepared for data mobility. This process varies, depending on your system configuration and environment.

Recommended settings for Federated Systems
Storage federation allows you to move virtual volumes between 3PAR StoreServ Storage systems while the volumes are in use. This liberates your application from the confines of a single system. In order to maximize this benefit, the systems in the federation must have identical configurations. The systems can still consist of different models with different capabilities and performance characteristics. However, Hewlett Packard Enterprise recommends that systems have identical definitions for the following settings:
Domains and domain sets
Hosts and host sets
Users
In a properly prepared federation, systems will have identical domain, host, and user definitions, and data can be migrated from any system to any other system. Having identical definitions ensures that, for example, a volume always remains in the correct domain no matter which system is currently serving it. Similarly, it is essential to have identical user configurations; otherwise, users will have no access or limited access to some systems in the federation. If the host definitions are not identical or do not exist in all systems, data mobility is limited.
In addition to host and user definitions, Hewlett Packard Enterprise recommends that systems also have identical settings for the following:
LDAP configuration
SNMP settings
NTP settings
Syslog parameters
These parameters do not affect data mobility, but they make federation management easier.
The SSMC provides two actions that can be used to prepare federations:
Import Configuration
Sync Federation
The Import Configuration action can be used to copy settings from any system into the federation. The Sync Federation action checks the systems in the federation and reports configuration mismatches. In some cases, the Sync Federation action can fix issues automatically. Table 4: Import Configuration and Sync Federation actions on page 51 shows various conditions under which these actions should be performed.
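Before running either action, you can also spot-check from the CLI whether the host, domain, and user definitions already match across the federated systems. This is only an illustrative sketch; the array addresses and login are placeholders, and the SSMC actions summarized in Table 4 remain the supported way to reconcile differences.

ssh 3paradm@array-a showhost -d > array-a_hosts.txt
ssh 3paradm@array-b showhost -d > array-b_hosts.txt
diff array-a_hosts.txt array-b_hosts.txt

Any difference reported by diff indicates a definition to investigate. The same comparison can be made with the showdomain and showuser output.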

Table 4: Import Configuration and Sync Federation actions

Are all systems in the federation new and unconfigured? | Do you have other configured systems? | Suggested procedure
Yes | Yes | Import Configuration
Yes | No  | Manual setup
No  | Yes | Sync Federation, Import Configuration
No  | No  | Sync Federation

Copying settings from existing systems
You can configure a federation by copying settings from other systems in your datacenter. This is the fastest way to configure your federation if you start from new systems that have not been configured or have been only partially configured. With SSMC 3.1 and later, you can import configuration settings from a legacy 3PAR StoreServ Storage system or an IBM XIV array that has been added as a migration source for unidirectional data migration. For more information, see Data migration from legacy 3PAR systems on page 252 or Data migration from IBM XIV on page 259.

NOTE: The source system can be any system being managed by the SSMC, with these limitations:
From legacy 3PAR StoreServ Storage systems, you can copy only hosts, host sets, domains, and domain sets.
From non-HPE storage systems, you can copy only hosts.

Procedure
1. Open the SSMC and, under Storage Systems, click Federations.
2. Select the federation you want to configure. Click Actions, and then select Import configuration, as shown in Figure 20: Importing a federation on page 52.

52 Figure 20: Importing a federation The Import Configuration dialog opens, as shown in Figure 21: Import Configuration dialog on page Federation preparation for data mobility

53 Figure 21: Import Configuration dialog The Import From pane lists the source system from which to copy settings. By default, no system is selected. The Configuration pane lists all the categories that are available for copying. By default, all categories are selected for copying. 3. Click the Select Source button to select a system from which to copy settings. The Select Source System dialog opens, as shown in Figure 22: Select Source System dialog on page 54. Federation preparation for data mobility 53

54 Figure 22: Select Source System dialog 4. Figure 22: Select Source System dialog on page 54 shows four systems, two of which are part of another federation. Select a system and click the Select button. The Select Source System dialog closes, and the Import Configuration dialog appears, as shown in Figure 23: Import Configuration dialog with a federation selected on page 55. NOTE: The Use Smart SAN host ports option appears when a supported switch is present. 54 Federation preparation for data mobility

Figure 23: Import Configuration dialog with a federation selected

Import configuration limitations
If LDAP, SNMP, or NTP has not been configured, the corresponding check boxes under Configuration will not be selectable.
After a source system has been selected, the category selection is reset to reflect the items available in the source. In this example, the selected source system has no configurations for LDAP, SNMP, or NTP.
You can expand each category to see a list of items, as shown in Figure 24: Expanded categories in the Import Configuration dialog on page 56.

56 Figure 24: Expanded categories in the Import Configuration dialog Click the check box beside each category you want to copy. NOTE: Host settings will be copied even if the hosts are not connected to the destination system. When importing from a legacy 3PAR StoreServ Storage system, only Host/Host Sets and Domains/ Domain Sets would be displayed. When importing from a non-3par storage system, Host is the only option from which to import, as shown in Figure 25: Import Configuration dialog with non-3par storage system selected on page 57. Click Add Host. 56 Federation preparation for data mobility

57 Figure 25: Import Configuration dialog with non-3par storage system selected The Select Host dialog is displayed, as shown in Figure 26: Select Host dialog for a non-3par storage system on page 58. Select the desired host and destination host persona from the dropdown option and click on ADD, or ADD+ for multiple selections. If the host is not available, click Refresh to refresh the list of host options. Federation preparation for data mobility 57

58 Figure 26: Select Host dialog for a non-3par storage system 5. After selecting the settings you want to copy, click Import. The SSMC validates the selection and checks for any conflicts. If conflicts are detected, a warning dialog appears, similar to the one shown in Figure 27: Import Configuration dialog with conflicts on page Federation preparation for data mobility

59 Figure 27: Import Configuration dialog with conflicts A conflict occurs when an item with the same name in one or more of the federated systems has different properties. In the example in Figure 27: Import Configuration dialog with conflicts on page 59, the user dom1editusr2 exists in one system with the edit role. 6. To import the configuration, click Yes, continue. A list of settings and resources that will be copied appears (Figure 28: Import Configuration dialog with settings and resources to be imported on page 60). Federation preparation for data mobility 59

Figure 28: Import Configuration dialog with settings and resources to be imported

7. Click Import.
The import configuration is executed as a background task. You can monitor the progress from the Activity page. The task takes a while to complete, depending on the number of items and systems involved. The Activity page shows details of the import process, including items copied and failures, if any. A sample Activity page is shown in Figure 29: Import configuration Activity page on page 61.

61 Figure 29: Import configuration Activity page If you imported configuration settings from an IBM XIV array, return to the procedure for Data migration from IBM XIV on page 259. Resolving Import Federation activity errors The most common reason for copying failure is that the destination or source system becomes unreachable due to network issues. In other cases, a configuration might have been changed manually, introducing new conflicts. Procedure When failures occur, you can retry the Import federation operation. Synchronizing Federations Hewlett Packard Enterprise recommends that all systems in a federation have similar configurations of domains, hosts, and users to ensure maximum flexibility with data mobility. If the systems have dissimilar configurations, then the Sync federation task can be used to align the configuration. Procedure 1. In the SSMC, open the dashboard and, under Storage Systems, click Federations. This opens the Federations page, which shows all federations in your environment. 2. Select the federation you want to synchronize, click Actions, and select Sync federation, as shown in Figure 30: Sync federation task on page 62. Resolving Import Federation activity errors 61

Figure 30: Sync federation task

3. The Sync dialog opens, as shown in Figure 31: Sync dialog with conflicts on page 62. The dialog also lists items that cannot be synchronized because of conflicts.

Figure 31: Sync dialog with conflicts

For example, if a user exists in more than one system with identical properties, then the systems are already synchronized and there is no need to copy. However, if items have different properties, then the SSMC does not make any changes and both systems will continue to have conflicting items.

63 4. Click Yes, Continue when you are ready to proceed. A list of all items that will be copied between systems appears, with source and destinations also listed for each item, as shown in Figure 32: Sync dialog with items to be copied on page 63. Figure 32: Sync dialog with items to be copied 5. The synchronization is executed as a background task. You can monitor the progress from the Activity page. The task takes a while to complete, depending on number of items and systems involved. The Activity page shows details of the synchronization process, including items copied and failures, if any. A sample activity page is shown Figure 33: Synchronization Activity page on page 64. Federation preparation for data mobility 63

Figure 33: Synchronization Activity page

Resolving Sync Federation conflicts
The SSMC will not make any changes to existing items in case of conflicts. Conflicts can only be resolved manually. To resolve conflicts, follow these steps:

Procedure
1. Run a Sync federation task to obtain the list of conflicted items.
2. Make a note of the name, type, and system name that contains the copy you want to keep.
3. Close the Sync federation dialog.
4. From the SSMC dashboard, open the appropriate resource page (Domains, Hosts, Users, and so on), according to the conflicted resource type.
5. In the Systems Filter, select all systems in the federation except the one noted in step 2.
6. In the Search pane, type in the name of the item. The main view will list all instances of the item you want to remove.
7. Select all the items.
8. From the Actions menu, select Delete.
9. Retry the Sync federation task.

65 The most common reason for copying failure is that the destination or source system becomes unreachable due to network issues. In other cases, a configuration might have been changed manually, introducing new conflicts. When failures occur, you can retry importing the federation. Federation preparation for data mobility 65

66 Performing Peer Motion with the SSMC Managing Peer Motion from the SSMC After federations have been configured and migration source have been added to the federations from the SSMC, you can perform peer motion to move virtual volumes from one 3PAR StoreServ Storage to another. Five different objects can be managed in peer motion from the SSMC: Virtual volumes Virtual volume sets Hosts Host sets Remote copy groups Those objects have corresponding menu items in the SSMC. Performing Peer Motion on virtual volumes Procedure 1. Click Virtual Volumes from the SSMC dashboard to open the Virtual Volumes page, where all the virtual volumes managed by SSMC on 3PAR StoreServ Storage systems are listed. Single-volume migration can be initiated from this page. Select single or multiple virtual volumes on the left, and then click Action to open the Start Peer Motion menu item, as shown in Figure 34: Virtual Volumes page on page Performing Peer Motion with the SSMC

NOTE: The Virtual Volumes page supports single or multiple object selection; however, the Virtual Volume Set, Host, and Host Set pages support only single selection.
If a volume is exported to a host that is configured with a non-ALUA-capable persona, peer motion through the Virtual Volumes page of the SSMC is not supported. Instead, use the Host page or the Host Set page to migrate all volumes exported to the hosts together.

Figure 34: Virtual Volumes page

2. On the Start Peer Motion page, parameters must be entered under these headings for peer motion to work properly:
General
Peer Motion settings
Virtual Volumes settings
Virtual Volumes Set settings

General
Under the General heading (illustrated in Figure 35: Start Peer Motion dialog on page 68), enter the following information:
Peer Motion activity name: An optional identifier used to track the progress of each peer motion workflow on the Peer Motions page.
Virtual volumes: The virtual volumes selected for peer motion.

Source host persona: The host persona of the hosts to which the virtual volumes are exported.
Source system: The 3PAR StoreServ Storage from which the selected virtual volumes will be moved.
Migration type: Online peer motion is used unless otherwise specified.

Figure 35: Start Peer Motion dialog

Peer Motion settings
To see all settings under the Peer Motion Settings heading (as illustrated in Figure 36: Start Peer Motion dialog Peer Motion Settings on page 69), select Advanced options. Enter the following information:
Destination system: The 3PAR StoreServ Storage to which the selected virtual volumes will be moved.
Destination CPG: The CPG name from the destination 3PAR StoreServ Storage on which volume user space will be allocated. This can be customized on a volume-by-volume basis in the Virtual Volumes pane.

69 Delete source virtual volumes Request virtual volumes to be deleted from the source 3PAR StoreServ Storage after successful peer-motion operation. Minimally Disruptive Migration (MDM) Request to perform data migration using MDM procedure (see Performing minimally disruptive migration (MDM) on page 131). Pause Peer Motion before starting data migration Request to pause Peer Motion before starting the actual data migration. Figure 36: Start Peer Motion dialog Peer Motion Settings Virtual Volumes settings To view and edit Virtual Volumes Settings, select Edit virtual volume settings. To change any of the parameters for a given virtual volume, select the virtual volume and click the pencil icon ( ) on the right (as illustrated in Figure 37: Start Peer Motion dialog Virtual Volume Settings on page 70). Virtual volume parameters include: Performing Peer Motion with the SSMC 69

Destination CPG: The CPG name from the destination 3PAR StoreServ Storage on which volume user space will be allocated.
Destination Provisioning Type: The provisioning type to which the selected virtual volumes will be moved.
Priority: Prioritize the migration of the volume.
Volume Set: Whether or not the volume belongs to a virtual volume set.
Destination System: The 3PAR StoreServ Storage to which the volume will be moved.

Figure 37: Start Peer Motion dialog Virtual Volume Settings

Virtual Volumes Set settings
To view and edit Virtual Volumes Set Settings, select Edit virtual volume set settings. To change any of the parameters for a virtual volume set, click the pencil icon ( ) to the right of the virtual volume set name (as illustrated in Figure 38: Start Peer Motion dialog Virtual Volume Set Settings on page 71).

Virtual volume set parameters include:
Priority: Prioritize the migration of the volume set.
Consistency Group: Request to migrate the volume set consistently (see Consistency Groups management).

Figure 38: Start Peer Motion dialog Virtual Volume Set Settings

3. To initiate the Peer Motion action, click Start. Once the Start Peer Motion action is triggered, the Virtual Volumes pane appears, where you can confirm or change the settings.

Performing Peer Motion on a virtual volume set
Performing peer motion from the Virtual Volume Set page is similar to performing peer motion from the Virtual Volumes page. The differences are that only single selection is allowed for virtual volume sets, and the Consistency Group option is supported for peer motion on a virtual volume set. With SSMC 3.1 (or later), you can also set consistency settings on an individual virtual volume set (see Figure 39: Consistency Group setting on page 72).

Figure 39: Consistency Group setting

NOTE: Performing peer motion on a virtual volume set to an existing virtual volume set that has the same volume set name on the destination 3PAR StoreServ Storage is not supported. The Consistency group option does not support the existence of virtual volumes in multiple virtual volume sets.

Procedure
Use the Delete source volume after successful Peer Motion option to request that a virtual volume be removed from the source 3PAR StoreServ Storage after a successful peer motion operation. When Delete source volume after successful Peer Motion is selected for migration of a virtual volume set, the entire source volume set is also removed after all of the members of the virtual volume set have been successfully migrated.

Performing Peer Motion on a host or host set
Procedure
Performing peer motion from the Host or Host Set page is similar to performing peer motion from the Virtual Volumes page. The differences are that only single selection is allowed for a host or host set, and

the volumes to be migrated are automatically determined through the exports to the selected host or host set. See Performing Peer Motion on a virtual volume set on page 71.

NOTE: Any VV set exported to the host will be migrated with peer motion in a consistency group. See Consistency Groups management on page 136.

Performing Peer Motion on a Remote Copy Group
Procedure
1. Open the SSMC dashboard.
2. Click Remote Copy Groups. The Remote Copy Groups screen opens, listing all Remote Copy Groups managed by the SSMC. See Figure 40: Performing peer motion on Remote Copy Groups on page 73.

NOTE: 3PAR Peer Motion with the SSMC supports only Remote Copy Groups that are in Sync mode. To migrate Remote Copy Groups in other modes, see Data migration with 3PAR Remote Copy group. All virtual volumes within a Remote Copy group are migrated as a Consistency Group.

Figure 40: Performing peer motion on Remote Copy Groups

3. Select a single Remote Copy group on the left panel, and then click Action to open the Start Peer Motion dialog.
4. On the Start Peer Motion dialog, select the system (Source system or Target system) from which the Remote Copy Group is to be migrated. See Figure 41: Selecting a system for Peer Motion on a Remote Copy Group on page 74.

Figure 41: Selecting a system for Peer Motion on a Remote Copy Group

5. Click Yes, continue. The Start Peer Motion dialog appears (Figure 42: Start Peer Motion dialog on page 74).

Figure 42: Start Peer Motion dialog

75 The Start Peer Motion dialog for a Remote Copy group is similar to the Start Peer Motion dialog for virtual volumes, except for the addition of Remote Copy Group Settings. Remote Copy Group Settings displays attributes of the remote copy group that will be created in the destination. Use this pane to select a CPG and copy CPG for volumes that are autocreated after migration. NOTE: Peer motion for a Remote Copy group should always be performed as a consistency group, so the Consistency Group is automatically enabled (under Virtual Volume Set Settings). 6. For Windows and ESX hosts, perform the tasks in this step to pause the migration workflow and unexport volumes being migrated. (This step is optional for other OSs.) a. Select Advanced options. Under "Peer Motion Settings," enable Pause Peer Motion before starting data migration to cause the migration to pause at the end of the admit phase. b. Click Start to create the migration definition. c. When the migration has paused, perform a rescan on the host to discover additional paths to the volume from the destination array. d. After the rescan has finished, verify that the peer volumes created on the destination for this migration are exported to the host. e. Verify (using statport -peer) that the Peer links carry application traffic. f. Perform the Unexport action (under "Virtual Volumes") to remove export of the volumes from the Remote Copy primary array to the host. g. To proceed with the import, select your migration under "Peer Motions" and perform the Resume action. Performing Peer Motion with the SSMC 75
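Step e above can be carried out from the 3PAR CLI on the destination array. The example below is only a sketch; the -iter option simply limits the number of samples, and the exact output columns depend on your 3PAR OS version.

cli% statport -peer -iter 3

Nonzero I/O-per-second and throughput values on the peer ports indicate that application traffic is flowing across the peer links, so it is safe to unexport the volumes from the source array in step f.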

Monitoring Peer Motion workflow in Federations
Procedure
1. After peer motion has started for the selected objects, you can click Peer Motion details in the yellow banner to transfer control to the Federation view. Figure 43: Monitoring Peer Motion for a virtual volume set on page 76 shows the yellow banner after peer motion for a virtual volume set has been successfully initiated.

Figure 43: Monitoring Peer Motion for a virtual volume set

Alternatively, in the Federations page, select the federation on the left, click View in the pane on the right, and select Peer Motions (Figure 44: Federations page on page 77).

77 Figure 44: Federations page The SSMC monitors each peer motion workflow during the peer motion cycle. It displays the overall progress at the workflow level, and each workflow record can be expanded to display additional details at the virtual volume set level or virtual volume level. The workflow record can be deleted once workflow has completed. See Figure 45: Peer Motion Detail pane on page 78. Performing Peer Motion with the SSMC 77

Figure 45: Peer Motion Detail pane

Once a Peer Motion has started, it progresses through several phases. The Peer Motion cycle consists of the following phases:
Peer Motion preparation
Data import
Post Peer Motion cleanup
During the peer motion preparation phase, the 3PAR StoreServ Storage systems are prepared and validated for peer motion based on the options selected. This includes creating a virtual volume set or virtual volume and exporting them to the host server from the destination 3PAR StoreServ Storage. Some host operating systems (OSs) require user intervention for the newly arrived device paths to be discovered and the devices readied for I/O when virtual volumes are exported from the destination 3PAR StoreServ Storage. The Peer Motion workflow is paused until the requirements are fulfilled, and the Peer Motions page will indicate that a host rescan is needed. See Figure 46: Pausing the Peer Motion workflow Host rescan needed on page 79.

79 Figure 46: Pausing the Peer Motion workflow Host rescan needed NOTE: In some cases, the peer motion workflow can be stopped or aborted when the process is still in the preparation phase, but manual cleanup might be required on the destination 3PAR StoreServ Storage. 2. Click Resume when the condition that required user intervention has been corrected. The Resume Peer Motion confirmation dialog will appear (see Figure 47: Resume Peer Motion dialog on page 80). Performing Peer Motion with the SSMC 79

80 Figure 47: Resume Peer Motion dialog The second phase of the Peer Motion workflow, importing data, begins after the workflow is resumed. See Figure 48: Importing data Move data for Peer Motion on page 80. The transition to the importing data phase is automatic if no conditions are detected that require user intervention after the preparation phase. Figure 48: Importing data Move data for Peer Motion When peer motion is complete, the workflow record can be deleted if it is no longer needed. 80 Performing Peer Motion with the SSMC

NOTE: The Delete action (see Figure 46: Pausing the Peer Motion workflow Host rescan needed on page 79) only removes the Peer Motion record from the SSMC. It does not delete the Peer Motion operation or undo any of the operations that were carried out as part of the migration process.

3. The final phase is the post Peer Motion cleanup phase, in which any remnants from the migration process are cleaned up if needed. On the source array, the virtual volumes and volume sets must be manually removed if the Delete source virtual volumes option was not selected. If host-level migration was performed, or if all volumes for a given host or cluster were migrated and connectivity to the original source array is no longer desired, zoning changes may be needed to remove connectivity from the hosts to the source array. In addition, some operating systems (OSs) maintain awareness of the paths to the source array even after connectivity to that array has been removed following a successful migration. It may be necessary to manually remove those devices, so that the multipathing output properly reflects the correct number and state of paths.

NOTE: After a successful migration and source array cleanup, no STANDBY paths should be visible to the hosts.

If hosts were booting from the array and the boot volume was migrated, you must modify the server's boot setting to indicate that the volume to boot from is now on the destination storage system. Failure to do so may result in the host failing to boot. This configuration change does not need to be performed immediately after a successful migration, however, and can be postponed until a more convenient time.

82 Postmigration tasks Performing fabric topology postmigration tasks Performing volume postmigration tasks Performing Remote Copy postmigration tasks Performing fabric topology postmigration tasks No clean up of the zoning between the source and destination storage systems is required unless one of the systems is removed from the federation or removed as a migration source. Performing volume postmigration tasks Procedure 1. After you verify that everything has been correctly migrated to the destination storage system, you can delete the migrated volumes if they were not deleted by the SSMC already. The WWN of a migrated volume is the one it had on the source system. To change the WWN into a local-array one, use the 3PAR CLI command setvv wwn. Execution of this command requires the volume to be unexported. While it is possible to keep the WWN of the source volume on the destination system, it is recommended to make this change at the next available opportunity. The immediate change is mandatory when using the volume with the HPE 3PAR Recovery Manager software and the Microsoft VSS framework. 2. If the Path Verify Enabled MPIO setting was enabled for the migration, disable it again. However, if the source and destination HPE 3PAR StoreServ systems are in a Peer Persistence relationship, do not disable the setting. 3. If the volume or volume set that was migrated was subject to an HPE 3PAR Priority Optimization rule on the source system, you must recreate this rule manually on the destination HPE 3PAR StoreServ system. Performing Remote Copy postmigration tasks If you are using HPE 3PAR Remote Copy software, the next step is to perform the remote copy postmigration tasks: Procedure 1. If necessary, recreate the remote copy groups on the destination storage system to match the remote copy groups on the new source system. 2. Perform the remote copy synchronization task. 3. Remove the remote copy groups from the old source system. 4. Configure and start the remote copy groups on the destination storage system from a specially created snapshot that represents the end step of the data migration. 82 Postmigration tasks

For more information, see the HPE 3PAR Remote Copy Software User's Guide and the HPE 3PAR Command Line Interface Administrator's Manual, available at the Hewlett Packard Enterprise Information Library website:
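As an illustration of the WWN change described in Performing volume postmigration tasks, a minimal CLI sequence might look like the following. The volume name, LUN ID, and host name are placeholders, the volume must be unexported while its WWN is changed, and the exact setvv syntax should be verified against the CLI help for your 3PAR OS version.

cli% showvlun -v migrated_vol
cli% removevlun migrated_vol 10 esx-host-01
cli% setvv -wwn auto migrated_vol
cli% createvlun migrated_vol 10 esx-host-01

The first command records the existing export so that the same LUN ID and host can be used when the volume is re-exported.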

Host environments for multi-array bidirectional Peer Motion
Some host environments may require special multipath or host operating system (OS) settings for greater compatibility with the migration process. This section describes those requirements and conveys some restrictions that are imposed on host OS environments. In all cases, a persona that is supported on both the source 3PAR OS version and the destination 3PAR OS version should be used; both the initial environment and the intended postmigration environment should be supported by and compliant with SPOCK requirements. For details about the supported migration paths and specific OS versions or cluster solutions supported for migration, see SPOCK:
HPE 3PAR Peer Motion supports migrations for a host with an FCoE host bus adapter connected to an FCoE switch that is itself connected over FC to the HPE 3PAR StoreServ. See SPOCK for supported FCoE host bus adapters per host operating system.

Microsoft Windows
Host operating system
Windows Server 2012 and Windows Server 2008 hosts can be migrated using the online migration procedure (see Performing Peer Motion with the SSMC on page 66). However, the Path Verify Enabled MPIO setting must be in effect on all the hosts.

Enabling the Path Verify setting
When migrating volumes exported to Windows Server 2012 or Windows Server 2008 hosts, ensure that the Path Verify Enabled setting is in effect on all the hosts.

CAUTION: On Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008 (non-R2), ensure that Microsoft hotfix KB is installed. If it is not, do not use the Microsoft CLI command mpclaim or attempt to display the MPIO information via the Disk Management GUI during the peer motion migration admitvv stage, since these actions would result in the host becoming unresponsive.

Procedure
On Windows Server 2012, the setting can be found at Device Manager > Disk Drives. Right-click any of the HPE 3PAR disks, then select MPIO > MS DSM Details. Select the Path Verify Enabled check box.
On Windows Server 2008, the setting can be found at Server Manager > Disk Management. Right-click any of the HPE 3PAR disks, then select MPIO > MS DSM Details. Select the Path Verify Enabled check box.
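On hosts where the MPIO PowerShell module is available (Windows Server 2012 and later), the same setting can be enabled from an elevated PowerShell prompt instead of the Device Manager GUI. This is an assumption about your environment; on Windows Server 2008, use the GUI steps above. Apply the change before starting the migration, not during the admitvv stage described in the CAUTION.

PS C:\> Get-MPIOSetting
PS C:\> Set-MPIOSetting -NewPathVerificationState Enabled

Get-MPIOSetting displays the current MPIO parameters, so you can confirm that PathVerificationState reports Enabled after the change.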

Microsoft failover clusters
Windows Server 2008 or Windows Server 2012 cluster environments (both Hyper-V and non-Hyper-V) can be migrated online as long as the source array is running 3PAR OS MU1 P08 or later and the number of Microsoft Failover Cluster (MSFC) nodes in the cluster is four or fewer. If these requirements are not met, then the MDM procedure must be used (see Managing Peer Motion from the SSMC on page 66). Throughout the migration process, the nodes in the MSFC should not be rebooted, to minimize persistent reservation thrashing. The migration should be planned during a period when host maintenance is not needed. After the migration completes and the source paths are removed, maintenance operations on the MSFC nodes can be carried out.

Linux
Linux migrations can be carried out using the online migration procedure. However, if the single-volume migration feature is desired, an ALUA-enabled host persona must be used, and the /etc/multipath.conf settings must conform to the required ALUA settings cited in the appropriate Linux implementation guide:
HPE 3PAR Red Hat and Oracle Linux Implementation Guide
HPE 3PAR SUSE Linux Enterprise Implementation Guide
These documents are available at the Hewlett Packard Enterprise Information Library website:

VMware ESXi
Similarly to Linux, migration for VMware ESXi host environments should be performed using the online migration procedure. Persona 11 and the associated VMW_SATP_ALUA SATP rule are required for single-volume migration. See the HPE 3PAR VMware ESX/ESXi Implementation Guide for details about setting up ALUA with VMware ESXi hosts. This document is available at the Hewlett Packard Enterprise Information Library website:

IBM AIX
The IBM AIX ODM/operating system does not support ALUA. Because ALUA support is a requirement for single-volume migration, single-volume migration is not supported on AIX. Therefore, when planning the migration of AIX hosts, take into consideration that all volumes exported to the AIX host must be migrated together. In addition, volumes that have been formatted with the JFS file system cannot be migrated online. Hosts that make use of the JFS file system must first quiesce I/O, and the file system must be unmounted, before proceeding with the migration. JFS2 and other file systems supported with AIX are not impacted.

HP-UX
Rescanning for new LUN paths
Procedure
Before continuing with migration, it is recommended that you rescan for new LUN paths and make a note of the new paths to the volumes:

# ioscan -fnc disk

HP-UX 11i V3
HP-UX 11i v3 standalone and Serviceguard clustered hosts can be migrated by using the online migration procedure. No additional configuration is required on the hosts.

HP-UX 11i V2
NOTE: The procedures described in this section are only for disks under HP-UX LVM volume management.
The single-volume migration feature is not supported with HP-UX 11i v2 hosts, so HP-UX 11i v2 hosts must be migrated using host-level migration.
HP-UX 11i v2 standalone hosts can be migrated by using the online migration procedure. However, after zoning in the destination storage system to the standalone hosts, the new paths/physical volumes (PVs) must be added to the volume group/PVLinks configuration by using the vgextend command before removing the paths to the source storage system. For example:
# vgextend my_standalone_vg new_pv_path1 new_pv_path2
To confirm the PVLinks configuration, execute the vgdisplay command.
A Serviceguard cluster running on HP-UX 11i v2 can also be migrated by using the online migration procedure, but if shared volume groups that use SLVM are used, then additional configuration steps are required, because a shared volume group does not automatically recognize new paths to the volume exported through the destination. Use the following single-node online reconfiguration operation (see Reconfiguring a single node online on page 86) to change the configuration of a shared volume group while keeping it active on only a single node. During the volume group reconfiguration, applications on at least one node will remain available.

Reconfiguring a single node online
Procedure
1. Identify the shared volume group on which a configuration change is required. Name it vg_shared.
2. Identify one node of the cluster that is running an application using the shared volume group. Call it node1. The applications on this node that are using the volume group vg_shared will remain unaffected during the procedure.
3. Stop the applications using the shared volume group on all the other cluster nodes, thus scaling down the cluster application to the single cluster node, node1.
4. Deactivate the shared volume group on all other nodes of the cluster, except node1, by issuing the vgchange command with the -a n option:
# vgchange -a n vg_shared
5. Ensure that the volume group vg_shared is now active only on the single cluster node node1 by using the vgdisplay command on all cluster nodes. The status should show that the volume group is available on a single node only.
6. On node1, change the activation mode to exclusive by issuing the following command:
# vgchange -a e -x vg_shared

7. On node1, make a note of the new pv_paths to the PVs already in the volume group (from the output of the LUN rescan; see Rescanning for new LUN paths on page 85). Add all the new paths to the volume group by using the following command:
# vgextend vg_shared pv_path
8. Export the changes to the other cluster nodes:
a. From node1, export the mapfile for vg_shared:
# vgexport -s -p -m /tmp/vg_shared.map vg_shared
b. Copy this mapfile, /tmp/vg_shared.map, to all the other nodes of the cluster.
c. On the other cluster nodes, export vg_shared and re-import it using the new map file:
# ls -l /dev/vg_shared/group
crw-rw-rw- 1 root sys 64 0x Nov 16 15:27 /dev/vg_shared/group
d. Make a note of the minor number (the 0x value in the example above); it should match the minor number shown in the mknod command in the following example:
# vgexport vg_shared
# mkdir /dev/vg_shared
# mknod /dev/vg_shared/group c 64 0x
# vgimport -m /tmp/vg_shared.map -s vg_shared
9. Change the activation mode back to shared on all the cluster nodes:
a. Change the mode back to shared on node1 by issuing the following command:
# vgchange -a s -x vg_shared
b. Change the mode to shared on the other cluster nodes by issuing the following command:
# vgchange -a s vg_shared
Applications using the shared volume group can now be restarted on the other hosts.
For more information about SLVM, see SLVM Online Volume Reconfiguration, available at the following website:
If you are migrating cluster lock disks, you can update the cluster lock disk configuration online by following the instructions in Updating a cluster lock disk configuration online.

Updating a cluster lock disk configuration online
Procedure
1. Make a note of the new pv_paths to the lock disks from the output of the LUN rescan (see Rescanning for new LUN paths on page 85).
2. Execute the following command:
# vgcfgrestore -n /dev/vg_lock pv_path
3. For each node in the cluster configuration file, modify the values of FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV.

4. To check the configuration, run the cmcheckconf command.
5. To apply the configuration, run the cmapplyconf command.
For more information on updating the cluster lock disk configuration, see Managing Serviceguard A.11.20, available in the User Guide section at:

Solaris
There are no special considerations for migrating Solaris hosts. However, only standalone hosts are supported for migration. Solaris clusters are not supported.

Symantec/Veritas Storage Foundation requirements
As of 3PAR OS MU2, some configurations that use Symantec Storage Foundation or Veritas InfoScale are supported for migration through HPE 3PAR Peer Motion. For information about supported configurations or migration paths, see the SPOCK website:

NOTE: Veritas Storage Foundation configurations not specifically listed on SPOCK, including configurations composed of ESX Virtual Machines, are supported only through the minimally disruptive migration (MDM) or offline migration procedures.

IMPORTANT: For online data migration with Symantec Storage Foundation or Veritas InfoScale, virtual peer ports must be created while setting up the peer connections between the source and destination storage systems. Twice as many virtual peer ports as there are nodes in the cluster must be created on each peer port. For example, if you are migrating a two-node cluster, four NPIV ports must be created on each peer port. If you are using the SSMC, click Action on the Ports screen to edit port settings.

NOTE: The SSMC can be used to create the NPIV ports, but only the 3PAR Peer Motion Utility, beginning with V1.5, can be used to carry out the migration. The SSMC is currently not a supported migration utility for Symantec Storage Foundation or Veritas InfoScale.

If you are using the HPE 3PAR Management Console, see Set Up Connections in the HPE 3PAR Peer Motion Data Migration Guide. This guide is available at the Hewlett Packard Enterprise Information Library website:

A maximum cluster size of four nodes is supported. In addition, the single-volume migration feature is not supported in Symantec Storage Foundation or Veritas InfoScale environments. This means that all virtual volumes exported to the hosts or cluster being migrated must be selected for migration, and that the paths to the source array must be removed before starting the data migration.
After the migration is complete, Hewlett Packard Enterprise recommends that the Storage Foundation or InfoScale UDID on the virtual volumes be updated to reflect the new array serial number of the destination storage system during the next available maintenance window. Enter the following Veritas Volume Manager CLI command to update the UDID written in the private region of the virtual volumes:
vxdisk updateudid <device>

NOTE: Updating the UDID is an offline process and requires that the diskgroups be deported before executing this command.
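A sketch of the full UDID update sequence, reflecting the deport requirement in the NOTE above, might look like the following. The diskgroup name is a placeholder, and <device> is the Veritas device name of each migrated virtual volume; plan this for a maintenance window, because the diskgroup is unavailable while deported.

# vxdg deport appdg
# vxdisk updateudid <device>
# vxdg import appdg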

HPE 3PAR Peer Motion with unidirectional data mobility between HPE 3PAR StoreServ Storage systems

91 HPE 3PAR to 3PAR data mobility with HPE 3PAR Peer Motion 3PAR Peer Motion takes place in four stages: Interlinking source and destination storage systems Admitting volumes Importing volumes Completing postmigration Figure 49: Host connected to the source array on page 91 shows a host that owns volumes on the source 3PAR StoreServ Storage system. The host is connected to the source storage system through FC and a SAN. The host has two HBAs connected to two adjacent controller nodes on the source storage system. The (potentially new) destination storage system is also online and visible over the network to an instance of the SSMC. NOTE: Single-fabric/single-HBA configurations are also supported, but are not recommended due to the single point of failure that is inherent in those configurations. Figure 49: Host connected to the source array To interlink the storage systems (see Figure 50: Interlinking source and destination storage systems on page 92), two FC ports are configured as peer ports on the destination storage system and hooked up through the SAN to host ports on the source storage system. The peer ports must be created on horizontally adjacent nodes on the destination storage system. On the source storage system, the host ports do not need to be on horizontally adjacent nodes, but should be on different nodes. You can create peer ports by using the SSMC, the CLI controlport command, or the 3PAR Management Console. HPE 3PAR to 3PAR data mobility with HPE 3PAR Peer Motion 91
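If you use the CLI route mentioned above, configuring a host port as a peer port on the destination system typically involves taking the port offline, changing its connection type, and resetting it. The port identifier below is an example, and the exact controlport arguments should be confirmed in the CLI documentation for your 3PAR OS version.

cli% controlport offline 1:2:1
cli% controlport config peer -ct point 1:2:1
cli% controlport rst 1:2:1

Repeat the sequence for the second peer port on the horizontally adjacent node (for example, 0:2:1) so that both peer links are available.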

The ports for the array interlinks (blue lines) on the source storage system are standard host ports. They can even be host ports that are already in use by a host. Direct FC connections between the source and destination storage systems are not supported.
Figure 50: Interlinking source and destination storage systems
In the admit stage (see Figure 51: Admit stage on page 93), the volumes to be migrated are admitted to the destination storage system. These admitted volumes appear as volumes in the SSMC for the destination storage system with 'peer' as their provisioning type. In this stage, the volumes are defined and prepared on the destination storage system for export to the host. No local storage is allocated for these volumes on the destination storage system at this point in time. The multipathing software on the host sees four paths to the same volume. Appropriate SCSI rescan/multipathing configuration must be performed on the host to pick up the additional paths from the destination storage system.
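For example, on a Linux host that uses sg3_utils and device-mapper-multipath (other operating systems have equivalent steps in their implementation guides), the new paths can typically be discovered and verified as follows; this is a sketch only, not a replacement for the host-specific documentation:

# rescan-scsi-bus.sh
# multipath -r
# multipath -ll

The multipath -ll output should now show the original paths to the source storage system plus the additional paths presented by the destination storage system.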

Figure 51: Admit stage
In the import stage (see Figure 52: Import stage on page 93), migration can start. During this stage, the host accesses the volumes on the source storage system through the destination storage system. The host-issued read/write I/O may see increased latency. For host-level migration, the hosts are prevented from directly accessing the volumes on the source storage system by manually removing the zones for those paths. For single-volume migration, the hosts are prevented from directly accessing the volumes on the source storage system by an automatic reconfiguration by the 3PAR Peer Motion software of those paths from the active state to the standby state.
Figure 52: Import stage

When all LUNs on the source storage system are migrated (see Figure 53: Migration complete on page 94), the source storage system can be decommissioned or re-initialized. After migration has completed, all the data that was in the source volumes has been copied to new destination volumes. At this point, all host-issued read/write I/O is serviced directly on the new destination volumes.
Figure 53: Migration complete
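Before decommissioning the source storage system, it can be useful to confirm on the destination storage system that no admitted volumes are still in the peer provisioning state. A minimal check with the 3PAR CLI might look like the following; verify the showvv filtering options for your 3PAR OS version:

cli% showvv -p -prov peer

If no volumes are listed, every admitted volume has been imported and converted to a locally provisioned volume on the destination storage system.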

Data migration requirements
IMPORTANT: Before beginning migration, see the SPOCK website to verify that the 3PAR OS version on the hosts to be migrated is supported. SPOCK website:
For information about using the 3PAR Peer Motion Utility 1.3 or earlier to set up unidirectional peer motion between 3PAR StoreServ Storage systems running 3PAR OS or earlier, see the HPE 3PAR Peer Motion Data Migration Guide. This document is available at the Hewlett Packard Enterprise Information Library:

3PAR Peer Motion General Requirements and Restrictions
Requirements
• A management host running the SSMC, the 3PAR Peer Motion Utility, or the 3PAR Management Console must be available. The management host must have network access to the source and destination storage systems.
• The source and destination storage systems must not already be in a peer motion configuration.
• See the SPOCK website to verify that the host operating system (OS) versions to be migrated are supported:
• The destination storage system in a peer motion operation should be running 3PAR OS or later. The source storage system 3PAR OS version must be the same or earlier than that of the destination storage system. For example, migration of a 3PAR OS source storage system to a 3PAR OS destination storage system is not supported.
• The SSMC, the 3PAR Peer Motion Utility, and the 3PAR Management Console do not take into account the MU level of the source or destination storage system. For example, the 3PAR Peer Motion Utility can be used when the source storage system is 3PAR OS MU2 and the destination storage system is 3PAR OS MU1.
• The minimum 3PAR OS level for the destination storage system when you are using the 3PAR Peer Motion Utility is or later.
• The 3PAR Peer Motion Utility and the 3PAR Online Import Utility do not support migration of remote-copy groups from the source storage system.
• Only point connections (also known as fabric connections) between the destination and the source storage systems are supported. Direct FC connections are not supported. The speeds of the FC ports do not need to be the same.
• iSCSI is not supported as either the array-to-array or host-to-array communication protocol.
• Enabling 3PAR Peer Motion on the destination storage system requires an HPE 3PAR Peer Motion Software license. The source storage system does not require any special license.
• If the imported volume is intended to be a thinly provisioned volume on the destination storage system, the HPE 3PAR Thin Provisioning Software license is required.

• If the migrated hosts or volumes are intended to be part of a domain on the destination storage system, the HPE 3PAR Virtual Domains Software license is required.
• If the host persona being used on the source 3PAR StoreServ Storage system for the hosts to be migrated is not supported on the 3PAR OS running on the destination storage system, the persona being used on the source storage system must be changed to a common persona supported by the 3PAR OS versions on both the source and destination storage systems, prior to initiating the import process.
• Thin reclamation may not work while the migration is in progress. Thin reclamation functionality is reactivated according to the provisioning type chosen after the migration completes.
• If the imported volume is intended to be a thinly deduped volume or a compressed volume, use an SSD CPG on the destination storage system.
• For online Windows cluster migration:
  - The destination storage system must be a 3PAR StoreServ Storage or 3PAR StoreServ 7000 Storage system running 3PAR OS or later, or a 3PAR StoreServ Storage or 3PAR StoreServ 8000 Storage system running 3PAR OS or later.
  - Cluster sizes of up to four hosts are supported and the peer ports on the destination storage system must be configured with twice as many NPIV ports as cluster hosts (for example, a two-node cluster requires that each peer port be configured with four NPIV ports).
    NOTE: Federated systems running 3PAR OS will automatically be configured by the SSMC to have eight NPIV ports per peer port.
  - The FC fabric between the source and destination storage systems is capable of and enabled for NPIV.
  - The MDM procedure must be followed in the following cases:
    - The destination storage system is running a version of 3PAR OS earlier than
    - The destination storage system is an HPE 3PAR T-Class or HPE 3PAR F-Class system
    - The cluster size is larger than four hosts.
    - The FC fabric is not capable of NPIV.
• For single-volume migration:
  - Both the source and destination storage systems must be running 3PAR OS or later.
  - The hosts to which the volumes are exported must be defined using an Asymmetric Logical Unit Access (ALUA) persona on both source and destination storage systems. The hosts must be configured to use ALUA. See host-specific documentation for enabling ALUA.

Restrictions
• For offline data migration, the volumes that are to be migrated from the source storage system must not be exported to any host.
• For information about migrating volumes that are part of a remote copy group, see Data migration with 3PAR Remote Copy group on page 380.
• Wild cards are not supported.

• The 3PAR Peer Motion Utility and 3PAR Online Import Utility do not support the migration of snapshots, clones, or remote-copy groups.
• 3PAR Management Console v4.7 does not support migration from a source 3PAR StoreServ Storage system running 3PAR OS to a 3PAR StoreServ Storage system running 3PAR OS

Requirements and restrictions in Federation environments
When using the 3PAR Management Console in a federation environment:
• Multiple sources can be added to a single 3PAR StoreServ Storage system that is running 3PAR OS or later. Because multiple storage systems can be configured to a single destination storage system, sharing of peer ports is allowed.
• Peer ports must be from partner nodes. However, there is no requirement for them to be partner ports. (Partner ports have identical slot and port numbers.)
• For a Create PM Configuration operation: The Available Systems list in the Peer Motion Manager will list all the connected 3PAR StoreServ Storage systems. 3PAR StoreServ Storage systems running any 3PAR OS version earlier than can only be selected as Source. 3PAR StoreServ Storage systems running 3PAR OS or later can be selected as Destination.
• With a fresh peer motion configuration with a destination 3PAR StoreServ Storage system running 3PAR OS or later, you can create or configure peer ports using the Create Peer Motion configuration link from common actions. For subsequent peer motion configurations with the same 3PAR StoreServ Storage system, you must create zoning between the new source and the destination arrays. You cannot reconfigure peer ports.
• A peer port on the destination storage system may be shared across multiple sources. Accordingly, peer ports cannot be reconfigured using the 3PAR Management Console Remove PM Configuration operation. In this case, a message will appear prompting you to remove the configuration by unzoning the source from the destination array.
• A federation setup can be recognized if the destination 3PAR StoreServ Storage system is running 3PAR OS or later with peer ports that have multiple paths from multiple sources.

Network and fabric zoning requirements for 3PAR Peer Motion
• The host must remain connected to the destination storage system during the data migration process. This also applies for the 3PAR Peer Motion Utility.
• You must have two unique FC paths between the source and destination storage systems. In other words, there must be two separate nodes on the source connected to two separate nodes on the destination. The nodes used for intersystem communication on both the source and destination storage systems must be a partner node pair (for example, nodes 0 and 1, or nodes 2 and 3, or nodes 4 and 5, or nodes 6 and 7).
• At least one FC switch is required. The use of two switches adds redundancy and is strongly recommended. Only fabric connections on peer ports are supported.
• The ports should be zoned 1:1 across the fabric. If nodes 0 and 1 are used on the source storage system and nodes 0 and 1 are used on the destination storage system, the ports on source-0 and destination-0 should be zoned together. The ports source-1 and destination-1 should be zoned together in a separate zone. (A sample switch CLI sequence appears at the end of this section.)
• The peer ports on the destination storage system have different node/port WWNs than host ports, so zoning on the FC fabric between the source and destination storage systems should be set up only after the ports have been correctly configured as peer ports.
• For all NPIV ports created, the NPIV port WWNs must be included in the zones created between the source and the destination storage systems. The zones must also include the physical peer port WWNs. Unidirectional peer motion can be accomplished with four tools (the SSMC, the 3PAR Peer Motion Utility, the 3PAR Online Import Utility, or the 3PAR Management Console) and NPIV port zoning differs, depending on the tool. If you use the SSMC for unidirectional peer motion (that is, migration from a migration source to a federated system), then NPIV ports must be in the zone. On the other hand, if you use the 3PAR Online Import Utility or the 3PAR Peer Motion Utility to migrate into a federated system, NPIV port zoning is required only if Windows or Symantec Storage Foundation clusters are being migrated.
• Do not disconnect the source and destination storage systems from each other until the data migration is complete.
• For more information, find the appropriate implementation guide for the operating system (OS) of your host at the following Hewlett Packard Enterprise Information Library website:
• 3PAR Peer Motion and federation: When you are using peer motion to migrate data from a storage system running a 3PAR OS version earlier than to a storage system that is part of a federation, the zoning guidelines for both peer motion and federation must be adhered to. The zoning between the legacy storage system and the federated destination storage system must be one-to-one zoning, where one peer port on the destination storage system is zoned to only one host port on the source storage system. When you are using peer motion to migrate to a storage system running 3PAR OS 3.2.2, the peer zones must include the physical WWN of the peer port as well as the eight virtual ports associated with the peer port. Zoning for arrays within the federation must follow the guidelines in Zoning topologies.
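The following is a minimal sketch of the 1:1 zoning described above, using the Brocade FOS CLI as an example; the zone names Z1 and Z2, the configuration name pm_cfg, and the WWPN placeholders are illustrative, and other switch vendors provide equivalent commands. It assumes the zoning configuration pm_cfg already exists (use cfgcreate instead of cfgadd for a new configuration):

zonecreate "Z1", "<source node 0 host port WWPN>; <destination node 0 peer port WWPN>"
zonecreate "Z2", "<source node 1 host port WWPN>; <destination node 1 peer port WWPN>"
cfgadd "pm_cfg", "Z1; Z2"
cfgsave
cfgenable "pm_cfg"

If NPIV virtual peer ports are required for the migration, add their WWPNs to the same zones as the physical peer port WWPNs.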

Each migration source requires its own zones. At a minimum, a migration source requires two zones. However, the actual number depends on the number of federated systems to which the migration source is connected. The SSMC guides you through the zone creation process. The SSMC lists the number of zones and port WWNs that need to be included in each zone.

Zoning for 3PAR Peer Motion
Procedure
1. Create the first zone, Z1, with host port 1 zoned to peer port 1.
2. Create the second zone, Z2, with host port 2 zoned to peer port 2.
3. There is already a third zone, Z3, between host port 1, host port 2 of the source array and host HBA.
4. Create the fourth zone, Z4, with host port 1, host port 2 of the destination array and host HBA.
5. Save and enable the configuration with Z1 and Z2.
See Figure 54: Zoning for 3PAR Peer Motion on page 99.
Figure 54: Zoning for 3PAR Peer Motion

Requirements for multiple source arrays
N:1 configurations, with migration from multiple source arrays to a single destination 3PAR StoreServ Storage system, are supported (Figure 55: Unidirectional migration from multiple source arrays to a single 3PAR StoreServ Storage system on page 100). Use the addsource CLI command to add multiple arrays.

Figure 55: Unidirectional migration from multiple source arrays to a single 3PAR StoreServ Storage system
Only unidirectional migration is supported. The source can be any of the supported legacy 3PAR StoreServ Storage source arrays. Zoning must be set up between each of the source arrays and the single destination 3PAR StoreServ Storage system. Single-initiator single-target zones are required. LUN conflicts are resolved at the destination 3PAR StoreServ Storage system with the autoresolve option of the createmigration command, which is enabled by default (see Using the autoresolve option to resolve LUN conflicts on page 141).
When a single server or cluster that accesses LUNs from multiple source arrays is being migrated:
• The host name on each source array must match.
• The host entry on each source array must contain the same HBA WWPNs.
The createmigration operation tests for these conditions to ensure that the LUNs from each source array are placed under the same 3PAR StoreServ Storage host. Any mismatch yields an error.

ALUA and Path STATE change detection requirements
If the source and the destination storage systems are both running at least 3PAR OS, it is unnecessary to unzone the host from the source storage system at the import stage, before the migration begins. The storage systems communicate with each other and the paths from the source to the host for the volumes to be migrated are automatically reported as standby paths once data migration begins. As the host does not lose access to the source storage system during the data migration process, it is possible to move only a subset of the volumes the host sees from the source storage system to the destination storage system. This requires that the host is configured to use an ALUA-capable host persona on both the source and destination storage systems.
If the source storage system is running a 3PAR OS release earlier than 3.1.3, or if the host is not capable of using the ALUA information, or is configured on a non-ALUA persona, then it will still be necessary to unzone the host from the source before beginning the data migration.

This will require migrating all the volumes that the host sees from the source to the destination, at the same time.

Premigration constraints
For updated information about 3PAR OS versions and source and destination 3PAR StoreServ Storage systems supported by HPE 3PAR Peer Motion, see the SPOCK website:

102 Adding a migration source to a Federation With SSMC 3.1 and later, you can add a supported IBM XIV or legacy 3PAR StoreServ Storage system as a migration source for unidirectional data migration. For more information, see Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC on page 251. NOTE: For information about setting up and configuring a federation, see Setting up and configuring a Storage Federation on page 37. To add a migration source to a federation using the SSMC, follow these steps Prerequisites A migration source requires two FC ports, to be configured as host ports: These ports will be used for intersystem communication and data transfer. These ports can be shared for host I/O as well. Existing ports that are being used for host I/O can be used for this purpose. Two ports must be from two partner nodes. They are not required to be partner ports. The SSMC will reconfigure these ports into host mode if they are not already in host mode. The following storage systems can be migration sources: 3PAR StoreServ Storage migration source must be running one of these 3PAR OS versions: 3PAR OS or later 3PAR OS or (3PAR StoreServ 7000 Storage or 3PAR StoreServ Storage) only With SSMC 3.1 and later, embedded Online Import Utility functionality adds support for unidirectional data migration from: Legacy 3PAR StoreServ T-Class/F-Class systems IBM XIV NOTE: Supported IBM XIV Storage systems and HPE 3PAR F-Class and T-Class systems can be selected as a source for data migration, but cannot be a federated system managed by the SSMC. For more information about supported storage systems, see the SPOCK website: Procedure 1. Open the Federations page and select the federation you want to work with (see Figure 56: Adding a migration source on page Adding a migration source to a Federation

Figure 56: Adding a migration source
2. Click the Actions menu and select Add migration source. The Add Migration Source dialog opens (see Figure 57: Add Migration Source dialog on page 104). The Source System section refers to the migration source to be added. Destination Systems shows federated systems that are already in the federation.

Figure 57: Add Migration Source dialog
3. Click the Select source button. The Select Source System dialog appears, listing all systems with 3PAR OS versions compatible with migration sources (see Figure 58: Selecting Source System dialog on page 105). In this example, three systems are listed. Of these, two systems are unavailable because they are already in a federation and cannot be added as a migration source.

Figure 58: Selecting Source System dialog
To migrate a supported legacy 3PAR or non-3PAR storage system, click Add systems and select the appropriate option.

Figure 59: Add System dialog
If you select non-3PAR, the following dialog appears:

Figure 60: Add System dialog
4. Select the first system and click Select. This closes the Select Source System pane and returns you to the Add Migration Source dialog (see Figure 61: Add Migration Source dialog with source system selected on page 108).

Figure 61: Add Migration Source dialog with source system selected
The SSMC selects host ports if they already exist. You can change the port selection if you prefer. Changing the port selection is required if the ports already zoned are not the ones selected by default. To edit the port selection, click the pencil icon. The Edit dialog appears (see Figure 62: Editing the port selection on page 109).

Figure 62: Editing the port selection
If you click OK, you accept these ports and are returned to the main Add Migration Source dialog (see Figure 63: Add Migration Source dialog with changed ports on page 110). In this example, the ports have been changed to 2:1:2 and 3:1:2.

Figure 63: Add Migration Source dialog with changed ports
5. After the ports are correctly selected, click the Add destinations button. The Add Destination Systems dialog appears, listing destinations (see Figure 64: Add Destination Systems dialog on page 111). All federation systems are shown. In this example, both federations shown are available.

Figure 64: Add Destination Systems dialog
Select any unavailable system to see why it is unavailable. The most likely reason a destination is unavailable is that the system already has the supported maximum number of source systems, as shown in Figure 65: Example unavailable destination system on page 112. Federated system fed1_sys3 is unavailable because adding a new migration source will exceed the maximum supported peer systems.

Figure 65: Example unavailable destination system
6. Select an available federation as the destination, and then click Add. This closes the Add Destination Systems dialog and returns you to the Add Migration Source dialog (see Figure 66: Example Add Migration Source dialog with source and destination systems selected on page 113).
NOTE: In the following example, the selected federation is s

Figure 66: Example Add Migration Source dialog with source and destination systems selected
7. Click the Add button and, if the required zones are in place, the migration source will be added to the Overview screen (see Figure 67: fedcluster Overview screen showing migration source on page 114).

Figure 67: fedcluster Overview screen showing migration source
8. If zone configurations do not exist or are incomplete, you have two choices to proceed (see Figure 68: Adding destination systems after incomplete or absent zoning warning on page 115):
• Yes, continue: Clicking the Yes, continue button allows you to continue without first creating or modifying zone configuration. However, the migration source will be in a degraded state until zone configuration has been updated. After the migration source has been created, select the federation and open the Recommended Zones pane. This will show the required zoning configuration for every migration source (as shown in Figure 17: Recommended zoning on page 47). You may change the zone configuration now, and then click the Add button.
• Cancel: Clicking the Cancel button closes the warning, and the main SSMC dialog will show the required zoning configurations. You may click Cancel to close the dialog without adding a migration source. Clicking the Cancel button allows you to create or update the zone configuration later.

Figure 68: Adding destination systems after incomplete or absent zoning warning
If you added an IBM XIV array as a source for unidirectional data migration, return to the procedure for Data migration from IBM XIV on page 259.

Using the HPE 3PAR Peer Motion Utility
The 3PAR Peer Motion Utility controls the migration of a host and its data from a source 3PAR StoreServ Storage system to a destination 3PAR StoreServ Storage system with as little disruption to the host as possible. It provides a set of commands for performing the migration operations. You can use it to script migrations of virtual volumes from one 3PAR StoreServ Storage system to another.
Using the 3PAR Peer Motion Utility, you can add multiple source arrays to multiple configured destination 3PAR StoreServ Storage systems. The 3PAR Peer Motion Utility supports simultaneous migration for the same source and destination pair. The 3PAR Peer Motion Utility supports simultaneous data mobility for hosts and volumes from source systems to destination systems. Simultaneous data mobility is supported for multiple hosts and for volumes from the same source to the same destination, from the same source to different destinations, from different sources to the same destination, and from different sources to different destinations, as shown in Table 5: Concurrent migration scenarios supported by the 3PAR Peer Motion Utility on page 116 and the examples that follow it.
NOTE: Concurrent migration of the same volume is not supported.
Table 5: Concurrent migration scenarios supported by the 3PAR Peer Motion Utility
In each of the following concurrent migration scenarios, a single host, a single volume, multiple volumes, or multiple hosts in each createmigration operation is supported; a common volume in migration 1 and migration 2 is not supported.
• Migration 1: Source system 1 to Destination system 1; Migration 2: Source system 1 to Destination system 1
• Migration 1: Source system 1 to Destination system 1; Migration 2: Source system 2 to Destination system 2
• Migration 1: Source system 1 to Destination system 1; Migration 2: Source system 2 to Destination system 1
• Migration 1: Source system 1 to Destination system 1; Migration 2: Source system 1 to Destination system 2
Example 1 Concurrent migration of host between the same source and destination pair

> createmigration -sourceuid <UID> -srchost h1 -destprov full -destcpg xxxx -migtype online
> createmigration -sourceuid <UID> -srchost h2 -destprov full -destcpg xxxx -migtype online
Example 2 Concurrent migration of multiple hosts in single createmigration operation between a different source and destination pair
> createmigration -sourceuid <UID> -srchost h1,h2 -destprov full -destcpg xxxx -migtype MDM -destinationuid
> createmigration -sourceuid <UID> -srchost h3,h4 -destprov full -destcpg xxxx -migtype MDM -destinationuid
Example 3 Concurrent migration of volumes between the same source and destination pair
> createmigration -sourceuid <UID> -srcvolmap [{V1,full,cpg1}] -destprov full -destcpg xxxx -migtype online
> createmigration -sourceuid <UID> -srcvolmap [{V2,full,cpg1}] -destprov full -destcpg xxxx -migtype online
NOTE: The migration for host h2 can be triggered immediately after a migration ID has been generated for the host h1 migration, or even after the startmigration operation is triggered for host h1.
The 3PAR Peer Motion Utility is for users who have good knowledge of storage administration and migration workflow. For additional details about the CLI commands, see 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands on page 341.
The 3PAR Peer Motion Utility supports three types of data migration:
• Online migration
  - Online Windows cluster migration
  - Single-volume migration
• MDM
• Offline migration
For details regarding data migration types, see Data migration types on page 15.
To use the 3PAR Peer Motion Utility, follow these steps:
Procedure
1. Verify that prerequisites are met. See Performing premigration tasks before installing the 3PAR Peer Motion utility.
2. Complete the installation of the 3PAR Peer Motion Utility software. See Installing the 3PAR Peer Motion Utility.
3. Verify that 3PAR Peer Motion is running. See Verifying 3PAR Peer Motion Utility service.
4. Launch the 3PAR Peer Motion Utility application. See Launching the 3PAR Peer Motion Utility.
5. Execute the 3PAR CLI commands for the 3PAR Peer Motion Utility. For information about the 3PAR OS commands, see 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands on page 341.

Performing premigration tasks before installing the 3PAR Peer Motion utility
The following must be performed and verified before using the 3PAR Peer Motion Utility:

Prerequisites
Refer to Data migration requirements on page 95 for requirements.
Procedure
On the storage system:
• Convert two unused host ports on partner nodes (0/1, 2/3, 4/5 or 6/7) to peer ports on the destination 3PAR StoreServ Storage system (for instance, 2:1:1 (peer port 1) and 3:1:1 (peer port 2)), using the SSMC, the 3PAR CLI, or the 3PAR Management Console. For online Windows cluster migration, create NPIV ports for the above peer ports configured on the destination using the 3PAR CLI and zone them to the respective peer ports.
• Configure two ports as host ports on the source 3PAR StoreServ Storage system; for example, 2:2:1 (host port of destination 1) and 3:2:1 (host port of destination 2), using the SSMC, the 3PAR CLI, or the 3PAR Management Console.
• Ensure that only two peer ports are configured in a given destination 3PAR StoreServ Storage system.
On the fabric:
• If the destination storage system is part of a federation, all the virtual ports created by the federation must be included in the zoning wherever applicable.
• Create the zoning (see Network and fabric zoning requirements for 3PAR Peer Motion on page 98).

System requirements for installing the 3PAR Peer Motion utility
Software requirements
For information on the supported operating systems, see the Support Matrix on the SPOCK website:
NOTE: The 3PAR Peer Motion Utility software consists of two installable applications: the 3PAR Peer Motion Utility client component and the 3PAR Peer Motion Utility server component. The 3PAR Peer Motion Utility server runs only on a Windows system. Starting with Peer Motion Utility 2.2 and later, you must use matching versions of the client and server components.
Hardware requirements
• Multi-core processor
• 2 GB on the hard drive for installation
• Minimum of 1024 MB of free RAM

Installing the 3PAR Peer Motion Utility on a Windows system
Procedure
1. Download the HPE 3PAR Peer Motion Utility.exe file from the HPE Software Depot at:
2. Double-click the installer.
3. On the Welcome screen, click Next.
4. In the License Agreement screen, read the Hewlett Packard Enterprise end user license agreement, then select the I accept the terms in the license agreement radio button to proceed and click Next.

5. In the Custom Setup screen, you can specify a folder where you want to install the application for the client, server, or both (optional). The default location is:
<Installation_Drive>\Program Files (x86)\Hewlett Packard Enterprise\hpe3parpmu

To change the current destination folder, click Change, click Browse, and select or enter a folder location. Click OK and Next.

NOTE: During first-time installation, the 3PAR Peer Motion Utility installer supports feature-wise installation, but does not support feature-wise uninstallation.
6. If you selected the 3PAR Peer Motion Utility server in step 5, a dialog box appears prompting you to indicate whether you have a CA signed certificate. Click No to generate and install the self-signed certificate, then click Browse and select a folder location to store the certificate. Click Yes if you already have a CA signed certificate, then click Browse to navigate to the folder location.
7. By default, the 3PAR Peer Motion Utility uses port 2390 for the server and port 2388 for shutdown. If a port is busy during installation, a message displays and the installer prompts you to enter a free port. Click Next to proceed.

NOTE: If you need to assign new port numbers, enter the new port numbers in the Server Port and Shutdown Port fields, then click Next. For example, if you specify port 9090 during the installation, then edit the OIUCli.bat file at <install location>\Hewlett Packard Enterprise\hpe3parpmu\CLI with the new port:
java -jar ..\cli\oiucli jar-with-dependencies.jar %* -port 9090
8. Click Install in the Ready to Install the Program screen to complete installation. The installation may take several minutes to complete. While the installation is in progress, status messages appear.

124 9. To view the installer log details after the installation completes, select the Show the Windows Installer log check box, and then click Finish to exit the installation wizard. After installation, you must add users to these groups to grant administrator and user access rights. See Adding users to groups. Upgrading the 3PAR Peer Motion Utility on a Windows system Use the procedure that follows to upgrade from 3PAR Peer Motion Utility 2.0 or 2.1 to 2.2. If you have a pre-2.0 version of the 3PAR Peer Motion Utility installed, you must uninstall it (see Uninstalling the 3PAR Peer Motion Utility), then follow the procedure for Installing the 3PAR Peer Motion Utility on a Windows system. 124 Upgrading the 3PAR Peer Motion Utility on a Windows system

125 Procedure 1. Double-click the installer. The InstallShield Wizard prompt appears. Figure 69: Welcome screen upgrade 2. Click Yes. The installer begins the upgrade on the system. Installing the 3PAR Peer Motion Utility on a Linux system Prerequisites Download the HPE3PARPMUtility_x64/x86 tar file at the HPE Software Depot website: NOTE: Use HPE3PARPMUTILITY_x86.tar for 32 bit and HPE3PARPMUTILITY_x64.tar for 64 bit systems. Log in with root permissions. Ensure you are on a supported Red Hat Enterprise Linux version. See the SPOCK website for details: Free disk space should be more than 100 MB. The Java Runtime Environment version should 1.8 or later. Procedure 1. Extract the tar file by running Linux command: tar -xvf HPE3PARPMUTILITY_x86 or tar -xvf HPE3PARPMUTILITY_x64 on the console. To see the Linux installer on the console, issue the ls command. 2. Set the execution permission to Linux_local_install.sh file by using following command: chmod -R 744 Linux_local_install.sh Installing the 3PAR Peer Motion Utility on a Linux system 125

3. Execute ./Linux_local_install.sh and enter y when prompted to install.
4. After the successful installation of the client, execute ./hpe3parpmu.sh under "/opt/hpe/hpe3parpmu".

Adding users to groups
The 3PAR Peer Motion Utility helps prevent unauthorized use by validating the user name provided at the time of login against the user name added in the user groups created during installation. The application can only be accessed by authorized users. By default, the 3PAR Peer Motion Utility installer creates the following user groups on your system during installation of server components:
• HPE Storage Migration Admins
• HPE Storage Migration Users
When the client side of the 3PAR Peer Motion Utility is running on Linux, the authentication still happens on the server side on Windows.
Users belonging to the HPE Storage Migration Admins user group have privileges to use the following commands: addsource, removesource, adddestination, removedestination, createmigration, startmigration, removemigration, and all the show commands, such as showmigration. Users belonging to the HPE Storage Migration Users user group have privileges to execute only show commands, such as showmigration.
Procedure
To add a user to a group, see Windows documentation for your release of Windows OS. Local and domain users can be added.

Verifying 3PAR Peer Motion Utility service
The 3PAR Peer Motion Utility service automatically starts when the installation completes. To confirm that the 3PAR Peer Motion Utility service has started, follow these steps:
Procedure
1. Click Start > Run.
2. In the Run field, type services.msc, then click OK.
3. In the Services window, verify that the HPE 3PAR Peer Motion Utility service appears in "Started" mode (in the Status column), then close the window.

Launching the 3PAR Peer Motion Utility
NOTE: Starting with Peer Motion Utility 2.2 and later, you must use matching versions of the client and server components.

Procedure
1. Double-click the 3PAR Peer Motion Utility shortcut created on your Desktop.
2. Log in to the CLI:
a. For IP address, type the IP address of the system running the 3PAR Peer Motion Utility server.
b. For USERNAME, specify a user name that was entered in one of the user groups.
c. For PASSWORD, type the password of the user specified in step 2.b.

3PAR Peer Motion Utility workflow
Figure 70: 3PAR Peer Motion Utility Workflow

IMPORTANT: For information about the 3PAR Peer Motion Utility and 3PAR Online Import Utility commands, with descriptions of the commands, their parameters, and examples, see 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands on page 341.

Performing online migration
To perform online migration, follow the steps and execute the commands in the following order:
Procedure
1. addsource -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR
2. showsource
3. adddestination -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR
4. showdestination
5. showconnection
6. createmigration -sourceuid <sourceuid> -srcvolmap [{"<volumename>","<destinationprovisioning>","<destinationcpg>"}] -migtype online
7. showmigration -migrationid <migrationid>
Online migration can be performed using the srchost, the srcvolmap, or the volmapfile parameter.
NOTE: In case there is a need to abort the prepared migration, use the removemigration command:
> removemigration -migrationid <migrationid>
IMPORTANT: After the admit phase is complete, rerun a scan on the host to discover the new paths to the migrating volumes. Verify that the host paths to the destination storage system are active. If path verification is successful, remove zone Z3. (See Performing premigration tasks before installing the 3PAR Peer Motion utility on page 117.) Other cleanup activities may be necessary on the source and destination systems, on the zoning, and on the host(s). See Guidelines for rolling back to the original source array.
8. startmigration -migrationid <migrationid>
9. showmigrationdetails -migrationid <migrationid>
This completes the successful flow of the migration.

129 You can remove the added source and destination storage systems, when the Complete status for the migrating volumes appears. After the migrations have completed successfully and if no additional migration need to be carried out, the following optional commands are to be used to remove the added source and destination storage systems: 1. removesource -uid <uid> -type 3par 2. removedestination -uid <uid> NOTE: Removing a source or destination storage system also removes the migration history for the respective source or destination storage system. If the source storage system is running 3PAR OS or earlier, make sure the OS for the host in storage system is updated with appropriate persona value. Issue the showpersona command; the host OS value must not be blank. Performing online Windows cluster migration Virtual peer ports must be created as NPIV ports for the peer motion configured physical peer ports. The number of NPIV ports that must be created on each peer port is twice the number of nodes that are being migrated. For example, migration of two node clusters require that each peer port be configured with four NPIV ports. The destination storage system must be an HPE 3PAR StoreServ 7000 Storage system or an HPE 3PAR StoreServ Storage system and must be running 3PAR OS 3.1.3, or later, or an HPE 3PAR StoreServ 8000 Storage, HPE 3PAR StoreServ Storage, or HPE 3PAR StoreServ 9000 Storage system. Cluster sizes of up to four hosts are supported. To perform online windows cluster migration, follow the steps and execute the commands in the following order: Prerequisites Virtual peer ports must be created as NPIV ports using the following commands on 3PAR OS CLI. For example, cli% controlport config peer virt_ports 8 2:2:1 (where 8 is the number of NPIV ports and 2:2:1 is peer port number). Procedure 1. addsource -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR 2. showsource 3. adddestination -mgmtip <ipaddress> -user <username> -password <password> - type 3PAR 4. showdestination 5. showconnection 6. createmigration -sourceuid <sourceuid> -srcvolmap [{"<volname1>","<destprov1>","<destcpg1>"}, <volname2>","<destprov2>","<destcpg2>"},...] -migtype online destprov <destprov> -destcpg <destcpg> 7. showmigration -migrationid <migrationid> Performing online Windows cluster migration 129

130 NOTE: In case there is a need to abort the prepared migration, use the removemigration command: > removemigration -migrationid <migrationid> Windows cluster migration can be performed using the srchost, the srcvolmap, or the volmapfile parameter. IMPORTANT: Re-run a scan on the hosts to discover the new paths to the migrating volume(s). Verify that the host paths to the destination storage system are active. If path verification is successful, remove zone Z3. 8. startmigration migrationid <migrationid> 9. showmigrationdetails migrationid <migrationid> This completes the successful flow of the migration. You can remove the added source and destination storage systems when the migration is completed. The following optional commands are to be used to remove the added source and destination storage systems: 1. removesource -uid <uid> -type 3par 2. removedestination -uid <uid> NOTE: Removing a source or destination storage system also removes the migration history for the respective source or destination storage system. Performing single-volume migration The -singlevv option can be used to migrate a subset of volumes provisioned to a host. In this case, there is no need to unzone the source storage system from the host that is being migrated. To perform single-volume migration, follow the steps and execute the commands in the following order: Prerequisites In order to use single-volume migration, the following conditions must be satisfied: Both the source and destination storage systems must be running at least 3PAR OS The hosts to which the volumes are exported must be defined using an ALUA persona on both source and destination storage systems. 130 Performing single-volume migration

131 Procedure 1. addsource -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR 2. showsource 3. adddestination -mgmtip <ipaddress> -user <username> -password <password> - type 3PAR 4. showdestination 5. showconnection 6. createmigration -sourceuid <sourceuid> -srcvolmap [{"<volumename>","<destinationprovisioning>","<destinationcpg>"}] -migtype online -singlevv 7. showmigration -migrationid <migrationid> In case there is a need to abort the prepared migration, use the removemigration command for the same: removemigration -migrationid <migrationid> Single-volume migration can be performed using the srchost, the srcvolmap, or the volmapfile parameter. 8. startmigration migrationid <migrationid> 9. showmigrationdetails migrationid <migrationid> This completes the successful flow of the migration. You can remove the added source and destination storage systems, when the Complete status for the migrating volumes appears. The following optional commands are to be used to remove the added source and destination storage systems: 1. removesource -uid <uid> -type 3par 2. removedestination -uid <uid> NOTE: Removing a source or destination storage system also removes the migration history for the respective source or destination storage system. Performing minimally disruptive migration (MDM) To perform minimally disruptive migration, follow the steps and execute the commands in the following order: Procedure 1. addsource -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR 2. showsource Performing minimally disruptive migration (MDM) 131

3. adddestination -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR
4. showdestination
5. showconnection
6. createmigration -sourceuid <sourceuid> -srcvolmap [{"<volname1>","<destprov1>","<destcpg1>"}, {"<volname2>","<destprov2>","<destcpg2>"},...] -migtype MDM -destprov <destprov> -destcpg <destcpg>
7. showmigration -migrationid <migrationid>
NOTE: In case there is a need to abort the prepared migration, use the removemigration command for the same:
removemigration -migrationid <migrationid>
MDM can be performed using the srchost, the srcvolmap, or the volmapfile parameter.
8. startmigration -migrationid <migrationid>
9. showmigrationdetails -migrationid <migrationid>
This completes the successful flow of the migration.
You can remove the added source and destination storage systems when the Complete status for the migrating volumes appears. The following optional commands are to be used to remove the added source and destination storage systems:
1. removesource -uid <uid> -type 3par
2. removedestination -uid <uid>
NOTE: Removing a source or destination storage system also removes the migration history for the respective source or destination storage system.

Performing offline migration
To perform offline migration, follow the steps and execute the commands in the following order:
Procedure
1. addsource -mgmtip <ipaddress> -user <username> -password <password> -type 3PAR
2. showsource

133 5. showconnection 6. createmigration -sourceuid <sourceuid> -srcvolmap [{"<volumename>","<destinationprovisioning>","<destinationcpg>"}] -migtype offline NOTE: Offline migration can be performed by using the srcvolmap or the volmapfile parameter. 7. showmigration -migrationid <migrationid> NOTE: In case there is a need to abort the prepared migration, use the removemigration command for the same: 8. startmigration migrationid <migrationid> 9. showmigrationdetails migrationid <migrationid> This completes the successful flow of the migration. You can remove the added source and destination storage systems, when the Complete status for the migrating volumes appears. The following optional commands are to be used to remove the added source and destination storage systems: 1. removesource -uid <uid> -type 3par 2. removedestination -uid <uid> NOTE: Removing a source or destination storage system also removes the migration history for the respective source or destination storage system. Migrating virtual volume sets and host sets The 3PAR Peer Motion Utility supports volume set and host set migration. With volume set migration, all volumes of a set will be migrated to the destination storage system together with their presentation. A volume set is created at the destination storage system. With host set migration, all hosts of a host set are migrated and will be part of a corresponding host set as in the source storage system. Virtual volume sets can be migrated consistently by using either the allvolumesincg or the cgvolmap parameter. Host set migration examples: 1. When a volume from a volume set is exported to host set and host-based migration is triggered, all volumes that are exported to selected host set are migrated. NOTE: The host set name should be preceded with set to identify input as a host set name instead of a host name. Command syntax: createmigration -sourceuid xxxxxxxxxxxxxx -srchost "set:hostset1" -migtype online -destcpg TEST_CPG -destprov {thin full dedupe} Migrating virtual volume sets and host sets 133

134 NOTE: In this example, the migration type is online. MDM is also possible. 2. When a volume set is exported to a host set and host-based migration is triggered, all volumes of the volume set are migrated and exported to the host set (and corresponding host members). Command syntax: createmigration -sourceuid xxxxxxxxxxxxxx -srchost "set:hostset1" -migtype online -destcpg TEST_CPG -destprov {thin full dedupe} Volume set-based migration examples 1. When a volume set is exported to a host and a volume-based migration is triggered, all volumes of a volume set are migrated and exported to a host, and the volume set is not created on the destination storage system. NOTE: The volume set name should be preceded with set: to identify input as a volume set name rather than a volume name. Command syntax: createmigration -sourceuid xxxxxxxxxxxxxx -srcvolmap "[{set:volset1,thin,test_cpg}]" -destcpg TEST_CPG -destprov {thin full dedupe} migtype online 2. When a volume set is exported to a host set and a volume-based migration is triggered, all volumes of the set are migrated. Command syntax: createmigration -sourceuid xxxxxxxxxxxxxx -srcvolmap "[{set:volset1,thin,test_cpg}]" -destcpg TEST_CPG -destprov {thin full dedupe} migtype online 3. If volume set-based migration is triggered through the volmapfile parameter, then the set name should be specified in the file, preceded by set:. The expected behavior is the same as either 1 or 2 (above), based on presentation of the volume set (either to a host or host set). Command syntax: createmigration -sourceuid xxxxxxxxxxxxxx volmapfile "c:\\volmapfilename.txt where the volmap file contains one or more volume set details to be migrated. 4. When a volume set is exported to host set or host and migration triggered for the volume set, all the volumes and the volume set are migrated and exported to the host or host set. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXX -srcvolmap "[{set:set1,thin,arch_cpg}]" -destcpg ARCH_CPG -destprov {thin full dedupe} -migtype online 134 Using the HPE 3PAR Peer Motion Utility

135 Expected behavior: Migration succeeds. All volumes, volume set, host and host set are created and exports are same as the source storage system. 5. When the volumes of a volume set are exported to a host or host set and the migration triggered for the volume set, then all the volumes and the volume set are migrated and exported to the host or host set. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{set:set2,thin,arch_cpg}]" -destcpg ARCH_CPG -destprov {thin full dedupe} -migtype online Expected behavior: Migration succeeds. All volumes, volume set, host and host set will be created and exports are the same as in the source storage system. 6. When the volumes of a volume set are exported to a host or host set and the migration is triggered for the individual volumes, then all the volumes are implicitly migrated and exported to a host or host set. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,arch_test_cpg},{vol2,thin,arch_test_cpg}]" -destcpg ARCH_TEST_CPG -destprov {thin full dedupe} -migtype online Expected behavior: Migration succeeds. Implicitly, all the volumes are migrated and exported to a host or host set. A volume set will not be created on the destination storage system, and the volumes are removed from the volume set on the source storage system. 7. When a volume set is selected for migration, and at least one of its virtual volume is part of another volume set that is exported to a host or host set, the migration fails in the preparation phase with an appropriate error message. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX srcvolmap "[{set: set1, thin,arch_cpg}]" -destcpg ARCH_CPG -destprov {thin full dedupe} -migtype online Error message: OIUERRVVS0006: Migration cannot proceed.oiurslvvs0006: One or more VV is part of another VVset. 8. When a volume that is part of a volume set is selected for migration, where the volume set is exported to host or host set, the migration fails in the preparation phase with an appropriate error message. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap"[{vol,thin,arch_cpg]" -destcpg ARCH_CPG -destprov {thin full dedupe} -migtype online Error message: OIUERRVVS0004: Migration of volumes is not supported when vvset is exported to a host/hostset. NOTE: Volume sets and individual volumes cannot be migrated in a single createmigration Volume set offline migration Using the HPE 3PAR Peer Motion Utility 135

136 1. A volume set can be migrated in offline mode. In offline migrations, all volumes of a set are migrated to the destination storage system, and the volume set is created on the destination storage system. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{set:volset1,thin,testcpg}]" -destcpg TEST_CPG -destprov {thin full dedupe} migtype offline 2. If migration is triggered for a volume that is part of a volume set, then the volume set is not created at the destination storage system, and the volume is removed from the volume set of the source storage system. Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol,thin,testcpg}]" migtype offline Expected behavior: Migration succeeds for all volumes. A volume set will not be created in the destination storage system, and the volumes are removed from the volume set on the source storage system. Consistency Groups management The optional HPE 3PAR Consistency Group feature allows you create and consistently migrate dependent volumes of applications. I/O that is issued to volumes that are members of a consistency group is mirrored to the source array until all members are completely migrated to the destination array, keeping the source volumes in a consistent state. NOTE: Support for consistency groups is available on a destination 3PAR StoreServ Storage system that runs 3PAR OS or later. To migrate consistently, a minimum of two volumes must be added in consistency group. Best practices for consistency groups: For consistent imports, Hewlett Packard Enterprise recommends that you limit the number of volumes in a consistency group to 20. Limit the volumes in the set to just those that really need to be consistent with each other. Do not add all of the volumes exported to the host in one consistency group. To avoid long switch-over times at the end of imports, Hewlett Packard Enterprise recommends that you limit the total size of volumes in a set to 40 TB. More information Creating a consistency group on page 136 Creating a consistency group Consistency groups are defined in the createmigration command. The following parameters are available to create consistency groups: 136 Consistency Groups management

137 cgvolmap: Defines the consistency groups and their member volumes. allvolumesincg: All volumes (including implicit volumes) will be migrated consistently. Volume-based migration For information on volume-based migration, see: Migrating a subset of volumes consistently on page 137 Migrating all volumes consistently on page 137 Migrating a subset of volumes consistently Procedure Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]" -destcpg testcpg -destprov {thin full dedupe} -cgvolmap {"values":{"cg1":["vol1","vol2]}} where cgvolmap is a parameter that accepts the name of a consistency group and volume names to be migrated consistently. Expected behavior: vol1, vol2, vol3 are created at the destination storage system and vol1 and vol2 are migrated consistently. Volume-based migration with specified volumes in consistency groups > createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]" -destcpg testcpg -destprov {thin full dedupe} -cgvolmap { values":{"cg1": ["vol1","vol2"]}} migtype online -persona RHEL_5_6 Three volumes (vol1, vol2 and vol3) are chosen for migration. A consistency group named cg1 will be created, containing vol1 and vol2. When a startmigration command is issued, I/O to both vol1 and vol2 will be mirrored to both the source storage array and the 3PAR StoreServ Storage until both vol1 and vol2 import tasks are complete. After both vol1 and vol2 import tasks are complete, I/O for both volumes will cut over to the 3PAR StoreServ Storage. Migrating all volumes consistently Procedure Command syntax: createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]" -destcpg testcpg -destprov {thin full dedupe} -allvolumesincg where allvolumesincg is an optional parameter. If this parameter is specified, all volumes are migrated consistently. Volume-based migration 137

138

Expected behavior: vol1, vol2, and vol3 are created at the destination storage system and migrated consistently.

Volume-based migration with all volumes in a consistency group

> createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]" -allvolumesincg -destcpg testcpg -destprov {thin|full|dedupe} -migtype online -persona RHEL_5_6

All volumes (including implicit volumes) will be placed in a single consistency group. When a startmigration command is issued, I/O to all volumes will be mirrored to both the source storage array and the 3PAR StoreServ Storage until all import tasks complete. After all import tasks are complete, I/O for all volumes will cut over to the 3PAR StoreServ Storage.

NOTE: After import tasks are complete, consistency groups are deleted from the destination storage system.

Host-based migration

For information on host-based migration, see:
Migrating a subset of volumes consistently on page 138
Migrating all volumes consistently on page 139

Migrating a subset of volumes consistently

Procedure

Command syntax:

createmigration -sourceuid XXXXXXXXXXXXXXXX -srchost "hostname" -migtype online -destcpg testcpg -destprov thin -cgvolmap {"values":{"cg1":["vol1","vol2","vol3"],"cg2":["vol4","vol5","vol6"]}}

Expected behavior: At the destination, all volumes (including implicit volumes) exported to the host are created, and the volumes specified in the cgvolmap parameter are migrated consistently.

Host-based migration with specified volumes in consistency groups

> createmigration -sourceuid XXXXXXXXXXXXXXXX -srchost "hostname" -destcpg testcpg -destprov thin -cgvolmap {"values":{"cg1":["vol1","vol2","vol3"],"cg2":["vol4","vol5","vol6"]}} -migtype online -persona RHEL_5_6

All volumes (including implicit volumes) presented to the host are chosen for migration. Two consistency groups will be created:

cg1, containing vol1, vol2, and vol3
cg2, containing vol4, vol5, and vol6

Any other volume that was defined explicitly in the createmigration command, or that was added implicitly, will be migrated, but not as a part of a consistency group. When a startmigration command is issued, I/O for volumes in both cg1 and cg2 will be mirrored to both the source storage array and the

139

3PAR StoreServ Storage until all cg1 or cg2 import tasks are complete. After the cg1 or cg2 import tasks are complete, I/O to the respective consistency group volumes will cut over to the 3PAR StoreServ Storage.

Migrating all volumes consistently

Procedure

Command syntax:

createmigration -sourceuid 2FFXXXX2AC003F8E -srchost "hostname" -migtype online -destcpg testcpg -allvolumesincg

Expected behavior: At the destination, all volumes (including implicit volumes) exported to the host are created and migrated consistently.

Host-based migration with all volumes in a consistency group

> createmigration -sourceuid XXXXXXXXXXXXXXXX -srchost "hostname" -destcpg cpg1 -allvolumesincg -migtype online -persona RHEL_5_6

NOTE: Hewlett Packard Enterprise recommends that only allvolumesincg be used with Oracle RAC.

All volumes (including implicit volumes) presented to the host will be placed in a single consistency group. When a startmigration command is issued, I/O to all volumes will be mirrored to both the source storage array and the 3PAR StoreServ Storage until all import tasks are complete. After all import tasks complete, I/O for all volumes will cut over to the 3PAR StoreServ Storage.

Prioritization when migrating volumes or volume sets

With 3PAR OS 3.2.2, you can prioritize the migration of one or more volumes or a volume set so that the prioritized volume or volume set is migrated first. The priority parameter is optional. If no priority is specified, a priority of medium is set by default.

The 3PAR Peer Motion Utility or 3PAR Online Import Utility allows you to specify a priority for a volume or for a volume set. If a priority is set for both, the volume-level priority takes precedence over the volume set-level priority. The priority parameter is specified in the createmigration command.

More information:
Prioritizing volumes on page 139
Prioritizing volumes and volume sets on page 140
Prioritizing with consistency groups on page 140

Prioritizing volumes

Procedure

Command syntax at the volume level:

140

createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg}]" -destcpg testcpg -destprov {thin|full|dedupe} -priorityvolmap {"values":{"low":["vol11","vol12"],"high":["vol3","vol4"]}}

Expected behavior: Volumes will be migrated in accordance with the priority specified in priorityvolmap; in this instance, volumes vol3 and vol4 will take precedence in the migration, while vol11 and vol12 will receive a lower priority than all other volumes.

The priorityvolmap option can also be used with the srchost option to set the priority of volumes exported to a host or host set.

Prioritizing volumes and volume sets

Procedure

Command syntax at both the volume level and the volume set level:

createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{set:volset1,thin,testcpg,high}]" -destcpg testcpg -destprov {thin|full|dedupe} -priorityvolmap {"values":{"low":["vol1","vol2"],"medium":["vol3","vol4"]}}

Prioritizing with consistency groups

Prerequisites

To migrate volumes consistently and also set priorities, all volumes in a consistency group must have the same priority. The 3PAR Peer Motion Utility or 3PAR Online Import Utility does not support migration of volumes that are part of the same consistency group but have different priority settings.

Procedure

To migrate all volumes in the consistency group, use the allvolumesincg option. Command syntax:

createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,fc_r1},{vol2,thin,fc_r1},{vol3,thin,fc_r1},{vol4,thin,fc_r1}]" -destcpg FC_r1 -destprov thin -allvolumesincg -priorityvolmap {"values":{"high":["vol1","vol2","vol3","vol4"]}}

Expected behavior: All volumes will be migrated consistently with the specified priority.

To migrate volumes consistently with different priority settings for each consistency group, use the following command. Command syntax:

createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol13,thin,testcpg},{vol14,thin,testcpg}]" -destcpg testcpg -destprov thin -cgvolmap {"values":{"cg1":["vol1","vol2"]}} -priorityvolmap {"values":{"low":["vol1","vol2","vol3","vol4"]}}

Expected behavior: All volumes in srcvolmap will be migrated with the priority specified in priorityvolmap, and the volumes listed in cgvolmap will be migrated consistently with low priority. Migration of the consistency group is triggered first, then migration of individual volumes. Individual volumes and the consistency group volumes have the same priority.

141

Using the autoresolve option to resolve LUN conflicts

The 3PAR Peer Motion Utility supports M:N migration (migration from multiple source 3PAR StoreServ Storage systems to multiple destination 3PAR StoreServ Storage systems). In such a case, the LUN IDs of migrating volumes might already be assigned for a particular host on the destination storage system. As a result, a LUN conflict occurs, and the migration times out at the preparation stage. The 3PAR Peer Motion Utility resolves LUN conflicts automatically by using the autoresolve option.

NOTE: The autoresolve parameter is optional, and the default value is true. Specify the autoresolve parameter as false to deactivate automatic resolution.

Procedure

Host-based example:

> createmigration -sourceuid 2FFXXXX2AC003F8E -srchost "hostname" -migtype online -destcpg testcpg -destprov thin -autoresolve true

Volume-based example:

> createmigration -sourceuid 2FFXXXX2AC003F8E -srcvolmap "[{pvol3,thin,testcpg}]" -migtype online -destcpg testcpg -destprov thin -autoresolve false

showmigration command output in case of conflict:

Field                Value
MIGRATION_ID
TYPE
SOURCE_NAME          STORAGE SYSTEM 1
DESTINATION_NAME     STORAGE SYSTEM 2
START_TIME           Mon Feb 02 12:28:27 IST 2015
END_TIME             NA
STATUS (PROGRESS)    preparationfailed(-na-) (OIUERRPREP1021: Lun number conflict exists for LUN# 0, 1, while presenting to hosts host1)

Postmigration tasks

Performing fabric topology postmigration tasks
Performing volume postmigration tasks
Performing Remote Copy postmigration tasks

142

Performing fabric topology postmigration tasks

Procedure

The only task is to clean up the zoning between the source and destination storage systems after all migrations between the two are complete.

Performing volume postmigration tasks

Procedure

1. After you verify that everything has been correctly migrated to the destination storage system, you can reclaim the space on the source system by deleting the migrated volumes. The WWN of a migrated volume is the one it had on the source system. To change the WWN into a local-array one, use the 3PAR CLI command setvv -wwn. Execution of this command requires the volume to be unexported. While it is possible to keep the WWN of the source volume on the destination system, it is recommended to make this change at the next available opportunity. The immediate change is mandatory when using the volume with the HPE 3PAR Recovery Manager software and the Microsoft VSS framework.

2. If the Path Verify Enabled MPIO setting was enabled for the migration, disable it again. However, if the source and destination HPE 3PAR StoreServ systems are in a Peer Persistence relationship, do not disable the setting.

3. If the volume or volume set that was migrated was subject to an HPE 3PAR Priority Optimization rule on the source system, you must recreate this rule manually on the destination HPE 3PAR StoreServ system.
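The following is a minimal, hedged sketch of the WWN change described in step 1 of Performing volume postmigration tasks, using the 3PAR CLI. The volume name (vol1), host name (host1), LUN number, and <new_wwn> value are placeholders only; verify the exact removevlun, setvv, and createvlun arguments against the HPE 3PAR Command Line Interface Reference for your 3PAR OS version before use.

# removevlun vol1 1 host1
# setvv -wwn <new_wwn> vol1
# createvlun vol1 1 host1

The first command unexports the volume (setvv -wwn requires an unexported volume), the second assigns the new local-array WWN, and the third re-exports the volume to the host.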

Performing Remote Copy postmigration tasks

If you are using HPE 3PAR Remote Copy software, the next step is to perform the Remote Copy postmigration tasks:

Procedure

1. If necessary, recreate the remote copy groups on the destination storage system to match the remote copy groups on the new source system.
2. Perform the remote copy synchronization task.
3. Remove the remote copy groups from the old source system.
4. Configure and start the remote copy groups on the destination storage system from a specially created snapshot that represents the end step of the data migration.

For more information, see the HPE 3PAR Remote Copy Software User's Guide and the HPE 3PAR Command Line Interface Administrator's Manual, available at the Hewlett Packard Information Library website:

143

Uninstalling the 3PAR Peer Motion Utility

WARNING: Removing the software also completely deletes the database containing the source and destination storage system details and the migration definitions.

Procedure

To remove the application from a Windows system:
1. In the Control Panel, select Programs > Programs and Features > Uninstall a Program, then select the 3PAR Peer Motion Utility from the list and click Uninstall.
2. In the Programs and Features dialog box, click Yes to uninstall the application.

To remove the application from a Linux system:
In the console, execute ./linux_local_install.sh and enter y when prompted to uninstall (the file is located under the directory where the binaries are extracted).

144

Host environments for unidirectional Peer Motion

Some host environments may require special multipath or host operating system (OS) settings for greater compatibility with the migration process. This section describes those requirements and conveys some restrictions that are imposed on host OS environments.

In all cases, a persona that is supported on both the source 3PAR OS version and the destination 3PAR OS version should be used; both the initial environment and the intended postmigration environment should be supported by and compliant with SPOCK requirements. For details about the supported migration paths and specific OS versions or cluster solutions supported for migration, see SPOCK:

HPE 3PAR Peer Motion supports migrations for a host with an FCoE host bus adapter connected to an FCoE switch that itself is connected over FC to the HPE 3PAR StoreServ. See SPOCK for supported FCoE host bus adapters per host operating system.

Microsoft Windows

Host operating system

Windows Server 2012 and Windows Server 2008 hosts can be migrated using the online migration procedure (see Performing Peer Motion with the SSMC on page 66). However, the Path Verify Enabled MPIO setting must be in effect on all the hosts.

SPOCK:

More information: Enabling the Path Verify setting

Enabling the Path Verify setting

When migrating volumes exported to Windows Server 2012 or Windows Server 2008 hosts, ensure that the Path Verify Enabled setting is in effect on all the hosts.

CAUTION: On Windows Server 2012, Windows Server 2008 R2, and Windows Server 2008 (non-R2), ensure that Microsoft hotfix KB is installed. If it is not, do not use the Microsoft CLI command mpclaim or attempt to display the MPIO information via the Disk Management GUI during the Peer Motion migration admitvv stage, since these actions would result in the host becoming unresponsive.

Procedure

On Windows Server 2012, the setting can be found at Device Manager > Disk Drives. Right-click any of the HPE 3PAR disks, then select MPIO > MS DSM Details. Select the Path Verify Enabled check box.

On Windows Server 2008, the setting can be found at Server Manager > Disk Management. Right-click any of the HPE 3PAR disks, then select MPIO > MS DSM Details. Select the Path Verify Enabled check box.
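On Windows Server 2012 and later hosts where the Microsoft MPIO PowerShell module is available, path verification can also be set from an elevated PowerShell prompt. This is a hedged alternative to the per-disk GUI steps above: Set-MPIOSetting changes the host-wide MPIO setting rather than an individual disk, so confirm the result in the MS DSM Details dialog afterward.

# Enable global MPIO path verification (requires the MPIO feature and an elevated prompt)
Set-MPIOSetting -NewPathVerificationState Enabled

# Display the current MPIO settings to confirm the change
Get-MPIOSetting

After the migration, the setting can be reverted with Set-MPIOSetting -NewPathVerificationState Disabled, unless the source and destination systems are in a Peer Persistence relationship (see Performing volume postmigration tasks).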

145

Microsoft failover clusters

Windows Server 2008 or Windows Server 2012 cluster environments (both Hyper-V and non-Hyper-V) can be migrated online as long as the source array is running 3PAR OS MU1 P08 or later and the number of Microsoft Failover Cluster (MSFC) nodes in the cluster is four or fewer. If these requirements are not met, then the MDM procedure must be used (see Managing Peer Motion from the SSMC on page 66).

Throughout the migration process, the nodes in the MSFC should not be rebooted, to minimize persistent reservation thrashing. The migration should be planned during a period when host maintenance is not needed. After the migration completes and the source paths are removed, maintenance operations on the MSFC nodes can be carried out.

Linux

Linux migrations can be carried out using the online migration procedure. However, if the single-volume migration feature is desired, an ALUA-enabled host persona must be used, and the /etc/multipath.conf settings must conform to the required ALUA settings cited in the appropriate Linux implementation guide:

HPE 3PAR Red Hat and Oracle Linux Implementation Guide
HPE 3PAR SUSE Linux Enterprise Implementation Guide

These documents are available at the Hewlett Packard Enterprise Information Library website:
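For reference, the ALUA-related settings that the Linux implementation guides call for generally take a form similar to the following /etc/multipath.conf device stanza. This is an illustrative sketch only, not the authoritative configuration; option names and recommended values vary between Red Hat, Oracle Linux, and SUSE releases, so copy the exact stanza from the implementation guide that matches your distribution and 3PAR OS version.

device {
    vendor                "3PARdata"
    product               "VV"
    path_grouping_policy  group_by_prio
    prio                  alua
    hardware_handler      "1 alua"
    path_selector         "round-robin 0"
    path_checker          tur
    failback              immediate
    no_path_retry         18
}

After editing the file, reload the configuration (for example, with systemctl reload multipathd or service multipathd reload, depending on the distribution) and confirm the path groups with multipath -ll.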

146

VMware ESXi

Similarly to Linux, migration for VMware ESXi host environments should be performed using the online migration procedure. Persona 11 and the associated VMW_SATP_ALUA SATP rule are required for single-volume migration. See the HPE 3PAR VMware ESX/ESXi Implementation Guide for details about setting up ALUA with VMware ESXi hosts. This document is available at the Hewlett Packard Enterprise Information Library website:

IBM AIX

The IBM AIX ODM/operating system does not support ALUA. Because ALUA support is a requirement for single-volume migration, single-volume migration is not supported on AIX. Therefore, when planning the migration of AIX hosts, take into consideration that all volumes exported to the AIX host must be migrated together.

In addition, volumes that have been formatted with the JFS file system cannot be migrated online. Hosts that make use of the JFS file system must first quiesce I/O, and the file system must be unmounted before proceeding with the migration. JFS2 and other file systems supported with AIX are not impacted.

HP-UX

Rescanning for new LUN paths

Procedure

Before continuing with migration, it is recommended that you rescan for new LUN paths and make a note of the new paths to the volumes:

# ioscan -fnC disk

HP-UX 11i V3

HP-UX 11i v3 standalone and Serviceguard clustered hosts can be migrated by using the online migration procedure. No additional configuration is required on the hosts.

HP-UX 11i V2

NOTE: The procedures described in this section are only for disks under HP-UX LVM volume management.

The single-volume migration feature is not supported with HP-UX 11i v2 hosts, so HP-UX 11i v2 hosts must be migrated using host-level migration.

HP-UX 11i v2 standalone hosts can be migrated by using the online migration procedure. However, after zoning in the destination storage system to the standalone hosts, the new paths/physical volumes (PVs) must be added to the volume group/PVLinks configuration by using the vgextend command before removing the paths to the source storage system. For example:

# vgextend my_standalone_vg new_pv_path1 new_pv_path2

To confirm the PVLinks configuration, execute the vgdisplay command.

A Serviceguard cluster running on HP-UX 11i v2 can also be migrated by using the online migration procedure, but if shared volume groups that use SLVM are used, then additional configuration steps are required because the shared volume group does not automatically recognize new paths to the volume exported through the destination. Use the following single-node online reconfiguration operation (see Reconfiguring a single node online on page 86) to change the configuration of a shared volume group while keeping it active on only a single node. During the volume group reconfiguration, applications on at least one node will be available.

Reconfiguring a single node online

Procedure

1. Identify the shared volume group on which a configuration change is required. Name it vg_shared.
2. Identify one node of the cluster which is running an application using the shared volume group. Call it node1. The applications on this node that are using the volume group, vg_shared, will remain unaffected during the procedure.
3. Stop the applications using the shared volume group on all the other cluster nodes, thus scaling down the cluster application to the single cluster node, node1.
4. Deactivate the shared volume group on all other nodes of the cluster, except node1, by issuing the vgchange command with the -a n option:

# vgchange -a n vg_shared

5. Ensure that the volume group, vg_shared, is now active only on a single cluster node, node1, by using the vgdisplay command on all cluster nodes. The status should show that the volume group is available on a single node only.
6. On node1, change the activation mode to exclusive by issuing the following command:

# vgchange -a e -x vg_shared

147

7. On node1, make a note of the new pv_paths to the PVs already in the volume group (from the output of the LUN rescan; see Rescanning for new LUN paths on page 85). Add all the new paths to the volume group, using the following command:

# vgextend vg_shared pv_path

8. Export the changes to the other cluster nodes:

a. From node1, export the mapfile for vg_shared:

# vgexport -s -p -m /tmp/vg_shared.map vg_shared

b. Copy this mapfile, /tmp/vg_shared.map, to all the other nodes of the cluster.

c. On the other cluster nodes, export vg_shared and re-import it using the new map file:

# ls -l /dev/vg_shared/group
crw-rw-rw- 1 root sys 64 0x Nov 16 15:27 /dev/vg_shared/group

d. Make a note of the minor number (0x in the example above); it should match the minor number shown in the mknod command in the following example:

# vgexport vg_shared
# mkdir /dev/vg_shared
# mknod /dev/vg_shared/group c 64 0x
# vgimport -m /tmp/vg_shared.map -s vg_shared

9. Change the activation mode back to shared on all the cluster nodes:

a. Change the mode back to shared on node1 by issuing the following command:

# vgchange -a s -x vg_shared

b. Change the mode to shared on the other cluster nodes by issuing the following command:

# vgchange -a s vg_shared

Applications using the shared volume group can now be restarted on other hosts. For more information about SLVM, see SLVM Online Volume Reconfiguration, available at the following website:

If you are migrating cluster lock disks, you can update the cluster lock disk configuration online by following the instructions in Updating a cluster lock disk configuration online.

Updating a cluster lock disk configuration online

Procedure

1. Make a note of the new pv_paths to the lock disks from the output of the LUN rescan (see Rescanning for new LUN paths on page 85).
2. Execute the following command:

# vgcfgrestore -n /dev/vg_lock pv_path

3. For each node in the cluster configuration file, modify the values of FIRST_CLUSTER_LOCK_PV and SECOND_CLUSTER_LOCK_PV.

148

4. To check the configuration, run the cmcheckconf command.
5. To apply the configuration, run the cmapplyconf command.

For more information on updating the cluster lock disk configuration, see Managing Serviceguard A.11.20, available in the User Guide section at:

Solaris

There are no special considerations for migrating Solaris hosts. However, only standalone hosts are supported for migration. Solaris clusters are not supported.

Symantec/Veritas Storage Foundation requirements

As of 3PAR OS MU2, some configurations that use Symantec Storage Foundation or Veritas InfoScale are supported for migration through HPE 3PAR Peer Motion. For information about supported configurations or migration paths, see the SPOCK website:

NOTE: Veritas Storage Foundation configurations not specifically listed on SPOCK, including configurations composed of ESX Virtual Machines, are supported only through the minimally disruptive migration (MDM) or offline migration procedures.

IMPORTANT: For online data migration with Symantec Storage Foundation or Veritas InfoScale, virtual peer ports must be created while setting up the peer connections between the source and destination storage systems. For each peer port, create twice as many virtual peer ports as there are nodes in the cluster. For example, if you are migrating a two-node cluster, four NPIV ports must be created on each peer port. If you are using the SSMC, click Action on the Ports screen to edit port settings.

NOTE: The SSMC can be used to create the NPIV ports, but only the 3PAR Peer Motion Utility, beginning with V1.5, can be used to carry out the migration. The SSMC is currently not a supported migration utility for Symantec Storage Foundation or Veritas InfoScale.

If you are using the HPE 3PAR Management Console, see Set Up Connections in the HPE 3PAR Peer Motion Data Migration Guide. This guide is available at the Hewlett Packard Enterprise Information Library website:

A maximum cluster size of four nodes is supported. In addition, the single-volume migration feature is not supported in Symantec Storage Foundation or Veritas InfoScale environments. This means that all virtual volumes exported to the hosts or cluster being migrated must be selected for migration and that the paths to the source array must be removed before starting the data migration.

After the migration is complete, Hewlett Packard Enterprise recommends that the Storage Foundation or InfoScale UDID on the virtual volumes be updated to reflect the new array serial number of the destination storage system, during the next available maintenance window. Enter the following Veritas Volume Manager CLI command to update the UDID written in the private region of the virtual volumes:

149

vxdisk updateudid <device>

NOTE: Updating the UDID is an offline process and requires that the disk groups be deported before executing this command.
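Because updating the UDID is an offline operation, the affected disk group must be deported first and re-imported afterward. A hedged sketch of the sequence follows; the disk group name (datadg) and device name (dest3par0_0) are placeholders, and the exact procedure should be confirmed against the Veritas Volume Manager documentation for your Storage Foundation or InfoScale version.

# vxdg deport datadg
# vxdisk updateudid dest3par0_0
# vxdg import datadg
# vxvol -g datadg startall

Repeat the vxdisk updateudid command for each migrated device in the disk group before re-importing it.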

150

3PAR Online Import with unidirectional data migration from third-party storage systems to a 3PAR StoreServ Storage system

151

Overview of the 3PAR Online Import Utility

The HPE 3PAR Online Import Utility is a software package that eases the migration of data from a source third-party storage system to a destination 3PAR StoreServ Storage system. Using 3PAR Online Import, you can migrate volumes and host configuration information to a destination 3PAR StoreServ Storage system without changing host configurations or interrupting data access.

NOTE: The terms "storage system" and "array" are used interchangeably throughout this guide, and may refer either to the third-party storage system or the 3PAR StoreServ Storage system. The "source" in the migration is the third-party storage system, and the "destination" is the 3PAR StoreServ Storage system.

The 3PAR Online Import Utility coordinates the movement of data from the source to the destination while servicing I/O requests from the hosts. During the data migration, host I/O is serviced from the source storage system through the 3PAR StoreServ Storage system. The host/volume presentation currently implemented on the third-party storage system is maintained on the destination 3PAR StoreServ Storage system.

For additional information about supported third-party storage systems, see the 3PAR Online Import Utility support matrix on the SPOCK website:

IMPORTANT: For information about the 3PAR Peer Motion Utility and 3PAR Online Import Utility commands, with descriptions of the commands, their parameters, and examples, see 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands on page 341.

The third-party data migration process

Data can be migrated by selecting a host or a volume on the source storage system. In addition to the host or volume explicitly selected for migration, other objects will be included in the migration using implicit selection. The migration process identifies the relationship between hosts and presented volumes and selects all additional objects needed to completely migrate the hosts. Consequently, the objects that will be migrated are the combination of the explicit and the implicit selection, enabling the migration of a large number of volumes.

Selecting a single host results in the implicit migration of the following:

The host as well as any volumes presented to it
Any other hosts to which those volumes are presented
Any volumes presented to those other hosts

The maximum number of volumes that can be migrated with an MDM, online, or offline migration is 255.

IMPORTANT: The implicit selection of objects for migration occurs automatically and cannot be modified. If the number of objects selected for migration exceeds 255, the migration will not be submitted and will return a failed status. If you have more volumes to migrate, rerun the migration steps for the additional volumes.

The migration process selects objects to migrate using the following rules:

152 Hosts When selecting a single host or group of hosts with volume presentations, all the volumes presented to the host(s) are migrated. In addition, any presentations the source volumes have to other hosts will include those hosts and all of their presented volumes in the migration. Presented volumes When selecting a volume or group of volumes with host presentations, the selected volumes and the hosts to which they are presented are migrated. In addition, any presentations the source hosts have with other volumes will be included in the migration. Unpresented volumes Only the selected unpresented volumes are migrated. The offline migration type is the only type available for unpresented volume migration. NOTE: For more information about supported and unsupported configurations and about the implicit selection of objects for migration, see Migrations supported by the 3PAR Online Import Utility on page 152. Migration phases The process to migrate source storage volumes to a 3PAR StoreServ Storage system consists of four phases: Preparing for migration Planning the migration, reconfiguring the host multipath solution (if necessary), installing and configuring the 3PAR Online Import Utility, installing and configuring the SMI-S provider, and preparing the host for migration. Premigration Creating a migration definition, if applicable, which involves defining the source and destination systems for the migration. This work is not repeated when migrating a second host or additional volumes between the same source and destination system after the first migration. Migration Creating and executing the migration. The migration work can include the process of unzoning the host from the source storage system, zoning the host to the destination storage system, and removing the migration definition after the migration is complete. This work is repeated for every migration. Postmigration Cleaning up the configuration after migration is completed. Supported migrations and requirements Migrations supported by the 3PAR Online Import Utility For migration types supported for EMC Storage, HDS Storage, and IBM XIV Storage arrays, see Table 7: Migrations supported for EMC Storage, HDS Storage, and IBM XIV Storage Arrays on page 154. For additional migration types supported for EMC VMAX, VNX, VNX2, and CLARiiON CX4 arrays, see Table 8: Additional migrations supported for EMC VMAX, VNX, VNX2, and CLARiiON CX4 arrays on page 158. For migration types supported by the 3PAR Online Import Utility for the HDS VSP, HDS USP_VM, the HDS USP_V, the HDS TagmaStore USP, and the HDS TagmaStore NSC, see Table 7: Migrations supported for EMC Storage, HDS Storage, and IBM XIV Storage Arrays on page 154. See Table 6: Legend for the migration-type tables on page 153 for a legend for the migration-type tables. 152 Migration phases

153 Table 6: Legend for the migration-type tables LUN Host Storage group/host group Item selected by user input on the 3PAR Online Import Utility For information about online import from an EVA Storage system to a 3PAR StoreServ Storage system, see HPE 3PAR Online Import for EVA Storage. This document is available at the Hewlett Packard Information Library website: Migrations supported for EMC storage, HDS storage, and IBM XIV arrays The following third-party arrays are supported for unidirectional data migration to a 3PAR StoreServ Storage system: For EMC Storage: VNX, VNX2, CLARiiON CX4, VMAX, and DMX4. NOTE: For VNX and CLARiiON CX4, the failover mode must be set to 4 (see For EMC VNX and CX4 storage controllers, the HPE 3PAR Peer Port HBA initiators must be set to failovermode 4 (Active/Active) on page 321. For HDS Storage: HDS VSP, HDS USP_VM, HDS USP_V, HDS TagmaStore USP, and HDS TagmaStore NSC For IBM XIV Storage: XIV_Gen2, XIV_Gen3 For details about supported arrays, see the 3PAR Online Import - Migration Host Support Matrix on the SPOCK website: Migrations supported by the 3PAR Online Import Utility for third-party storage arrays are shown in Table 7: Migrations supported for EMC Storage, HDS Storage, and IBM XIV Storage Arrays on page 154 and Table 8: Additional migrations supported for EMC VMAX, VNX, VNX2, and CLARiiON CX4 arrays on page 158. Migrations supported for EMC storage, HDS storage, and IBM XIV arrays 153

154

All emulation types for OPEN systems are supported. Mainframe volumes on the source storage system cannot be migrated.

Table 7: Migrations supported for EMC Storage, HDS Storage, and IBM XIV Storage Arrays

Supported configuration | Description
Single LUN, single host | The host is selected in the 3PAR Online Import Utility.
Single LUN, single host | The LUN is selected in the 3PAR Online Import Utility.
Multiple LUNs, single host | The host is selected in the 3PAR Online Import Utility.
Multiple LUNs, single host | All LUNs are selected in the 3PAR Online Import Utility.

Table Continued

155 Supported configuration Description Multiple LUNs, single host Not all LUNs are selected in the 3PAR Online Import Utility. All LUNs will be migrated. Single LUN, multiple hosts The host is selected in the 3PAR Online Import Utility. Single LUN, multiple hosts The LUN is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts The host is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts All LUNs are selected in the 3PAR Online Import Utility. Table Continued Overview of the 3PAR Online Import Utility 155

156 Supported configuration Description Multiple LUNs, multiple hosts Not all LUNs are selected in the 3PAR Online Import Utility. Single LUN, multiple hosts The host is selected in the 3PAR Online Import Utility. Single LUN, multiple hosts The LUN is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts One of the hosts is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts All LUNs are selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts Not all LUNs are selected in the 3PAR Online Import Utility. Table Continued 156 Overview of the 3PAR Online Import Utility

157 Supported configuration Description Single LUN, multiple hosts One of the hosts is selected in the 3PAR Online Import Utility Single LUN, multiple hosts The LUN is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts One of the hosts is selected in the 3PAR Online Import Utility. Multiple LUNs, multiple hosts All LUNs are selected in the 3PAR Online Import Utility. Overview of the 3PAR Online Import Utility 157

158

Additional migrations supported for EMC VMAX, VNX, VNX2, and CLARiiON CX4

Table 8: Additional migrations supported for EMC VMAX, VNX, VNX2, and CLARiiON CX4 arrays

Configurations supported for EMC VMAX and DMX4 arrays:
A LUN in multiple storage groups is selected, and the groups are not identical.
A LUN in a single storage group is selected, but it is grouped with a shared LUN, and the shared LUN's storage groups are not identical.
A host is selected, but it is grouped with a shared LUN, and the shared LUN's storage groups are not identical.
LUNs are selected spanning multiple groups, and the storage groups are not identical.
LUNs are selected spanning multiple storage groups, and the storage groups are not identical.

Requirements for concurrent createmigration and startmigration operations

For HDS, EMC VNX, VNX2, CLARiiON CX4, VMAX, and DMX4 storage and IBM XIV storage, you can migrate multiple hosts using a single createmigration command. For all except HDS, you can also use multiple instances of the createmigration command (from the same source storage system).

159

The table that follows shows the different source-destination array configurations that support concurrent migrations. Specific configurations may or may not be supported, depending on the type of source array vendor or model.

IMPORTANT: The concurrent createmigration and startmigration support covers migrations for all volumes and/or subsets of volumes, and applies to all parameters or options that are currently supported with the createmigration and startmigration commands.

NOTE: For an MDM migration type, run the migrations using the "Concurrent migration of multiple source arrays to the same destination array" (N:1) configuration and with the optional vvset and hostset parameters specified. A second createmigration operation can be started only if the first migration is in the import phase, that is, all the importing LUNs should have task IDs.

Table 9: Concurrent migrations with the 3PAR Online Import Utility

Migration configurations:
(A) Concurrent migration of multiple hosts from the same source to the same destination array, using separate createmigration commands for each host
(B) Concurrent migration of multiple, mutually exclusive source-to-destination array pairs (N:N)
(C) Concurrent migration of multiple source arrays to the same destination array (N:1)
(D) Concurrent migration of hosts from the same source array to multiple destination arrays (1:N)

Source array support:
EMC CLARiiON CX4, DMX4, VNX, VMAX: (A) Yes (1); (B) Yes; (C) Yes; (D) Yes
HDS TagmaStore NSC, TagmaStore USP, USP_V, USP_VM, VSP: (A) No (1); (B) Yes; (C) No; (D) No
IBM XIV_Gen2, XIV_Gen3: (A) Yes (1); (B) Yes; (C) Yes; (D) Yes

(1) The HPE 3PAR Online Import Utility supports concurrent createmigration operations between the same source and destination array pair by issuing a single createmigration command for multiple hosts with the same persona type. Use the -srchost host1,host2 parameter (comma-separated host names without any space) in the createmigration command. If hosts test1 and test2 have the same persona type and are to be migrated concurrently, issue a single createmigration command, as follows.

160

Concurrent migrations using a single createmigration command

> createmigration -sourceuid XXXXXXXXXXXXXXXX -destinationuid 2FF70002AC00B08A -srchost test1,test2 -destcpg "FC_r5" -destprov thin -cluster "Linux_Oracle_RAC" -migtype online -allvolumesincg
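As a hedged illustration of the N:1 configuration in Table 9, the following shows two concurrent createmigration operations from two different source arrays to the same destination array. The source UIDs, host names, and CPG are placeholders only; as noted above, start the second operation only after the first migration has reached the import phase.

> createmigration -sourceuid XXXXXXXXXXXXXXX1 -srchost "hostA" -migtype online -destcpg "FC_r5" -destprov thin
> createmigration -sourceuid XXXXXXXXXXXXXXX2 -srchost "hostB" -migtype online -destcpg "FC_r5" -destprov thin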

161 Migration process checklists Follow the steps in this chapter to perform online migration, MDM, or offline migration. Table 10: Phase I: Checklist Preparing for migration Step Type Task See: 1 All Plan the migration. Check the SPOCK website to confirm proper configuration. SPOCK ( Configuration rules for 3PAR Online Import support on page 186 General considerations on page MDM Online For MDM and online migration: Reconfigure the host multipath solution (if vendor-specific multipath software is installed). For EMC Storage source arrays on page 167 For HDS Storage source arrays on page 168 For IBM XIV Storage source arrays on page 169 System requirements on page 170 Migration planning on page 170 Reconfiguring the host multipath solution on page 175 Offline n/a n/a 3 All Install and configure the 3PAR Online Import Utility. 4 All Install and configure the SMI-S provider. Configuration rules for 3PAR Online Import support on page 186 Installing and configuring the 3PAR Online Import Utility on page 187 SMI-S provider installation and configuration on page 200 EMC SMI-S Provider for EMC Storage on page 200 HiCommand Suite installation for HDS Storage on page 203 Migration process checklists 161

162 Table 11: Phase II: Checklist Premigration Step Type Task See: 5 All Review network and fabric zoning requirements. Network and fabric zoning requirements for 3PAR Online Import on page 212 Requirements for multisource array migration on page All Identify the source and destination systems Identifying the source and destination storage systems on page 213 Prerequisites on page 214 Required information on page All Add the source storage system. 8 All Add the destination storage system. 9 All Zone the source storage system to the destination storage system. Adding the source storage system on page 216 Adding the destination storage system on page 217 Zoning the source storage system to the destination 3PAR StoreServ Storage system on page MDM Online Zone the host(s) to the destination storage system. Zoning host(s) to the destination storage system on page All Gather the required prerequisite information. 12 MDM Stop all server applications. Offline/unmount LUNs. Stop cluster service, if applicable. Required prerequisite information on page 221 Preparing clusters for migration on page 171 NOTE: This step applies only to migrations from HDS arrays, and also for Windows Server 2003 configurations from any source array. 13 All Issue the createmigration command. The creatmigration command process on page Migration process checklists

163 Table 12: Phase III: Checklist Migration Step Type Task See: 14 Online 1. Update the host device path information after the createmigration operation. Reconfiguring the host multipath solution on page Unzone the hosts from the source array. Rescan the server bus after the createmigration command completes and after you unzone the hosts from the source array. Network and fabric zoning requirements for 3PAR Online Import on page 212 MDM 3. Issue the startmigration command. 4. Monitor the migration status. 1. Update the host device path information after the createmigration operation. 2. Stop all server applications. Offline/ unmount LUNs. Stop cluster service, if applicable. NOTE: This step applies only to migrations from EMC and IBM arrays for Windows Server 2008 and Windows Server 2012 configurations. Issuing the startmigration command online migration on page 232 showmigration Reconfiguring the host multipath solution on page 175 Preparing clusters for migration on page Shut down the hosts. Reconfiguring the host multipath solution for MDM on page Unzone the hosts from the source array. 5. Issue the startmigration command. Network and fabric zoning requirements for 3PAR Online Import on page 212 Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234 Table Continued Migration process checklists 163

164 Step Type Task See: Offline 6. Monitor the migration status. 7. Bring the hosts back online. 8. Rescan the disks to detect the migrated 3PAR StoreServ Storage volumes. 1. Issue the startmigration command. 2. Monitor the migration status. showmigration Bringing the host back online on page 237 Reconfiguring the host multipath solution on page 175 Issuing the startmigration command offline migration on page 242 Table 13: Postmigration checklist Step Type Task See: 15 All Remove the migration definition when the migration has completed. Remove zoning between the source storage system and 3PAR StoreServ Storage system after all migrations from the thirdparty storage system are complete. When all migrations are complete, remove the source storage system and the destination 3PAR StoreServ Storage from the 3PAR Online Import Utility, using the removesource and removedestination commands. removemigration Performing online migration and MDM Performing offline migration removesource removedestination Reconfigure peer ports to host ports. Table Continued 164 Migration process checklists

165 Step Type Task See: Remove the SMI-S provider. Perform VMware postmigration tasks. Removing an EMC Storage system from EMC SMI-S provider on page 248 Removing HDS Storage system from the HiCommand Suite on page 249 Performing postmigration tasks in VMware ESXi environments on page 247 Migration process checklists 165

166

Phase I: Preparing for data migration

Before migrating data using 3PAR Online Import, be aware of the following general considerations, system requirements, and other tasks associated with planning, host configuration, and network and zoning requirements.

General considerations

Volumes (LUNs or LDEVs) and hosts can be implicitly added to the migration, even if they are not specified by the input.

When you add a third-party storage system as a migration source via Online Import, two FC ports must be zoned with the appropriate HPE 3PAR peer ports. HPE 3PAR Online Import Utility zoning aligns with Federation non-3PAR array zoning; in other words, 1:1 zoning without NPIV ports.

Volumes presented to FC hosts can be migrated online or by MDM. Volumes presented to iSCSI or FCoE hosts can only be migrated offline. The online or MDM migration of volumes presented by the path "FCoE HBA on host -> FCoE + FC SAN switch -> FC port on HPE 3PAR StoreServ" is not supported.

When a cluster is being migrated, all the hosts in the cluster must be migrated at once. If a clustering solution uses SCSI-3 reservations, then only MDM or offline migration is supported.

Multiple source storage systems can be attached as migration sources to the destination 3PAR StoreServ Storage system.

Volumes identified for migration should not be part of any current replication or backup jobs.

The volume limit for offline migrations is 255. The maximum number of volumes that can be migrated with MDM or online migration is 255.

A createmigration command cannot simultaneously contain a host and a volume for migration.

Volumes cannot be migrated if they are less than 256 MB, the minimum volume size on the 3PAR StoreServ Storage systems. The 3PAR StoreServ Storage is 256 MB boundary-based. A volume, even one that is not a multiple of 256 MB, can still be migrated, provided that the volume is not smaller than 256 MB.

Volumes cannot be migrated if they are larger than 16 TB, the maximum volume size on the 3PAR StoreServ Storage system.

No two volumes being migrated should share the same name. This is especially important in scenarios where a virtual instance of a physical LUN or volume is created.

For online migration of a source array LUN with LUN ID 254 on Linux platforms, use the procedure described in Migrating a source array LUN with LUN ID 254 online (Linux platforms) on page 233.

Only the LUNs capable of accepting SCSI reservations are eligible.

For single source array migration in a storage foundation cluster environment where the disk group contains multiple arrays, see Migrating a single source array in a Linux storage foundation cluster on page

167

The hostset/vvset parameter with the srcvolmap option of the createmigration command is not supported.

To be migrated as a compressed volume, a source volume must be at least 16 GB and smaller than 16 TB. For more information, see Volume compression on page 16.

The maximum supported number of source and destination arrays that can be added to the OIU database is 4 each.

It is recommended to use the srchost option instead of srcvolmap during createmigration, except for offline migrations.

It is recommended to delete the peer host group if it already exists in the source array.

A boot LUN should be migrated offline. The boot LUN should not have any presentations before starting the migration.

For more information, see the appropriate HPE 3PAR implementation guide, available at the Hewlett Packard Enterprise Information Library website:

For EMC Storage source arrays

All LUNs present in an EMC Storage group are prepared for migration, even if only one of them was selected in the createmigration command. When a host is selected for migration, all LUNs presented to that host are selected. In both cases, the implicit addition algorithm may add additional LUNs in the case of host sets of clustered hosts. All or a subset of the LUNs selected are migrated with the startmigration command. The selection of the LUNs for the subset is by the -subsetvolmap option. Use cases that benefit from selecting a subset of all volumes for migration include:

When data-transfer time is limited and insufficient for the migration of all volumes, a subsequent migration can be started with another subset until all LUNs have migrated.

When the host issues large amounts of read/write traffic over the destination 3PAR StoreServ Storage and the peer links to the source system, and LUN service time would be adversely influenced by the data migration traffic from the source to the destination system.

Migrating from an EMC DMX4 source array is supported on 3PAR OS MU4 and later.

Volume Identifier Name rules and guidelines for EMC VMAX and DMX4:

A Volume Identifier Name can be specified, replacing the default hexadecimal name for a volume. This identifier can be up to 64 characters long; however, the name for volumes to be migrated using Online Import must be reduced to 31 characters (the maximum length for a volume name on an HPE 3PAR StoreServ).

Volumes to be migrated might have the same Volume Identifier Name; before attempting a migration, make sure that each volume to be migrated has a unique Volume Identifier Name.

When naming EMC volumes through the srcvolmap option, if the migrating volumes do not have Volume Identifier Names assigned to them, the name should include the word Volume, and also a 0 added before every EMC LUN in order to make the device ID 5 characters, if not so already. For example:

168

createmigration -sourceuid <source_id> -srcvolmap [{"Volume 0<volmap_ID>"}] -destcpg <CPG_ID> -destprov full -migtype offline

When naming EMC volumes through the srcvolmap option, the input to the srcvolmap option should be the Volume Identifier Name and not the default hexadecimal name.

When naming EMC volumes through the srcvolmap option, the migrating volumes may already have Volume Identifier Names assigned to them. If a volume selected for migration has a Volume Identifier Name assigned to it, the same name is used to create VVs on the destination HPE 3PAR during the admitvv operation.

HPE 3PAR host persona/host OS: EMC Storage arrays do not implement specific operating system (OS) settings when registering host initiators on the EMC Storage array. Issue the persona parameter in the createmigration command to pass the correct persona to the HPE 3PAR host. Issue the showpersona command for valid persona settings.

When a volume on the source EMC Storage array is migrated, a SCSI-3 reservation is issued to prevent any unwanted management changes to the volume during the migration. With 3PAR OS MU2 and earlier, the SCSI-3 reservation remains on the volume after the migration. With 3PAR OS MU3 and later, the SCSI reservation is removed upon successful migration. For information about removing a SCSI reservation following an unsuccessful migration, see:

Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411
Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage on page 412

For an EMC VMAX or DMX4 migration, the following flags should be enabled on the EMC VMAX or DMX4 ports to be used for the peer links:

SCSI_3(SC3) : Enabled
SPC2_Protocol_Version(SPC2) : Enabled
SCSI_Support1(OS2007) : Enabled

On VMAX, manually delete any stale masking view, initiator group, and child initiator group for an initiator group containing HPE 3PAR StoreServ peer ports.

If required, enable the SCSI3 Persistent Reservation flag on all volumes being migrated.

When performing an online migration from an EMC array using VMware ESXi, ensure that VAAI is disabled on all server nodes and ATS is disabled on all datastores before issuing a createmigration command. These features can be re-enabled once the migration is complete.

For information about concurrent migration support, see Requirements for concurrent createmigration and startmigration operations on page 158.

For HDS Storage source arrays

Only volumes presented to FC hosts can be migrated.

169

LDEVs presented to iSCSI hosts cannot be migrated.

Only LUNs presented to OPEN systems are supported.

All LDEVs present in an HDS Storage group are prepared for migration, even if only one of them was selected in the createmigration command. When a host is selected for migration, all LUNs presented to that host are selected. In both cases, the implicit addition algorithm may add additional LDEVs in the case of host sets of clustered hosts. All or a subset of the LDEVs selected are migrated with the startmigration command. The selection of the LDEVs for the subset is by the -subsetvolmap option. Use cases that benefit from selecting a subset of all volumes for migration include:

When data-transfer time is limited and insufficient for the migration of all volumes, a subsequent migration can be started with another subset until all LDEVs have migrated.

When the host issues large amounts of read/write traffic over the destination 3PAR StoreServ Storage and the peer links to the source system, and LDEV service time would be adversely influenced by the data migration traffic from the source to the destination system.

When an LDEV on the source HDS Storage array is migrated, a SCSI-3 reservation is issued to prevent any unwanted management changes to the LDEV during the migration. The SCSI-3 reservation is removed after the migration.

During migration, LDEVs are renamed for use by the 3PAR StoreServ Storage. In an LDEV name on the source storage system, the colon becomes an underscore; for example, an LDEV name like 04:6A becomes 04_6A.

LDEVs cannot be migrated if they are less than 256 MB, the minimum volume size on the 3PAR StoreServ Storage systems. The 3PAR StoreServ Storage is 256 MB boundary-based. A volume or LDEV, even one that is not a multiple of 256 MB, can still be migrated, provided that the volume is not smaller than 256 MB.

LDEVs cannot be migrated if they are larger than 16 TB, the maximum volume size on the 3PAR StoreServ Storage system.

External volumes cannot be migrated.

Replication volumes cannot be migrated.

For information about concurrent migration support, see Requirements for concurrent createmigration and startmigration operations on page 158.

For IBM XIV Storage source arrays

IBM XIV Storage source arrays are supported only with 3PAR OS MU2 and later.

For information about concurrent migration support, see Requirements for concurrent createmigration and startmigration operations on page 158.

Migrating a previously migrated XIV LUN between two federated 3PARs using SSMC is supported starting with SSMC version 2.4.

Cluster hosts are not supported for migration. To migrate the cluster host, move all the nodes out of the cluster host. Moving the nodes out of the cluster host does not impact node-level LUN mapping or node application I/O.

For host mapping, LUNs mapped with a LUN ID greater than 255 are not supported for migration.

170

System requirements

Before migrating data from a source storage system to a 3PAR StoreServ Storage system using 3PAR Online Import, the following system requirements must be met:

The source storage system must be running a supported firmware level. (See the SPOCK website for supported firmware versions.)

The destination 3PAR StoreServ Storage system must have a valid 3PAR Online Import or HPE 3PAR Peer Motion license installed.

The destination 3PAR StoreServ Storage system must be at a supported 3PAR OS level. (See the SPOCK website for supported 3PAR OS levels.)

Always use the most current version of the 3PAR Online Import Utility.

For detailed information about ensuring that the source storage system and the destination 3PAR StoreServ Storage system are configured properly for data migration, see the SPOCK website:

Migration planning

Advance planning ensures that you achieve the desired results when migrating data:

It is a recommended best practice to make a backup of your host/data before starting a migration.

If you have created your own host or cluster hosts on the destination 3PAR (outside of those created by the OIU), make sure the host name and member WWNs match the host on the source array, and make sure the appropriate host persona is assigned.

Identify the volumes and/or hosts that will be migrated. If you do not want to migrate all the volumes or LUNs presented to a host, you must unpresent the volumes or LUNs that you do not want migrated. Otherwise, all the volumes will be implicitly included in the migration.

When you migrate TPVVs from the source storage system, only those volumes can be migrated where the TPVV pool size is greater than or equal to the total size of the volumes selected for migration.

NOTE: Volume size is the actual size, not merely the space used by the TPVV.

Determine whether you will be using thin (dynamic), full, compression, or dedupe provisioning on the destination storage system. This decision impacts the amount of capacity needed on the destination storage system.

NOTE: For 3PAR Online Import, TDVVs are supported on a destination with 3PAR OS MU2 or later. For HPE 3PAR Online Import, compressed VVs are supported on a destination with 3PAR OS or later.

Because there is some impact on performance, you may want to schedule migrations during off-peak hours, if possible. Hosts with a lighter load/less data should be migrated first.

The use of consistency groups is supported by the 3PAR Online Import Utility. I/O that is issued to volumes that are members of a consistency group is mirrored to the source array until all members are completely migrated to the destination array, keeping the source volumes in a consistent state. For more information, see Consistency Groups management on page 136.

If performing an online migration using ESXi, downtime must be scheduled beforehand to disable ATS on all datastores as well as VAAI on all servers.
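As a hedged illustration of the ESXi preparation noted above, VAAI can be disabled host-wide on each ESXi server with the standard vSphere advanced settings shown below (VMFS3.HardwareAcceleratedLocking corresponds to ATS). These esxcli commands are a sketch only; confirm the procedure, and any per-datastore ATS-only settings, against VMware and HPE documentation for your ESXi version before use, and set the values back to 1 after the migration completes.

esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0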

171 EMC Storage A maintenance window is required for the removal of EMC PowerPath, if it is present. For offline migration from VMAX, VNX, VNX2, or CLARiiON CX4, volumes need to be assigned to storage group with no host mapping associated with it. When selecting objects for migration, be aware of any overlap or sharing of volumes between hosts. In this situation, selecting a host or a volume in any set results in every volume and every host in all sets being implicitly selected for migration due to the overlap. For additional information, see Migrations supported by the 3PAR Online Import Utility on page 152. For EMC VMAX and DMX4 only: Gatekeeper LUNs are widely used for communication between the host and the EMC array or for array operations such as backups. Gatekeeper LUNs are typically smaller than 256 MB, and are therefore too small to be migrated. If gatekeeper LUNs have been presented to hosts explicitly in the masking view that corresponds to a migration, they cause the migration to fail. Gatekeeper LUNs (or any small LUNs) must be removed from masking views being migrated. Identify which functions on the host are relying on the gatekeeper and ensure that they are properly shut down. In the case of operations such as automated backup, plan to perform the operation after the migration. HDS Storage A maintenance window is required for the removal of the Hitachi Dynamic Link Manager (HDLM) software, if it is present. Remove any existing host group object like HCMDxxxx from CHA ports. IBM XIV Storage There are no specific steps that need to be executed in the planning phase for IBM XIV Storage arrays. Native OS multipathing is assumed to be enabled on the hosts being migrated from the IBM XIV source array to the destination 3PAR StoreServ Storage. NOTE: A maintenance window is required for the removal of third-party multipath software, if present. If, during the course of a migration, peer ports are changed, remove any existing HPE 3PAR peer host group created by OIU on the source before attempting another migration. Preparing clusters for migration Preparing Windows clusters for migration When a Windows cluster is migrated, the cluster must first be disabled so that any SCSI reservations in use are released. For an active/passive cluster running Windows Server 2008 or Windows Server 2008 R2, this can be achieved by stopping all applications running on the cluster, then stopping the cluster service on all nodes. When a Windows Server 2012 active/passive cluster is migrated, maintenance mode must be set on the cluster disks to clear SCSI reservations on the disks. Because the quorum disk cannot be set in the maintenance mode, it must be set to the offline mode to clear its SCSI reservations. Then, the cluster service must be stopped on ALL nodes. EMC Storage 171

For Hyper-V running on Windows Server 2008 R2, follow these steps to stop the cluster:
Procedure
1. Set the quorum disk to Offline and the CSV disks to Maintenance mode.
2. Using the Failover Cluster Manager, select Shutdown Cluster.
3. Clear the cluster reservation, if present, by following the appropriate Microsoft documentation about Hyper-V clusters.
For Windows Server 2003 or Windows Server 2003 R2, stop the cluster service on each cluster node. For more information, see the Microsoft documentation on how to start and stop the cluster services.
Preparing HP-UX clusters for migration on an HDS Storage system
Procedure
For HP-UX 11i v3, the Serviceguard cluster must be stopped gracefully before the node is shut down during MDM. For active/active clusters, offline all disks. For more information, see the Serviceguard for HP-UX documentation, available on the Serviceguard documentation website.
Preparing HP-UX Serviceguard Active-Passive clusters using native multipathing for online migration on an HDS Storage system
CAUTION: If legacy DSFs are used, HPE recommends that you change to agile DSFs. If volume groups are not configured with agile DSFs, then after the createmigration operation is complete and before removing the source paths, you must add the corresponding legacy 3PAR StoreServ Storage paths for the LUNs manually. Otherwise, data will become unavailable. For more information, see the white paper LVM Migration from Legacy to Agile Naming Model, available on the Hewlett Packard Enterprise website.
Preparing Veritas clusters for migration
NOTE: For EMC Storage as the source array, only VMAX and DMX4 arrays are supported with Veritas clusters.
Procedure
1. On Veritas clusters, stop the clusters by issuing the following command:
# /opt/VRTSvcs/bin/hastop -all
2. Then verify the cluster status by issuing the following command on all nodes:
# /opt/VRTSvcs/bin/hastatus
When the cluster is down, messages like the following will appear:

173 attempting to connect... VCS ERROR V Cannot connect to VCS engine attempting to connect...not available; will retry 3. To verify that reservation is clear after stopping a cluster, follow these steps: a. Issue the vxdisk list command to get a list of all LUNs or LDEVs used in the cluster: Example: EMC Storage Listing LUNs in a cluster # vxdisk list DEVICE TYPE DISK GROUP STATUS cciss/c0d0 auto:none - - online invalid emc0_00fe auto:cdsdisk - - online thinrclm emc0_00ff auto:cdsdisk - - online thinrclm emc0_0100 auto:cdsdisk - - online thinrclm emc0_0101 auto:cdsdisk - - online thinrclm emc0_0102 auto:cdsdisk - - online thinrclm emc0_0103 auto:cdsdisk - - online thinrclm emc0_0104 auto:cdsdisk - - online thinrclm emc0_0105 auto:cdsdisk - - online thinrclm emc0_0106 auto:cdsdisk - - online thinrclm Example: HDS Storage Listing LDEVs in a cluster # vxdisk list DEVICE TYPE DISK GROUP STATUS hitachi_usp0_1000 auto:cdsdisk - - online hitachi_usp0_1001 auto:cdsdisk - - online hitachi_usp0_1002 auto:cdsdisk - - online hitachi_usp0_1003 auto:cdsdisk - - online hitachi_usp0_1004 auto:cdsdisk - - online hitachi_usp0_1005 auto:cdsdisk - - online hitachi_usp0_1006 auto:cdsdisk - - online hitachi_usp0_1007 auto:cdsdisk - - online hitachi_usp0_1008 auto:cdsdisk - - online b. Create a tmpfile in the /root directory with all the LUNs or LDEVs. The tmpfile will be similar to the following example: Example EMC Storage tmpfile with LUNs /dev/vx/rdmp/emc0_00fe /dev/vx/rdmp/emc0_00ff /dev/vx/rdmp/emc0_0100 /dev/vx/rdmp/emc0_0101 /dev/vx/rdmp/emc0_0102 /dev/vx/rdmp/emc0_0103 /dev/vx/rdmp/emc0_0104 /dev/vx/rdmp/emc0_0105 /dev/vx/rdmp/emc0_0106 Example HDS Storage tmpfile with LDEVs /dev/vx/rdmp/hitachi_usp0_1000 /dev/vx/rdmp/hitachi_usp0_1001 /dev/vx/rdmp/hitachi_usp0_1002 /dev/vx/rdmp/hitachi_usp0_1003 /dev/vx/rdmp/hitachi_usp0_1004 /dev/vx/rdmp/hitachi_usp0_1005 /dev/vx/rdmp/hitachi_usp0_1006 /dev/vx/rdmp/hitachi_usp0_1007 /dev/vx/rdmp/hitachi_usp0_1008 Phase I: Preparing for data migration 173

174 c. Issue the vxfenadm -s all -f tmpfile command to verify that the reservation keys are clear: Example EMC Storage Verifying that reservation keys are clear # vxfenadm -s all -f tmpfile Device Name: /dev/vx/rdmp/emc0_00fe Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0100 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_00ff Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0101 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0102 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0106 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0105 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0103 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/emc0_0104 Total Number Of Keys: 0 Example HDS Storage Verifying that reservation keys are clear # vxfenadm -s all -f tmpfile Device Name: /dev/vx/rdmp/hitachi_usp0_1000 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1002 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1001 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1004 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1003 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1005 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1007 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_1006 Total Number Of Keys: 0 No keys... Device Name: /dev/vx/rdmp/hitachi_usp0_ Phase I: Preparing for data migration

175 Total Number Of Keys: 0 No keys... d. If the reservation keys are not clear, issue the following commands on any one cluster node to clear them: # vxfenadm -a -k f tmpfile # vxfenadm -c -k f tmpfile The reservation keys are clear when each data LUN in the tmpfile indicates No keys. Reconfiguring the host multipath solution Data migration may involve reconfiguring the host multipath solution from a third-party multipath solution to one supported by a 3PAR StoreServ Storage system. Native multipath software varies, depending on the host operating system: For a Windows host: Windows Server 2012, Windows Server 2012 R2, Windows Server 2008, and Windows Server 2008 R2: Windows native MPIO Windows Server 2003 only: HPE MPIO for Windows Server 2003 For a Linux host: Linux native device-mapper multipath software For an IBM AIX host: the HPE 3PAR Multipath I/O ODM for IBM For an HP-UX host: HP-UX 11i v3: HP-UX native multipath software HP-UX 11i v2: PVLinks, a component of the HP-UX LVM For up-to-date information about the supported multipath solutions for 3PAR StoreServ Storage systems, see the SPOCK website: After the migration preparation phase is complete, the host must be reconfigured to use the multipath software for the 3PAR StoreServ Storage system. Procedure For EMC Storage: Remove the EMC PowerPath multipath software, if present, and configure native multipath software. IMPORTANT: If you unzone the host from an EMC Storage system but do not reconfigure the host multipathing software, the host/cluster will experience an outage, because it will not be able to communicate with the destination 3PAR StoreServ Storage system. For HDS Storage, the Hitachi Dynamic Link Manager (HDLM) software installed on the host can stay present until the last HDS LDEVs are migrated. The HDLM can be removed at the next maintenance window, but this is not required. Reconfiguring the host multipath solution 175

NOTE: After the migration has completed, if the host connects to another HDS Storage array, the HDS multipathing software must remain installed on the host. If no other HDS arrays are connected, the HDS multipathing software may optionally be uninstalled.
VMware compatibility considerations
See the VMware Compatibility Guide to ensure that your ESXi configuration conforms to VMware's guidelines pertaining to features such as VAAI and ATS for your particular storage array. For an online migration on a storage array on which these features are not supported, plan downtime before the migration to disable ATS on all datastores as well as VAAI on all servers. Both features can be re-enabled once the migration to the 3PAR StoreServ Storage is complete. The VMware Compatibility Guide is available on the VMware website.
NOTE: For VMware, you may need to re-signature migrated disks before the first server reboot. Plan for this after the migration is complete.
Reconfiguring the host multipath solution for online migration
For Oracle RAC-based configurations
Before migrating a SAN-based Oracle RAC cluster using the 3PAR Online Import Utility, it is critical to understand whether and how the Oracle RAC cluster registry (CRS), voting disks, and data disks are distributed across the source arrays, and to plan migrations accordingly. For third-party arrays on which the Online Import Utility does not support N:1 migration, multiple migrations must be executed serially in order to transfer all the Oracle-based disks from multiple source arrays to a single destination 3PAR StoreServ Storage. This use-case scenario is described in detail in Data migration for an Oracle RAC cluster use case on page 418. At the time of this publication, the distribution of Oracle RAC disks is supported across the 3PAR StoreServ Storage and the EMC Storage, HDS Storage, or IBM XIV Storage arrays listed on the SPOCK website.
For ASM-based Oracle RAC configurations, the persistent device or partition names that are used by the ASMlib to label ASM disks must be modified. With vendor-specific multipath software, the devices get the following names:
• For EMC Storage: /dev/emcpower*
• For HDS Storage: /dev/sddlm*
With Linux native device-mapper multipath software, the devices get a /dev/mapper/mpath* name. To ascertain whether the current ASMlib is using the vendor-specific multipath-based names, issue the following Oracle ASM CLI command:
Example: oracleasm querydisk for EMC Storage
# oracleasm querydisk -p /dev/emcpower*
Example: oracleasm querydisk for HDS Storage
# oracleasm querydisk -p /dev/sddlm*

177 Vendor-specific multipath-based device names are being used if the output is similar to the following example: Example: Vendor-specific multipath device names Device "/dev/emcpowera1" is marked an ASM disk with the label "ASM_DISK_*" Uninstall the vendor-specific multipath software by following these steps: CAUTION: The vendor-specific multipath software must not be present during the migration process. The native Linux device-mapper-multipath must be managing the paths. CAUTION: Stop all applications before removing the vendor-specific multipath software. For cluster configurations, also stop the cluster services and the cluster. Applications should be restarted after migration has started. Prerequisites If vendor-specific multipath software (that is, EMC PowerPath for EMC Storage, or the HDLM for HDS Storage) is installed, it must be uninstalled, and another multipath software, usually one native to the OS, must be used to configure multipathing. A maintenance window is required to complete an online data migration with removal of the vendorspecific multipath software (see step 2 ). Procedure 1. Close all applications on the host. NOTE: Stop the Oracle RAC database services by issuing the following command on the primary node: # $ORACLE_HOME/bin/srvctl stop database -d <DB_name> -o immediate To confirm that the database services have stopped, issue the following command: # $ORACLE_HOME/bin/srvctl status database -d <DB_name> 2. Unmount the application file systems, bring offline any raw devices (if configured), and deactivate any volume groups where the LVM is in use. 3. For a cluster, stop the cluster services, and then stop the cluster. Phase I: Preparing for data migration 177

178 NOTE: For Oracle RAC, stop the cluster services and the ASM by performing the following steps: a. Issue the following command on the primary node: # $GRID_HOME/bin/crsctl stop cluster -all b. To confirm that the cluster services have stopped, issue the following command on any of the nodes: # $GRID_HOME/crs_stat -t The expected output is as follows: CRS-0184: Cannot communicate with CRS Daemon c. Stop the ASM by issuing the following command on each node: # /etc/init.d/oracleasm stop 4. Uninstall the vendor-specific multipath software from the host, following the EMC PowerPath or HDS HDLM documentation instructions. 5. Configure the native multipathing software on the host. In a cluster environment, update the multipath.conf file on each of the cluster nodes. NOTE: For HDS Storage arrays using ESXi 5.5, ESXi 5.1, ESXi 5.0, or HP-UX 11i v3: For ESXi 5.5, ESXi 5.1, or ESXi 5.0, set the multipath load balance policy to round-robin for all the devices on all ESXi cluster nodes To verify newly discovered paths using ESXi 5.5, ESXi 5.1, or ESXi 5.0, issue the esxcfg-mpath -b command on the ESX node, or use the vcenter console. For HP-UX 11i v3, issue the following commands to verify newly discovered paths: # ioscan -f # ioscan -m lun For the Linux DM-MPIO device path updates to work appropriately with the EMC CX4, VNX, or VNX2 arrays, under the EMC DGC device array entries in the /etc/ multipath.conf file, edit the hardware_handler setting to read and restart the native device-mapper multipath service: hardware_handler "1 alua" If the HPE 3PAR LUNs are not whitelisted, register the HPE 3PAR LUN types with native devicemapper-multipath by whitelisting HPE 3PAR-specific information. (In the /etc/multipath.conf file, the vendor is 3PARdata and the product is VV.) See the 3PAR StoreServ Storage product documentation on the Linux host configuration. 3PARdata and VV are case sensitive. If the HPE 3PAR LUNs are whitelisted, start the native device-mapper multipath. Example: Starting the native device-mapper multipath 178 Phase I: Preparing for data migration

179 # /etc/init.d/multipathd restart ok Stopping multipathd daemon: [ OK ] Starting multipathd daemon: [ OK ] Verify multipathing updates by rescanning the HBAs and listing the mapping. Example: EMC Storage Rescanning HBAs and listing the updated multipath mapping with RHEL 5.x host2 host3 # echo "1" > /sys/class/fc_host/host2/issue_lip # echo "- - -" > /sys/class/scsi_host/host2/scan # echo "1" > /sys/class/fc_host/host3/issue_lip # echo "- - -" > /sys/class/scsi_host/host3/scan # multipath -ll mpath13 ( bf902a00e03cb1bb3c3fe411) dm-4 DGC,VRAID [size=150g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 3:0:0:2 sdg 8:96 [active][ready] \_ 2:0:1:2 sdj 8:144 [active][ready] mpath12 ( bf902a00cae2a2a33c3fe411) dm-3 DGC,VRAID [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 3:0:0:1 sdf 8:80 [active][ready] \_ 2:0:1:1 sdi 8:128 [active][ready] mpath11 ( bf902a002ac0388f3c3fe411) dm-2 DGC,VRAID [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 3:0:0:0 sdb 8:16 [active][ready] \_ 2:0:1:0 sdh 8:112 [active][ready] Example: HDS Storage Rescanning HBAs and listing the updated multipath mapping for RHEL 5.x and RHEL 6.x # ls /sys/class/fc_host host4 host5 # echo "1" > /sys/class/fc_host/host4/issue_lip # echo "1" > /sys/class/fc_host/host5/issue_lip # echo "- - -" > /sys/class/scsi_host/host4/scan # echo "- - -" > /sys/class/scsi_host/host5/scan # multipath -ll mpath2 (360060e80045be be ) dm-3 HITACHI,OPEN-8 [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:1 sdb 8:16 [active][ready] \_ 5:0:1:1 sdh 8:112 [active][ready] mpath1 (360060e80045be be ) dm-2 HITACHI,OPEN-8 [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:0 sda 8:0 [active][ready] \_ 5:0:1:0 sdg 8:96 [active][ready] Example: IBM XIV Storage Rescanning HBAs and listing the updated multipath mapping with RHEL 5.x # ls /sys/class/fc_host host4 host5 # echo "1" > /sys/class/fc_host/host4/issue_lip Phase I: Preparing for data migration 179

180 # echo "1" > /sys/class/fc_host/host5/issue_lip # echo "- - -" > /sys/class/scsi_host/host4/scan # echo "- - -" > /sys/class/scsi_host/host5/scan # multipath -ll mpathr ( e4a055e) dm-13 IBM,2810XIV size=1.9t features='1 queue_if_no_path' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active - 4:0:0:12 sdm 8:192 active ready running - 5:0:0:12 sdbb 67:80 active ready running - 4:0:1:12 sdy 65:128 active ready running `- 5:0:1:12 sdbn 68:16 active ready running mpathe ( e4a001e) dm-5 IBM,2810XIV size=16g features='1 queue_if_no_path' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active - 4:0:0:4 sde 8:64 active ready running - 5:0:1:4 sdbf 67:144 active ready running - 4:0:1:4 sdq 65:0 active ready running `- 5:0:0:4 sdat 66:208 active ready running Example: VMware ESXi CLI command Rescanning HBAs with ESXi 5.5, ESXi 5.1, or ESXi 5.0 # esxcli storage core adapter rescan --all Example: VMware ESXi CLI command Listing updated multipath mapping with ESXi 5.5, ESXi 5.1, or ESXi 5.0 # esxcfg-mpath b Example: HP-UX CLI command Rescanning HBAs 6. If the vendor-specific multipath software was uninstalled (see Reconfiguring the host multipath solution on page 175, after multipathing has been configured again on the host, follow these steps: a. Modify /etc/fstab for new mount points. Base the new mount points on discovered LUNs, if the migrating LUN alias was not created in the multipath.conf file. Remount file systems. 180 Phase I: Preparing for data migration

181 IMPORTANT: In Linux, vendor-specific multipath devices are presented as follows: EMC PowerPath: /dev/emcpower* (for example, /dev/emcpowera) HDLM: /dev/sddlm* (for example, /dev/sddlmaa1) Removal of the vendor-specific multipath software in Linux also removes this device type. This changes when the Linux native device-mapper multipath assumes management of these devices. For example, before removal of the vendor-specific multipath software, consider combining the device names as follows: For EMC Storage, combine /dev/sdb and /dev/sdc into /dev/emcpowera For the HDLM, combine /dev/sdb and /dev/sdc into /dev/sddlmaa1 After removal of the vendor-specific multipath software, the devices (/dev/sdb and /dev/sdc for EMC Storage, or /dev/sdb and /dev/sdc for HDS Storage) would be represented by /dev/mpathx. This represents a challenge for customers who use direct device referencing in /etc/ fstab or other custom scripts. Hewlett Packard Enterprise generally recommends that fstab mounts be performed using blkid/uuid ; however, this is not always employed. In that case, consider mounting /dev/emcpowera (for EMC Storage) or /dev/ sddlmaa1 (for HDS Storage) as /var. After removal of the vendor-specific multipath software, /var would not automatically mount to /dev/mpathx. b. Where applicable, make the appropriate changes to the LVM configuration. c. If this is for a cluster configuration, initialize the cluster and start cluster services. For Oracle RAC-based configurations Phase I: Preparing for data migration 181

182 NOTE: For Oracle RAC, edit the /etc/sysconfig/oracleasm file on all nodes in order to redirect the ASMlib to use the newly defined persistent device names from the /etc/ multipath.conf file, which will replace the names that were being used to label the ASM disks ( /dev/emcpower* for EMC PowerPath, /dev/sddlm* for HDS Storage). I. In the /etc/sysconfig file, change the following: ORACLEASM_SCANORDER="" # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan ORACLEASM_SCANEXCLUDE="" to: # ORACLEASM_SCANORDER: Matching patterns to order disk scanning ORACLEASM_SCANORDER="dm" # ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan ORACLEASM_SCANEXCLUDE="<pattern>" where <pattern> is: emcpower (for EMC Storage) sddlm (for HDS Storage) II. Reset the oracleasm services by issuing the following commands on all cluster nodes, beginning with the primary node: # /etc/init.d/oracleasm start # oracleasm scandisks # oracleasm listdisks III. Verify that all ASM disks are visible from all nodes. Also verify that the ASM disk labels are now using the appropriate Linux native devicemapper multipath-generated names by issuing the following command: # oracleasm querydisk -p <ASM_label> The output should be similar to the following example for all configured ASM disks: Device <device_alias> is marked as an ASM disk with the label "ASM_DISK_*" IV. Restart the Oracle RAC cluster and database services by issuing the following command on the primary node: # $GRID_HOME/bin/crsctl start cluster -all # $ORACLE_HOME/bin/srvctl start database -d <DB_name> Verify that all cluster services have been restarted by issuing the following commands on all nodes: 182 Phase I: Preparing for data migration

183 # $GRID_HOME/bin/crs_stat t # $GRID_HOME/bin/crsctl stat res t init d. Start applications. Reconfiguring the host multipath solution for MDM CAUTION: Third-party storage-specific multipath software, such as EMC PowerPath for EMC Storage or the HDLM for HDS Storage, must not be installed during the migration process. Instead, the paths must be managed by native multipath software. CAUTION: Stop all applications before removing the storage-specific multipath software. For cluster configurations, also stop the cluster services and the cluster. Applications can be restarted after migration has started. NOTE: Hewlett Packard Enterprise recommends that, for MDM, the host be shut down for the zoning changes and multipath reconfiguration. While the host is down, data transfer is started, after which the host can be brought back online. This process ensures that the host is never multipathing between the source storage system and the destination 3PAR StoreServ Storage system. Perform the following steps on the host: Procedure 1. If vendor-specific multipath software is present, remove it from the host. If EMC PowerPath software is present, remove it from the host, following the EMC PowerPath documentation instructions. If the HDLM software is present, remove it from the host, following HDS documentation instructions. For Linux, IBM AIX, and HP-UX hosts: a. Close all applications on the host. b. For a cluster, stop the cluster services, and then stop the cluster. c. Unmount the application file systems, bring offline any raw devices (if configured), and deactivate any volume groups where the LVM is in use. d. Uninstall the HDLM from the host, following HDS documentation instructions. For Microsoft Windows Host Operating System: After removal, you will be prompted to restart the host in order for the changes to take effect. Do not restart at this point. Reconfiguring the host multipath solution for MDM 183

IMPORTANT: If you are uninstalling the host multipath software from a Windows host where the host is booting over SAN from the source system, you will be prompted to restart the host to complete the removal of the multipath software. This restart is required and must be performed. The removal of the host multipath software from a Windows host may also have disabled the MPIO installation without removing it. Manually check whether the native Microsoft MPIO is still loaded, and if it is, fully remove it at this time. Now restart the host. For a Windows Server 2003 R2 cluster or Windows Server 2003 cluster, if the cluster service has restarted after the reboot, you must stop it again on each cluster node.
For Hyper-V: After removal, you will be prompted to restart the host in order for the changes to take effect. Do not restart at this point.
2. Zone the host to the destination 3PAR StoreServ Storage system to establish communication. Using the SSMC, verify that the host whose LUNs are under migration has paths to as many HPE 3PAR controller nodes as are zoned in the SAN.
Additional steps for an HDS Storage system using an IBM AIX host:
a. Using the AIX CLI lspv command, make a note of the volume group-to-PVID mapping.
b. Using the exportvg command, export the volume groups.
c. On the host, use the rmdev -dl <disk> command to delete all disks that are on the HDS Storage system and being migrated.
d. If the host is a member of a cluster, clear all the SCSI reservations on the cluster disks.
e. Install HPE 3PAR Multipath I/O ODM for IBM (if not already installed).
3. Configure the native multipath software on the host.
Configuring MPIO for a Windows host
For more information about configuring multipath software on a Microsoft Windows Server host, see the HPE 3PAR Windows Server 2012 and Windows Server 2008 Implementation Guide or the HPE 3PAR Windows Server 2003 Implementation Guide. These documents are available on the Hewlett Packard Enterprise Information Library website:
For supported Windows Server operating systems (OSs) except Windows Server 2003 or Windows Server 2003 R2:
a. Enable the Windows native multipath MPIO (if it is not already enabled).
b. Register HPE 3PAR LUN types with MPIO by configuring MPIO to use 3PARdataVV as the device hardware ID.

185 NOTE: 3PARdataVV is case sensitive. You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the later shutdown (in Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234, step 1) and subsequent reboot of the host for the MDM procedure replaces this reboot. For Windows Server 2003 or Windows Server 2003 R2: a. Install HPE MPIO for HPE 3PAR. NOTE: You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the later shutdown (in Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234, step 1) and subsequent reboot of the host for the MDM procedure replaces this reboot. b. Upgrade HBA drivers if required. For information about supported HBA driver versions, see the SPOCK website: c. Install the HPE 3PAR NULL driver. The driver is available as a zip file (QL zip) on the Software Depot website: NOTE: You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the later shutdown (in Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234, step 1) and subsequent reboot of the host for the MDM procedure replaces this reboot. Configuring multipath software for a Windows Hyper-V Host a. Install MPIO. b. Register HPE 3PAR LUN types with MPIO by configuring MPIO to use 3PARdataVV as the device hardware ID. NOTE: You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the later shutdown (in Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234, step 1) and subsequent reboot of the host for the MDM procedure replaces this reboot. Configuring multipath software for a Linux Host a. Install device-mapper multipath (if not already installed). b. Register HPE 3PAR LUN types with DM by whitelisting HPE 3PAR-specific information (the vendor is 3PARdata and the product is VV) in /etc/multipath.conf. For more information about the Linux host configuration, see the HPE 3PAR Red Hat and Oracle Linux Implementation Guide, available at the Hewlett Packard Enterprise Information Library website: Phase I: Preparing for data migration 185

NOTE: 3PARdataVV is case sensitive. You will be prompted to reboot the host in order for the change to take effect. Do not reboot at this point, since the later shutdown (in Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page 234, step 1) and subsequent reboot of the host for the MDM procedure replaces this reboot.
c. Restart device-mapper multipath.
d. If vendor-specific multipath software was removed in step 1, modify /etc/fstab for new mount points. Base the new mount points on discovered LUNs, if the migrating LUN alias was not created in the multipath.conf file.
Configuring multipath software for an HP-UX host
Upgrade HBA drivers, if required. For information about supported HBA driver versions, see the SPOCK website:
For more information about configuring multipath software, see the HPE 3PAR HP-UX Implementation Guide. This document is available at the SPOCK website.
Configuring multipath software for an IBM AIX host
Upgrade HBA drivers, if required. For information about supported HBA driver versions, see the SPOCK website:
For more information about configuring multipath software, see the HPE 3PAR AIX and IBM Virtual I/O Server Implementation Guide. This document is available at the SPOCK website.
Configuration rules for 3PAR Online Import support
For detailed information about ensuring that the source storage system and the destination 3PAR StoreServ Storage system are configured properly for data migration, see the SPOCK website:
The SPOCK website lists the supported system environments, including:
• The supported source storage systems and their associated firmware levels.
• The supported destination 3PAR StoreServ Storage systems and their associated 3PAR OS versions.
• The supported host operating systems.
• The supported SMI-S versions.

• The supported HBAs and the associated BIOS/firmware/driver versions that allow 3PAR StoreServ Storage and source storage coexistence.
• The supported configuration environments for installing 3PAR Online Import.
Installing and configuring the 3PAR Online Import Utility
The 3PAR Online Import Utility can be obtained from the HPE Software Depot website:
NOTE: See the SPOCK website for supported target environments for the 3PAR Online Import Utility server:
The 3PAR Online Import Utility software consists of two installable applications: the 3PAR Online Import Utility client component and the 3PAR Online Import Utility server component. Starting with Online Import Utility 2.2, you must use matching versions of the client and server components. If one component has already been installed on the management server and you want to add the second component on the same system, first run the installer to remove the component already installed, and then rerun the installer to install both components at once.
To upgrade from HPE 3PAR Online Import Utility 2.0 to 2.2, see Upgrading the 3PAR Online Import Utility from 2.0 to 2.2. If you have a pre-2.0 version of the 3PAR Online Import Utility installed, you must uninstall it and then use the procedure that follows.
The default directories and options are shown below:
Default installation directory: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu
Default log files hpeoiu.log and hpeoiuaudit.log: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\logs
Default port numbers:
Service port: 2370
Shutdown port: 2371
Procedure
1. Double-click the installer package image icon to begin the installation process. The 3PAR Online Import Utility splash screen appears with the InstallShield Wizard.

188 2. When the Welcome screen appears, click Next. 3. Accept the end user license agreement and click Next. 188 Phase I: Preparing for data migration

189 4. The Custom Setup dialog box appears. To choose the default setup (installation of the 3PAR Online Import Utility server and client on this machine), click Next. Otherwise, select the component that you want to omit from this installation. Phase I: Preparing for data migration 189

Use this option if you want to install the 3PAR Online Import Utility server and client on separate machines. Then click Next to continue.
5. Click Yes if you have a CA-signed certificate, or click No to generate a certificate from the installer.
After the certificate installation dialog box is passed, the installer checks whether the default ports are already in use on this machine. If they are, another dialog box is displayed, prompting you to specify available ports to use for this installation. This is not a common occurrence. If there was a conflict with the default port numbers and different port numbers were supplied to the installation process, the following steps must be taken after installation is complete to properly use the new ports:
a. Edit OIUCli.bat, located at: <Install location>\Hewlett Packard Enterprise\hpe3paroiu\CLI
b. If port 2396 was chosen as the alternative to 2370, make the change in the OIUCli.bat file, as shown in the following example:
Example: Editing the service port in the OIUCli.bat file
java -jar ..\cli\oiucli jar-with-dependencies.jar %* -port 2396
6. Click Install.

191 7. Monitor the progress of the installation. Phase I: Preparing for data migration 191

192 IMPORTANT: The installer does not support doing a close, cancel, or kill action during the Installation. If the installation is interrupted, the software installation is not fully cancelled and is left in an indeterminate state. DO NOT click the Cancel button or close the window by clicking the X in the upper right corner. 8. Select the Show the Windows Installer log check box if you want to view the installer log file. Click Finish to end the installation. If you selected the Show the Windows Installer log check box, the log appears in a separate window after you click the Finish button. Save the log in a location of your choice. The 3PAR Online Import Utility shortcut is installed on your desktop 9. After successful installation, verify that the following groups have been created: HP Storage Migration Admins HP Storage Migration Users 10. Add the Administrator account or another account with administrator privileges as a member of HP Storage Migration Admins group. This is the USERNAME that will be used for the 3PAR Online Import Utility. Local and domain users can be added. 192 Phase I: Preparing for data migration

11. If the client and server components have been installed on the same machine, you can use the keyword LOCALHOST to connect to the local server. Otherwise, use the IP address or the DNS hostname of the machine where the server is located.
To start the utility, double-click the 3PAR Online Import Utility desktop icon, and provide the USERNAME and PASSWORD that you created in step 10.
Upgrading the 3PAR Online Import Utility from 2.0 to 2.2
The default directories and options are shown below:
Default installation directory:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu
Default log files hpeoiu.log and hpeoiuaudit.log: C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\logs
Default port numbers:
Service port: 2370
Shutdown port: 2371
Procedure
1. Double-click the installer package image icon to begin the installation process. The 3PAR Online Import Utility splash screen appears with the InstallShield Wizard.
2. The Welcome screen appears, along with a pop-up informing you that the existing version of the 3PAR Online Import Utility will be removed. Click Yes to continue with the upgrade or No to cancel. When performing the upgrade, the installer uses existing data to build the OIU database. Once the upgrade is complete, you can run the showsource, showdestination, and showmigration commands to validate the database information. After clicking Yes to continue with the upgrade, click Next in the Welcome screen to proceed with the upgrade.

195 3. Accept the end user license agreement and click Next. 4. The Custom Setup dialog box appears. Phase I: Preparing for data migration 195

196 To choose the default setup (installation of the 3PAR Online Import Utility server and client on this machine), click Next. Otherwise, select the component that you want to omit from this installation, and then click Next to continue. NOTE: Use this option if you want to install the 3PAR Online Import Utility server and client on separate machines. 5. Click Yes if you have a CA signed certificate, or click No to generate a certificate from the installer. After the certificate installation dialog box is passed, the installer checks whether the default ports are already in use on this machine. If they are, another dialog box will be displayed, prompting you to specify available ports to use for this installation. This is not a common occurrence. If there was a conflict with the default port numbers and different port numbers were supplied to the installation process, the following steps must be taken after installation is complete to properly use the new ports: a. Edit OIUCli.bat, located at: 196 Phase I: Preparing for data migration

<Install location>\Hewlett Packard Enterprise\hpe3paroiu\CLI
b. If port 2396 was chosen as the alternative to 2370, make the change in the OIUCli.bat file, as shown in the following example:
Example: Editing the service port in the OIUCli.bat file
java -jar ..\cli\oiucli jar-with-dependencies.jar %* -port 2396
6. Click Install.
7. Monitor the progress of the installation.

198 IMPORTANT: The installer does not support doing a close, cancel, or kill action during the Installation. If the installation is interrupted, the software installation is not fully cancelled and is left in an indeterminate state. DO NOT click the Cancel button or close the window by clicking the X in the upper right corner. 8. Select the Show the Windows Installer log check box if you want to view the installer log file. Click Finish to end the installation. 198 Phase I: Preparing for data migration

9. A reboot is required after the upgrade is complete. Click Yes to reboot now or No to reboot later. Hewlett Packard Enterprise recommends rebooting the server right away.
Source array cleanup after upgrading to 3PAR Online Import Utility 2.0
Following an upgrade from 3PAR Online Import Utility 1.5 to 2.0, source array cleanup is required for:
• Performing cleanup on an EMC VMAX array on page 199
• Performing cleanup on an IBM XIV Gen2 array on page 200
Performing cleanup on an EMC VMAX array
Starting with 3PAR Online Import Utility 2.0, the way the initiator group, port group, storage group, and masking view are created for an EMC VMAX source array is different. With OIU 2.0 and later, the following groups are created only once for the migration between a VMAX and destination HPE 3PAR pair:
HOST_FOR_<Destination HPE 3PAR name>_sg
HOST_FOR_<Destination HPE 3PAR name>_ig

HOST_FOR_<Destination HPE 3PAR name>_pg
For subsequent migrations between that VMAX and HPE 3PAR pair, those same groups are used (instead of being created new for each migration, as in OIU 1.5). After all migrations are completed, the HOST_FOR_<Destination 3PAR name>_mv masking view is deleted.
Before performing a migration using OIU 2.0:
Procedure
From the VMAX, remove any existing initiator group, child initiator group, port group, storage group, and masking view created by OIU 1.5. (OIU 1.5 created objects with the names OIU_ or HOST_FOR.) Failure to do so will result in a failed createmigration.
Performing cleanup on an IBM XIV Gen2 array
During a migration using OIU 1.5, two HPE 3PAR peer hosts are created with the naming format HOST_FOR_<3PAR name>, each with a WWPN assigned to it. With OIU 2.0, one such HPE 3PAR peer host is created with two peer port WWPNs assigned to it.
Procedure
If an HPE 3PAR system was used as a migration destination with OIU 1.5, you must remove its existing peer hosts before it is used as a destination with OIU 2.0. Failure to do this will result in a failed createmigration.
SMI-S provider installation and configuration
For online import from EMC, HDS, and IBM XIV storage systems to a 3PAR StoreServ Storage, an SMI-S provider manipulates the source storage array. The SMI-S provider for EMC Storage is the EMC SMI-S Provider; for HDS Storage, the SMI-S provider is the HiCommand Suite. For IBM XIV Storage, the SMI-S provider is embedded in the array itself.
EMC SMI-S Provider for EMC Storage
The 3PAR Online Import Utility manages the source EMC Storage system through the EMC SMI-S Provider software.
NOTE: If the SMI-S plugin for the third-party array is being installed on a Windows server, Hewlett Packard Enterprise highly recommends that you check the Windows Services and ensure that no other SMI-S agents are already running on the system that is identified for the source array plugin install.
Installing EMC SMI-S Provider for EMC Storage
The EMC SMI-S Provider is a component of EMC Solutions Enabler and requires underlying EMC Solutions Enabler code to function. However, installation of the base EMC Solutions Enabler software does not include the EMC SMI-S Provider component. Installation of the EMC SMI-S Provider includes only the necessary underlying EMC Solutions Enabler code, not a full installation of EMC Solutions Enabler.
IMPORTANT: It is technically possible to install the EMC SMI-S Provider on a machine where EMC Solutions Enabler is already installed, but be aware that the EMC SMI-S Provider installation uninstalls any existing EMC Solutions Enabler software and reinstalls only the subset of EMC Solutions Enabler that comes with the EMC SMI-S Provider.

201 The connection between an EMC VMAX or DMX4 and the EMC SMI-S Provider runs over FC. The connection between an EMC VNX or EMC CLARiiON CX4 and the EMC SMI-S Provider runs over Ethernet. For EMC VMAX and DMX4, the required number of VMAX or DMX4 gatekeeper devices must be presented to the SMI-S server. The EMC SMI-S Provider is available on the EMC Support website: The 3PAR Online Import Utility supports installation of the EMC SMI-S Provider on Windows and Linux. See product release notes, also available on the EMC Support website, for full installation details. To install the EMC SMI-S Provider software, follow these steps: Prerequisites An EMC login account is required to download the EMC SMI-S Provider Installer from the EMC website. Procedure 1. Download EMC SMI-S Provider Installer from the EMC website. (An EMC login account is required.) 2. Launch the EMC SMI-S Provider installer. The EMC Solutions Enabler welcome page appears, prompting you to install Solutions Enabler with SMI-S. 3. Click Next to begin the installation process. 4. The Destination Folder dialog box opens and prompts you to select an install directory for Solutions Enabler and EMC SMI-S Provider. It is recommended that you choose the default directory. 5. Click Next to continue. The Provider List dialog box opens. 6. Select Array Provider. The Service List dialog box opens. 7. Select your daemon and click Next. 8. Click Next to continue. The Ready to Install Program dialog box opens. 9. Click Install to begin installing files to your selected folder. This may take several minutes. 10. When the Installation Program Complete dialog box opens, click Finish to complete the setup. Configuring EMC SMI-S Provider The EMC SMI-S Provider must be configured to allow remote software to connect to the EMC Storage system. Procedure 1. Open a command prompt on the Windows server where the EMC SMI-S Provider software was installed. 2. Change the directory to the location of the EMC SMI-S Provider installation. The default is: For Windows: C:\Program Files\EMC\ECIM\ECOM\bin For Linux: /opt/emc/ecim/ecom/bin 3. Start the TestSmiProvider program. Configuring EMC SMI-S Provider 201

4. Accept all default settings when prompted.
5. Attach the storage system to the EMC SMI-S Provider.
NOTE: If both an EMC VMAX or DMX4 and an EMC CLARiiON CX4 or VNX storage system are being migrated, both of the following steps must be performed.
a. Any EMC VMAX or DMX4 storage arrays connected to the EMC SMI-S Provider server will be automatically detected, provided that:
• The SMI-S server and the EMC VMAX or DMX4 have a Fibre Channel connection and are zoned.
• A gatekeeper LUN is presented to the SMI-S server.
If the EMC VMAX or DMX4 does not show up, run the disco command to rescan for arrays. Proceed to step 6, unless you want to add an EMC CLARiiON CX4 or VNX storage system.
b. To attach an EMC CLARiiON CX4 or VNX storage system to the SMI-S Provider, use the SMI-S Provider CLI addsys command:
(localhost:5988)? addsys
Add System {y|n} [n]: y
ArrayType (1=Clar, 2=Symm) [1]:
One or more IP address or Hostname or Array ID Elements for Addresses
IP address or hostname or array id 0 (blank to quit): <IP address of SPA>
IP address or hostname or array id 1 (blank to quit): <IP address of SPB>
IP address or hostname or array id 2 (blank to quit):
Address types corresponding to addresses specified above. (1=URL, 2=IP/Nodename, 3=array ID)
Address Type (0) [default=2]:
Address Type (1) [default=2]:
User: <Global Administrator>   (global administrator user for the CX4 array)
Password [null]: <password>
++++EMCAddSystem++++
OUTPUT : 0
Output codes are shown in the following table:
0 - Success
1 - Not supported
2 - Unknown
3 - Timeout
4 - Failed
5 - Invalid parameter

4096 - Job queued
4097 - Size not supported
6. Use the dv command to verify that the storage system is discovered.
NOTE: If your EMC SMI-S Provider has client IP filtering enabled, you need to add the IP address of the 3PAR Online Import Utility server to the trusted IP address list. To add a 3PAR Online Import Utility server to the EMC SMI-S Provider trusted IP address list:
a. Log in to the SMI-S ECOM administration page using one of the SMI-S user and password configurations:
<SMI-S Provider server IP address>:5989/ecomconfig
<SMI-S Provider server IP address>:5988/ecomconfig
b. Select the Client IP Filtering link on the ECOM administration page.
c. Add the 3PAR Online Import Utility server IP address in the Trusted client's IP field on the Client IP Filtering page, and then click Execute to complete.
HiCommand Suite installation for HDS Storage
To access the SMI-S portion of the HiCommand Suite, full implementation of the HiCommand Suite is not required. The SMI-S functionality can be obtained from either of the following licenses:
• HiCommand Suite CLI/SMI-S license
• HiCommand Suite Device Manager license
NOTE: If you do not have either of these licenses, contact your HDS representative.
Installing the HiCommand Suite CLI/SMI-S license
The HiCommand Suite CLI/SMI-S license entitles you to use the SMI-S provider and the core CLI of the HiCommand Suite software. To install the HiCommand Suite CLI/SMI-S license, first obtain a license key using the registration number from the HiCommand Suite CLI/SMI-S License entitlement certificate. Follow these steps:
Procedure
1. Install the HiCommand Suite Device Manager by selecting default settings. When you install the HiCommand Suite, the SMI-S component is installed and enabled by default.

204 NOTE: CLI/SMI-S See the SPOCK website for supported versions of the HiCommand Suite: 2. After successful installation, the HiCommand Suite program appears under Windows Start programs as Hitachi Command Suite. 3. From the Hitachi Command Suite program, select Login - HCS. The Hitachi Command Suite login screen appears (Figure 71: Hitachi Command Suite login screen on page 204). Figure 71: Hitachi Command Suite login screen 4. Click the License button and browse to the license file. 5. Apply the HiCommand Suite CLI/SMI-S license, and then click Save. 6. Validate the license key by inspecting the Device Manager License Information screen (Figure 72: HiCommand Suite core license on page 205). 204 Phase I: Preparing for data migration

205 Figure 72: HiCommand Suite core license 7. Log in to the HiCommand Suite, using the following credentials: User: system Password: manager 8. Download the HiCommand Suite by selecting Hitachi Command Suite > Tools > Download. In the Device Manager Software Deployment dialog box, select Device Manager Command Line Interface (CLI) Application and download the Windows edition (see Figure 73: Downloading the Windows edition of the HiCommand Suite CLI on page 206). Phase I: Preparing for data migration 205

Figure 73: Downloading the Windows edition of the HiCommand Suite CLI
9. Move the HiCommand Suite CLI download to a directory of your choice, and uncompress the file to install the application.
10. To configure the HiCommand Suite CLI, edit HiCommandCLI.properties in the directory where the HiCommand Suite CLI was installed, using the following settings:
hdvmcli.serverurl =
user = system
password = manager
NOTE: For this configuration, it is assumed that the HiCommand Suite CLI application is installed on the same system as the HiCommand Suite and uses the default login credentials. If any of these have been modified, update HiCommandCLI.properties accordingly.
11. Add the HDS USP_VM, HDS USP_V, HDS TagmaStore USP, or HDS TagmaStore NSC to the HiCommand Suite, using the HiCommand Suite CLI. Follow these steps:

207 a. Open a Windows command prompt and change the path to the location where the HiCommand Suite CLI was installed. b. Issue the AddStorageArray command to register the HDS USP_VM, HDS USP_V, HDS TagmaStore USP, or HDS TagmaStore NSC with the HiCommand Suite (see the following example). Example: AddStorageArray command HiCommandCLI AddStorageArray ipaddress=x.x.x.x family=usp_v userid=******** arraypasswd=****** displayfamily=usp_v where: ipaddress: The IP address or host name of the HDS Storage system. userid: The user ID used to access the storage system (a user ID for the HDS Storage Navigator for the HDS Storage array). arraypasswd : The user password used to access the storage system (a password for the HDS Storage Navigator for the HDS Storage array). Phase I: Preparing for data migration 207

Figure 74: AddStorageArray command and output
NOTE: When the HDS Storage system is being added, the service processor must be in View Mode. Adding the storage system might take up to 3 minutes.
Installing the HiCommand Suite Device Manager license
To install the HiCommand Suite Device Manager license, first obtain a license key using the registration number from the HiCommand Suite Device Manager License entitlement certificate. Follow these steps:

Procedure
1. On the HiCommand Suite login screen, click the License button and browse to the license file (see Figure 75: Hitachi Command Suite login screen on page 209).
Figure 75: Hitachi Command Suite login screen
2. Apply the HiCommand Suite Device Manager license, and then click Save.
3. Validate the license key by inspecting the Device Manager License Information screen (Figure 76: HiCommand Suite Device Manager license on page 209).
Figure 76: HiCommand Suite Device Manager license
4. Log in to the HiCommand Suite, using the following credentials:

210 User: system Password: manager 5. Using the HiCommand Suite, follow these steps to add the HDS Storage system: a. Click Hitachi Command Suite, click the Resources tab, and then, under Storage Systems, select Add Storage System (see Figure 77: Adding an HDS Storage system on page 210). Figure 77: Adding an HDS Storage system b. Use the following settings: Storage System Type: USP or USP_V IP Address/Host Name: The IP address or host name of the target storage system User ID: The user ID used to access the storage system (a user ID for the HDS Storage Navigator for the HDS Storage array) Password: The user password used to access the storage system (a password for the HDS Storage Navigator for the HDS Storage array) NOTE: When the HDS Storage system is being added, the service processor must be in View Mode. Adding the storage system might take up to 3 minutes. 210 Phase I: Preparing for data migration

211 NOTE: For IBM XIV arrays, the SMI-S provider is embedded in the administrator module of the arrays. There is no need to install additional software. The provider is enabled and running by default in all administrative modules. There are no options to start or stop the provider process through the XCLI or IBM XIV Storage System GUI. The SMI-S provider is running by default and monitored by a watchdog process. For more information about configuring the SMI-S user credentials for IBM XIV arrays based on the firmware version, see the appropriate IBM documentation. Phase I: Preparing for data migration 211

Phase II: Premigration
See the following requirements and procedures to prepare for migration:
• Network and fabric zoning requirements for 3PAR Online Import on page 212
• Requirements for multisource array migration on page 212
• Identifying the source and destination storage systems on page 213
• Zoning the source storage system to the destination 3PAR StoreServ Storage system on page 218
• Zoning host(s) to the destination storage system on page 220
• Required prerequisite information on page 221
• The createmigration command process on page 222
Network and fabric zoning requirements for 3PAR Online Import
Two unique paths must be zoned between the source and destination storage systems. To create two paths, two controller ports on the source storage system must be connected to two Fibre Channel peer ports on the destination 3PAR StoreServ Storage system. The peer ports on the destination 3PAR StoreServ Storage system must be on adjacent nodes: 0/1, 2/3, 4/5, or 6/7. See Zoning the source storage system to the destination 3PAR StoreServ Storage system on page 218.
Paths between the host and the source storage system are unzoned before the data migration starts. The timing for this depends on the operating system of the host and the migration type. During data migration, paths must be zoned between the host and the destination 3PAR StoreServ Storage system. The time at which these paths are created is determined by the operating system of the host and the migration type.
Requirements for multisource array migration
Migration from multiple source arrays to a single or multiple destination 3PAR StoreServ Storage systems is supported (Figure 78: Unidirectional migration from multiple source arrays to a single 3PAR StoreServ Storage system on page 213 illustrates an example). Use the addsource CLI command to add multiple arrays.

Figure 78: Unidirectional migration from multiple source arrays to a single 3PAR StoreServ Storage system
• Only unidirectional migration is supported.
• The source can be any of the supported EMC Storage, HDS Storage, or IBM XIV Storage source arrays.
• Zoning must be set up between each of the source arrays and the single destination 3PAR StoreServ Storage system. One initiator to one target for each existing zone is required.
• LUN conflicts are resolved at the destination 3PAR StoreServ Storage system with the autoresolve option of the createmigration command, which is enabled by default (see Using the autoresolve option to resolve LUN conflicts on page 141).
• When a single server or cluster that accesses LUNs from multiple source arrays is being migrated:
  The host name on each source array must match.
  The host entry on each source array must contain the same HBA WWPNs.
The createmigration operation tests for these conditions to ensure that the LUNs from each source array are placed under the same 3PAR StoreServ Storage host. Any mismatch yields an error.
Identifying the source and destination storage systems
This section provides information on how to identify and add the source and destination storage systems for migration. Before beginning, prerequisites must be met and information about the source and destination systems must be gathered.
Procedure
1. Prerequisites on page 214
2. Required information on page 216

3. Adding the source storage system on page 216
4. Adding the destination storage system on page 217
Prerequisites
EMC Storage
• When a host or initiator group that is accessing LUNs from multiple EMC Storage source arrays is being migrated:
  The host or initiator group name and assigned WWPNs must match.
  The volume names should not match across source arrays.
• EMC SMI-S Provider is installed and operational.
• The source EMC Storage system must be registered in the EMC SMI-S Provider application.
• The source EMC Storage system has one or more storage groups configured with the hosts and volumes to be migrated.
• The 3PAR Online Import Utility is installed and operational with access to the SMI-S Provider server managing the source EMC Storage system and the destination 3PAR StoreServ Storage system.
• The source EMC Storage system and destination 3PAR StoreServ Storage system are zoned to each other.
• For EMC VNX, VNX2, and CLARiiON CX4 arrays:
  The following parameter must be added to the multipath configuration file:
  device {
  vendor "DGC"
  product "*"
  hardware_handler "1 alua"
  }
  After updating the file, restart the multipathd service with the following command:
  service multipathd restart
  Register the 3PAR peer port WWNs in the EMC arrays after peer zoning.
NOTE: Snapshot and replication volumes are not supported. Replication relationships such as RecoverPoint and SRDF are not supported.

HDS Storage

The HiCommand Suite is installed and licensed with either the CLI/SMI-S or the Device Manager license.
The source HDS Storage system must be registered in the HiCommand Suite.
The source HDS system has one or more host groups configured with the hosts and volumes to be migrated.
The 3PAR Online Import Utility is installed and operational with access to the HiCommand Suite server managing the source HDS Storage system and the destination 3PAR StoreServ Storage system.
The source HDS Storage system and destination 3PAR StoreServ Storage system are zoned to each other.
If host groups within the same HDS Storage array have the same member WWPNs, then all of those host group names must also match.
When migrating a host group that is accessing LUNs from multiple HDS Storage source arrays, the host name and assigned WWPNs must match.
HDS LUSE is supported.

NOTE: Snapshot and replication volumes are not supported. Replication relationships such as RecoverPoint and SRDF are not supported.

CAUTION: For migration of HP-UX Serviceguard active-passive clusters on HDS Storage, Hewlett Packard Enterprise recommends that volume groups on all nodes in the cluster be configured with agile DSFs.

IBM XIV Storage

When a host or initiator group that is accessing LUNs from multiple IBM XIV Storage source arrays is being migrated:
The host or initiator group name and assigned WWPNs must match.
The volume names should not match across source arrays.

The source IBM XIV Storage system has one or more hosts configured with the volumes to be migrated.
The 3PAR Online Import Utility is installed and operational with access to the source IBM XIV Storage system.
The source IBM XIV Storage system and destination 3PAR StoreServ Storage system are zoned to each other.

NOTE: Snapshot and replication volumes are not supported.

216 Required information Required information for source storage system EMC Storage The IP address of the EMC SMI-S Provider server that is managing the source EMC Storage system from which the volumes are being migrated and the WWN of the source EMC Storage system being managed by the EMC SMI-S Provider The user name and password for the EMC SMI-S Provider. Port to access the EMC SMI-S Provider. HDS Storage The IP address of the HiCommand Suite server that is managing the source HDS Storage system from which the volumes are being migrated and the serial number of the source HDS Storage system. The user name and password for the HiCommand Suite. Port access to the HiCommand Suite: Secure port default 5989 Non-secure port default 5988 IBM XIV Storage The administrator IP address of the IBM XIV array User name and password of the administrator account used to access the IBM XIV array Required information for destination 3PAR StoreServ Storage system IP address of the destination 3PAR StoreServ Storage system User name and password for the HPE 3PAR management application with Super user permission. Adding the source storage system Procedure 1. Using local user credentials, log in to the 3PAR Online Import Utility. 2. From the 3PAR Online Import Utility, issue the addsource command. NOTE: With 3PAR OS and up, multiple source arrays can be added to the 3PAR Online Import Utility database using the addsource command EMC Storage addsource command: > addsource -type VNX -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid F013E500 > SUCCESS: Added source storage system 216 Required information

217 where XX.XX.XX.XX is the IP address of the EMC SMI-S Provider server. HDS Storage addsource command: > addsource -type HDS -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid > SUCCESS: Added source storage system where XX.XX.XX.XX is the IP address of the HiCommand Suite server. IBM XIV Storage addsource command: > addsource -type XiV -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid > SUCCESS: Added source storage system 3. Issue the showsource command to verify the source storage system information. EMC Storage showsource command: > showsource type VNX NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE CLARiiON+APM VNX BEE0177F XX.XX.XX.XX Good HDS Storage showsource command: > showsource type HDS NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE USP_V XX.XX.XX.XX Good IBM XIV Storage showsource command: > showsource type XiV NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE IBM XiV a XX.XX.XX.XX Good Adding the destination storage system NOTE: For N:N configurations, all destination arrays can be added up front by running N instances of the adddestination command. Procedure 1. From the 3PAR Online Import Utility, issue the adddestination command. adddestination command: > adddestination mgmtip XX.XX.XX.XX user 3paradm password Password > SUCCESS: Added destination storage system where XX.XX.XX.XX is the HPE 3PAR management port IP address. If a certificate validation error occurs on the adddestination command, first run the installcertificate command, then run the adddestination command again. Certificate validation error: > adddestination -mgmtip xx.x.xx.xx -user 3paradm -password ******* -port 5783 Adding the destination storage system 217

218 > ERROR: OIUERRDST0010 Unable to validate certificate for HP 3PAR Storage System. C:\\InFormMC\security\HP-3PAR-MC-TrustStore installcertificate command: > installcertificate mgmtip xx.xx.xx.xx TParCertifacteVO [issuedto=hp 3PAR HP_3PAR , commonname=null, issuedbyorganization=null, issuedtoorganization=null, serialno=null, issedby=hp 3PAR HP_3PAR , fingerprint=89:e5:d0:13:6f:d1:07:80:70:76:5c:fe:5b:65:e5:54:c0:18:21:2f, signaturealgo=sha1withrsa, version=v1,validfrom=08/14/2014, validto=08/11/2024. issuedon=null, expireson=null, validdaterange=true] Do you accept the certificate? Y/YES Y > SUCCESS: Installed certificate successfully. 2. Issue the showdestination command to verify the destination storage system information. showdestination command: > showdestination NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE 3par_7200_DCB_01 3PAR 2FF70002AC005F (MU3) XX.XX.XX.XX Normal PEER_PORTS AC00-5F91(0:2:1) AC00-5F91(1:2:1) Zoning the source storage system to the destination 3PAR StoreServ Storage system The correct zoning must be present between the source storage system and the destination 3PAR StoreServ Storage system at every stage of the migration operation. The following zoning rules apply: Zone the source and destination systems together and make sure they are visible to each other before zoning hosts to the destination system. Do not unzone the source and destination systems from each other until the data migration is complete. Data migration is complete when status of showmigration command is "Success." To create the zoning: Procedure 1. On the destination 3PAR StoreServ Storage system, configure two free ports as peer ports using adjacent nodes (for example, nodes 0 and 1, or nodes 2 and 3, or nodes 4 and 5, or nodes 6 and 7). a. Set the port connection type to point. b. Set the port connection mode to peer. 3PAR OS CLI controlport command: cli% controlport offline n:s:p cli% controlport config peer -ct point n:s:p cli% controlport rst n:s:p TIP: Use the showport command to see a list of the available ports on the array. 2. Create two zones between the source storage system and the destination 3PAR StoreServ Storage system, ensuring that one host port on the source storage system is in the same zone with one HPE 3PAR peer port. 218 Zoning the source storage system to the destination 3PAR StoreServ Storage system

219 NOTE: The WWN of a host port changes when it is set to become a peer port. Use the new WWN of the peer port in the zoning. NOTE: With 3PAR OS and later, multiple migration source arrays can be zoned to the same pair of peer ports on the destination 3PAR StoreServ Storage. Consequently, multiple peer zones must be configured, with each zone containing only two ports, one from a source system and the other the peer port configured on the destination storage system. Each zone should contain only two ports, one from each storage system. Adjacent HPE 3PAR peer nodes should be zoned in a one-to-one mapping as follows: For EMC Storage, to separate EMC controllers. For example, zone the peer port on node 0 on the 3PAR StoreServ Storage system to controller A on the EMC Storage system, and zone the peer port on node 1 on the 3PAR StoreServ Storage system to controller B on the EMC Storage system. For HDS Storage, to an HDS CHA board in separate array power domains. For example, zone the peer port on node 0 on the 3PAR StoreServ Storage system to a CHA in domain 1 on the HDS Storage system, and zone the peer port on node 1 on the 3PAR StoreServ Storage system to a CHA in domain 2 on the HDS Storage system. For IBM XIV Storage, to IBM XIV ports in separate array power domains. For example, zone the peer port on node 0 on the 3PAR StoreServ Storage system to a port in one power domain on the IBM XIV Storage array, and zone the peer port on node 1 on the 3PAR StoreServ Storage system to a port on the other power domain on the IBM XIV Storage array. 3. Verify that the source storage system can detect both ports on the 3PAR StoreServ Storage system. For EMC Storage Ensure that the peer port WWNs appear in the discovered port list. To check peer port connections used for EMC VMAX or DMX4, follow these steps: a. Issue the following CLI command as appropriate: For EMC VMAX: # symaccess -sid <vmax id> list logins For EMC DMX4: # symaccess -sid <dmx4 id> list logins b. Check the Identifier listed for each HPE 3PAR peer port and make sure they are logged in ( Yes ). C:\Users\Administrator>symaccess -sid 1212 list logins Symmetrix ID : Director Identification : FA-7E Director Port : 0 User-generated Logged On Identifier Type Node Name Port Name FCID In Fabric Phase II: Premigration 219

220 ac005f91 Fibre ac005f ac005f Yes Yes Director Identification : FA-8E Director Port : 0 User-generated Logged On Identifier Type Node Name Port Name FCID In Fabric ac005f91 Fibre ac005f ac005f Yes Yes To check the HPE 3PAR peer port that connects to the VNX or CLARiiON CX4, open the Host tab, open the Initiator tab, and verify that the initiators are logged in. For HDS Storage, EMC Storage, and IBM XIV Storage Verify that the source storage array is shown as a connected device on both peer ports of the destination 3PAR StoreServ Storage system To check peer port connections from the 3PAR StoreServ Storage, issue the showtarget command with the -rescan option, and then issue the showtarget command. In the following example, Node_WWN is the WWN of the source array: cli% showtarget -rescan cli% showtarget Port ----Node_WWN Port_WWN Description :2: C012F C012F11C reported_as_scsi_target 0:2: C012F C012F118 reported_as_scsi_target Ensure that the peer port WWNs appear in the discovered port list. To check peer port connections from the 3PAR StoreServ Storage, issue the 3PAR CLI showportdev ns n:s:p command, as in the following example. Figure 79: Checking peer port connections using the showportdev ns n:s:p command Zoning host(s) to the destination storage system Procedure For MDM and online migrations only, zone the host(s) to the destination 3PAR StoreServ Storage system. Zoning the peer ports of the source array and destination array is what allows them to communicate during the createmigration process. You can use the showconnection command 220 Zoning host(s) to the destination storage system

to confirm this communication or, using SSMC, verify that the host whose LUNs are under migration has paths to as many HPE 3PAR controller nodes as are zoned in the SAN.

Ensure that the source HBA WWPNs of all servers being migrated are properly zoned with the destination target port WWPNs before any createmigration commands are issued from the HPE 3PAR Online Import Utility, as this is necessary to access data on the 3PAR StoreServ Storage once the migration is complete.

Required prerequisite information

Required information for source storage system

EMC Storage source system
The WWN of the source EMC Storage system being managed by the EMC SMI-S Provider.
The list of volumes that are being migrated, OR the name of the host associated with the volumes to be migrated. The host and volumes must be in a supported storage group configuration on the source EMC Storage system. For details, see Migrations supported by the 3PAR Online Import Utility on page 152.

HDS Storage source system
The serial number of the source HDS Storage system from which the volumes are being migrated.
The list of one or more LDEV names that are being migrated, or the host name of the host that is being migrated.

IBM XIV Storage source system
The serial number of the source IBM XIV Storage system from which the volumes are being migrated.
The list of one or more volume names that are being migrated, or the host name of the host that is being migrated.

Required information for destination system

Gather the following information for your destination 3PAR StoreServ Storage system:
Provisioning type for the volumes being created on the destination storage system (thin, full, or dedupe).
NOTE: For EMC Storage, 3PAR Online Import supports TDVVs on a destination with 3PAR OS MU2 or later.
The name of the CPG on the destination storage system where the volumes are being migrated.
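As a small, hedged illustration, the destination details listed above can be confirmed from the HPE 3PAR CLI before creating the migration: showsys reports the system name, showversion reports the 3PAR OS level, and showcpg lists the CPG names available to receive the migrated volumes.

cli% showsys
cli% showversion
cli% showcpg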

The createmigration command process

The 3PAR Online Import Utility performs the following actions in the preparation required for data migration:

On an EMC Storage VNX or CLARiiON CX4 source system, the 3PAR Online Import Utility creates a storage group named in the format HOST_FOR_<Destination 3PAR name> and adds two HPE 3PAR initiators. The created storage group also contains the volumes selected for migration. The host name of the HPE 3PAR initiators is unknown. The initiator name appears as the WWN of the peer port on the 3PAR StoreServ Storage.

On an HDS Storage source system, the 3PAR Online Import Utility creates two host groups, each containing both of the HPE 3PAR peer port initiators. The host group name format is HCMDXXXX.

On an EMC VMAX source storage system, the following groups are created only once for migration between a given VMAX and destination HPE 3PAR pair. The same groups are used for subsequent migrations between that VMAX and destination HPE 3PAR pair:
One storage group is added, containing the devices being migrated. The naming convention is: HOST_FOR_<Destination HPE 3PAR name>_sg
One initiator group is added and associated to both of the HPE 3PAR peer port initiators. The naming convention is: HOST_FOR_<Destination HPE 3PAR name>_ig
One port group is added, containing the required ports based on the original masking views. The naming convention is: HOST_FOR_<Destination HPE 3PAR name>_pg
After all of these groups have been created and populated, a migration masking view is added and associated to the migration storage group, initiator group, and port group. The naming convention is: HOST_FOR_<Destination HPE 3PAR name>_mv

NOTE: For an EMC VMAX, these groups remain on the source system after migration is completed (except for the MV).

For an EMC Storage DMX4, the 3PAR Online Import Utility creates a host group named in the format <Destination 3PAR name> and adds two HPE 3PAR initiators. It also presents the volumes selected for migration to the created host group.

For an MDM or online migration, the host or hosts that are being migrated are created on the destination storage system. A temporary host set is also created on the destination storage system, then deleted at the end of migration. For offline migration, the host definitions must be created by the storage administrator.

Upon completion of the createmigration operation:

For online migration, the volumes or LDEVs being migrated are admitted to the destination system and exported to the host.
For MDM, the volumes or LDEVs are admitted to the destination storage system but are not exported to the host.
If the createmigration command calls for using consistency groups, temporary VV sets are created on the destination storage system. The temporary VV sets are removed from the destination storage system after migration is completed.

The createmigration command performs the following checks before creating the migration:
The source storage system group configuration is valid. The volumes or hosts specified in the createmigration command are mapped to a storage group on the associated source EMC Storage system. All LUNs and all hosts in the mapped storage group will be migrated even if only a subset are entered in the createmigration command. To determine whether your configuration is valid, see Migrations supported by the 3PAR Online Import Utility on page 152.
EMC SMI-S Provider version.
Source storage system model number.
NOTE: See the SPOCK website for the latest supported versions.
LUN migration eligibility: the protocol must be FC, and a LUN under replication cannot be migrated.

Issuing the createmigration command online migration

Procedure
1. For RHEL 5.x, RHEL 6.x, or RHEL 7.x: Using the 3PAR Online Import Utility, issue the createmigration command with the -migtype online option. Optionally, migrations with consistency groups are also supported, using the -allvolumesincg or -cgvolmap parameters.

NOTE: For Oracle RAC: if required, multiple migrations must be executed in order to transfer all the Oracle-based disks from multiple source arrays to a single destination 3PAR StoreServ Storage. This use-case scenario is described in detail in Data migration for an Oracle RAC cluster use case on page 418. For Oracle RAC, you must issue the createmigration command using the -allvolumesincg parameter.

224 Using createmigration with -migtype online for RHEL 5.x, RHEL 6.x, and RHEL 7.x: # createmigration -sourceuid <source_id> -srchost "<host_id>" -destcpg <CPG_ID> -destprov full -migtype online -persona "RHEL_5_6" # SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. where: Migration id: <migration_id> <source_id> is the host WWN (for EMC Storage) or the serial number (for HDS Storage or IBM XIV Storage) of the source storage system. For example: C6E059EE (for EMC Storage) (for HDS Storage) (for IBM XIV array) <host_id> is the ID of the host OS. <CPG_ID> is the name of the destination CPG. For example: FC_r5 testcpg <migration_id> is the migration ID that will be assigned by execution of the createmigration command. For example: <persona> is the host OS persona value. For example: ESX_4_5 HPUX_11_v3 HPUX_SG RHEL_5_6 NOTE: RHEL 7.x is also supported using the RHEL_5_6 value of the -persona option. For more information about host OS persona values, see showpersona. The createmigration command for different hosts supported for migtype online between the same source-destination array pair will be similar, the only difference being the -persona value. For example, for a migration of SLES hosts, the createmigration command will appear as follows: # createmigration -sourceuid <source_id> -srchost "<host_id>" -destcpg <CPG_ID> -destprov full -migtype online -persona SUSE_10_11 NOTE: The createmigration command may take several minutes to complete. 224 Phase II: Premigration

TIP: Make a note of the migration ID, as it will be used in commands to track migration progress.

Using createmigration with -migtype online for an HP-UX Serviceguard active-passive cluster:
> createmigration -sourceuid <source_id> -srchost "HPUX-NMP-SG1" -destcpg "TEST_CPG_R6" -destprov full -migtype online -cluster HPUX_SG

2. Issue the showmigration command to verify that the data migration task preparation has completed successfully. This may take some time. Upon successful creation of the createmigration task, the STATUS column in the showmigration command output will indicate preparationcomplete(100%). When this status is indicated, continue to the next step.

Issuing the showmigration command:
# showmigration
MIGRATIONID TYPE SOURCE_NAME DESTINATION_NAME START_TIME END_TIME STATUS(PROGRESS)(MESSAGE)
<migration_id> online <source_name> <destination_name> Thu Sep 25 15:23:50 EDT 2014 -NA- preparationcomplete(100%)(-na-)

where:
<migration_id> is the migration ID that was assigned by execution of the createmigration command.
<source_name> is the name of the source storage system. For example:
CLARiiON+APM (for EMC Storage)
SYMMETRIX (for EMC Storage)
USP_V (for HDS Storage)
IBM (for IBM XIV Storage)
<destination_name> is the name of the destination 3PAR StoreServ Storage. For example:
3par_7200_DCA_01

3. Using the SSMC, verify that the host whose LUNs are under migration has paths to as many HPE 3PAR controller nodes as are zoned in the SAN.

4. Update the path configuration on the host by rescanning all HBAs and issuing the multipath -ll command to verify the newly discovered paths. Multipath now recognizes extra paths to the 3PAR StoreServ Storage, but all paths are still managed as source array devices. HBA rescanning verifies new paths on the destination storage system. (A scripted variant of this rescan appears after the unzoning step below.)

EMC Storage Rescanning HBAs and listing the updated multipath mapping with RHEL 5.x:
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip

226 # echo "- - -" > /sys/class/scsi_host/host3/scan # multipath -ll mpath13 ( bf902a00e03cb1bb3c3fe411) dm-4 DGC,VRAID [size=150g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 2:0:0:2 sde 8:64 [active][ready] \_ 3:0:0:2 sdg 8:96 [active][ready] \_ 2:0:1:2 sdj 8:144 [active][ready] \_ 3:0:1:2 sdm 8:192 [active][ready] mpath12 ( bf902a00cae2a2a33c3fe411) dm-3 DGC,VRAID [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 3:0:0:1 sdf 8:80 [active][ready] \_ 2:0:0:1 sdd 8:48 [active][ready] \_ 2:0:1:1 sdi 8:128 [active][ready] \_ 3:0:1:1 sdl 8:176 [active][ready] mpath11 ( bf902a002ac0388f3c3fe411) dm-2 DGC,VRAID [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 3:0:0:0 sdb 8:16 [active][ready] \_ 2:0:0:0 sdc 8:32 [active][ready] \_ 2:0:1:0 sdh 8:112 [active][ready] \_ 3:0:1:0 sdk 8:160 [active][ready] HDS Storage-Rescanning HBAs and listingthe updated multipath mapping with RHEL 5.x : # ls /sys/class/fc_host host4 host5 # echo "1" > /sys/class/fc_host/host4/issue_lip # echo "- - -" > /sys/class/scsi_host/host5/scan # echo "1" > /sys/class/fc_host/host4/issue_lip # echo "- - -" > /sys/class/scsi_host/host5/scan # multipath -ll mpath2 (360060e80045be be ) dm-3 HITACHI,OPEN-8 [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:1 sdb 8:16 [active][ready] \_ 4:0:4:1 sdf 8:80 [active][ready] \_ 5:0:1:1 sdh 8:112 [active][ready] \_ 5:0:2:1 sdj 8:144 [active][ready] mpath1 (360060e80045be be ) dm-2 HITACHI,OPEN-8 [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:0 sda 8:0 [active][ready] \_ 4:0:4:0 sde 8:64 [active][ready] \_ 5:0:1:0 sdg 8:96 [active][ready] \_ 5:0:2:0 sdi 8:128 [active][ready] If paths were not discovered, run the multipath -v2 command to add or update the new paths to the LUN. IBM XIV Storage-Rescanning HBAs and listing the updated multipath mapping with RHEL 5.x: # ls /sys/class/fc_host host4 host5 # echo "1" > /sys/class/fc_host/host4/issue_lip # echo "- - -" > /sys/class/scsi_host/host5/scan # echo "1" > /sys/class/fc_host/host4/issue_lip # echo "- - -" > /sys/class/scsi_host/host5/scan # multipath -ll mpathr ( e4a055e) dm-13 IBM,2810XIV size=1.9t features='1 queue_if_no_path' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active - 4:0:0:12 sdm 8:192 active ready running 226 Phase II: Premigration

227 - 5:0:0:12 sdbb 67:80 active ready running - 4:0:1:12 sdy 65:128 active ready running `- 5:0:1:12 sdbn 68:16 active ready running mpathe ( e4a001e) dm-5 IBM,2810XIV size=16g features='1 queue_if_no_path' hwhandler='0' wp=rw `-+- policy='round-robin 0' prio=1 status=active - 4:0:0:4 sde 8:64 active ready running - 5:0:1:4 sdbf 67:144 active ready running - 4:0:1:4 sdq 65:0 active ready running `- 5:0:0:4 sdat 66:208 active ready running VMware ESXi CLI command-rescanning HBAs withesxi 5.5: # esxcli storage core adapter rescan -all VMware ESXi CLI command-listing updated multipath mapping with ESXi 5.5: # esxcfg-mpath b 5. On the 3PAR StoreServ Storage, issue the statport command to verify that traffic is occurring over all paths that connect the host to the source. 6. Delete the migrating LUN paths presented to the source array. For details about identifying and deleting LUN paths, see Identifying and deleting source array LUN paths on page Unzone the host from the source array. EMC Storage-Output after unzoning the host from the source array with RHEL 5.x: # multipath ll mpath13 ( bf902a00e03cb1bb3c3fe411) dm-4 3PARdata,VV [size=150g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 2:0:0:2 sde 8:64 [active][ready] \_ 3:0:1:2 sdm 8:192 [active][ready] mpath12 ( bf902a00cae2a2a33c3fe411) dm-3 3PARdata,VV [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 2:0:0:1 sdd 8:48 [active][ready] \_ 3:0:1:1 sdl 8:176 [active][ready] mpath11 ( bf902a002ac0388f3c3fe411) dm-2 3PARdata,VV [size=200g][features=0][hwhandler=0][rw] \_ round-robin 0 [prio=0][active] \_ 2:0:0:0 sdc 8:32 [active][ready] \_ 3:0:1:0 sdk 8:160 [active][ready] HDS Storage Output after unzoning the host from the source array with RHEL 5.x: # multipath -ll mpath2(360060e80045be be ) dm-3 3PARdata,VV [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:1 sdb 8:16 [active][ready] \_ 5:0:1:1 sdh 8:112 [active][ready] mpath1(360060e80045be be ) dm-2 3PARdata,VV [size=6.8g][features=1 queue_if_no_path][hwhandler=0][rw] \_ round-robin 0 [prio=1][active] \_ 4:0:0:0 sda 8:0 [active][ready] \_ 5:0:1:0 sdg 8:96 [active][ready] NOTE: Example output for an IBM XIV source array will be similar to the output for HDS Storage. Phase II: Premigration 227
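The per-adapter rescan shown in step 4 can also be scripted. The following is a minimal sketch for a Linux host with the standard sysfs Fibre Channel entries; it simply repeats the issue_lip and scan writes for every fc_host present and then refreshes the multipath view.

# Rescan every FC HBA on the host, then list the multipath devices
for h in /sys/class/fc_host/host*; do
    name=$(basename "$h")
    echo "1" > "$h/issue_lip"
    echo "- - -" > "/sys/class/scsi_host/$name/scan"
done
multipath -ll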

228 NOTE: For VMWare ESXi 5.0 migrations from the EMC CX4, EMC VNX or EMC VNX2 arrays: Once source arrays paths are removed upon successful completion of the createmigration operation, the ESXi 5.0 host might lose VMFS datastores if either the source or destination storage system is rebooted or if a cable is pulled. To work around this issue in ESXi 5.0, disable the VAAI ATS locking mechanism on your hosts (see Preventing VMFS datastore loss in VMware ESXi 5.0 on page 338. Issuing the createmigration command MDM Procedure 1. Creating the data migration task on page Updating host multipath software and unzoning from the source storage system on page 230 Creating the data migration task Procedure 1. Using the 3PAR Online Import Utility, issue the createmigration command with the -migtype MDM option. createmigration command: > createmigration -sourceuid <source_id> -srchost "<host_id>" -destcpg <CPG_ID> -destprov thin -migtype MDM persona "<persona>" > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: <migration_id> where: <source_id> is the WWN (for EMC Storage) or the serial number (for HDS Storage and IBM XIV Storage)of the source storage system. For example: C6E059EE (for EMC Storage) (for HDS Storage) (for IBM XIV Storage) <host_id> is the ID of the host OS. <CPG_ID> is the name of the destination CPG. For example: FC_r5 <migration_id> is the migration ID that will be assigned by execution of the createmigration command. For example: For more information about host OS persona values, see showpersona. <persona> is the host OS persona value. 228 Issuing the createmigration command MDM

229 NOTE: The createmigration command may take several minutes to complete. TIP: Make a note of the migration ID, as it will be used in commands to track migration progress. 2. Issue the showmigration command to verify that the data migration task preparation has completed successfully. This may take some time. Upon successful creation of the createmigration task, the STATUS column in the showmigration command output will indicate preparationcomplete(100%). When this status is indicated, continue to the next step. showmigration command: > showmigration -migrationid <migration_id> MIGRATIONID TYPE SOURCE_NAME DESTINATION_NAME START_TIME MDM <source_name> <destination_name> Wed Mar 26 13:08:19 PDT 2014 where: END_TIME STATUS(PROGRESS) (MESSAGE) -NA- preparationcomplete(100%)(-na-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: CLARiiON+APM (for EMC Storage) SYMMETRIX (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_01 Phase II: Premigration 229

Updating host multipath software and unzoning from the source storage system

Procedure
If vendor-specific multipath software (that is, EMC PowerPath for EMC Storage, or HDLM for HDS Storage) is installed, it must be uninstalled, and another multipath software, usually one native to the OS, must be used to configure multipathing. See Reconfiguring the host multipath solution on page 175.

Issuing the createmigration command offline migration

Procedure
1. Using the 3PAR Online Import Utility, issue the createmigration command with the -migtype offline option.

createmigration command:
> createmigration -sourceuid <source_id> -srcvolmap [{"<volmap_id>","<cpg>","<destprov>"}] -destcpg <CPG_ID> -destprov full -migtype offline
> SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command.
Migration id:

where:
<source_id> is the host WWN (for EMC Storage) or the serial number (for HDS Storage or IBM XIV Storage) of the source storage system. For example:
C6E059EE (for EMC Storage)
(for IBM XIV array)
<volmap_id> is the source volume map ID. For example:
EMC_10 (for EMC Storage)
IBM_7 (for IBM XIV array)

NOTE: For EMC VMAX and DMX4 source arrays only: If a default hexadecimal name is used, then the <volmap_id> parameter for the -srcvolmap must start with the word Volume in front of the device ID. If the device ID does not already contain 5 characters, add a "0" (zero) in front of the device ID to make it 5 characters in length. Example <volmap_id>: Volume

<CPG_ID> is the name of the destination CPG. For example:

231 FC_r5 testcpg <migration_id> is the migration ID that will be assigned by execution of the createmigration command. For example: NOTE: The createmigration command may take several minutes to complete. TIP: Make a note of the migration ID, as it will be used in commands to track migration progress. 2. Issue the showmigration command to verify that the data migration task preparation has completed successfully. This may take some time. Upon successful creation of the createmigration task, the STATUS column in the showmigration command output will indicate preparationcomplete(100%). When this status is indicated, continue to the next step. showmigration command: > showmigration -migrationid <migration_id> MIGRATIONID TYPE SOURCE_NAME DESTINATION_NAME START_TIME <migration_id> Offline <source_name> <destination_name> Wed Mar 26 13:08:19 PDT 2014 where: END_TIME STATUS(PROGRESS) (MESSAGE) -NA- preparationcomplete(100%)(-na-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: CLARiiON+APM (for EMC Storage) SYMMETRIX (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_01 Unlike during an MDM or Online migration, the 3PAR Online Import will not create the host definition on the 3PAR StoreServ Storage during an offline migration. The storage administrator must do this manually, during the postmigration steps for offline migrations. See Offline migration. Phase II: Premigration 231
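As a hedged sketch only, creating the host definition and exporting a migrated volume manually on the destination system with the HPE 3PAR CLI could look like the following. The host name, persona value, WWPNs, volume name, and LUN number are hypothetical placeholders; the appropriate persona value can be checked with showpersona, as referenced earlier.

cli% createhost -persona 2 examplehost 10000000C9AABB01 10000000C9AABB02
cli% createvlun examplevv 1 examplehost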

232 Phase III: Migration Issuing the startmigration command online migration Procedure 1. Start the migration by using the 3PAR Online Import Utility startmigration command. Starting migration by using startmigration: # startmigration -migrationid <migration_id> # SUCCESS: Data transfer started successfully. where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: TIP: The STATUS column in the showmigration command output will indicate success when all volumes or LDEVs have been migrated successfully. showmigration command showing successful migration: MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME <migration_id> online <source_name> <destination_name> Fri Apr 04 16:38:24 EDT 2014 where: END_TIME STATUS(PROGRESS)(MESSAGE) -NA- success(-na-) (-NA-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: CLARiiON+APM (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_ Phase III: Migration

Migrating a source array LUN with LUN ID 254 online (Linux platforms)

The 3PAR StoreServ Storage controller LUN ID is 254. When a source array LUN with LUN ID 254 is migrated, it will not be discovered at the host merely by performing an HBA rescan. Follow these steps to discover the source array LUN with LUN ID 254 on a Linux host.

NOTE: The output of the commands in the following procedure is an example only. Following this procedure will enable online migration for LUNs with LUN ID 254 with supported Linux configurations. In the case of a cluster configuration, the procedure must be followed on each node.

Procedure
1. Issue the following command on the Linux server:
# sg_map -i | grep 3PARdata
/dev/sg28 3PARdata SES 3222
/dev/sg29 3PARdata SES 3222
/dev/sg69 3PARdata SES 3222
/dev/sg82 3PARdata SES
2. In the output above, observe that when the host is zoned with the 3PAR StoreServ Storage, the SES LUN is automatically presented to the host.
3. When the createmigration operation is complete, rescan HBAs on the host by issuing the following commands:
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
4. Rerun the command to list SG devices:
# sg_map -i | grep 3PARdata
/dev/sg28 3PARdata VV 3222
/dev/sg29 3PARdata SES 3222
/dev/sg69 3PARdata SES 3222
/dev/sg82 3PARdata VV 3222
In the output above, observe that two of the sg devices are tagged as VV rather than SES.
5. Run the multipath -ll command on the Linux server and verify that the LUN with LUN ID 254 is not yet listed. To discover the LUN with LUN ID 254, you must delete old devices and rescan the HBAs again. Proceed with the following step.
6. Issue the following commands to delete the old devices that are listed as VV in step 4:
# echo "1" > /sys/class/scsi_generic/sg28/device/delete
# echo "1" > /sys/class/scsi_generic/sg82/device/delete
7. Rescan HBAs on the host by issuing the following commands:
# ls /sys/class/fc_host
host2 host3
# echo "1" > /sys/class/fc_host/host2/issue_lip
# echo "- - -" > /sys/class/scsi_host/host2/scan

234 # echo "1" > /sys/class/fc_host/host3/issue_lip # echo "- - -" > /sys/class/scsi_host/host3/scan 8. Run the multipath -ll command on the Linux server and verify that the discovered LUN with LUN ID 254 is now listed, and can therefore be migrated online): size=1.0g features='1 queue_if_no_path' hwhandler='1 alua' wp=rw `-+- policy='round-robin 0' prio=1 status=active - 4:0:2:254 sdbr 68:80 active ready running `- 5:0:1:254 sdbs 68:96 active ready running Issuing the startmigration command MDM Procedure 1. Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system on page Bringing the host back online on page Migrating a single source array in a Linux storage foundation cluster on page 241 Starting the data migration from the source storage system to the destination 3PAR StoreServ Storage system Procedure 1. Shut down the host. Leave the host offline until the migration is started. NOTE: By default, Veritas SFHA does not enable SCSI-3 persistent reservations for a given DSM. You must enable persistent reservations manually by following these steps: a. In Veritas Enterprise Administrator, right-click DMP DSMs and select DSM Configuration. b. Select V3PARAA from the available DSMs. c. Select Round Robin (Active/Active) for the load balance policy. d. Select SCSI-3 support for SCSI settings. e. Click OK. 2. Unzone the source storage system from the host. 3. From the 3PAR Online Import Utility, issue the startmigration command. startmigration command: > startmigration migrationid <migration_id> > SUCCESS: Data transfer started successfully where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: Issuing the startmigration command MDM

235 4. To view the status of the migration, issue the showmigration command. showmigration command: > showmigration -migrationid <migration_id> MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME <migration_id> MDM <source_name> <destination_name> Fri Apr 04 16:38:24 EDT 2014 where: END_TIME STATUS(PROGRESS)(MESSAGE) -NA) unpresenting(1%)(-na-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: CLARiiON+APM (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_01 3par_7200_DCB_01 5. Issue the showmigrationdetails command to verify the volumes being migrated. In the preparation stage, PROGRESS for each volume will be 0% and the TASK_ID will be unknown. showmigrationdetails command: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> unknown 0% where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Phase III: Migration 235

236 Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) A task ID will be assigned and percentage of progress will be shown while the task is being executed. showmigrationdetails command: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> <task_id> 22% where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <task_id> is the task ID that was assigned by execution of the startmigration command. For example: Phase III: Migration

237 Bringing the host back online Procedure 1. Verify that the import has started and confirm that a TASK_ID is available for each volume by issuing the showmigrationdetails command. It is not safe to bring the hosts back online if the TASK_ID number is not yet available in the output of showmigrationdetails, as illustrated in the example that follows. Unsafe to bring the host online: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> unknown 0% where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) It is safe to bring the host online when there is a valid TASK_ID in the output of the showmigrationdetails command. Safe to bring the host online: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> <task_id> 0% where: Bringing the host back online 237

238 <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <task_id> is the task ID that was assigned by execution of the startmigration command. For example: After the import has started for the volumes, bring the host (or cluster hosts) that were shut down back online. For Windows or HP-UX hosts: For a Windows or HP-UX host that is booting over SAN from the source system, the HBA BIOS must be reconfigured during host startup to select the HPE 3PAR boot device. If this is not done, the host will not start up. For Windows hosts: Check the disk status by pointing to diskpart.exe or by opening Disk Manager. Depending on the Windows SAN policy, the disks that were migrated are offline or online. If offline, use diskpart or the disk manager to bring them online. 3. Perform the following host-specific steps, if applicable: For Hyper-V: Bring the quorum and CSV disks back online and start virtual machines from the Failover Cluster Manager. For Linux: Optionally, if LUNs are not discovered on the Linux host, scan for newly exported LUNs from the destination 3PAR StoreServ Storage system. Perform this step for all HBAs on the host that are connected to the 3PAR StoreServ Storage system. To scan for LUNs, use the following general commands: 238 Phase III: Migration

239 echo "1" > /sys/class/fc_host/host<#>/issue_lip echo "- - -" > /sys/class/scsi_host/host<#>/scan Scanning hosts for newly exported LUNs: # ls /sys/class/fc_host host2 host3 # echo "1" > /sys/class/fc_host/host2/issue_lip # echo "1" > /sys/class/fc_host/host3/issue_lip # echo "- - -" > /sys/class/scsi_host/host2/scan # echo "- - -" > /sys/class/scsi_host/host3/scan For HDS Storage using IBM AIX host: a. On the AIX host, perform the following steps: I. Using the cfgmgr command, rescan the host for disks with the HPE 3PAR signature. II. Verify that the AIX host is now seeing the volumes on the 3PAR StoreServ Storage by using the lsdev command. Using the lsdev command: # lsdev -Cc disk hdisk0 Available Virtual SCSI Disk Drive hdisk1 Available PAR InServ Virtual Volume hdisk2 Available PAR InServ Virtual Volume hdisk3 Available PAR InServ Virtual Volume hdisk4 Available PAR InServ Virtual Volume hdisk5 Available PAR InServ Virtual Volume hdisk6 Available PAR InServ Virtual Volume III. Using the volume group-to-pvid mapping information collected earlier, import the volume group(s) using the Linux CLI vgimport -y vgname pvid command, where pvid is a physical disk on which the volume group is created. For details, see the example that follows. If a volume group is mapped to multiple physical disks, specify one disk from the list of disks. The AIX CLI command importvg automatically identifies the remaining set of disks that were mapped to the given volume group. Volume group to PVID mapping before removing HDS Storage disks from the system: # lspv hdisk0 00f825bdd5f7e96e rootvg active hdisk1 00f825bd4a05917b None hdisk2 00f825bd5dfc80c6 AIX1VG active hdisk3 00f825bd5dfc82c1 AIX2VG active NOTE: The AIX1VG volume group is mapped to the disk with PVID 00f825bd5dfc80c6, and AIX2VG is mapped to the disk with PVID 00f825bd5dfc82c1. After the rescan of the disks following the zoning of the host to the 3PAR StoreServ Storage, the lspv command output will be as follows: hdisk0 00f825bdd5f7e96e rootvg active hdisk1 00f825bd4a05917b None hdisk7 00f825bd5dfc80c6 None hdisk8 00f825bd5dfc82c1 None Phase III: Migration 239

240 Import the volume groups by specifying the corresponding disks with the same PVIDs, as follows: # importvg -y AIX1VG hdisk7 AIX1VG # importvg -y AIX2VG hdisk8 AIX2VG # lspv hdisk0 00f825bdd5f7e96e rootvg active hdisk1 00f825bd4a05917b None hdisk7 00f825bd5dfc80c6 AIX1VG active hdisk8 00f825bd5dfc82c1 AIX2VG active If the applications use HDS Storage LDEVs in raw format, reconfigure them to point to the corresponding 3PAR StoreServ Storage volumes that have the same PVID. b. On the host, use the lspath command to verify that all disks exported from the 3PAR StoreServ Storage have the expected number of paths. IBM AIX Host with Multiple LUNs: # lspath Enabled hdisk0 scsi0 Enabled hdisk1 scsi0 Enabled hdisk2 scsi0 Enabled hdisk3 fscsi0 Enabled hdisk4 fscsi0 Enabled hdisk3 fscsi1 Enabled hdisk4 fscsi1 4. You can now restart the cluster, if applicable, and restart all applications and services. The host may resume normal operations. TIP: The STATUS column in the showmigration command output will indicate success when all volumes or LDEVs have been migrated successfully. showmigration command showing successful migration: > showmigration -migrationid <migration_id> MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME MDM <source_name> <destination_name> Fri Apr 04 16:38:24 EDT 2014 where: END_TIME STATUS(PROGRESS)(MESSAGE) -NA- success(-na-) (-NA-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: 240 Phase III: Migration

CLARiiON+APM (for EMC Storage)
USP_V (for HDS Storage)
IBM (for IBM XIV Storage)
<destination_name> is the name of the destination 3PAR StoreServ Storage. For example:
3par_7200_DCA_01
3par_7200_DCB_01

Migrating a single source array in a Linux storage foundation cluster

Use this procedure to migrate a single source array that is part of a disk group containing LUNs from multiple source arrays.

Standard devices: The devices on which the DG is imported and on which the application is currently running are treated as standard devices in VxVM. These are the devices that hold the real copy of the data.

Clone/replicated devices: The devices to which the data is copied for backup/restore are treated as clone/replicated devices in VxVM. These devices are always marked with the udid_mismatch/clone_disk flag in vxdisk -e list output to identify them as clone devices.

DG import design: The clone devices are point-in-time images of the application data; VxVM does not automatically import these devices, because they might be a backup taken at some point in time. Thus, VxVM does not allow a mix of standard and clone devices to be imported as part of the same DG, because that might result in the data on the clone devices being overwritten.

The procedure that follows is for when a disk group contains multiple arrays and you want to migrate just one of those source arrays. After the migration, LUNs on the 3PAR StoreServ Storage will be treated as clone disks by VxVM, so the disk group will not be imported automatically. To bring the disk group back online:

Procedure
1. Stop the hacluster by issuing the /opt/VRTSvcs/bin/hastop -all command.
2. Once the cluster is stopped, run vxdisk list to identify clone disks. LUNs migrated to the destination system will be clone disks; in the example output below, they are the 3pardata devices identified with the udid_mismatch/clone_disk flag.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
cciss/c0d0 auto:none - - online invalid
xiv1_2706 auto:cdsdisk xiv1_2706 dg1 online thinrclm shared
xiv1_2707 auto:cdsdisk xiv1_2707 dg2 online thinrclm shared
xiv1_2708 auto:cdsdisk xiv1_2708 dg1 online thinrclm shared
xiv1_2709 auto:cdsdisk xiv1_2709 dg2 online thinrclm shared
xiv1_2710 auto:cdsdisk xiv1_2710 dg1 online thinrclm shared
xiv1_2711 auto:cdsdisk xiv1_2711 dg2 online thinrclm shared
3pardata9_6287 auto:cdsdisk - - online udid_mismatch
3pardata10_6288 auto:cdsdisk - - online udid_mismatch
3pardata11_6289 auto:cdsdisk - - online udid_mismatch
3pardata12_6284 auto:cdsdisk - - online udid_mismatch
3pardata13_6283 auto:cdsdisk - - online udid_mismatch

3pardata14_6282 auto:cdsdisk - - online udid_mismatch
3pardata15_6281 auto:cdsdisk - - online udid_mismatch
3pardata16_6286 auto:cdsdisk - - online udid_mismatch
3pardata17_6285 auto:cdsdisk - - online udid_mismatch
3. Update the UDID of the clone disks by running this command on each clone disk:
vxdisk -f updateudid <disk name>
Example:
# vxdisk -f updateudid 3pardata9_6287
4. Turn off the clone parameter on the clone devices by running this command on each clone disk:
# vxdisk set <disk name> clone=off
Example:
# vxdisk set 3pardata9_6287 clone=off
5. Start the hacluster by running this command on all cluster nodes:
/opt/VRTSvcs/bin/hastart

Issuing the startmigration command offline migration

Procedure
1. From the 3PAR Online Import Utility, issue the startmigration command.
startmigration command:
> startmigration -migrationid <migration_id>
> SUCCESS: Data transfer started successfully
NOTE: The <migration_id> will have been assigned by the createmigration command.
2. To view the status of the migration, issue the showmigration command.
showmigration command:
> showmigration -migrationid <migration_id>
MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME END_TIME STATUS(PROGRESS)(MESSAGE)
<migration_id> Offline <source_name> <destination_name> Fri Apr 04 16:38:24 EDT 2014 -NA- unpresenting(1%)(-na-)
where:
<migration_id> is the migration ID that was assigned by execution of the createmigration command.
<source_name> is the name of the source storage system. For example:

243 CLARiiON+APM (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_01 3par_7200_DCB_01 3. Issue the showmigrationdetails command to verify the volumes being migrated. In the preparation stage, PROGRESS for each volume will be 0% and the TASK_ID will be unknown. showmigrationdetails command: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> unknown 0% where: <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) A task ID will be assigned and percentage of progress will be shown while the task is being executed. showmigrationdetails command: > showmigrationdetails -migrationid <migration_id> SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS <source_volume> <destination_volume> <task_id> 22% where: Phase III: Migration 243

244 <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_volume> is the name of the source volume, LDEV, or other device being migrated. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <destination_volume> is the name of the destination volume on the 3PAR StoreServ Storage. For example: Lun_01 (for EMC Storage) 00:07:00 (for HDS Storage) volume_01 (for IBM XIV Storage) <task_id> is the task ID that was assigned by execution of the startmigration command. For example: 6134 TIP: The STATUS column in the showmigration command output will indicate success when all volumes or LDEVs have been migrated successfully. showmigration command showing successful migration: MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME offline <source_name> <destination_name> Fri Apr 04 16:38:24 EDT 2014 where: END_TIME STATUS(PROGRESS)(MESSAGE) -NA- success(-na-) (-NA-) <migration_id> is the migration ID that was assigned by execution of the createmigration command. For example: <source_name> is the name of the source storage system. For example: 244 Phase III: Migration

245 CLARiiON+APM (for EMC Storage) USP_V (for HDS Storage) IBM (for IBM XIV Storage) <destination_name> is the name of the destination 3PAR StoreServ Storage. For example: 3par_7200_DCA_01 Phase III: Migration 245

246 Phase IV: Postmigration tasks Performing online migration and MDM For online migration and Minimally Disruptive Migration (MDM), perform the following optional tasks after successful completion of the migration: Procedure 1. Remove the migration definition when the migration has completed. See removemigration. 2. When all migrations between the source and destination HPE 3PAR StoreServ storage system are complete, remove them from the HPE 3PAR Online Import Utility, using the removesource and removedestination commands. 3. Remove zoning between the source storage system and 3PAR StoreServ Storage system after all migrations from the third-party storage system are complete. 4. If no more migrations to the destination array need to be done in the short term, reconfigure the Peer ports to host ports. 5. Remove the SMI-S provider. See: Perform if applicable: Remove EMC Storage system from EMC SMI-S provider Remove Storage system from the HiCommand Suite 6. Schedule a time to re-signature all migrated VMware disks. This must be done before cluster nodes are rebooted. See: Performing postmigration tasks in VMware ESXi environments on page The WWN of a migrated volume is the one it had on the source system. To change the WWN into a local-array one, use the 3PAR CLI command setvv wwn. Execution of this command requires the volume to be unexported. While it is possible to keep the WWN of the source volume on the destination system, it is recommended to make this change at the next available opportunity. The immediate change is mandatory when using the volume with the HPE 3PAR Recovery Manager software and the Microsoft VSS framework. 8. After you verify that everything has been correctly migrated to the destination storage system, you can delete the migrated volumes. 9. If the Path Verify Enabled MPIO setting was enabled for the migration, disable it again. However, if the source and destination HPE 3PAR StoreServ systems are in a Peer Persistence relationship, do not disable the setting. 10. Perform Remote Copy postmigration tasks. Performing Remote Copy postmigration tasks If you are using HPE 3PAR Remote Copy software, the next step is to perform the remote copy postmigration tasks: 246 Phase IV: Postmigration tasks

Procedure
1. If necessary, recreate the remote copy groups on the destination storage system to match the remote copy groups on the new source system.
2. Perform the remote copy synchronization task.
3. Remove the remote copy groups from the old source system.
4. Configure and start the remote copy groups on the destination storage system from a specially created snapshot that represents the end step of the data migration.

For more information, see the HPE 3PAR Remote Copy Software User's Guide and the HPE 3PAR Command Line Interface Administrator's Manual, available at the Hewlett Packard Enterprise Information Library website.

Performing postmigration tasks in VMware ESXi environments

In VMware ESXi environments, upon the next ESXi host reboot after a successful migration, it might be necessary to remove and re-add the RDM devices to your virtual machines. This is because the VML number generated by ESXi changes after the reboot, requiring that the mappings be corrected or updated. For more information, see the applicable VMware KB article, available on the VMware website.

Upon the next ESXi host reboot after a successful online migration, the ESXi host will not mount the VMFS datastores automatically. To prevent duplicate copies of the same datastore from being mounted in various replication scenarios, the ESXi operating system might declare the migrated datastore as a snapshot and not mount it automatically.

Procedure
After confirming that the original copy of the datastore is no longer accessible by the ESX host, see the VMware KB article for the available methods (vSphere Client GUI or the esxcli command line) for mounting the datastore. Once a re-signature is performed on all migrated datastores, they will be automatically remounted during subsequent reboots. Additional steps will be required for updating references to the original signature in virtual machine files. For more information, see the VMware Knowledge Base and the vSphere Storage documentation (com.vmware.vsphere.storage.doc_50/guid- EBAB0D5A-3C77-4A9B D4AD69E28DC.html).

With EMC Storage source arrays, after a successful online migration and a rescan of the devices on vSphere, the device names may continue to retain their EMC/DGC labels, not their 3PAR StoreServ Storage labels.
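As a hedged example of the esxcli method mentioned above (the datastore label is a hypothetical placeholder), unresolved VMFS copies can typically be listed and then resignatured as follows on ESXi 5.x and later:

# esxcli storage vmfs snapshot list
# esxcli storage vmfs snapshot resignature --volume-label=example_datastore

After a resignature, the datastore typically reappears under a new, snap-prefixed name, so virtual machines residing on it must be re-registered or have their references updated, as noted above.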

Removing an EMC Storage system from EMC SMI-S provider

Follow this procedure to remove a system from the list after the migration has completed, or if you have added the wrong system for migration.

CAUTION: You can remove a system only before a migration begins or after it has completed. Removing a system during a migration process will cause the migration to fail.

Procedure

1. In the SMI-S CLI, issue the ein command. When prompted for a Class value, enter the following to list the CLARiiON series arrays added in the SMI-S database:

   Clar_StorageSystem

   The CLARiiON series arrays in the SMI-S database are listed:

   ++++ Testing EnumerationInstanceNames: Clar_StorageSystem ++++
   Instance 0: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"
   Instance 1: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"

2. Issue the SMI-S CLI remsys command. Copy the entire object path of the array Instance that you wish to remove. See the example in step 4.

   NOTE: The APM value in the object path should match the serial number of the storage system that you wish to remove.

3. At the SMI-S CLI ObjectPath[null]: prompt, paste the object path of the array Instance.

4. Follow the rest of the prompts and look for a message indicating success. When you run an SMI-S CLI dv or ein command, you should no longer see the storage system that you just removed. The following example shows the commands and the verification that the storage system has been removed:

   (localhost:5988) ? dv
   Firmware version information:
   (Remote) CLARiiON Array APM (Rack Mounted CX4_120) :
   (Remote) CLARiiON Array APM (Rack Mounted VNX5300) :
   (Remote) CLARiiON Array APM (Rack Mounted VNX5200) :
   (localhost:5988) ? ein
   Class: Clar_StorageSystem
   ++++ Testing EnumerationInstanceNames: Clar_StorageSystem ++++
   Instance 0: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"
   Instance 1: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"
   Instance 2: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"
   Enumerate 3 instance names; repeat count 1; return data in seconds
   (localhost:5988) ? remsys
   Remove System {y|n} [n]: y
   System's ObjectPath[null]: Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"
   About to delete system Clar_StorageSystem.CreationClassName="Clar_StorageSystem",Name="CLARiiON+APM"

   Are you sure {y|n} [n]: y
   ++++ EMCRemoveSystem ++++
   OUTPUT : 0
   Legend: 0=Success, 1=Not Supported, 2=Unknown, 3=Timeout, 4=Failed, 5=Invalid Parameter,
   4096=Job Queued, 4097=Size Not Supported
   Note: Not all above values apply to all methods - see MOF for the method.
   (localhost:5988) ? dv
   Firmware version information:
   (Remote) CLARiiON Array APM (Rack Mounted CX4_120) :
   (Remote) CLARiiON Array APM (Rack Mounted VNX5300) :

Removing HDS Storage system from the HiCommand Suite

Procedure

To remove an HDS Storage system from the HiCommand Suite, issue the following HiCommand Suite CLI command on the server where the HiCommand Suite is installed:

# DeleteStorageArray -u xxx -p yyy serialnum=XXX model=YYY

where YYY is the HDS Storage array model and XXX is the serial number. A hedged example with placeholder values follows this section.
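For illustration only, on a HiCommand Suite installation that provides the HiCommandCLI wrapper, the invocation might resemble the following; the server URL, port, credentials, serial number, and model shown here are hypothetical placeholders, so confirm the exact syntax against the HiCommand Suite CLI documentation for your version:

# HiCommandCLI http://hcs.example.com:2001/service DeleteStorageArray -u system_admin -p password serialnum=53039 model=VSP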

Aborting a migration

A migration can be aborted before it has started by using the removemigration command from the Online Import Utility. To completely remove a migration, the removemigration command must be issued a second time. See removemigration. Once a migration has started, it cannot be aborted.

To roll back a data migration in the event that I/O must resume on the original array, see Guidelines for rolling back to the original source array on page 404.

Falling back to the source array after a failed or aborted migration on Oracle RAC clusters

Procedure

1. After a migration fails or is aborted, stop any applications running I/O to the data LUNs that were being migrated.
2. From any one of the cluster nodes, issue the following command to stop all the databases:
   # $ORACLE_HOME/bin/srvctl stop database -d <database name>
3. If applicable, on the 3PAR StoreServ Storage, clear the reservation from all the LUNs that are part of the failed migration by using the following command:
   cli% setvv -clrrsv -f <vvname>
4. Verify that all the database instances are offline by issuing the following command on all nodes:
   # $GRID_HOME/bin/crs_stat -t
5. Zone the source array back to the Oracle cluster nodes.
6. Present the volumes back to the host from the source array, and then rescan the host for new data paths to the LUNs.
7. Delete the sd device paths coming through the 3PAR StoreServ Storage. (A hedged Linux example of steps 6 and 7 follows this procedure.)
8. On the 3PAR Online Import Utility, issue the removemigration command to remove the failed migration.
   NOTE: This step is not applicable if the migration failed while in the import phase. In that case, a manual cleanup of the source and destination arrays is necessary.
9. Verify that the migrating VVs are removed on the HPE 3PAR. If not, remove them manually by issuing the following command:
   cli% removevv -f <vvname>
10. If required, unmask the volumes from the 3PAR peer host on the source array.
11. Rescan the host and verify that it does not see paths from the 3PAR StoreServ Storage.
12. Start the databases by using the following command:
    # $ORACLE_HOME/bin/srvctl start database -d <database name>
13. Restart the applications that you stopped in step 1.
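The host-side path handling in steps 6 and 7 depends on the operating system and multipath software. As one hedged illustration for a Linux host using the native device-mapper multipath stack (the device name is a placeholder, and your distribution may provide different rescan tools):

# rescan-scsi-bus.sh                          (from sg3_utils; discovers the re-presented source-array paths)
# multipath -ll                               (identify which sd paths reach the LUN through the 3PAR peer ports)
# echo 1 > /sys/block/sdX/device/delete       (remove each stale sd path that still points at the 3PAR StoreServ Storage)
# multipath -r                                (reload the multipath maps after the stale paths are removed)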

Unidirectional data migration from legacy 3PAR and IBM XIV storage systems using SSMC

Data migration to legacy 3PAR and IBM XIV systems

With SSMC 3.1 and later, you can perform data migrations to destination 3PAR StoreServ Storage systems from the following supported source systems:

Legacy 3PAR (see Data migration from legacy 3PAR systems on page 252)
IBM XIV (see Data migration from IBM XIV on page 259)

This storage federation functionality is built into the SSMC interface; it is not necessary to have the Online Import Utility installed.

Data migration from legacy 3PAR systems

Through the storage federation functionality built into SSMC 3.1 and later, you can perform data migrations from a legacy 3PAR source system to a destination 3PAR StoreServ Storage system. Legacy systems supported by this SSMC-based method include 3PAR StoreServ F-Class and T-Class systems running a supported HPE 3PAR OS version. For other types of systems or 3PAR OS versions, refer to Table 1: Data mobility and migration user scenarios on page 12 for information on choosing the right migration method for your legacy system.

You can migrate volumes, volume sets, host sets, and host configuration information to a destination 3PAR StoreServ Storage system without changing host configurations or interrupting data access. During data migration, host I/O service to the source storage system takes place through the destination 3PAR StoreServ Storage system. The host and volume presentation implemented on the source storage system is maintained on the destination 3PAR StoreServ Storage system.

NOTE: Snapshot and replication volumes are not supported.

Requirements and considerations

Before attempting a migration, refer to:

Reconfiguring the host multipath solution on page 175
Zoning the source storage system to the destination 3PAR StoreServ Storage system on page 218

Performing premigration tasks

Before you begin, see Migration process checklists on page 161.

Procedure

1. Create a Federation. See Setting up and configuring a Storage Federation on page 37.
2. Add a migration source. See Adding a migration source to a Federation on page 102.
3. Import the configuration. See Copying settings from existing systems on page 51.

Performing data migration

Upon completing the tasks in Performing premigration tasks on page 252, start the data migration from within the SSMC as follows:

Procedure

1. Under Storage Systems, click Federations and select the federation into which you want to migrate data.
2. Click Actions and select Start Peer Motion.

Figure 80: Individual Federation page

3. On the Start Data Migration page, enter the following information:

Peer Motion activity name: an optional identifier used to track the progress of each Peer Motion workflow on the Peer Motions page.
Source system: selects the legacy 3PAR StoreServ Storage system from which the selected data will be moved.

Figure 81: Start Data Migration dialog

Once those parameters have been specified, click Select to open the Select Objects dialog.

4. From the Select Objects dialog, select the object(s) from which data will be migrated. For legacy 3PAR source systems, you can select virtual volumes, a volume set, a host, or a host set. Upon selecting one of those options, the list below is populated with the objects for that category. For virtual volumes, you can select single or multiple objects; for volume set, host, and host set, you can select just one object. Rather than scrolling through a long list, you can also enter the name of the desired object in the search field located between the selection buttons and the object list.

Figure 82: Select Objects dialog

If the intended object does not appear, you can refresh the source array objects by clicking Refresh, and then clicking Yes, refresh in the confirmation dialog.

Figure 83: Refresh dialog

A refresh can take several minutes to complete. While the refresh is in progress, you can also select objects by clicking Add Objects in the Select Objects dialog. In the Add Object dialog, select the desired object and click Add.

NOTE: For the Volumes object only, you can enter single or multiple, comma-separated object names.

Figure 84: Add Object dialog

5. In the Select Objects dialog, click Select.
6. On the Start Data Migration page, under Peer Motion Settings, enter the following information:

Destination system: the 3PAR StoreServ Storage system to which the selected object(s) will be moved.
Destination CPG: the CPG on the destination 3PAR StoreServ Storage system on which the volume user space will be allocated.
Migration type: the type of migration to be performed.
Destination provisioning type: the provisioning type with which the selected virtual volumes will be created on the destination.
All volumes in consistency group: select this option to migrate the volumes as a consistency group.
Compression: sets the compression property for the migrated volumes.
Priority: sets the migration priority to high, medium, or low.

Figure 85: Start Data Migration dialog

7. After completing all input, click Start to initiate the data migration. You can monitor the migration progress on the Federations page (see Monitoring Peer Motion workflow in Federations on page 76).

Refreshing legacy 3PAR source system objects

This procedure may be necessary if you change the configuration of an array after adding it to a federation as a source system.

Procedure

1. Under Storage Systems, click Federations and select the federation that contains the source system.
2. Click Actions and select Refresh external systems.

Figure 86: Individual Federation page

3. Select the source system that you want to refresh, and then click Refresh.

Figure 87: Refresh External Systems dialog

In the Refresh confirmation dialog, click Yes, refresh.

Figure 88: Refresh dialog

A refresh can take several minutes to complete; the duration depends on how many objects (hosts and volumes) are currently on the storage arrays. While the refresh is in progress, you can also select objects by clicking Add Objects in the Select Objects dialog. Perform a refresh ahead of time, not during the actual migration window.

Data migration from IBM XIV

Through the storage federation functionality built into SSMC 3.1 and later, you can perform data migrations from a supported IBM XIV source array to a destination 3PAR StoreServ Storage system. You can migrate volumes and host configuration information without changing host configurations or interrupting data access. During data migration, host I/O service to the source storage system takes place through the destination 3PAR StoreServ Storage system. The host and volume presentation implemented on the source storage system is maintained on the destination 3PAR StoreServ Storage system.

Requirements and limitations

Before attempting a migration, refer to:

The third-party data migration process on page 151
For IBM XIV migration types supported within SSMC, see Table 7: Migrations supported for EMC Storage, HDS Storage, and IBM XIV Storage Arrays on page 154.
General Considerations For IBM XIV Storage source arrays on page 169
Considerations for planning to migrate IBM XIV Storage on page 171
Reconfiguring the host multipath solution on page 175 (for IBM XIV Storage)
Zoning the source storage system to the destination 3PAR StoreServ Storage system on page 218

Also note the following:

Volume names must be unique across source arrays.
Snapshot and replication volumes are not supported.
SSMC allows two separate Start data migration operations for hosts that are part of the same cluster.

Performing premigration tasks

Before you begin, see Migration process checklists on page 161.

Procedure

1. Create a Federation. See Setting up and configuring a Storage Federation on page 37.
2. Add a migration source. See Adding a migration source to a Federation on page 102.
3. Import the configuration. See Copying settings from existing systems on page 51.

Performing data migration

Upon completing the tasks in Performing premigration tasks on page 260, start the data migration from within the SSMC as follows:

Procedure

1. Under Storage Systems, click Federations and select the federation into which you want to migrate data.
2. Click Actions and select Start data migration.

Figure 89: Individual Federation page

3. On the Start Data Migration page, enter the following information:

Peer Motion activity name: an optional identifier used to track the progress of each Peer Motion workflow on the Peer Motions page.
Source system: selects the source storage system from which the selected data will be moved.

Figure 90: Start Data Migration dialog

Once those parameters have been specified, click Select to open the Select Objects dialog.

4. From the Select Objects dialog, select the object(s) from which data will be migrated. For legacy 3PAR source systems, you can select virtual volumes, a volume set, a host, or a host set. Upon selecting one of those options, the list below is populated with the objects for that category. For virtual volumes, you can select single or multiple objects; for volume set, host, and host set, you can select just one object. Rather than scrolling through a long list, you can also enter the name of the desired object in the search field located between the selection buttons and the object list.

Figure 91: Select Objects dialog

If the intended object does not appear, you can refresh the source array objects by clicking Refresh, and then clicking Yes, refresh in the confirmation dialog.

Figure 92: Refresh dialog

A refresh can take several minutes to complete. While the refresh is in progress, you can also select objects by clicking Add Objects in the Select Objects dialog. In the Add Object dialog, select the desired object and click Add.

NOTE: For the Volumes object only, you can enter single or multiple, comma-separated object names.

Figure 93: Add Object dialog

5. In the Select Objects dialog, click Select.
6. On the Start Data Migration page, under Peer Motion Settings, enter the following information:

Destination system: the 3PAR StoreServ Storage system to which the selected object(s) will be moved.
Destination CPG: the CPG on the destination 3PAR StoreServ Storage system on which the volume user space will be allocated.

Migration type: the type of migration to be performed.
Destination provisioning type: the provisioning type with which the selected virtual volumes will be created on the destination.
All volumes in consistency group: select this option to migrate the volumes as a consistency group.
Compression: sets the compression property for the migrated volumes.
Priority: sets the migration priority to high, medium, or low.

NOTE: Individual volumes cannot be migrated at a higher priority than others; this setting applies to all volumes in the migration.

Figure 94: Start Data Migration dialog

7. After completing all input, click Start to initiate the data migration. You can monitor the migration progress on the Federations page (see Monitoring Peer Motion workflow in Federations on page 76).

Refreshing non-3PAR source system objects

This procedure may be necessary if you change the configuration of an array after adding it to a federation as a source system.

Procedure

1. Under Storage Systems, click Federations and select the federation that contains the source system.
2. Click Actions and select Refresh external systems.

Figure 95: Individual Federation page

3. Select the source system that you want to refresh, and then click Refresh.

Figure 96: Refresh External Systems dialog

In the Refresh confirmation dialog, click Yes, refresh.

Figure 97: Refresh dialog

A refresh can take several minutes to complete; the duration depends on how many objects (hosts and volumes) are currently on the storage arrays. While the refresh is in progress, you can also select objects by clicking Add Objects in the Select Objects dialog. Perform a refresh ahead of time, not during the actual migration window.

Troubleshooting

This part describes problems that you might encounter when migrating data from a source storage system to a 3PAR StoreServ Storage system using storage federation, 3PAR Peer Motion, or 3PAR Online Import.

NOTE: During the migration process, it is important to remember that the data continues to be served from the source storage system until the volume import is complete. If host errors occur, check both the destination 3PAR StoreServ Storage system and the source storage system for potential problems. A hedged CLI sketch for checking the destination system follows this note.
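As one hedged starting point for checking the destination side, the following 3PAR CLI commands show whether import tasks are still running and list the volumes still admitted as peer volumes; the exact filters and output columns vary by HPE 3PAR OS version:

cli% showtask -active          (lists running tasks, including any volume import tasks)
cli% showvv -p -prov peer      (lists volumes whose provisioning type is still peer, that is, admitted but not yet fully imported)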

Troubleshooting resources

3PAR Peer Motion Utility and 3PAR Online Import Utility error and resolution messages

NOTE: Many of the resolution messages refer you to the Online Import Utility log file. With 3PAR Online Import Utility 2.0 and later, there are multiple log files. For the Online Import Utility itself, there are:

hpeoiu.log The complete, detailed log file, formerly named hpoiu.log.
hpeoiuaudit.log Contains only high-level details of the commands executed, to make it easier to analyze a failed migration and determine the sequence of steps taken, the point at which the migration failed, and which commands were executed by a particular user.

When using the Online Import functionality embedded within SSMC 3.1 and later to migrate data from legacy 3PAR and non-3PAR sources, there are also the SSMC server logs, located at ssmcbase/data/logs. The default location, unless another location was specified while Installing and configuring the 3PAR Online Import Utility on page 187, is:

C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\logs

Table 14: 3PAR Peer Motion Utility and 3PAR Online Import Utility error and resolution messages

Error Code Error Message Resolution Code Resolution Message
OIUERR1TN1006 There are multiple connections between source & destination. OIURSL1TN1006 Please specify the UID of the destination storage system to which migration should happen.
OIUERRAPP0000 An unexpected error occurred. OIURSLAPP0000 Contact Hewlett Packard Enterprise support. You may try restarting the application service.
OIUERRAPP0004 User is not authorized to login. OIURSLAPP0004 Check if the credentials are correct and if the user is a member of the 'HPE Storage Migration Admins' or 'HPE Storage Migration Users' group.
OIUERRAPP0005 Either the configuration file is unavailable or the required entry is missing. OIURSLAPP0005 Please check the installation and make sure configuration file is available.
Table Continued

269 Error Code Error Message Resolution Code Resolution Message OIUERRAPP0006 Unable to identify the specified array type. OIURSLAPP0006 Please ensure that the plugin corresponding to the specified array is installed. OIUERRAPP0007 Failed to discover source arrays. OIURSLAPP0007 Contact Hewlett Packard Enterprise support. OIUERRAPP0008 Failed to execute job. OIURSLAPP0008 Contact Hewlett Packard Enterprise support. OIUERRAPP0009 Job might have executed successfully but couldn't receive any handle to track it. OIURSLAPP0009 Contact Hewlett Packard Enterprise support. NOTE: 3PAR Online Import: A problem occurred during an SMI-S related job. Check the 3PAR Online Import Utility log file for a specific description of the problem. Hewlett Packard Enterprise Support may be required. OIUERRAPP0010 SMI-S EMC Provider WBEM Operations error. OIURSLAPP0010 Please check SMI-S ECOM connectivity and access, see Log file for details, and/or contact Hewlett Packard Enterprise support. NOTE: 3PAR Online Import: There could be a problem with communication or access. Logs will show the exact cause as an exception trace. Configuration (remote IP access, local access) of the ECOM might need to be rectified. Table Continued Troubleshooting resources 269

270 Error Code Error Message Resolution Code Resolution Message OIUERRAPP0011 EMC Provider WBEM Connection, Malformed URL. OIURSLAPP0011 Application internal error, see Log file for details. Contact Hewlett Packard Enterprise support. NOTE: 3PAR Online Import: This indicates a problem in accessing the provider due to incorrect protocol (HTTP, HTTPS allowed) or no protocol at all. Try different combinations of secure settings and port settings in the 3PAR Online Import Utility CLI command. Verify enabled ports on the SMI- S provider. OIUERRAPP0012 EMC Provider WBEM Connection, Client initialization error. OIURSLAPP0012 Application internal error, see Log file for details. Contact Hewlett Packard Enterprise support. NOTE: 3PAR Online Import: This might be caused by a blank user name or password when connecting to SMI-S provider. Ensure that the user name and password are specified. OIUERRAPP0013 Unsupported EMC Provider version for SMI-S. OIURSLAPP0013 Check support matrix and upgrade EMC Provider. NOTE: 3PAR Online Import : The SMI-S Provider that was entered is running an unsupported version. Storage systems for migration must be managed by a supported version of SMI-S Provider. Table Continued 270 Troubleshooting resources

271 Error Code Error Message Resolution Code Resolution Message OIUERRAPP0014 Either the Rules XML file(s) is/are unavailable or the required entries are missing. OIURSLAPP0014 Please check the installation and make sure Rules XML file(s) is/are available OIUERRAPP0015 SMI-S EMC Provider Connection error. OIURSLAPP0015 Please check SMI-S ECOM connectivity (IP Address, Port number), see Log file for details, and/or Contact Hewlett Packard Enterprise support. NOTE: 3PAR Online Import: The wrong IP address or wrong port is specified for the SMI-S Provider. Use the correct SMI-S Provider IP address and port. OIUERRAPP0016 SMI-S EMC Provider Authorization error. OIURSLAPP0016 Please check SMI-S ECOM Access Configuration and/or User Credentials, see Log file for details, and/or Contact Hewlett Packard Enterprise support. NOTE: The SMI-S Provider credentials are wrong, or the 3PAR Online Import Utility is not on the trusted client IP address list. Use correct SMI-S Provider credentials. If Trusted IP Address list is enabled, make sure the 3PAR Online Import Utility server is on the trusted IP address list on the ECOM server. OIUERRAPP0020 SMI-S Provider WBEM Operations error. OIURSLAPP0020 Please check SMI-S connectivity and access, see Log file for details, and/or Contact Hewlett Packard Enterprise support. OIUERRAPP0021 Storage Provider WBEM Connection, Malformed URL. OIURSLAPP0021 Application internal error, see Log file for details. Contact Hewlett Packard Enterprise support. Table Continued Troubleshooting resources 271

272 Error Code Error Message Resolution Code Resolution Message OIUERRAPP0022 Storage Provider WBEM Connection, Client initialization error. OIURSLAPP0022 Application internal error, see Log file for details. Contact Hewlett Packard Enterprise support. OIUERRAPP0023 Unsupported Storage Provider version for SMI-S. OIURSLAPP0023 Check support matrix and upgrade storage provider. OIUERRAPP0024 Either the Rules XML file(s) is/are unavailable or the required entries are missing. OIURSLAPP0024 Please check the installation and make sure Rules XML file(s) is/are available OIUERRAPP0025 SMI-S Storage Provider Connection error. OIURSLAPP0025 Please check SMI-S connectivity (IP Address, Port number), see Log file for details, and/or Contact Hewlett Packard Enterprise support. OIUERRAPP0026 SMI-S Storage Provider Authorization error. OIURSLAPP0026 Please check SMI-S Access Configuration and/or User Credentials, see Log file for details, and/or contact Hewlett Packard Enterprise support. OIUERRCER0000 Unable to validate CA signed certificate. OIURSLCER0000 Please Follow the steps in Hewlett Packard Enterprise documentation to install the certificate. OIUERRCER0002 Not able to establish connection with storage array. OIURSLCER0002 Please verify input parameters. OIUERRCER0003 Not able to fetch and publish the certificate details. OIURSLCER0003 Please verify input parameters and the connection with array. OIUERRCER0004 Currently this command not supported for this array type. OIURSLCER0004 Please verify input parameters and the array type. OIUERRCC1000 Invalid port type. OIURSLCC1000 Use appropriate port number. OIUERRCC1001 Invalid security mode. OIURSLCC1001 Use true/false. No value will take default. Table Continued 272 Troubleshooting resources

273 Error Code Error Message Resolution Code Resolution Message OIUERRCC1002 Invalid strict param. OIURSLCC1002 Use true/false. No value will take default. OIUERRCG00000 Preparation failed as virtual volume set provided in cgvolmap already exists. OIURSLCG00000 Please use different consistency group name. OIUERRCG00001 Preparation failed as virtual volume set name & consistency group name are same. OIURSLCG00001 Please use -allvolumesincg option to migrate entire vvset consistently. OIUERRCG00002 Preparation failed as one of virtual volume in cgvolmap is not part of volumes to be migrated. OIURSLCG00002 Please make sure all volumes specified in cgvolmap are part of volumes to be migrated. OIUERRCG00003 Preparation failed as there are less than 2 volumes in consistency group. OIURSLCG00003 Please add at least 2 volumes in consistency group & try again. OIUERRCG00004 Preparation failed as one or more volume is being repeated in one or more consistency group. OIURSLCG00004 Please validate the volume entry for consistency group. OIUERRCG00005 Import failed as virtual volume set by given name doesn't exist at the destination system. OIURSLCG00005 Please try migration again. OIUERRCG00006 Import failed as Less than two volumes found for virtual volume set at the destination system. OIURSLCG00006 Please add 2 or more volumes to virtual volume set & try migration again. OIUERRCG00007 Import failed as virtual volume set doesn't have a default CPG name OIURSLCG00007 Please provide default CPG to volumes of virtual volume set & try migration again Table Continued Troubleshooting resources 273

274 Error Code Error Message Resolution Code Resolution Message OIUERRCG00008 Import failed because a virtual volume set does not have any volumes in it. OIURSLCG00008 Please add volumes to virtual volume set & try migration again OIUERRCG00009 Remove Migration failed to remove volume or vvset. OIURSLCG00009 Please issue the remove migration command again. OIUERRCG00010 Data transfer failed as volume to be migrated is not found at destination. OIURSLCG00010 OIUERRCG00011 Remove Migration failed to remove volume as it is being imported. OIURSLCG00011 OIUERRCG00012 Data transfer failed as volume set doesn't have default CPG value. OIURSLCG00012 Please provide default CPG value for volume set & try again. OIUERRCG00013 Preparation failed because volumes in the consistency group have different priority. OIURSLCG00013 Please provide same priority for all volumes in consistency group & try again. OIUERRCG Preparation failed as virtual volume set name & consistency group name are same. OIURSLCG Please use different consistency group name or - allvolumesincg option to migrate entire vvset consistently. OIUERRCG00015 Failed to remove volumes OIURSLCG00015 Remove migration with force option is not applicable for volumes in CG. OIUERRCGSP00000 Internal volume (cpvv) creation failed. OIURSLCGSP00000 The layout for a volume of size MB, RAID 5, set size 4, devtype FC, with a highavailability (ha) cage requires 4 cages connected between a node pair, but this system has a maximum of only 2 cages connected between a node pair. OIUERRCLU0001 Cluster option is not valid for 3par source array. OIURSLCLU0001 Please provide the valid input. Table Continued 274 Troubleshooting resources

275 Error Code Error Message Resolution Code Resolution Message OIUERRCLU0002 No persona found for the cluster input. OIURSLCLU0002 Please select supported cluster. Use showcluster command for list of supported cluster. OIUERRCLU0003 Persona found for destination host mismatch with cluster input. OIURSLCLU0003 Please provide same persona which destination host has. OIUERRCS1001 Unable to find the array details. OIURSLCS1001 Ensure that proper array ID is provided. OIUERRCS1002 There is no one-toone mapping between the peer ports and the source host ports. OIURSLCS1002 Ensure that there is a one-to-one mapping between the destination peer ports and the source host ports, and that network connectivity between the source array and destination array is proper. OIUERRCS1003 There is no one-toone mapping between the peer ports/npiv port and the source host ports. OIURSLCS1003 Ensure that there is a one-to-one mapping between the peer ports, NPIV port and the source host ports, and that network connectivity between the host, source array and destination array is proper. OIUERRCS1004 NPIV port visible in destination array. Unable to show connection. OIURSLCS1004 Please verify one-to-one mapping using the SSMC. OIUERRCS1005 Destination array is in Federation, 1-1 mapping for one or more peer ports/npiv port and the source host ports could be missing. OIURSLCS1005 Please check network connectivity between the host, source array and destination array is proper. Also ensure oneto-one mapping between peer ports and source host ports OIUERRCS1006 Peer Ports are not adjacent. OIURSLCS1006 Please verify the peer port adjacency and re-try migration. OIUERRDB1001 Failed to access the database due to locking error. OIURSLDB1001 Please retry the operation after some time. OIUERRDB1002 Failed to read the database. OIURSLDB1002 Please retry the operation after some time. Table Continued Troubleshooting resources 275

276 Error Code Error Message Resolution Code Resolution Message OIUERRDB1003 Internal Database error. Invalid object type specified for Database operation. OIURSLDB1003 Ensure that the correct object type is specified. OIUERRDB1004 Internal Database error. Invalid tag type encountered while reading XML. OIURSLDB1004 Contact Hewlett Packard Enterprise support. OIUERRDB1005 Database write failed. OIURSLDB1005 Please retry the operation after some time. If the problem persists contact Hewlett Packard Enterprise support. OIUERRDB1006 Database constraint violated. OIURSLDB1006 Ensure that the object is not already added and that the dependent objects are available. OIUERRDB1007 No entry matching the search criteria is found. OIURSLDB1007 Ensure that the object is already added. OIUERRDB1008 File unlocking failed. OIURSLDB1008 Failed to unlock a file. Please restart the application service. OIUERRDB1009 Database is inconsistent. OIURSLDB1009 Contact Hewlett Packard Enterprise support. OIUERRDB1010 Invalid data passed to database. OIURSLDB1010 Please ensure that valid data is entered. OIUERRDB1011 More than one record found in DataBase for given combination of migrationid, destvolid, srcvolid, srcarrid, destarrid. OIURSLDB1011 Please ensure that valid data is entered. OIUERRDST0001 Unable to connect to the 3PAR storage system. OIURSLDST0001 Please ensure that the IP address and login credentials are correct. OIUERRDST0002 3PAR storage system is already added. OIURSLDST0002 If necessary remove the array and add it again. OIUERRDST0003 The 3PAR array is not in an usable state. OIURSLDST0003 Contact Hewlett Packard Enterprise support. Table Continued 276 Troubleshooting resources

277 Error Code Error Message Resolution Code Resolution Message OIUERRDST0004 Unsupported 3PAR storage system version. OIURSLDST0004 Check support matrix and upgrade 3PAR. OIUERRDST0005 3PAR storage system does not have peer motion license. OIURSLDST0005 Apply peer motion license on 3PAR. OIUERRDST0006 3PAR storage system is not connected. OIURSLDST0006 Ensure that the storage system is already added and that the proper ID is specified. OIUERRDST0007 This 3PAR storage system does not support minimally disruptive migration. OIURSLDST0007 Use other type of migration or upgrade to a firmware that supports minimally disruptive migration. OIUERRDST0008 Admit has failed. OIURSLDST0008 Fix the issue according to the error observed and retry. If you encounter the error message OIUERRDST0008:Admit has failed. null - volume status is degraded, check the LUN masking status of current peer hosts to ensure that there are no stale LUN presentations. If any exist, delete them and rerun the createmigration command. OIUERRDST0009 Volumes partially admitted. OIURSLDST0009 Contact Hewlett Packard Enterprise support. OIUERRDST0010 Unable to validate certificate for HPE 3PAR Storage System. OIURSLDST0010 Please use the installcertificate command to accept the certificate. OIUERRDST0011 This storage system cannot be removed or updated. It is involved in an active migration. OIURSLDST0011 Migrations have to be in complete/abort state for the storage system to be removed. OIUERRDST0012 The destination storage system doesn't support dedupe feature. OIURSLDST0012 Please use dedupe supported array. Table Continued Troubleshooting resources 277

278 Error Code Error Message Resolution Code Resolution Message OIUERRDST0013 Selected CPG or provisioning type does not support dedupe. OIURSLDST0013 Please select supported CPG or provisioning type for dedupe & try again. OIUERRDST0014 No such destination storage system found. OIURSLDST0014 Please verify whether destination is present. OIUERRDST0015 Destination array firmware version is older than 3.2.1, so it will not support consistency group. OIURSLDST0015 Please upgrade firmware version & try again. OIUERRDST0016 Destination array firmware version is older than and is not supported for Priority Migration. OIURSLDST0016 Please upgrade firmware version & try again. OIUERRDST0019 Unable to validate CA signed certificate. Please click on Install Certificate button in the Add Destination window to view and follow steps mentioned in migration guide to install the certificate. OIURSLDST0019 OIUERRDST0020 Volume set creation failed at destination storage system. OIURSLDST0020 Please try migration again. OIUERRDST0021 Volume set removal failed at destination storage system. OIURSLDST0021 Please remove consistency group using SSMC,IMC GUI or CLI. OIUERRDST0022 Error while getting storage array Object. OIURSLDST0022 OIUERRDST0023 Error while loading object set manager OIURSLDST0023 OIUERRDST0024 Unable to remove volume set from destination. OIURSLDST0024 Table Continued 278 Troubleshooting resources

279 Error Code Error Message Resolution Code Resolution Message OIUERRDST0025 Host with same name and a different WWN already exists at destination. OIURSLDST0025 Cannot proceed until tried again with a different host name. OIUERRDST0026 Duplicate volume name(s). Cannot proceed, try again after modifying volume name(s) as per 3PAR naming standards. OIURSLDST0026 Please ensure volume name(s) are as per 3PAR naming standards. OIUERRDST0027 The compression feature is not applicable for full provisioned virtual volume. OIURSLDST0027 Please use compression on the thin/dedupe provisioned virtual volume. OIUERRDST0028 Selected CPG/ provision does not support compression. OIURSLDST0028 Please select supported CPG/ provision for compression & try again. OIUERRDST0029 The destination storage system doesn't support compression feature. OIURSLDST0029 Please use compression supported array. OIUERRDST0030 Import Failed. OIURSLDST0030 Please do the rescan on host and trigger import again. OIUERRDST0031 Unable to connect to the destination storage system(s). OIURSLDST0031 For details please refer log files. OIUERRDST0032 The destination provisioning type is "Full" for implicitly selected volumes. OIURSLDST0032 Compression works only on thin/ dedup provisioned volumes. So, use thin/dedup for destination provisioning type or remove the compression option. OIUERRDST0033 Unable to validate destination array OS. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0034 Unable to reach the destination 3PAR array. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. Table Continued Troubleshooting resources 279

280 Error Code Error Message Resolution Code Resolution Message OIUERRDST0035 Failed to retrieve the details of peer port connection for the destination array. OIURSLDST0041 Ensure that the peer ports are in good state and the destination array is properly connected, and retry the operation. OIUERRDST0036 Failed to retrieve the volumes from destination system. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0037 Failed to get the host information from destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0038 Failed to retrieve the CPGs from destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0039 Failed to retrieve the Host sets from destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0040 Failed to create host at the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0041 Failed to create host set at the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0042 Preparation failed during admit phase as the destination 3PAR is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0043 Control port login failed. OIURSLDST0042 OIUERRDST0044 Abort migration failed as the destination 3PAR is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. Table Continued 280 Troubleshooting resources

281 Error Code Error Message Resolution Code Resolution Message OIUERRDST0045 Unable to get the migrating volume information during import phase. OIURSLDST0033 Make sure target ports and LUNs are discovered at destination array. Clean up any failed migrations, ensure premigration checks are taken care and retry the migration. OIUERRDST0046 Unable to start the data transfer during import phase. OIURSLDST0037 Refer the logs for the actual problem reported, rectify the problem and then retry the migration. If the issue persists, contact HPE support. OIUERRDST0047 Unable to retrieve host presentation from the destination array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0048 Unable to get virtual peer port WWNs from the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0049 Unable to get domain information from the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0051 Failed to validate migration support for windows cluster. OIURSLDST0038 Ensure that all prerequisites for windows cluster migration are met. Clean up the migration and try again. OIUERRDST0052 Unable to validate compression support on target 3PAR. OIURSLDST0043 Ensure that all prerequisites are met. Clean up the migration and try again. OIUERRDST0053 Failed to load Peer Motion Manager while getting volume by volume id from the destination system. OIURSLDST0037 Refer the logs for further details of the problem reported, rectify the problem and then retry the migration. If the issue persists, contact HPE support. OIUERRDST0054 Pre migration validation failed as destination system is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. Table Continued Troubleshooting resources 281

282 Error Code Error Message Resolution Code Resolution Message OIUERRDST0055 Unable to get the host-set name information from the destination 3PAR array. OIURSLDST0038 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0056 Failed to create the volume set at the destination system as the system is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0058 Removing volume set from destination system failed as the system is not in connected state. OIURSLDST0039 Ensure that the destination array is properly connected. Remove and re-add the destination system and retry the operation. OIUERRDST0059 Failed to create volume set during admit. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0060 Unable to add the volumes to volumeset in the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0061 Failed to present volume to host or host set during admit as the destination system is not reachable. OIURSLDST0037 Refer the logs for further details of the problem reported, rectify the problem and then retry the migration. If the issue persists, contact HPE support. OIUERRDST0062 Failed to validate consistency group or priority enforcement during admit phase. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0063 Failed to retrieve the volume set members from the destination system. OIURSLDST0037 Refer the logs for further details of the problem reported, rectify the problem and then retry the migration. If the issue persists, contact HPE support. OIUERRDST0064 Unable to validate if the system is in federation as array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. Table Continued 282 Troubleshooting resources

283 Error Code Error Message Resolution Code Resolution Message OIUERRDST0065 Unable to add the volumes to volumeset in the destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0066 Failed to retrieve the host-set members from the destination 3PAR array. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0067 Failed to start data transfer as the destination system is not in connected state. OIURSLDST0039 Ensure that the destination array is properly connected. Remove and re-add the destination system and retry the operation. OIUERRDST0068 Unable to validate destination array OS as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0069 Source is not visible to destination storage system. OIURSLDST0035 Ensure showtarget is showing the list of target ports. OIUERRDST0070 Failed to retrieve the volumes from target array as array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0071 Failed to retrieve the host from target array as array is not reachable. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0072 Failed to retrieve the CPGs from destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0073 Failed to retrieve the Host sets from destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. Table Continued Troubleshooting resources 283

284 Error Code Error Message Resolution Code Resolution Message OIUERRDST0074 Failed to create host at the destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0075 Failed to create host set at the destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0076 Admit has failed. The error could be because unable to present volume to host on destination 3PAR or any other configuration issue. OIURSLDST0033 Clean up any failed migrations, ensure pre-migration checks are taken care and retry the migration. OIUERRDST0078 Unable to retrieve host presentation from the destination array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0079 Unable to get virtual peer port WWNs from the destination 3PAR array as the array is not reachable. OIURSLDST0034 Ensure that the destination array is properly connected and retry the operation. OIUERRDST0083 Failed to present volume set to host or host set during admit as the destination array is not reachable. OIURSLDST0037 Refer the logs for further details of the problem reported, rectify the problem and then retry the migration. If the issue persists, contact HPE support. OIUERRDST0084 Admit has failed. The error could be due to one or more of the following reasons: more than two peer ports configured, target port not visible, improper peer port zoning, target 3PAR running with OS lower than MU5 and/or source volume(s) have SCSI reservations. OIURSLDST0040 Refer pre-migration checklist to ensure that all the configurations are as per the recommendation such as only two partner peer ports configured in the destination 3PAR array, showtarget lists the visible target WWNs, all the peer port HBA initiators are in supported failover mode, upgrade 3PAR OS to MU5 or later, remove SCSI reservation on source volume etc. Table Continued 284 Troubleshooting resources

285 Error Code Error Message Resolution Code Resolution Message OIUERRDST0085 Admit has failed. The error could be due to one or more of the following reasons: peer ports are not in good state, source LUN(s) have SCSI reservations and/or cluster shut down may not have happened. OIURSLDST0085 Rectify the problem and retry the migration. OIUERRDST0086 Admit has failed. Volume status is degraded. The error could be due to one of the peer ports not in good state or source LUN(s) have SCSI reservations. OIURSLDST0086 Rectify the problem and retry the migration. OIUERRDST0087 Failed to get remote port WWNs from NS. There could be more than two peer ports configured or peer ports not in good state. OIURSLDST0087 Rectify the problem and retry the migration. OIUERRDST0088 Unable to retrieve the host details for the host: OIURSLDST0033 Clean-up the migration and try again. OIUERRDSTH0001 Failed to create destination host. OIURSLDSTH0001 Please try migration again. OIUERRMC1000 Source array UID not provided. OIURSLMC1000 Ensure that source array UID is provided as input. OIUERRMC1001 Command cannot be processed. OIURSLMC1001 Either the srchost or srcvolmap or volmapfile should be provided. OIUERRMC1003 Source volume details are supplied with wrong syntax. OIURSLMC1003 Ensure that entries are provided in accordance with the user guide. OIUERRMC1004 Invalid migration type. OIURSLMC1004 Provide migtype as online or MDM or offline. OIUERRMC1005 Invalid clunode parameter. OIURSLMC1005 Provide a number greater than 1. Table Continued Troubleshooting resources 285

286 Error Code Error Message Resolution Code Resolution Message OIUERRMC1006 Clunodes not provided though migration type is AO. OIURSLMC1006 With AO migration type clunodes parameter should be more than or equal to 1. See support matrix for max value. OIUERRMC1007 Number of entries in volume map is invalid. OIURSLMC1007 Number of volume map entries should be between 1 and 3. OIUERRMC1008 The destcpg and/or destprov are not provided. OIURSLMC1008 Default values required at least for one entry of volume map. OIUERRMC1009 Value for destcpg or destprov is not provided. OIURSLMC1009 Either provide the values through volume map or provide default values via destprov or destcpg options. OIUERRMC10010 Empty volume identifier specified. OIURSLMC10010 Provide a valid identifier for source volume. OIUERRMC10011 destprov is not specified. OIURSLMC10011 Specify destprov as 'thin' or 'full'. OIUERRMC10012 Invalid value provided for destprov. OIURSLMC10012 destprov should be 'thin' or 'full'. OIUERRMC10013 destcpg is not specified. OIURSLMC10013 destcpg should be provided as cpg information is missing for at least one entry in volume map. OIUERRMC10014 Invalid migration type specified. OIURSLMC10014 If at least one of the volumes being migrated is provisioned to a host then migration type can be either MDM or online. OIUERRMC10015 destprov/destcpg is not specified with source host. OIURSLMC10015 If source host is mentioned, both destprov and destcpg should be specified. OIUERRMC10016 One or more of the volumes, selected explicitly or implicitly, are ineligible for migration because they are in remote copy relationship or they are a snapshots or part of a snapshot pool. OIURSLMC10016 Remove the remote copy relationship and/or remove any snapshot or snapshot pool volumes from the volumes selected for migration. Table Continued 286 Troubleshooting resources

287 Error Code Error Message Resolution Code Resolution Message OIUERRMC10017 One or more of the volumes, selected explicitly or implicitly, are ineligible for migration due to an iscsi provisioning. OIURSLMC10017 Remove the iscsi presentations for the volume(s) selected for migration. OIUERRMC10018 One or more of the volumes, selected explicitly or implicitly, are reserved, therefore ineligible for migration. OIURSLMC10018 Remove the volume(s) selected for migration from reserved group(s). OIUERRMC10019 Duplicate migrations are not allowed, for Source Array of type <>. OIURSLMC10019 Create another migration, when this Array has finished migration and has been removed from migration list. OIUERRMC10020 One or more volumes are selected implicitly. -destprov and - destcpg options need to be specified. OIURSLMC10020 Please specify both -destprov and -destcpg values. OIUERRMC10021 No volumes found. OIURSLMC10021 Please make sure that at least one volume is selected either explicitly or implicitly. OIUERRMC10022 Create Migration failed while parsing cg volmap input. OIURSLMC10022 Please pass proper input as specified in command help. OIUERRMC10023 Invalid input for autoresolve option. OIURSLMC10023 Use either true or false. OIUERRMC10024 Create Migration failed while parsing priority volmap input. OIURSLMC10024 Please pass proper input as specified in command help. OIUERRMC10025 Create Migration failed while parsing srchost input as it contains invalid host or host set name. OIURSLMC10025 Please provide valid input. OIUERRMC10026 Combination of host and hostset migration is not supported. OIURSLMC10026 Please specify either host or host set alone. Table Continued Troubleshooting resources 287

288 Error Code Error Message Resolution Code Resolution Message OIUERRMC10027 srchost input contains duplicate host or host set name. OIURSLMC10027 Please provide valid input. OIUERRMC10028 One or more of the volumes, selected explicitly or implicitly, has scsi3 persistent reservation disabled, therefore ineligible for migration. OIURSLMC10028 Enable scsi3 persistent reservation for selected volume(s). OIUERRMC10029 Failed in Admit stage. Control port login failed. OIURSLMC10029 OIUERRMC10030 One or more of the volumes, selected explicitly or implicitly, is a snapshot (virtual copy of a volume) and therefore ineligible for migration. OIURSLMC10030 Make sure the volumes selected for migration are not snapshots or part of any snapshot pool. OIUERRMS10001 Destination array not found. OIURSLMS10001 Ensure that destination array specified is already added and that the correct array ID is specified. OIUERRMS10002 Source array not found. OIURSLMS10002 Ensure that the source array specified is already added to OIU and that valid UID is used. OIUERRMS10003 Source host not found. OIURSLMS10003 Ensure that the unique name specified for the source host is valid. OIUERRMS10004 Could not fetch source volume details. OIURSLMS10004 Ensure that the volume you are trying to migrate is present in the source array and is not a mainframe/snapshot type. OIUERRMS10005 One or more of the volume(s) selected for migration explicitly or implicitly is already under migration process. OIURSLMS10005 Ensure that the volume specified and the corresponding linked volumes are not under migration. OIUERRMS10006 Failed to start Data transfer. OIURSLMS10006 Ensure that all prerequisites are met. Table Continued 288 Troubleshooting resources

Error Code Error Message Resolution Code Resolution Message OIUERRMS10018 Implicit hosts have multiple operating systems. OIURSLMS10018 Make sure implicit hosts have same operating system. OIUERRMS10019 One of host to be migrated doesn't have volumes exported. OIURSLMS10019 Make sure all hosts to be migrated have volumes exported. OIUERRMS10020 One of the hostset to be migrated does not contain hosts. OIURSLMS10020 Please make sure hosts are present in all hostset to be migrated. OIUERRMS10021 VVset and HostSet already exist at the destination with a different group of hosts. OIURSLMS10021 Please provide a different VVset name and HostSet name for the migration. OIUERRMS10022 The existing VVset and HostSet presentation are different than the input provided. OIURSLMS10022 Please provide the valid input for VVset and HostSet. OIUERRMS10023 Source hostset not found. OIURSLMS10023 Ensure that the unique name specified for the source hostset is valid. OIUERRMS10024 Migrating object domain is not same as provided domain. OIURSLMS10024 Please make sure that migration is triggered with same domain. OIUERRMSA0075 A host with one of destination initiator exists. OIURSLMSA0075 Remove destination initiator from existing host. OIUERRMSA0076 There are multiple hosts at Source MSA with same name. OIURSLMSA0076 Rename one host and try again. OIUERRMSA0077 Host not found for source. OIURSLMSA0077 Check if given host ID exists. OIUERRMSA0078 Failed to remove source. OIURSLMSA0078 Please check if MSA Source exists and is valid. Table Continued

OIUERRMS10018: Implicit hosts have multiple operating systems. Resolution (OIURSLMS10018): Make sure implicit hosts have the same operating system.
OIUERRMS10019: One of the hosts to be migrated does not have volumes exported. Resolution (OIURSLMS10019): Make sure all hosts to be migrated have volumes exported.
OIUERRMS10020: One of the host sets to be migrated does not contain hosts. Resolution (OIURSLMS10020): Please make sure hosts are present in all host sets to be migrated.
OIUERRMS10021: VVset and HostSet already exist at the destination with a different group of hosts. Resolution (OIURSLMS10021): Please provide a different VVset name and HostSet name for the migration.
OIUERRMS10022: The existing VVset and HostSet presentation is different from the input provided. Resolution (OIURSLMS10022): Please provide valid input for the VVset and HostSet.
OIUERRMS10023: Source host set not found. Resolution (OIURSLMS10023): Ensure that the unique name specified for the source host set is valid.
OIUERRMS10024: The migrating object's domain is not the same as the provided domain. Resolution (OIURSLMS10024): Please make sure that the migration is triggered with the same domain.
OIUERRMSA0075: A host with one of the destination initiators already exists. Resolution (OIURSLMSA0075): Remove the destination initiator from the existing host.
OIUERRMSA0076: There are multiple hosts at the source MSA with the same name. Resolution (OIURSLMSA0076): Rename one host and try again.
OIUERRMSA0077: Host not found for the source. Resolution (OIURSLMSA0077): Check whether the given host ID exists.
OIUERRMSA0078: Failed to remove the source. Resolution (OIURSLMSA0078): Please check that the MSA source exists and is valid.

OIUERRMSA0079: MSA source array ID not found. Resolution (OIURSLMSA0079): Enter the correct source array ID.
OIUERRMSA0080: Connection with the given MSA array failed. Resolution (OIURSLMSA0080): Please check that the correct IP address and/or credentials are provided.
OIUERRMSA0081: MSA is already connected. Resolution (OIURSLMSA0081): Disconnect the array and then try the connection again.
OIUERRMSA0082: A maximum of 255 volumes can be exported from the MSA to the 3PAR at once. Resolution (OIURSLMSA0082): Wait for ongoing migrations to complete, then try again.
OIUERRMSA0083: The given MSA array model is currently unsupported. Resolution (OIURSLMSA0083): Currently only MSA 1040, MSA 2040, and P2000 G3 arrays are supported.
OIUERRPER0000: No matching persona found for the provided persona value. Resolution (OIURSLPER0000): Please select another supported persona.
OIUERRPER0001: Persona or cluster cannot be empty for an online or MDM migration from a non-3PAR source array. Resolution (OIURSLPER0001): Please select a supported persona or cluster. Use the showpersona or showcluster command for the list of supported personas or clusters.
OIUERRPREP1001: Unable to fetch volume info. Resolution (OIURSLPREP1001): none listed.
OIUERRPREP1002: Ineligible volume. Resolution (OIURSLPREP1002): none listed.
OIUERRPREP1003: Unsupported operating system for host. Resolution (OIURSLPREP1003): none listed.
OIUERRPREP1004: Implicit hosts are in different domains. Resolution (OIURSLPREP1004): none listed.
OIUERRPREP1005: Volume bigger than supported LUN size. Resolution (OIURSLPREP1005): none listed.
OIUERRPREP1006: Migration type is not supported. Resolution (OIURSLPREP1006): none listed.
OIUERRPREP1007: Unable to verify and create the destination as a host at the source. No resolution listed.

OIUERRPREP1008: Either there is only one port WWN with which the host port is created, or there are more than 2 peer ports. Resolution (OIURSLPREP1008): none listed.
OIUERRPREP1009: Cluster nodes are more than the maximum supported value for the almost online migration type. Resolution (OIURSLPREP1009): none listed.
OIUERRPREP1010: A host port that can see the peer port is not available. Resolution (OIURSLPREP1010): none listed.
OIUERRPREP1011: Maximum number of exports reached. Resolution (OIURSLPREP1011): none listed.
OIUERRPREP1012: Failed to get volumes on the destination array. Resolution (OIURSLPREP1012): none listed.
OIUERRPREP1013: Name or WWID of volume already exists. Resolution (OIURSLPREP1013): none listed.
OIUERRPREP1014: Error creating host on destination. Resolution (OIURSLPREP1014): See The createmigration command returns error OIUERRPREP1014.
OIUERRPREP1015: Error creating host sets on destination. Resolution (OIURSLPREP1015): none listed.
OIUERRPREP1016: Destination array is not thin licensed. Resolution (OIURSLPREP1016): none listed.
OIUERRPREP1017: Getting CPGs from the destination failed. Resolution (OIURSLPREP1017): none listed.
OIUERRPREP1018: CPG not found at the destination in the specified domain.
OIUERRPREP1018: CPG at the destination does not support dedupe. Resolution (OIURSLPREP1018): none listed.
OIUERRPREP1019: Failed to present a volume to the host representing the destination at the source. Resolution (OIURSLPREP1019): none listed.

OIUERRPREP1020: LUN number conflict exists for LUN#. Resolution (OIURSLPREP1020): none listed.
OIUERRPREP1021: Trying to migrate hosts/volumes from a non-domain to a domain. Resolution (OIURSLPREP1021): none listed.
OIUERRPREP1021: Unsupported ALUA for a vvset migration. Please try the migration with ALUA-based hosts. Resolution (OIURSLPREP1021): none listed.
OIUERRPREP1022: Failed to present volumes to the host representing the destination at the source. Resolution (OIURSLPREP1022): none listed.
OIUERRPREP1023: Migration with mixmode volumes (presented and unpresented volumes) is not supported. Resolution (OIURSLPREP1023): none listed.
OIUERRPREP1024: Migration type is not supported. An OFFLINE migration was initiated where the volumes have a presentation.
OIUERRPREP1025: Migration type is not supported. An ONLINE/MDM migration was initiated where the volumes have no presentation.
OIUERRPREP1026: Volume is ineligible for migration because its size is less than the minimum size limit of 256 MB or more than the maximum size of 16 TB.
OIUERRPREP1027: LUN number conflict exists for LUN#.
OIUERRPREP1028: No error message or resolution listed.

OIUERRPWF0000: Unable to connect to destination arrays. Resolution (OIURSLPWF0000): Please check the connection, restart the service, or contact Hewlett Packard Enterprise support.
OIUERRRM00000: The force option is not applicable when preparation is complete or preparation has failed. Resolution (OIUSLRM00000): none listed.
OIUERRSEC0000: Authorization failed. Resolution (OIURSLSEC0000): This user is not part of the supported user groups.
OIUERRSEC0001: Invalid credentials. Resolution (OIURSLSEC0001): Please try again with the correct username and password.
OIUERRSRC0001: Unable to connect to the array. Resolution (OIURSLSRC0001): Contact Hewlett Packard Enterprise support.
OIUERRSRC0002: Unable to connect to the SVP of the array. Resolution (OIURSLSRC0002): Make sure that the SVP is up and running.
OIUERRSRC0003: Unable to find the array specified. Resolution (OIURSLSRC0003): Make sure the array is already added in CV AE. Otherwise, contact support.
OIUERRSRC0004: The array you are trying to add is not supported. Resolution (OIURSLSRC0004): Check the support matrix and make sure that only supported arrays with supported firmware versions are added.
OIUERRSRC0005: An unexpected error occurred when communicating with the array. Resolution (OIURSLSRC0005): Contact Hewlett Packard Enterprise support. You may check whether the storage array used is operational.
OIUERRSRC0006: The target port chosen for migration has already reached the maximum permissible host group objects. Resolution (OIURSLSRC0006): Delete the host group objects that are not required, or connect the peer port to another target port.
OIUERRSRC0007: There is no free LUN available in the selected host group object. Resolution (OIURSLSRC0007): Contact Hewlett Packard Enterprise support.
OIUERRSRC0008: Command execution on the source storage array timed out. Resolution (OIURSLSRC0008): Make sure the array is not busy performing other management operations. If required, increase the timeout value in the config file.

OIUERRSRC0009: Unable to find the LUN to be unpresented. Resolution (OIURSLSRC0009): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0010: Unable to find the peer host port for host creation. Resolution (OIURSLSRC0010): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0011: An internal error occurred when processing the command. Resolution (OIURSLSRC0011): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0012: This storage system cannot be removed. It is involved in an active migration. Resolution (OIURSLSRC0012): Migrations have to be in the complete/abort state for the storage system to be removed. NOTE: The removemigration command was issued while the specified migration is still in progress. Wait for the migration to complete before removing it.
OIUERRSRC0013: The object is not available in the array. Resolution (OIURSLSRC0013): Retry the operation after some time.
OIUERRSRC0014: A migration is already going on between the selected source and destination array. Resolution (OIURSLSRC0014): Please wait until the ongoing migration is complete, or delete the migration and then retry the operation.
OIUERRSRC0015: The peer port is already assigned to another host object. Resolution (OIURSLSRC0015): Please delete the host(s) that have the peer ports assigned to them to proceed.
OIUERRSRC0016: One or more hosts associated with the migration do not support singlevv migration. Resolution (OIURSLSRC0016): Please ensure that the hosts associated with the migration have persona 2, 11, 13, or 15.
OIUERRSRC0017: SingleVV migration is not supported for the selected source array. Resolution (OIURSLSRC0017): Please ensure that the singlevv option is only used when the source array is of type 3PAR.

OIUERRSRC0018: The source/destination HPE 3PAR StoreServ Storage firmware version does not support singlevv migration. Resolution (OIURSLSRC0018): Please ensure that both the source and destination arrays are running the firmware version necessary for singlevv migration.
OIUERRSRC0019: The number of nodes migrated exceeds the maximum limit. Resolution (OIURSLSRC0019): Ensure that the number of nodes migrated does not exceed 4.
OIUERRSRC0020: Unsupported EMC storage system version. Resolution (OIURSLSRC0020): Check the support matrix and upgrade the EMC system. NOTE: The EMC Storage source system check has failed. The EMC Storage source system must be running at a supported version.
OIUERRSRC0021: There is an active migration on this source, so this source cannot be removed. Resolution (OIURSLSRC0021): Migrations have to be complete/aborted for the storage system source to be removed. NOTE: The removemigration command was issued while the specified migration is still in progress. Wait for the migration to complete before removing it.
OIUERRSRC0022: Cannot find the specified source array. Resolution (OIURSLSRC0022): An array has to be added first in order to be removed. NOTE: The removesource command was issued with a UID that does not exist in the 3PAR Online Import Utility database. Issue the 3PAR Online Import Utility showsource command to determine which storage systems are currently known.

OIUERRSRC0023: You have selected a migration with an unsupported storage group configuration. Resolution (OIURSLSRC0023): Verify that your storage group configuration matches one of the supported configurations. NOTE: The LUN associated with the 3PAR Online Import Utility createmigration command that was issued is not in a supported storage group configuration. Review the supported storage group configurations in Migrations supported by the 3PAR Online Import Utility on page 152 and make appropriate changes before creating the migration.
OIUERRSRC0024: The EMC storage system is already added. Resolution (OIURSLSRC0024): If necessary, remove the array and add it again.
OIUERRSRC0025: Unable to unpresent the volume from the host in the EMC array. Resolution (OIURSLSRC0025): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0026: Cannot find the specified port group. Resolution (OIURSLSRC0026): A port group has to be created first in order to create a masking view.
OIUERRSRC0027: Cannot find the specified initiator group. Resolution (OIURSLSRC0027): An initiator group has to be created first in order to create a masking view.
OIUERRSRC0028: Cannot find the peer port host alias. Resolution (OIURSLSRC0028): An alias has to be set on the peer port first in order to present volumes to the host.
OIUERRSRC0029: Cannot find the target port host alias. Resolution (OIURSLSRC0029): An alias has to be set on the target port first in order to present volumes to the host.
OIUERRSRC0030: Host cannot access storage. Resolution (OIURSLSRC0030): Check zoning; the 3PAR peer port cannot access the EMC director port.
OIUERRSRC0031: A host with one of the destination initiators exists. Resolution (OIURSLSRC0031): Remove the destination initiator from the existing host.

OIUERRSRC0051: Failed to create the destination host. Resolution (OIURSLSRC0051): Please try the migration again.
OIUERRSRC0052: Unable to present the volume to the host in the XIV array. Resolution (OIURSLSRC0052): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0053: Unable to unpresent the volume from the host in the XIV array. Resolution (OIURSLSRC0053): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0054: The array you are trying to add is not supported. Resolution (OIURSLSRC0054): Check the support matrix and make sure that only supported arrays with supported firmware versions are added.
OIUERRSRC0055: Cannot find the specified IBM XIV source array. Resolution (OIURSLSRC0055): An array has to be added first in order to be removed.
OIUERRSRC0056: This command is currently not supported for this array type. Resolution (OIURSLSRC0056): Please verify the input parameters and the array type.
OIUERRSRC0057: Unable to find the array with the specified serial number. Resolution (OIURSLSRC0057): Please ensure that the array UID has been entered correctly.
OIUERRSRC0058: Unable to find the array with the specified IP address. Resolution (OIURSLSRC0058): Please ensure that the array IP address has been entered correctly.
OIUERRSRC0059: Unable to find the array with the specified serial number. Resolution (OIURSLSRC0059): Please ensure that the UID is specified with the addsource command.
OIUERRSRC0060: The XIV storage system is already added. Resolution (OIURSLSRC0060): If necessary, remove the array and add it again.
OIUERRSRC0073: Inconsistent mapping of the volume with initiators in the host group. Resolution (OIURSLSRC0073): Map the volume to all initiators in the host group.
OIUERRSRC0074: The P2000 supports creation of a host with only one initiator. Resolution (OIURSLSRC0074): Create multiple hosts with one initiator each.
OIUERRSRC0101: Unable to present the volume to the host in the 3PAR array. Resolution (OIURSLSRC0101): Contact Hewlett Packard Enterprise support if the problem persists.

OIUERRSRC0102: Unable to unpresent the volume from the host in the 3PAR array. Resolution (OIURSLSRC0102): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0111: Unable to present the volume to the host in the EMC array. Resolution (OIURSLSRC0111): Contact Hewlett Packard Enterprise support if the problem persists. NOTE: The 3PAR Online Import Utility is unable to add the HPE 3PAR peer port host to the EMC Storage source system group. Check the 3PAR Online Import Utility log file for a specific description of the problem from the SMI-S Provider.
OIUERRSRC0112: Unable to unpresent the volume from the host in the EMC array. Resolution (OIURSLSRC0112): Contact Hewlett Packard Enterprise support if the problem persists. NOTE: The 3PAR Online Import Utility is unable to remove the HPE 3PAR peer port host from the source storage system group. Check the 3PAR Online Import Utility log file for a specific description of the problem from the SMI-S Provider.
OIUERRSRC0120: The source/destination HPE 3PAR StoreServ Storage firmware version does not support online Windows cluster migration. Resolution (OIURSLSRC0120): Please ensure that both the source and destination arrays are running the firmware version necessary for online Windows cluster migration.
OIUERRSRC0121: The destination HPE 3PAR StoreServ Storage model does not support online Windows cluster migration. Resolution (OIURSLSRC0121): Please ensure that only supported array models are used for online Windows cluster migration.

OIUERRSRC0122: The number of NPIV ports is less than the required count. Resolution (OIURSLSRC0122): Please ensure that each peer port is configured with twice as many NPIV ports as the number of nodes being migrated.
OIUERRSRC0123: The source HPE 3PAR StoreServ Storage array is running a higher firmware version. Resolution (OIURSLSRC0123): Please ensure that the source HPE 3PAR StoreServ Storage system is running either the same firmware version as or a lower firmware version than the destination HPE 3PAR StoreServ Storage system.
OIUERRSRC0124: No such source storage system found. Resolution (OIURSLSRC0124): Please verify whether the source is present.
OIUERRSRC0125: Failed to retrieve the volume information from the source 3PAR array. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0126: Failed to retrieve hosts from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0127: Failed to retrieve all the LUNs from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0128: Failed to get all the presented hosts for the volumes in the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0129: Failed to create the peer host in the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0130: Unable to verify the peer host existence in the source 3PAR array. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0131: Failed to present the volumes to the peer host at the source 3PAR array as the array is not reachable. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.

OIUERRSRC0132: Failed to unpresent the volumes from the host in the source 3PAR array. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0133: Failed to retrieve host information from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0134: Failed to retrieve volumes from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0135: Unable to retrieve presentations from the source array. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0136: Failed to retrieve members of the host set from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0137: Failed to get members of the volume set from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0138: Failed to retrieve the volume information from the source 3PAR array as the array is not reachable. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0139: Failed to get volume set presentations from the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0140: Failed to unpresent volumes from the peer host in the source system as the array is not reachable. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.

OIUERRSRC0141: Failed to unpresent volumes from the peer host in the source system as the array is not reachable. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0142: Failed to remove volumes from the volume set in the source system. Resolution (OIURSLDST0037): Refer to the logs for further details of the problem reported, rectify the problem, and then retry the migration. If the issue persists, contact HPE support.
OIUERRSRC0143: Failed to retrieve hosts from the source system as the array is not reachable. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0144: Failed to get presentations for the volumes in the source system. Resolution (OIURSLDST0033): Clean up any failed migrations, ensure the pre-migration checks are taken care of, and retry the migration.
OIUERRSRC0145: Failed to get all the presented hosts for the volumes in the source system as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0147: Unable to verify the peer host existence in the source 3PAR array as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0148: Failed to retrieve host information from the source system as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0149: Failed to retrieve volumes from the source system as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0150: Unable to retrieve presentations from the source array as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.

OIUERRSRC0151: Failed to get presentations for the volumes in the source system as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0152: Failed to retrieve the source array information as the array is not reachable. Resolution (OIURSLSRC0125): Ensure that the source array is properly connected and retry the operation.
OIUERRSRC0212: Unable to find the array with the specified serial number. Resolution (OIURSLSRC0212): Please ensure that the UID is specified with the addsource command.
OIUERRSRC0213: Unable to find the array with the specified serial number. Resolution (OIURSLSRC0213): Please ensure that the array type has been entered correctly.
OIUERRSRC0214: Unable to present the volume to the host in the source array. Resolution (OIURSLSRC0214): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0215: Unable to unpresent the volume from the host in the source array. Resolution (OIURSLSRC0215): Contact Hewlett Packard Enterprise support if the problem persists.
OIUERRSRC0217: The HDS storage system is already added. Resolution (OIURSLSRC0217): If necessary, remove the array and add it again.
OIUERRSSM0000: The destination array firmware version is earlier than 3.2.2. Subset migration is supported only with firmware version 3.2.2 or later. Resolution (OIURSLSSM0000): Please upgrade the firmware version and try again.
OIUERRSST0001: Unable to connect to the source storage system. Resolution (OIURSLDST0001): Ensure that a valid IP address, credentials, and certificate are provided.
OIUERRUHP0000: Failed to add the port WWNs for the host. Resolution (OIURSLUHP0000): none listed.
OIUERRUHP0001: Failed to remove the peer host from the destination. Resolution (OIURSLUHP0001): none listed.
OIUERRUHP0002: Failed to update the host port WWNs. Resolution (OIURSLUHP0002): none listed.

OIUERRVER0000: Unable to load the deploy properties. No resolution listed.
OIUERRVVS0000: Preparation failed because a virtual volume set with the given name already exists. Resolution (OIURSLVVS0000): Please use a different virtual volume set name.
OIUERRVVS0001: Preparation failed because a virtual volume set with the given name does not exist. Resolution (OIURSLVVS0001): Please make sure that the volume set to be migrated is present in the array.
OIUERRVVS0002: Preparation failed because no volumes were found in the volume set. Resolution (OIURSLVVS0002): Please add volumes to the volume set and try the migration again.
OIUERRVVS0003: Migration failed. Resolution (OIURSLVVS0003): Please provide either a vvset or a volume as input.
OIUERRVVS0004: Migration of volumes is not supported when the vvset is exported to a host/host set. Resolution (OIURSLVVS0004): none listed.
OIUERRVVS0005: Migration cannot proceed. Resolution (OIURSLVVS0005): All the volumes of the vvset are found in another vvset which has a presentation.
OIUERRVVS0006: Migration cannot proceed. Resolution (OIURSLVVS0006): One or more VVs are part of another vvset.
OIUERRVVS0008: Migration cannot proceed. Resolution (OIURSLVVS0008): All the volumes of the vvset being migrated belong to another vvset.

3PAR Online Import Utility troubleshooting
Understanding the migration process and the order of the procedures is helpful when troubleshooting. This section provides an overview of the 3PAR Online Import Utility CLI commands used and what happens when you use them, from setup to completion of the migration.
The following diagnostic tools can be used for troubleshooting:
For the 3PAR StoreServ Storage: the 3PAR Online Import Utility management interface
For EMC Storage: Unisphere
For HDS Storage: the HiCommand Suite
More information: 3PAR Online Import Utility CLI commands
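For orientation, the command flow from setup through cleanup typically has the shape sketched below. The command names and their order come from this guide; the argument placeholders and the migration ID are hypothetical, and the exact options for addsource, adddestination, createmigration, and startmigration depend on the source array type (see the 3PAR Online Import Utility CLI command descriptions that follow).
Example (illustrative only):
addsource ...                       (register the source storage system)
adddestination ...                  (register the destination HPE 3PAR StoreServ Storage system)
showsource
showdestination
showconnection                      (confirm source-to-peer-port connectivity)
createmigration ...                 (define the hosts/volumes to migrate; volumes are admitted on the destination)
showmigration migrationid 12345     (12345 is a placeholder migration ID)
startmigration ...                  (start the data transfer/import)
showmigrationdetails migrationid 12345
removemigration migrationid 12345   (remove the migration record once it is complete or aborted)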

3PAR Online Import Utility CLI commands

addsource
No change on the source or destination storage system. The source storage system information is added into the 3PAR Online Import Utility database.

adddestination
No change on the source or destination storage system. The destination storage system information is added into the 3PAR Online Import Utility database.

createmigration
For an EMC Storage source system, OIU creates a new storage group and assigns peer ports for the storage group. The newly created storage group contains the migrated LUNs as well. This new storage group is reused for subsequent migrations.
For an HDS Storage system: Two host groups containing both HPE 3PAR peer ports are created on channel adapters, with names in the following format: HCMDxxxx. The migrating LDEV(s) get exported to both newly created host groups representing the HPE 3PAR peer ports.
For an IBM Storage system: Two host groups for IBM XIV Gen3 containing HPE 3PAR peer ports are created, with names in the following format: hostxxxx. One host group for IBM XIV Gen2 containing both HPE 3PAR peer ports is created, with a name in the following format: HPE 3PAR peer port WWN.
On the destination storage system: One or more hosts are created. LUNs are admitted. LUNs are presented to the hosts (online migration).

startmigration
On an EMC Storage source system, hosts are removed from the storage group.
On an HDS Storage source system, LUNs are unpresented from the migrating host group.
For an IBM XIV source system, after the migration has completed, LUNs are unpresented from the host group representing the HPE 3PAR peer ports on the source storage system.
On the destination storage system: LUNs are presented to the hosts (MDM). Import is started.
Once the migration is completed:
For an EMC Storage source system, the HPE 3PAR peer ports are removed from the source storage system group.
For an HDS Storage source system, after the migration has completed, LUNs are unpresented from the host group representing the HPE 3PAR peer ports on the source storage system.

Utility logs

3PAR Peer Motion Utility
When an issue occurs during data migration, log content can be useful in identifying and solving the problem. The Peer Motion Utility service must be stopped before collecting the logs or changing the logging level, and restarted afterward. To do this, go to the Windows Computer > Manage screen, right-click the name of the service, and select Stop/Start.
Figure 98: Stopping the 3PAR Peer Motion Utility

3PAR Peer Motion Utility log files
3PAR Peer Motion Utility logs are located in the following location:
<Install drive/folder>\Hewlett Packard Enterprise\hpe3parpmu\OIUTools\tomcat\32-bit\apache-tomcat\logs
3PAR Peer Motion Utility configuration data is located in the following location:
<Install drive/folder>\Hewlett Packard Enterprise\hpe3parpmu\OIUData\data
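When sending logs to Hewlett Packard Enterprise support, it can be convenient to capture both folders in a single archive while the service is stopped. The following PowerShell sketch is not from this guide; it assumes PowerShell 5.0 or later and a default installation under C:\Program Files (x86). Adjust the install path and output location to match your environment.
Example (illustrative only):
Compress-Archive -Path "C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3parpmu\OIUTools\tomcat\32-bit\apache-tomcat\logs", "C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3parpmu\OIUData\data" -DestinationPath "C:\temp\pmu-support-logs.zip"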

Increasing the 3PAR Peer Motion Utility server logging level
Perform the following to set the 3PAR Peer Motion Utility log to the most verbose level:
Procedure
1. Stop the 3PAR Peer Motion Utility service.
2. Edit the following file:
<Install drive/folder>\Hewlett Packard Enterprise\hpe3parpmu\OIUTools\tomcat\32-bit\apache-tomcat\webapps\oiuweb\web-inf\classes\applicationconfig.properties
3. Change log4j.rootcategory=info, DebugLogAppender to log4j.rootcategory=all, DebugLogAppender.
4. Restart the 3PAR Peer Motion Utility service so that the logging changes are picked up.
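The change in step 3 amounts to editing a single property in applicationconfig.properties; only that line is shown below, and the rest of the file is left untouched. The same edit applies to the 3PAR Online Import Utility logging procedure later in this section.
Before:
log4j.rootcategory=info, DebugLogAppender
After:
log4j.rootcategory=all, DebugLogAppender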

3PAR Online Import Utility server logs and output
When an issue occurs during data migration, server logs and output can be useful in identifying and solving the problem. The Online Import Utility service must be stopped before collecting the logs or changing the logging level, and restarted afterward. To do this, go to the Windows Computer > Manage screen, right-click the name of the service, and select Stop/Start.
Figure 99: Stopping the 3PAR Online Import Utility
To find the 3PAR Online Import Utility logs, go to:
C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\logs
Get all versions of the log if it has rolled over.
Get 3PAR Online Import Utility output by issuing these commands:
showmigration migrationid xxx
showmigrationdetails migrationid xxx
showconnection
showsource
showdestination

Increasing the 3PAR Online Import Utility server logging level
Perform the following to set the 3PAR Online Import Utility log to the most verbose level:
Procedure
1. Stop the 3PAR Online Import Utility service.
2. Edit the following file:
C:\Program Files (x86)\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\webapps\oiuweb\web-inf\classes\applicationconfig.properties
3. Change log4j.rootcategory=info, DebugLogAppender to log4j.rootcategory=all, DebugLogAppender.
4. Restart the 3PAR Online Import Utility service so that the logging changes are picked up.

Troubleshooting communication between the 3PAR Online Import Utility and an EMC Storage source array
When an issue occurs, get all the EMC SMI-S Provider server logs and output to assist with troubleshooting. The logs and output are located at:
Windows: C:\Program Files\EMC\ECIM\ECOM\log\cimomlog
Procedure
Troubleshooting communication with the EMC SMI-S Provider on page 308
Using the TestSmiProvider tool on page 310

Troubleshooting communication with the EMC SMI-S Provider
Procedure
1. Execute the desired URL from the 3PAR Online Import Utility server:
Secure: https://<SMI-S Server IP address>:<port>/ecomconfig
Unsecure: http://<SMI-S Server IP address>:<port>/ecomconfig
To increase EMC SMI-S Provider server logging to the most verbose level:
a. Log in to the ECOM administration interface.
If the SMI-S Provider ECOM login page does not appear, check the following:
Make sure you have the correct IP address for the SMI-S Provider server.
Make sure you are using the correct port for the EMC SMI-S Provider server. Log in to the EMC SMI-S Provider server and look at the port configuration file, which shows which ports are enabled for use and whether they expect a secure or unsecure connection. Windows: C:\Program Files\EMC\ECIM\ECOM\conf\Port_settings.xml
Make sure there is network connectivity between the 3PAR Online Import Utility server and the EMC SMI-S Provider. NOTE: Determine whether SMI-S client filtering is enabled. If enabled, your 3PAR Online Import Utility server must be on the trusted client IP list.
Make sure that the SMI-S Provider is running. On Windows, check that the EMC SMI-S Provider (ECOM/EMC) services are started.
If the ECOM login page does appear, log in using the credentials that you used when you issued the 3PAR Online Import Utility addsource command. If that fails, then you need to determine which are the correct credentials.
b. Select Log Options.
c. Set the Log file to CIMOMLOG.
d. Set the Log Severity to NAVI_TRACE.
e. Set Trace Settings to turn on all 3 traces.
f. Replace these variables:
<SMI-S Server IP address> with the Management_Server IP address displayed in the 3PAR Online Import Utility showsource output.
<port> with the port that was used when the addsource command was issued. The unsecure port default is 5988. The secure port default is 5989.
g. Save the Log Setting.
h. Restart the ECOM service.
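If the ecomconfig page cannot be reached at all, it can help to confirm basic reachability of the ECOM ports from the 3PAR Online Import Utility server before investigating further. The following sketch is not from this guide; it uses Windows PowerShell (Test-NetConnection is available on Windows Server 2012 R2 and later), and the IP address shown is a placeholder for the SMI-S Provider server.
Example (illustrative only):
ping 192.0.2.50
Test-NetConnection -ComputerName 192.0.2.50 -Port 5988
Test-NetConnection -ComputerName 192.0.2.50 -Port 5989
If the port test fails, recheck Port_settings.xml and any client IP filtering as described above.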

Using the TestSmiProvider tool
Procedure
1. Start the SMI-S Provider TestSmiProvider tool.
Example SMI-S Provider output on Windows:
C:\Program Files\EMC\ECIM\ECOM\bin>TestSmiProvider.exe
Connection Type (ssl,no_ssl,native) [no_ssl]:
Host [localhost]:
Port [5988]:
Username [admin]:
Password [#adminpw]: adminpw
Log output to console [y|n (default y)]:
Log output to file [y|n (default y)]:
Logfile path [Testsmiprovider.log]:
Connecting to localhost:5988
Using user account 'admin' with password 'adminpw'
########################################################################
## EMC SMI Provider Tester                                            ##
## This program is intended for use by EMC Support personnel only.    ##
## At any time and without warning this program may be revised        ##
## without regard to backwards compatibility or be                    ##
## removed entirely from the kit.                                     ##
########################################################################
slp - slp urls                slpv - slp attributes
cn - Connect                  dc - Disconnect
disco - EMC Discover          rc - RepeatCount
addsys - EMC AddSystem        remsys - EMC RemoveSystem
refsys - EMC RefreshSystem    ec - EnumerateClasses
ecn - EnumerateClassNames     ei - EnumerateInstances
ein - EnumerateInstanceNames  ens - EnumerateNamespaces
miner - Mine classes          a - Associators
an - AssociatorNames          r - References
rn - ReferenceNames           gi - GetInstance
gc - GetClass                 i - CreateInstance
di - DeleteInstance           mi - ModifyInstance
eq - ExecQuery                gp - GetProperty
sp - SetProperty              tms - TotalManagedSpace
tp - Test pools               ecap - Extent Capacity
pd - Profile Discovery        im - InvokeMethod
active - ActiveControls       ind - Indications menu
tv - Test views               st - Set timeout value
lc - Log control              sl - Start listener
dv - Display version info     ns - NameSpace
vtl - VTL menu                chp - consolidated host provider menu
q - Quit                      h - Help
########################################################################
Built with EMC SMI-S Provider: V4.6.1
Namespace: root/emc
repeat count: 1
(localhost:5988)?
2. Use the dv command in the TestSmiProvider tool to see:
SMI-S Provider version information
List of storage systems being managed by the SMI-S Provider
Example dv command output:
(localhost:5988)? dv
++++ Display version information ++++
CIM ObjectManager Name: EMC:XX.XX.XX.XX
CIMOM Version: EMC CIM Server Version
SMI-S qualified version:
SMI-S Provider version: V
SMI-S Provider Location: Proxy
SMI-S Provider Server: Windows_NT WinSrv2008-testB Service Pack 1 x86_64 VM Guest OS (64bit Libraries)
Solutions Enabler version: V
Firmware version information:
(Remote) CLARiiON Array APM (Rack Mounted CX4_120) :
(Remote) CLARiiON Array APM (Rack Mounted VNX5300) :
Retrieve and Display data - 1 Iteration(s) In Seconds
Please press enter key to continue...
3. Use the eq command in the TestSmiProvider tool to see storage system information, in the format that is being passed to the EMC plugin, for all storage systems being managed by the SMI-S Provider.
NOTE: Some statement parameters are case-sensitive and should be entered as displayed.
Example eq command output on an EMC VMAX:
(localhost:5988)? eq
Query Language[DMTF:CQL]:
Query []: SELECT EMC_ArrayChassis.SerialNumber FROM EMC_ArrayChassis
++++ Testing ExecQuery: ++++
Instance 0:
ObjectPath : // /root/emc:Symm_ArrayChassis.CreationClassName="Symm_ArrayChassis",Tag="SYMMETRIX "
<INSTANCE CLASSNAME="Symm_ArrayChassis" >
<PROPERTY NAME="Tag" TYPE="string"> <VALUE>SYMMETRIX </VALUE> </PROPERTY>
<PROPERTY NAME="ElementName" TYPE="string"> <VALUE>Symmetrix Array </VALUE> </PROPERTY>
<PROPERTY NAME="Manufacturer" TYPE="string"> <VALUE>EMC Corporation</VALUE> </PROPERTY>
<PROPERTY NAME="Model" TYPE="string"> <VALUE>VMAX-1SE</VALUE> </PROPERTY>
</INSTANCE>
Number of instance qualifiers: 0
Number of instance properties: 4
Property: Tag
Number of qualifiers: 0
Property: ElementName
Number of qualifiers: 0
Property: Manufacturer
Number of qualifiers: 0
Property: Model
Number of qualifiers: 0
ExceQuery 1 instances; repeat count 1; return data in 0 seconds
Retrieve and Display data - 1 Iteration(s) In Seconds
Please press enter key to continue...
Example eq command output on an EMC CLARiiON CX4:
(localhost:5988)? eq
Query Language[DMTF:CQL]:
Query []: select Tag,ElementName,Manufacturer,Model from EMC_ArrayChassis
++++ Testing ExecQuery: ++++
Instance 0:
ObjectPath : // /root/emc:Clar_ArrayChassis.CreationClassName="Clar_ArrayChassis",Tag="CLARiiON+APM "
<INSTANCE CLASSNAME="Clar_ArrayChassis" >
<PROPERTY NAME="Tag" TYPE="string"> <VALUE>CLARiiON+APM </VALUE> </PROPERTY>
<PROPERTY NAME="ElementName" TYPE="string"> <VALUE>CLARiiON Array APM </VALUE> </PROPERTY>
<PROPERTY NAME="Model" TYPE="string"> <VALUE>Rack Mounted CX4_120</VALUE> </PROPERTY>
</INSTANCE>
Number of instance qualifiers: 0
Number of instance properties: 4
Property: Tag
Number of qualifiers: 0
Property: ElementName
Number of qualifiers: 0
Property: Manufacturer
Number of qualifiers: 0
Property: Model
Number of qualifiers: 0

Troubleshooting issues

Cleaning up and recovering from a failed data migration
If a migration performed through the 3PAR Online Import Utility or 3PAR Peer Motion Utility fails, follow the applicable procedure to clean up and recover before attempting the migration again:
Cleaning up and recovering from a failed migration during the Create Migration or Admit phase
Cleaning up and recovering from a failed migration during the Start Migration or Import phase

Cleaning up and recovering from a failed migration during the Create Migration or Admit phase
Procedure
1. Run removemigration twice to remove a failed migration completely: removemigration migrationid xxxxx
2. Stop the Online Import Utility or Peer Motion Utility service:
a. Click Start > Run...
b. In the Run field, type "services.msc", then click OK.
c. In the Services window, verify that the service you need to stop appears in "Started" mode (in the Status column), then stop the service.
3. For an Online or MDM migration, if the host was already zoned to the destination array, that zoning needs to be removed.
4. Verify that there are no SCSI reservations on the migrating volumes on the source array, and remove them if found.
5. If the failed migration was an Online migration, verify and delete the migration host/host sets from the target 3PAR StoreServ Storage system if they were already created. If the migration failed at a stage where all or some of the migrating volumes were presented on the target system, unpresent all of those volumes, then delete the migrating host, provided there is no other existing volume presentation with that host.
Remove host: cli% removehost <hostname>
Remove host set: cli% removehostset <hostsetname>
NOTE: The host or host set can also be deleted using SSMC or IMC.
6. Verify and delete migration volumes from the target 3PAR StoreServ Storage system, if already created. Volumes can be removed as follows: cli% removevv <VVname>
NOTE: Volumes can also be deleted using SSMC or IMC.
7. On the host on the source array side that has WWNs matching the destination HPE 3PAR StoreServ Storage system, remove the exports of the volumes whose migration is being cleaned up.
8. Keep a backup of the current data folder, and delete the data folder from <Install drive/folder>\Hewlett Packard Enterprise\hpe3paroiu\OIUData\data.
9. Keep a backup of the current log folder, and delete the log folder from <Install drive/folder>\Hewlett Packard Enterprise\hpe3paroiu\OIUTools\tomcat\32-bit\apache-tomcat\logs.
10. Restart the Online Import Utility or Peer Motion Utility service:
a. Click Start > Run...
b. In the Run field, type "services.msc", then click OK.
c. In the Services window, verify that the service you need to start appears in "Stopped" mode (in the Status column), then start the service.
11. Before running createmigration to start the migration again, perform the following pre-migration checks:
a. Make sure the source has been added (via the addsource command) and that the source details are listed. To list the source details, run showsource.
b. Make sure the destination has been added (via the adddestination command) and that the destination details are listed. To list the destination details, run showdestination.
c. Make sure the zoning between the source and destination is proper. To verify the connection, run showconnection. Two controller ports on the source storage system must be connected to two peer ports on the destination 3PAR StoreServ Storage system. It is recommended that the peer ports on the destination 3PAR StoreServ Storage system be on adjacent nodes: 0/1, 2/3, 4/5, or 6/7. For more information on zoning, see Zoning the source storage system to the destination 3PAR StoreServ Storage system.
d. Make sure the peer links on the peer ports are good by running cli% showport peer.
e. Make sure the target port WWNs appear in the discovered port list. To check target port connections from the 3PAR StoreServ Storage system, run cli% showportdev ns <n:s:p>.
f. Make sure the destination CPG has enough space.
g. Make sure the disks are in good condition.
h. Make sure that snapshot and replication volumes are not selected for migration.
12. Start the migration again by running createmigration. Once the peer host is created at the source, verify that the source storage array is shown as a connected device on both peer ports of the destination 3PAR as follows:
a. Run cli% showtarget -rescan to perform a rescan.
b. Run cli% showtarget to list the visible target WWNs.
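Tying the steps together, a cleanup and recheck session has roughly the following shape. This is a sketch only: the migration ID, host name, volume name, and port position are hypothetical, the cli% commands are run in the destination HPE 3PAR CLI, and the remaining commands are run in the Online Import Utility or Peer Motion Utility CLI.
Example (illustrative only):
removemigration migrationid 12345
removemigration migrationid 12345      (run twice to remove the failed migration completely)
cli% removehost esx-host01             (only if no other volumes are exported to this host)
cli% removevv migvol_01
showsource
showdestination
showconnection
cli% showport peer
cli% showportdev ns 0:1:2
createmigration ...                    (recreate the migration after the checks pass)
cli% showtarget -rescan
cli% showtarget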

Cleaning up and recovering from a failed migration during the Start Migration or Import phase
Procedure
1. If the startmigration task fails for one of the reasons listed below, restart the migration using the startmigration command.
Sysmgr panic/restart. Example message that would appear in the task detailed log:
Task failed due to system manager restart.
Non-master node panic/restart. Example messages that would appear in the task detailed log:
:33:37 PDT Failed region move of 256MB from (vv1.0.rmt.0:0MB) to (vv1.0.ldv.0:0MB): unknown error
:33:37 PDT Failed task. Moved 0 regions for a total of 0 MB in 1 minutes and 25 seconds
:45:18 PDT Failed region move of 256MB from (vv1.0.rmt.0:0MB) to (vv1.0.ldv.0:0MB): node down.
VV block failure. Example message that would appear in the task detailed log:
:48:08 PDT Failed region move of 256MB from (x.rmt.0:0MB) to (x.ldv.0:0MB): failure in blocking VV I/O.
Manual task cancellation. Example messages that would appear in the task detailed log:
:08:05 PDT Cancelling task due to user request. Please wait...
:08:07 PDT Cancelled task. Moved 0 regions for a total of 0 MB in 10 seconds.
2. If the startmigration task fails for any other reason, contact HPE to check the log files to determine the appropriate action.

SSMC temporarily displays peer volume as RAID level 0
When a volume is being imported (regardless of the migration method used), SSMC temporarily displays the peer volume as RAID level 0 during the migration. Upon completion of the migration, SSMC displays the correct RAID level for the volume. Despite this temporary RAID level assignment within SSMC, the data on the destination array is protected at the requested RAID level.

Cannot add a source storage system
Symptom
The addsource command is unsuccessful.
Cause
General:
An invalid source array UID is specified when executing the addsource command.
For an EMC Storage system:
The EMC SMI-S Provider is running an unsupported software version.
An invalid user and/or password is provided for the EMC SMI-S Provider.
An invalid IP address or port (secure/unsecure) is provided for the EMC SMI-S Provider.
The EMC SMI-S Provider has client IP address filtering enabled and the 3PAR Online Import Utility server is not on the trusted IP address list.
The EMC SMI-S Provider is not managing the source storage system.
The 3PAR Online Import Utility server and the EMC SMI-S Provider do not have network connectivity.
The EMC SMI-S Provider and the source storage system do not have network connectivity.
The EMC Storage source array is running an unsupported firmware version.
For an HDS Storage system:
The HDS Storage system is in a failed state within the HiCommand Suite.
The HiCommand Suite is running an unsupported software version.
An invalid user and/or password is provided for the HiCommand Suite.
An invalid IP address or port (secure/unsecure) is provided for the HiCommand Suite.
The HiCommand Suite has client IP address filtering enabled and the 3PAR Online Import Utility server is not on the trusted IP address list.
The HiCommand Suite is not managing the source storage system.
The 3PAR Online Import Utility server and the HiCommand Suite do not have network connectivity.
The HiCommand Suite and the source storage system do not have network connectivity.
The HDS Storage source array is running an unsupported firmware version.
Action
Check each of the above conditions to identify which may be causing the problem, then resolve it.

Cannot add a source or destination storage system with the 3PAR Peer Motion Utility
Symptom
While adding a 3PAR StoreServ Storage system as a source or destination storage system, it takes too long to load the HPE 3PAR details.
Cause
The source or destination 3PAR StoreServ Storage system is unavailable, or the IP address is incorrect. Use the following steps to isolate and solve the problem.
Action
1. Ping the 3PAR StoreServ Storage system IP address from the server where the 3PAR Peer Motion Utility is running.
2. Make sure that the port number specified when adding the source/destination storage system is correct. If you are still unable to add a destination storage system, continue to the next step.
3. Restart the 3PAR Peer Motion Utility. Contact Hewlett Packard Enterprise Support for assistance before attempting to restart the 3PAR Peer Motion Utility.

Cannot connect to the 3PAR Peer Motion Utility
Symptom
When you open the 3PAR Peer Motion Utility and enter the management IP, username, and password, the following error messages may appear:
Invalid credentials, Please try with valid credentials.
Error connecting to server using IP address x.x.x.x. Cannot communicate with server
Authorization error. This user is not part of supported user groups
Action
1. Verify the following:
The management IP, user name, and password are correct.
The username is a member of the HP Storage Migration Admins or HP Storage Migration Users group.
The 3PAR Peer Motion service is running on the server with which the 3PAR Peer Motion Utility is communicating. If the 3PAR Peer Motion Utility is running on another server, ping the server on which the 3PAR Peer Motion service is installed.

Cannot create a migration
Symptom
The migration cannot be created.
Cause
General:
Duplicate volumes exist. This may occur if a virtual volume with either the same name or the same WWN exists on the destination storage system.
The destination storage system has the same host name with a different WWN.
The destination storage system does not have enough capacity.
The LUNs or host provided are in an invalid storage group configuration.
The LUNs are not eligible for migration (a LUN must be FC, and cannot be a replication LUN or reserved).
For an EMC Storage system:
The EMC SMI-S Provider is running an unsupported software version.
The EMC SMI-S Provider user name/password have changed since the source storage system was added.
The EMC SMI-S Provider is not managing the EMC Storage system.
The 3PAR Online Import Utility server and the EMC SMI-S Provider do not have network connectivity.
The EMC SMI-S Provider and the EMC Storage system do not have network connectivity.
The EMC Storage system is running an unsupported firmware version.
The operational state of the EMC Storage system does not allow migration.
For an HDS Storage system:
If a LUN ID has been changed through the HDS service processor, the HDS Storage system must be refreshed through the HiCommand Suite manually. If changes to a LUN number have been made using the HDS CLI or the HiCommand Suite, a manual refresh is not required.
IMPORTANT: If the HDS service processor is open in Modify Mode, then SMI-S calls to the HDS Storage system may not succeed. The HDS service processor must be kept in View Mode during the migration phase.
IMPORTANT: The HDS SMI-S interface does not have a refresh array interface. If any changes to the HDS Storage system have been made through the HDS service processor or the HDS Storage Navigator, then the HDS Storage system must be refreshed through the HiCommand Suite manually. The HiCommand Suite takes approximately 3 minutes to refresh a single HDS Storage system.
The HiCommand Suite user name/password have changed since the source storage system was added.
The HiCommand Suite is not managing the HDS Storage system.
The 3PAR Online Import Utility server and the HiCommand Suite do not have network connectivity.
The HiCommand Suite and the HDS Storage system do not have network connectivity.
The LUNs or host provided are not found within the HiCommand Suite. If the LUNs or host are found within the HDS service processor, refresh the HiCommand Suite for the storage system. LUN and host names are case-sensitive.
There is no connection between the source and destination storage systems.
The wrong migration type is used.
The operational state of the HDS Storage system does not allow migration.
Action
Check each of the above conditions to identify which may be causing the problem, then resolve it.

Cannot log in to the HPE 3PAR Online Import Utility
Symptom
When you attempt to log in to the HPE 3PAR Online Import Utility, the following error occurs:
Invalid credentials, Please try with valid credentials
See the following example.
Example 3PAR Online Import Utility login error:
CLI Version:
Enter IPADDRESS:
Enter USERNAME: tester
Enter PASSWORD:
>ERROR: Invalid credentials, Please try with valid credentials
Enter IPADDRESS:
Action
When the user in the User group was created, the User must change password at next logon check box was selected by default. Clear the check box, and log in again.

Cannot validate certificate for a 3PAR StoreServ Storage system with the 3PAR Peer Motion Utility
Symptom
When you attempt to use the 3PAR Peer Motion Utility to issue the addsource command or the adddestination command with a 3PAR StoreServ Storage system running 3PAR OS MU3 or later, the following error occurs:
Example certificate validation error:
OIUERRDST0010 Unable to validate certificate for HPE 3PAR Storage System
OR
OIURSLDST0010. Please use the installcertificate command to accept the certificate.
Perform the following steps to add the CA signed certificate to the 3PAR StoreServ Storage system.
Action
1. Connect to the 3PAR StoreServ Storage system using PuTTY or the 3PAR CLI.
2. Run the showcert -service cli -type rootca pem command. The root CA signed certificate should appear. If instead you receive a message that "There are no certificates for the following service(s): cli," then run the showcert -type rootca pem command.
3. Copy and save the certificate with a .pem extension in the security folder (<home directory of current user>\InFormMC\security).
NOTE: To view the home directory of the current user, run the echo %HOMEDRIVE%%HOMEPATH% command from the Windows command prompt. If %HOMEDRIVE%%HOMEPATH% is blank or is not the directory of the user, then check and use one of the following locations: C:\InFormMC\security or C:\Windows\SysWOW64\config\systemprofile\InFormMC\security.
4. Run the showcert -service cli -type intca pem command. The intermediate CA signed certificate should appear. If instead you receive a message that "There are no certificates for the following service(s): cli," then run the showcert -type intca pem command.
5. Copy and save the certificate with a .pem extension in the security folder (mentioned above).
6. To install the root and intermediate CA signed certificates, run the following command (in the command line) twice, once for the root CA and once for the intermediate CA:
keytool -import -file <path of security folder>\<filename>.pem -keystore HP-3PAR-MC-TrustStore
Example: keytool -import -file rootca.pem -alias rootca -keystore HP-3PAR-MC-TrustStore
NOTE: To run the keytool commands, Java v6.0 or later must be installed and the PATH environment variable should contain the path to java.exe. If the path is not specified, you can set it dynamically by running set PATH=%PATH%;C:\Program Files (x86)\java\jre\bin.
7. Issue the addsource command or the adddestination command again to add the 3PAR StoreServ Storage system.
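Putting steps 2 through 6 together, a session might look like the following sketch. The .pem file names and alias names are arbitrary labels, the security folder path depends on the home directory of the current user, and the showcert commands are run in the 3PAR CLI session opened in step 1 while the keytool commands are run from the Windows command prompt on the utility server.
Example (illustrative only):
cli% showcert -service cli -type rootca pem    (save the output as rootca.pem in the security folder)
cli% showcert -service cli -type intca pem     (save the output as intca.pem in the security folder)
keytool -import -file "<home directory>\InFormMC\security\rootca.pem" -alias rootca -keystore HP-3PAR-MC-TrustStore
keytool -import -file "<home directory>\InFormMC\security\intca.pem" -alias intca -keystore HP-3PAR-MC-TrustStore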

Migration from multiple EMC Storage VMAX or DMX4 systems includes unexpected LUNs and hosts
Symptom
When the SMI-S Provider manages multiple EMC VMAX or DMX4 Storage systems, LUNs with the same device ID or a host with the same name can result in the createmigration command migrating unexpected LUNs and hosts.
Action
The SMI-S Provider used during migration should only manage the EMC VMAX or DMX4 system that is being migrated.

For EMC VNX and CX4 storage controllers, the HPE 3PAR peer port HBA initiators must be set to failovermode 4 (Active/Active)
Symptom
The createmigration operation may not be successful during the admitvv phase due to an improper HPE 3PAR peer port HBA initiator failover mode setting. Failover mode 4 (ALUA) allows the HPE 3PAR peer port initiators to access LUNs through the non-controlling EMC service processor controller.
Cause
An improper HPE 3PAR peer port HBA initiator failover mode setting. The EMC storage controller failover mode may be displayed by using the EMC Naviseccli CLI command:
# naviseccli -address <SP controller IP address> failovermode
Action
Change each HPE 3PAR peer port HBA initiator to failovermode 4 by issuing the following command:
# naviseccli -address <SP controller IP address> Storagegroup setpath \
  -gname <storage_group_name> \
  -hbauid <HP_3PAR_peer_port_WWN> \
  -sp <a|b> \
  -spport <port_no> \
  -failovermode 4

Powering on or migrating a VMware ESX virtual machine fails after a successful migration
Symptom
Powering on or migrating an ESX virtual machine with an RDM may not succeed. The following message appears:
Virtual Disk 'X' is a mapped direct access LUN that is not accessible
When you check the VML identifier for an RDM on two or more ESX hosts, you see that they are not referring to the same VML ID.
Cause
This error can occur upon rebooting the ESX host after a successful migration. The VML number generated by ESX changes after the host is rebooted.
Action
Correct the mappings. For more information, see the applicable VMware KB article, available on the VMware Knowledge Base website.

3PAR Online Import Utility does not open in Windows 7
Symptom
On Windows 7, when you double-click the icon to open the 3PAR Online Import Utility, or when you right-click the icon and select Run As Administrator, the utility does not open.
Action
1. Open a command prompt window and perform the following steps:
a. Change the directory to the location where the 3PAR Online Import Utility was installed. The default location is: C:\Program Files (x86)\hewlett-packard\hp3paroiu
b. Change the directory to the CLI subfolder.
c. Run OIUCLI.bat to start the 3PAR Online Import Utility.

3PAR Peer Motion Utility loses communication with the 3PAR StoreServ Storage
Symptom
In rare instances, the 3PAR Peer Motion Utility loses communication with the 3PAR StoreServ Storage system, resulting in the following message:
OIUERRDST0001 Unable to connect to the 3PAR storage system. OIURSLDST0001 Please ensure that the IP address etc. is proper
Action
Restart the 3PAR Peer Motion Utility.

3PAR Peer Motion Utility cannot reach a source or destination storage system or does not load data on time
Symptom
The HPE 3PAR Peer Motion Utility cannot reach the source or destination storage system, or does not load data on time:
Data transfer does not start.
The preparation stage does not succeed because data from the source or destination storage system cannot be fully loaded.
Host set creation does not succeed because data from the destination storage system cannot be fully loaded.
Cause
The source or destination storage system is busy and did not provide the required data within the expected time.
There is network latency between the server component and the source or destination storage system.
Action
1. Verify that the source or destination storage system is not overloaded.
2. Identify and correct any network latency issues.
3. Retry the operation.
4. If the problem persists, restart the 3PAR Peer Motion Utility.
Cannot admit or import a volume
Symptom
Admitting or importing a volume does not succeed.
Solution 1
Action
For MDM and offline migration, verify that the destination storage system has no active paths. This can be done by checking the host information in the SSMC or by using the 3PAR CLI command showhost. Monitor the migration task by checking the task screen or by using the CLI command showtask. Then determine where the operation was unsuccessful.
Solution 2
Action
1. If the task fails before the volume import stage, but after volumes have been admitted on the destination storage system, it is possible to manually return the system to the pre-admit state. This process is non-disruptive to the hosts provided that the appropriate zoning and host multipathing have been re-established. The host must have access to the volume through the source system.
NOTE: Unzoning operations are not required for single-volume migration.
To return the system to its state before an unsuccessful volume admit:

a. On the fabric and host: If the host has already been unzoned from the source system, rezone it back to that system and confirm that I/O is once again being directed to the source system.
b. On the fabric and host: Unzone the host from the destination storage system and verify that all access to the volumes is now only through the source system.
c. On the destination storage system: Remove the VLUNs on the destination storage system for the peer volumes exported to the host(s).
d. On the destination storage system: Remove the peer volumes from the destination storage system.
e. On the destination storage system: When there are no volumes exported from the destination array to the host, remove the host from the destination storage system.
f. On the source storage system: Remove the VLUN exports to the host representing the destination storage system from the source storage system.
g. On the source storage system: Remove the host representing the destination storage system from the source storage system.
Solution 3
Action
1. If the task fails after the volume import tasks have started, the hosts' access to the volumes on the source system is interrupted. A failed import returns the system to the point where the import can be retried after the cause of the failure is resolved. It is also possible to revert the configuration so that I/O access is through the source system, but this is a manual process and requires downtime.
To revert the configuration so that the source system is servicing I/O:
a. On the host: To prevent consistency issues, cleanly shut down any active applications before shutting down the hosts. Cleanly stop access to the destination storage system from the host. The host will lose access to the volumes being migrated as part of the procedure.
b. On the destination storage system: Cancel any active import tasks for the volumes that were being migrated. To cancel an import task, issue the canceltask CLI command.
c. On the destination storage system: Remove the VLUNs on the destination storage system for the volumes exported to the host.
d. On the destination storage system: Remove the peer volumes from the destination storage system.
e. On the destination storage system: Only if no other volumes are exported from the destination array to the host, remove the host from the destination storage system.
f. On the source system: Remove the VLUN exports to the destination storage system from the source system.
g. On the source system: Remove the host representing the destination storage system from the source system.
h. On the source system: Run the setvv -clrrsv CLI command on all volumes that were being migrated on the source system, and, if the source system is running 3PAR OS or above, run the setvv -clralua CLI command on all volumes that were being migrated.

325 i. On the fabric and host Rezone the host back to the source system, if needed. j. On the host Restart the host and any applications that were shut down at the beginning of this process. Solution 4 Action If import task fails with LD read failure (as in the log entry example that follows), the problem could be in reading from the source volume. Check for issues such as broken peer links, bad disks on the source, source volume over provisioned, or source runs out of space, and fix any issues before re-initiating the migration :32:31.87 PDT {10920} {events:normal } LD mirroring failed after 740 seconds due to failure in LD read(1). Request was for 256MB from LD 270:1280MB to be mirrored to LD 501:3328MB If a LD write error is reported from a destination volume, check for bad disks or insufficient space on destination and fix any issues before re-initiating the migration. Trailing spaces in IP address return login error Symptom When logging into the 3PAR Online Import Utility CLI, you receive the following error: Example trailing spaces in IP address login error EMC Storage: "ERROR: Could not process your request. Illegal character in authority at index..." Example trailing spaces in IP address login error HDS Storage: ERROR: Error connecting to server using IP XX.XX.XX.XX Cause This error is caused by a trailing space in the IP address. Action Re-open the 3PAR Online Import Utility CLI and enter the IP address without any spaces at the end. VMware datastore errors following a successful migration Symptom A datastore is missing, though you can still see the LUN presented to the host. When forcing a LUN that is formatted as VMFS-5 to mount (either with the name or the UUID), you may see errors like the following: # esxcli storage vmfs snapshot mount -l DEV-LUN03 No unresolved VMFS snapshots with volume label 'DEV-LUN03' found. Trailing spaces in IP address return login error 325

326 # esxcli storage vmfs snapshot mount -u 4f1d a3d2d2-f46b-14feb5cc149a No unresolved VMFS snapshots with volume UUID '4f1d a3d2d2-f46b-14feb5cc149a' found. In the vmkernel or messages.log file, you may see entries like the following: LVM: 8445: Device naa :1 detected to be a snapshot: LVM: 8445: Device eui :1 detected to be a snapshot: LVM: 8452: queried disk ID: <type 1, len 17, lun 36, devtype 0, scsi 0, h(id) > LVM: 8459: on-disk disk ID: <type 1, len 17, lun 17, devtype 0, scsi 0, h(id) > When force mounting a VMFS datastore, you may experience these symptoms: Other hosts in the same datacenter cannot mount that VMFS datastore from vcenter Server. The Resolve VMFS Volumes automatic task, seen in the task list of the vcenter Client, generates the following error: Error: Cannot change the host configuration. Error Stack Call "HostStorageSystem.ResolveMultipleUnresolvedVmfsVolumes" for object "storagevolume" on vcenter Server "MyVC" failed. where storagevolume is the name of the datastore and MyVC is the name of your vcenter server. Cause In VMware ESXi environments, upon the next ESX host reboot after a successful online migration, the ESX host might not mount the VMFS datastores automatically. To prevent duplicate copies of the same datastore being mounted in various replication scenarios, the ESX operating system might declare the migrated datastore as a snapshot and not mount it automatically. Action 1. After confirming that the original copy of datastore is no longer accessible by the ESX host, see the VMware KB article for available methods (vsphere client GUI or the esxcli command line) for mounting the datastore. Once the datastores are persistently mounted with an existing signature, or a re-signature is performed, the datastores are automatically mounted during subsequent reboots. If the re-signature method is used, additional steps might be required for updating references to the original signature in virtual machine files. For more information, see the following websites: VMware KB ( ( com.vmware.vsphere.storage.doc_50/guid- EBAB0D5A-3C77-4A9B D4AD69E28DC.html) The adddestination command returns error OIUERRDST0004 Symptom When you issue the 3PAR Online Import Utility adddestination command, error OIUERRDST0004 is displayed (see Figure 100: adddestination and error OIUERRDST0004 on page 327). 326 The adddestination command returns error OIUERRDST0004
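Error OIUERRDST0004 concerns the HPE 3PAR OS level on the destination array (see the cause that follows), so it is worth confirming the installed level before planning an upgrade. For example, the 3PAR CLI showversion command reports the release running on the destination array, and the utility's own showdestination output includes a firmware column for each destination that has been added:
On the destination 3PAR StoreServ Storage CLI:
showversion
In the 3PAR Peer Motion Utility or 3PAR Online Import Utility CLI:
> showdestination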

327 Cause Error OIUERRDST0004 occurs when the 3PAR Online Import Utility detects a destination 3PAR StoreServ Storage with a firmware version earlier than the minimum. The following figure shows a successful and an unsuccessful execution of the adddestination operation. The failure occurs because the 3PAR OS version is not supported. Figure 100: adddestination and error OIUERRDST0004 Action Upgrade the 3PAR StoreServ Storage to a supported version. See the SPOCK website for the minimum supported version: The addsource command for HDS Storage fails Symptom Issuing the addsource command for an HDS Storage system fails and the following error message is displayed: "SMI-S Storage Provider Authorization error." Figure 101: 3PAR Online Import Utility: addsource command output The addsource command for HDS Storage fails 327
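Before retrying the addsource command, you can confirm from the server where the 3PAR Online Import Utility runs that the HiCommand SMI-S ports answer. The following sketch uses the Windows PowerShell Test-NetConnection cmdlet (available on Windows 8/Windows Server 2012 and later); replace XX.XX.XX.XX with the address of the HiCommand Suite server:
Test-NetConnection -ComputerName XX.XX.XX.XX -Port 5988
Test-NetConnection -ComputerName XX.XX.XX.XX -Port 5989
If TcpTestSucceeded is False for both ports, the HiCommand Suite SMI-S service is not listening, and you must start it as described in the following action.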

Action
Verify that the HiCommand Suite SMI-S ports (5988 for HTTP; 5989 for HTTPS) are open, then start the HiCommand Suite on those ports.
The createmigration command fails for HDS storage with host "HCMDxxxx not found" error
Symptom
The createmigration command fails with the following error:
preparationfailed(-na-)(:oiuerrdst0008:admit has failed. Failed to export xxxx to HCMDxxxx: Error: Host HCMDxxxx not found.
Action
1. Refresh the Hitachi HiCommand Suite.
2. Clean up the failed migration.
3. Re-issue the createmigration command.
The createmigration command fails for LUNs in a storage group or host group with no host
Symptom
When you issue the createmigration command for LUNs in a storage group that has no host, the command fails.
Cause
LUNs in a storage group (for EMC Storage) or in a host group (for HDS Storage) without a host cannot be migrated.
Action
For LUNs that do not have an associated host, you must use the offline migration type.
The createmigration command fails for LUN name or host name containing special characters
Symptom
The 3PAR Online Import Utility does not support special characters in the LUN name or host name and returns an error on the createmigration command if special characters are present in the names.
Action
1. To continue migration, change the unsupported LUN or host name.

NOTE: The allowable characters are as follows:
letters
numerals
. (period)
- (hyphen)
_ (underscore)
No other special characters are allowed.
The createmigration operation is unsuccessful with the 3PAR Peer Motion Utility
Symptom
The createmigration task was unsuccessful, and the preparation process did not admit volumes on the destination storage system.
Action
1. Verify the following:
a. The destination storage system is accessible.
b. The destination CPGs have enough capacity. If they do not, free up enough capacity in the CPGs or add more capacity to the CPGs on the destination storage system.
c. No duplicate volumes exist. This may occur if a virtual volume with the same name or the same WWN exists on the destination storage system.
d. The required LUN number is available on the host object on the destination storage system.
The createmigration command returns error OIUERRAPP0000 or OIURSLAPP0000
Symptom
Issuing the 3PAR Online Import Utility createmigration command yields either of the following error messages:
"ERROR: OIUERRAPP0000 An unexpected error occurred."
"OIURSLAPP0000 Contact HP support. You may try restarting the application service."
Cause
The storage group (for EMC Storage) or host group (for HDS Storage) that contains LUNs identified for migration contains a replication LUN.

330 Action Remove the replication LUN from the storage group (for EMC Storage) or from the host group (for HDS Storage). The createmigration command returns error OIUERRDB1006 Symptom Issuing the 3PAR Online Import Utility createmigration command displays the following error message: "ERROR: OIUERRDB1006 Database constraint violated" Cause The host specified does not have any associated LUNs in its storage group (for EMC Storage) or host group (for HDS Storage). Action Add the LUNs to be migrated into the storage group (for EMC Storage) or host group (for HDS Storage). The createmigration command returns error OIUERRPREP1014 Symptom Issuing the 3PAR Online Import Utility createmigration command displays the following error message: "preparationfailed(-na-)(oiuerrprep1014: Error creating host on destination)" Solution 1 Cause The host already exists on the destination 3PAR StoreServ Storage system with a different WWN. Action If this is the same host, but is using a different WWN: On initial migration of a host, include all the WWNs in the storage group (for EMC Storage) or host group (for HDS Storage), even if the WWN is not managing any LUNs. Subsequent migration of any WWN for this host will find a match. If this is a different host, change the host name so that it does not match the existing host name on the destination 3PAR StoreServ Storage system. Solution 2 Cause If the host name differs across multisource array [N:1] migration, the createmigration operation will not succeed. 330 The createmigration command returns error OIUERRDB1006

331 For the multisource migration support introduced with 3PAR OS 3.2.2, it is essential that the same hostname, hostgroup name, or initiator group name be specified for the srchost parameter of the createmigration command. Action If the hostname or initiator group name cannot be modified on the source array for any reason, then before issuing the createmigration command, edit the host name on the destination 3PAR StoreServ Storage system so as to keep it consistent across the migrations from the multiple source arrays involved. The createmigration command returns error OIUERRCS1002 Symptom This error occurs when the 3PAR Online Import Utility tries to verify a one-to-one zoning between peer ports and source host ports and discovers there was no one-to-one mapping, as in the following examples. Example createmigration command and error OIUERRCS1002 EMC Storage: > createmigration -sourceuid BEA srchost R123-S02 -destcpg FC_r5 -destprov full -migtype MDM -persona "RHEL_5_6" > ERROR: OIUERRCS1002 There is no one-to-one mapping between the peer ports and the source host ports. OIURSLCS1002 Ensure that there is a one-to-one mapping between the peer ports and the source host ports, and that network connectivity between the host, source array and destination array is proper. Example createmigration command and error OIUERRCS1002 HDS Storage: > createmigration -sourceuid srchost R123-S02 -destcpg FC_r5 -destprov full -migtype MDM -persona "RHEL_5_6" > ERROR: OIUERRCS1002 There is no one-to-one mapping between the peer ports and the source host ports. OIURSLCS1002 Ensure that there is a one-to-one mapping between the peer ports and the source host ports, and that network connectivity between the host, source array and destination array is proper. Cause Action Make sure that each host port on the source system has been zoned to one physical peer port on the destination system. No virtual peer ports should be present on the physical peer ports of the destination system. Make sure that the source and destination systems are connected to the fabric. The createmigration command with -srcvolmap returns error OIUERRAPP0000 Symptom When you issue the createmigration command with -srcvolmap and error OIUERRAPP0000 is displayed, as in the following examples. The createmigration command returns error OIUERRCS

332 Example createmigration command with -srcvolmap and error OIUERRAPP0000 EMC: > createmigration -sourceuid C6E0167A -srcvolmap [{"temp_1",thin,"fc_r6"}] -migtype MDM -persona "RHEL_5_6" > ERROR: OIUERRAPP0000 An unexpected error occured. OIURSLAPP0000 Contact HPE support. You may try restarting the application service. Example createmigration command with -srcvolmap and error OIUERRAPP0000 HDS: > createmigration -sourceuid srcvolmap [{"temp_1",thin,"fc_r6"}] -migtype MDM -persona "RHEL_5_6" > ERROR: OIUERRAPP0000 An unexpected error occured. OIURSLAPP0000 Contact HP support. You may try restarting the application service. Cause This error can occur when the source array host group contains multiple volumes but only one of the volumes is specified in the -srcvolmap parameter. In this situation, the default -destprov and -destcpg parameters must also be included in the createmigration command. Without them, the migration behavior for the volumes not specified in the -srcvolmap parameter is not known. Action The createmigration command should be similar to the following examples. Example successful createmigration command with -srcvolmap EMC Storage > createmigration -sourceuid C6E0167A -srcvolmap [{"temp_1"}] -migtype MDM -persona "RHEL_5_6" -destcpg "FC_r6" -destprov thin > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: Example successful createmigration command with -srcvolmap EMC Storage > createmigration -sourceuid srcvolmap [{"temp_1"}] -migtype MDM -persona "RHEL_5_6" -destcpg "FC_r6" -destprov thin > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: The createmigration -hostset command returns error OIUERRDST0003 Symptom The createmigration -hostset command does not validate for invalid characters in a hostset name. When you use the -hostset parameter, the createmigration operation does not check for invalid characters in the hostset name. The createmigration operation will successfully submit the data migration job; however, the data migration job fails with the following error: preparationfailed(-na-)(:oiuerrdst0003:the 3PAR array is not in an usable state.;) In the following example, the invalid character is a space (0x20) in the hostset name, R65-S02 Hostset: Example createmigration -hostset command with invalid character in the hostset name: 332 The createmigration -hostset command returns error OIUERRDST0003

> createmigration -sourceuid F014B123 -srchost R65-S02-IG -destcpg FC_r5 -destprov full -migtype MDM -persona "WINDOWS_2008_R2" -vvset "R65-S02_VVset" -hostset "R65-S02 Hostset"
SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id:
> showmigration
MIGRATIONID TYPE SOURCE_NAME   DESTINATION_NAME START_TIME
            MDM  SYMMETRIX TAY R123             Wed Aug 19 15:08:48 EDT 2015
END_TIME STATUS(PROGRESS)(MESSAGE)
-NA-     preparationfailed(-na-)(:oiuerrdst0003:the 3PAR array is not in an usable state.;)
Cause
The hostset name supplied with -hostset contains a character that is not valid on the destination HPE 3PAR StoreServ Storage system (in this example, a space).
Action
1. Remove the data migration job task and resubmit it with a valid createmigration -hostset command:
a. Remove the migration job from the 3PAR Online Import Utility or 3PAR Peer Motion Utility queue. (The command may take a few moments to complete.)
> removemigration -migrationid
SUCCESS: The specified migration is successfully removed.
b. Remove the migration job from the 3PAR Online Import Utility or 3PAR Peer Motion Utility database.
> removemigration -migrationid
SUCCESS: The specified migration is successfully removed.
c. Re-issue the createmigration command with a valid HPE 3PAR host set name (in the following example, R65-S02_Hostset, where an underscore has replaced the invalid space):
> createmigration -sourceuid F014B123 -srchost R65-S02-IG -destcpg FC_r5 -destprov full -migtype MDM -persona "WINDOWS_2008_R2" -vvset "R65-S02_VVset" -hostset "R65-S02_Hostset"
The createmigration -vvset command succeeds but the data migration job stays indefinitely in the preparing state

334 END_TIME STATUS(PROGRESS)(MESSAGE) -NA- preparing(50%) (-NA-) Errors will loop in the HPE 3PAR event log: 1 Minor Command error sw_cli {3paradm super all {{0 8}} } {Command: createvvset {R65-S02 VVset} Lun1 Lun3 Lun2 Error: Invalid character (0x20) in name } {} Action 1. Restart the 3PAR Online Import Utility or 3PAR Peer Motion Utility service, remove the current migration job, and then re-issue the createmigration command with a valid -vvset parameter: a. Exit from 3PAR Online Import Utility or 3PAR Peer Motion Utility CLI. > exit b. Restart the 3PAR Online Import Utility or 3PAR Peer Motion Utility service. c. Remove the migration job from the 3PAR Online Import Utility or 3PAR Peer Motion Utility queue. (The command may take a few moments to complete.) > removemigration -migrationid SUCCESS: The specified migration is successfully removed. d. Remove the migration job from the 3PAR Online Import Utility or 3PAR Peer Motion Utility database. > removemigration -migrationid SUCCESS: The specified migration is successfully removed. e. Re-issue the createmigration command with a valid HPE 3PAR vvset name in the following example, R65-S02_VVset, where an underscore has replaced the invalid space: > createmigration -sourceuid F014D400 -srchost R65-S02-IG -destcpg FC_r5 -destprov full -migtype MDM -persona "WINDOWS_2008_R2" -vvset "R65-S02_VVset" -hostset "R65-S02_Hostset" NOTE: Depending on when the 3PAR Online Import Utility or 3PAR Peer Motion Utility service is restarted, some additional cleanup or removal might be necessary on the 3PAR StoreServ Storage of the admitted LUNs, new hosts, and/or new hostsets before you re-issue the createmigration command. The 3PAR Online Import Utility stalls without an error message Symptom If you issue the 3PAR CLI tcli -e powerfail command after the createmigration operation is initiated, and after the startmigration operation has been started and has reached the import stage, the 3PAR Online ImportUtility stalls, and no error message is generated (see Solution 1). If you initiate panic by issuing the echo "sys -panic" crash" command on the 3PAR StoreServ Storage target storage system after the createmigration operation is initiated, and after the 334 The 3PAR Online Import Utility stalls without an error message

335 startmigration operation has been started and has reached the import stage; the 3PAR Online Import Utility hangs, and no error message is generated (see Solution 2). Solution 1 Action Rerun the migration and contact Hewlett Packard Enterprise Support. Solution 2 Action 1. Restart the 3PAR Online Import Utility. 2. Verify showconnection output. 3. Restart the migration from the 3PAR Online Import Utility. The showmigration command returns error OIUERRDST0008 Symptom After a createmigration command is issued with a LUN of size less than 256 MB, the following message occurs following a showmigration command: OIUERRDST0008: "Admit has failed." Solution 1 Cause The showmigration command returns error OIUERRDST

Action
Increase the size of the source LUN on the source storage array to a minimum of 256 MB, and then retry the createmigration command.
Solution 2
Action
Check the size of the volumes selected for migration to make sure that each volume is at least 256 MB.
Solution 3
Action
Check the paths between the source and destination systems. In the OIU, the showconnection command should list two paths per source array.
Solution 4
Action
If none of the above solutions solve the issue, contact Hewlett Packard Enterprise Support.
The showtarget command does not return HDS Storage details
Symptom
Issuing the showtarget command does not return HDS Storage details.
Action
Issue the showportdev ns <n:s:p> command on the 3PAR CLI to view the connection details between the 3PAR StoreServ Storage peer port and the HDS Storage source array, as in the following example.

337 Figure 102: Example 3PAR CLI showportdev command output The startmigration command fails with host name that exceeds 31 characters Symptom When you migrate a host with a long name (or, for an EMC Storage system, a name with spaces), the 3PAR Online Import Utility truncates the host name to 31 characters or replaces spaces with underscores in order to complete the createmigration command. However, startmigration will fail with Unable to unpresent a source volume to source host error. Action Before beginning the migration, rename the host so that there are no spaces in the name and the name length does not exceed 31 characters. The startmigration task fails without generating an appropriate error message Symptom The migration task does not fail gracefully. When the 3PAR Online Import Utility startmigration status reaches importing stage, the HPE 3PAR StoreServ Storage crashes, the 3PAR Online Import Utility hangs, and the startmigration task fails without generating an appropriate error message. Action Resubmit the migration and call HPE support. 3PAR Peer Motion Utility or 3PAR Online Import Utility CLI returns error OIUERRMS10006 Symptom After you issue a command (such as startmigration or showmigration, an error message like the following is displayed: OIUERRMS10006:Failed to start Data transfer. Failed to find SCSI PGR keys on destination... The startmigration command fails with host name that exceeds 31 characters 337

338 See Solution 1. OIUERRMS10006:Failed to start Data transfer. Port n:s:p not responding to Test Unit Ready... See Solution 2. Solution 1 Cause This error can occur when one or more of the LUNs in the host group being migrated has a persistent reservation. Action On the source array, clear the persistent reservation on the LUNs before you attempt to restart the migration. Solution 2 Cause The destination 3PAR StoreServ Storage cannot find any LUNs behind the peer port n:s:p. These LUNs are on the source system and are accessed by the host over the destination 3PAR StoreServ Storage. Action 1. Check the zoning of the host to the destination 3PAR StoreServ Storage. 2. Check the connectivity of the peer links between the source and destination systems by using the showconnection command in either the 3PAR Peer Motion Utility or 3PAR Online Import Utility environment. Preventing VMFS datastore loss in VMware ESXi 5.0 In VMware ESXi 5.0 environments, upon successful completion of createmigration source arrays paths are removed before startmigration. The ESXi 5.0 host might lose VMFS data stores if perturbation occurs on source or destination arrays. Perturbation here means array controller reboot or cable pull. This issue is seen to occur in EMC CLARiiON CX4 and VNX arrays, but is not restricted to these. To work around this issue in ESXi 5.0, disable VAAI ATS locking mechanism on your hosts: Procedure 1. Check whether VAAI features are enabled on the ESXi 5.0 host from a console or SSH session by issuing the following command: # esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking The output will be similar to the following example. Example checking whether VAAI is enabled: 338 Preventing VMFS datastore loss in VMware ESXi 5.0

Path: /VMFS3/HardwareAcceleratedLocking
Type: integer
Int Value: 1
Default Int Value: 1
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: Enable hardware accelerated VMFS locking (requires compliant hardware)
The Int Value of 1 indicates that VAAI features are enabled.
2. Disable hardware accelerated locking to ensure that the VMFS data stores are stable on the cluster.
a. Log in to vCenter Server 5.0 or the ESXi 5.0 host using the vSphere Client.
b. In the vSphere Client inventory panel, click the ESXi 5.0 host.
c. Click the Configuration tab.
d. Under Software, click Advanced Settings, click VMFS3, and then change the value of the HardwareAcceleratedLocking parameter to 0 (zero).
NOTE: Alternatively, you can disable hardware accelerated locking from the console with the following command:
# esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking
For more information, see the VMware Knowledge Base.
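If you applied this workaround, hardware accelerated locking can be re-enabled after the migration is complete and the datastores are stable, by reversing the same setting. This re-enable step is a suggestion rather than part of the documented procedure; the command mirrors the disable command shown above with a value of 1:
# esxcli system settings advanced set -i 1 -o /VMFS3/HardwareAcceleratedLocking
You can verify the change with the list command shown in step 1; the Int Value should read 1 again.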

Reference

3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands
This appendix describes and gives general guidelines for the 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands.
Command usage guidelines
The CLI commands and their arguments are case insensitive, but the values that a command argument takes, such as user name, password, volume name, and host name, are case sensitive.
For usage details about a specific command, use the <command name> -help option.
If any value with a space is to be specified in commands, it needs to be enclosed in double quotes (").
If any value to be specified has double quotes, it can be escaped using a backslash ("\").
If any value to be specified has a backslash ("\") within it, it can be escaped using another backslash ("\\").
Any hyphen ("-"), when specified within the value of a parameter, should also be escaped by using a backslash ("\").
Information about which substeps are under execution will be displayed in the output. The status of the overall command execution (ERROR or SUCCESS) is displayed as the last statement.
All show commands have the -csvtable option, which when used will print delimited output.
All show commands have the -filewrite option, which when used will print delimited output to a .txt file.
Any error executing a command will display as: ERROR: <Error code>:<error message>:<resolution code>:<resolution message>.
For a list of supported host operating systems (OSs) when migrating a host, see the SPOCK website:
Using read-only commands
The default behavior is to display all details with headings in a readable format. If nothing is specified with the commands, the default parameter is -all.
For any entry that is not known, unknown is displayed. If no entry is found in the database, an empty list is displayed.
You cannot modify the sort order.
Any argument that follows -showcols depicts fields to display (columns). The CLI provides output with only those columns that are mentioned in showcols.
Any argument that follows -<parameter> after a space depicts input for that command.
Delimiter: Primarily, a comma (,) is the delimiter used. A semicolon (;) is used as a delimiter within multiple entries for the same field.

342 To filter parameters by column name, use the wildcard entry. Only the "*" wildcard can be used with the values as a filtering option. If the value for an entry is not available, --NA-- or Not Assigned appears. Commands Quick Reference Table 15: 3PAR Peer Motion Utility and 3PAR Online Import Utility commands Command Definition Details Parameters adddestination Adds destination storage system. Adds a destination storage system to the 3PAR Peer Motion Utility or 3PAR Online Import Utility. type name uid mgmtip user password port secure help addsource Adds source storage system. Adds a source storage system to the 3PAR Peer Motion Utility or 3PAR Online Import Utility for migration. type name uid mgmtip user password port secure help Table Continued 342 Commands Quick Reference

Command Definition Details Parameters
createmigration Performs the preparation phase for the migration. Performs the preparation phase of a migration. Data transfer is not triggered. allvolumesincg autoresolve cgvolmap clunodes destcpg destinationuid destprov domain help hostset migtype persona priorityvolmap singlevv sourceuid srchost srcvolmap volmapfile vvset
help Lists common help for all the commands. Displays help for all the commands. help
installcertificate Installs the certificate from specified Storage system. Allows addition of source or destination storage system in secure mode. It fetches and displays certificate details. force help mgmtip port source
removedestination Removes destination Storage system. Removes already added destination storage system and clears all related migrations. help type uid
Table Continued

344 Command Definition Details Parameters removemigration Removes migration. Clear completed or abort prepared migrations. Active/ Incomplete migrations cannot be removed. help migrationid removesource Removes source storage system. Removes already added source storage system and clears all related migrations. help type uid showcluster Displays all supported clusters. Displays all supported clusters for the supported destination array firmware version and os mode. showconnection Lists connected destination or source storage system. Shows the connectivity between all defined source and destination systems. all csvtable destination_name destination_peer_port destination_unique_id filewrite help listcols showcols source_host_port source_name source_unique_id Table Continued 344 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

345 Command Definition Details Parameters showdestination Lists destination storage system. Lists the storage systems that are already added to the 3PAR Peer Motion Utility or 3PAR Online Import Utility, to be treated as the destination of migration job. all csvtable filewrite firmware help listcols management_server name operational_state peer_ports showcols type unique_id showmigration Lists migrations. Lists active migrations in preparation or data transfer phase and migrations that were successful or aborted. By default, all migrations are listed. Status of a migration will show the current state, progress percentage, and failure reason. all csvtable destinationname filewrite help listcols migrationid showcols sourcename status type Table Continued 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 345

346 Command Definition Details Parameters showmigrationdetails Shows details of a specific migration. Shows details at volume level for a given migration. all csvtable consistencygroupname destinationvolume filewrite help migrationid priority showcols sourcevolume taskid showmigrationhosts Lists all hosts selected for the migration. Displays list off all hosts selected for the migration. all csvtable filewrite host listcols showcols type showmigrationhostsd etails Lists all hosts and corresponding volumes selected for the migration. Displays list of all hosts and volumes selected for the migration. all csvtable filewrite host listcols showcols type Table Continued 346 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

347 Command Definition Details Parameters showpersona Displays persona value for host on destination system. Displays all the supported persona for the supporting firmware version and operating system (OS) mode. all csvtable filewrite hostpersona hosttype listcols osversion showcols showpremigration Displays a premigration checklist that lists both common and source storage system-specific prerequisites for performing migrations through the 3PAR Peer Motion Utility or 3PAR Online Import Utility. Lists common prerequisites for data migration from a specific source array using 3PAR Peer Motion Utility or 3PAROnline Import Utility. type help showsource Lists source storage system. Lists the source storage systems that are already added to the 3PAR Peer Motion Utility or 3PAR Online Import Utility, to be treated as source for a migration job. all csvtable filewrite firmware help listcols management_server name operational_state showcols type unique_id Table Continued 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 347

Command Definition Details Parameters
startmigration Starts the data transfer for a migration. Starts the prepared migration or restarts a prepared migration. help migrationid subsetvolmap
updatedestination Updates the destination storage system. Updates a destination storage system that has already been added to the 3PAR Peer Motion Utility or the 3PAR Online Import Utility. type name uid mgmtip user password port secure help
updatesource Updates the source storage system. Updates a source storage system that has already been added to the 3PAR Peer Motion Utility or the 3PAR Online Import Utility. type name uid mgmtip user password port secure help
Any successful execution of a write command appears with SUCCESS in the body of the message. For example:
<message> SUCCESS: Added source storage system.
Command descriptions
adddestination
Syntax
> adddestination -<mgmtip> -[name] -<password> -[port] -[secure {true false}] -[type] -[uid] -<user>

Description
Adds a destination storage system to the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility.
Parameters
mgmtip
(Mandatory) Management port IP address of the HPE 3PAR controller.
name
(Optional) Name of the storage system, or serial number, or 64-bit hyphenated/non-hyphenated WWN. This is required especially when multiple storage systems are managed under the same IP address.
password
(Mandatory) Plain text password to be used to connect to the management application.
port
(Optional) Port number on which the source storage system management application accepts requests to connect and provide source storage system details. If not supplied, the default port number, based on the storage system type, is used.
secure
(Optional) This enables or disables secure channel communication with the source storage system, wherever applicable. Default value will be false or the default used by the source storage system communication layer. Options are: true false
type
(Optional) Storage system family type, such as 3PAR.
uid
(Optional) Unique number that represents a source in the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility.
user
(Mandatory) User name to be used to connect to the management application.
Example command
> adddestination -mgmtip XX.XX.XX.XX -user 3pardst -password 3parhelp
> SUCCESS: Added destination storage system.
addsource
Syntax
> addsource -<mgmtip> -[name] -<password> -[port] -[secure] -<type> -[uid] -<user>

350 Description Adds a source storage system that is to be migrated by the 3PAR Peer Motion Utility or 3PAR Online Import Utility. Parameters mgmtip name (Mandatory) Management IP address of the HPE 3PAR controller or the IP address of the third-party SMI-S server. (Optional) Required when multiple storage systems are managed under the same IP address. Name of the storage system or serial number or 64 bit hyphenated/non-hyphenated WWN to identify the storage system. This is required especially when multiple storage systems are managed under the same IP address. password port (Mandatory) Plain text password used to connect to the source array management application. (Optional) Port number on which the source storage system management application accepts requests to connect and provide source storage system details. If not supplied, the default port number, based on the storage system type, is used. secure type uid (Optional) This parameter enables or disables secure channel communication between the HPE 3PAR Online Import Utility and the source array management application. (Mandatory) Storage system family name: For 3PAR: 3PAR For EMC Storage: VNX CX VMAX DMX4 For HDS Storage: HDS For IBM XIV Storage: XIV (Optional) For EMC Storage: Name of the storage system or serial number or 64-bit hyphenated/nonhyphenated WWN to identify the storage system. For HDS Storage: Five-digit serial number of the HDS Storage source array. For IBM XIV Storage: Seven-digit serial number of the IBM XIV Storage source array PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

351 user (Mandatory) User name for the source array management application. Example command Adding a 3PAR system >addsource -mgmtip user 3parsrc -password 3parpeer -type 3PAR >SUCCESS: Added source storage system. Adding an EMC Storage system > addsource -type VNX -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid xxxxxxxxxxxxxxxx > > SUCCESS: Added source storage system. Adding an HDS Storage system > addsource -type HDS -mgmtip XX.XX.XX.XX -user admin -password adminpw -uid xxxxx > SUCCESS: Added source storage system. createmigration Syntax >createmigration -[allvolumesincg] -[autoresolve] -[cgvolmap] -[clunodes] - <cluster> -[destinationuid] -[hostset] -[migtype {online MDM offline}] - [persona] -[priorityvolmap] -[singlevv] -<sourceuid> -<srcvolmap volmapfile> -<srchost> -<destcpg> -<destprov> -[vvset] Description Performs the preparation phase of a migration. Actual data transfer will not be triggered by this command. The 3PAR Peer Motion Utility supports multiple hosts or host sets in a single migration. Parameters allvolumesincg (Optional) Migrates all volumes specified in srcvolmap consistently. NOTE: Either the allvolumesincg parameter or the cgvolmap parameter can be present at any given point, but not both. autoresolve (Optional) Resolves LUN conflicts automatically. cgvolmap (Optional) Migrates a subset of volumes consistently. NOTE: Either the allvolumesincg parameter or the cgvolmap parameter can be present at any given point, but not both. clunodes (Optional) Number of nodes in cluster. This signifies that the host under consideration for migration is a clustered host. cluster (Mandatory option for cluster-based online migration and MDM.) createmigration 351

352 The -cluster parameter is supported only with the HPE 3PAR Online Import Utility, not with the HPE 3PAR Peer Motion Utility. The -cluster and -persona parameters are mutually exclusive. If you are using the -persona parameter for a cluster-based migration, you must remove it and use instead the -cluster parameter. With the -cluster parameter, the migration is treated as a cluster-based migration. For example: createmigration -sourceuid 2FF70002AC003F8E -srchost 254 -destcpg FC_r1 -destprov thin -migtype online -destinationuid 2FF70002AC001BA0 -cluster Win2008_SFHA destinationuid (Optional) Mandatory if multiple destinations are added in the 3PAR Peer Motion Utility. UID of the destination storage system. Since there is a 1:1 mapping, this is an optional parameter, as the destination will be autoselected. hostset (Optional) Name of the host set that will be created on the destination 3PAR system as a result of migration. All migrated hosts will be members of this host set. This parameter should be provided only in combination with vvset. migtype Defaults to offline Migration type: online MDM offline persona (Mandatory for non-cluster-based online migration and MDM) Sets the host persona type for the host on the destination storage system. Example: createmigration -sourceuid 8F0002AC001BA4 -srchost "hostname" -migtype online -destcpg testcpg -destprov thin -cgvolmap{"values": {"cg1":["showvol1", "vol2","vol3"],"cg2": ["vol4","vol5","vol6"]}} -migtype online 352 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

353 -persona "RHEL_5_6" For valid values of this option, see showpersona. priorityvolmap (Optional) If priority is not specified, "medium" is set as the default priority. The HPE 3PAR Peer Motion Utility allows you to specify a priority at the volume level or the volume set level. If a priority is set for both, the volume level takes precedence over the volume set. The priorityvolmap option can also be used with srchost option to set the priority to volumes exported to a host or host set. singlevv (Optional) Migrates a subset of volumes provisioned to a host. sourceuid (Mandatory) UID of the source array. srcvolmap volmapfile (Mandatory) A map that provides information about the source volumes for migration. A list of volume names that identifies the volume or volume set name should be provided. NOTE: The volume set name must be preceded by set:. It can also have mapping of every source volume to the required provisioning or CPG. This is required if destcpg and destprov are not provided. Example: -srcvolmap[{"source volume path/unique name", desired destination provisioning after migration (Thin/Full/Dedupe), desired destination CPG name},{ }, ] Sample content of volmapfile for EMC Storage: "vdisk4",thin,cpg1 "vdisk2",full,cpg2 "vdisk2",dedupe,cpg2 TestVolume,Thin,Test_CPG Vlun1,Thin,Test_CPG,compress Vlun1,Thin,Test_CPG,compress,high Sample content of volmapfile for HDS Storage: "11:09",thin,cpg1 "11:06",full,cpg2 "11:06",dedupe,cpg2 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 353

354 Vlun1,Thin,Test_CPG,compress Vlun1,Thin,Test_CPG,compress,high Any implicitly defined volumes will get their provisioning type from the value of -destprov, and their CPG from the value of -destcpg. NOTE: Possibilities for 3PAR StoreServ Storage: "vv2",thin "vv3",cpg3 "vv" "vv4",full,cpg4 "vv5",dedupe,cpg5 Vlun1,Thin,Test_CPG,compress Vlun1,Thin,Test_CPG,compress,high Possibilities for EMC Storage: "vdisk",thin "vdisk",cpg "vdisk" "vdisk",thin,cpg "vdisk",dedupe,cpg Vlun1,Thin,Test_CPG,compress Vlun1,Thin,Test_CPG,compress,high If the device ID does not already contain 5 characters, add a " 0 " (zero) in front of the device id to make it 5 characters in length. For an EMC VMAX or DMX4, the <volmap_id> parameter for srcvolmap must be in the form of Volume <device id>, meaning that it must include the word "Volume" in front of the device ID. Also, the <device id> value must be five characters in length. If the actual device ID is less than five characters, add a "0" at the front to make five characters. For example, if the VMAX or DMX4 device ID is 0192: createmigration -sourceuid C012F400 -srcvolmap [{"Volume 00192"}] -destcpg testcpg1 -destprov thin -migtype MDM -persona "RHEL_5_6" Provisioning and CPG entries in the srcvolmap or volmapfile parameters override the default in the destcpg and destprov parameters. srchost (Mandatory, as specified by user) For example, if srchost is used, destcpg and destprov should be used PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

355 Name of the host for which all source volumes should be considered for migration. This should not be there if srcvolmap or volmapfile is there. destcpg (Mandatory, as specified by user) For example, if srchost is used, destcpg and destprov should be used. Name of CPG where the migrated volumes will be made available. destprov (Mandatory, as specified by user) For example, if srchost is used, destcpg and destprov should be used. Provisioning type of the volumes that will be created in destination as a result of migration (thin, full, or dedupe. domain (Optional) Volumes and host that will be created under this domain on the destination 3PAR StoreServ Storage. vvset (Optional) Name of the virtual volume set that will be created on the destination 3PAR StoreServ Storage system. All migrated volumes will be members of this vv set. This parameter should be provided only in combination with hostset for a 3PAR StoreServ Storage system. Restrictions NOTE: After using this command and before start of migration, you should perform unzoning activities and/or rescan of HBA and/or reboot of source host and/or perform any other management jobs as required. For multiple-host migration, all hosts should be from same source array, and all migrations should be of the same supported type. To migrate individual volumes, use the createmigration command with the -srcvolmap and - singlevv options. In the case of comma-separated, multiple-host migration using single createmigration command, ensure that all implicitly selected hosts are of same type. For example, if your are migrating host h1 and h2 in single createmigration operation, and h1 is implicitly linked with h3 and h4, then h1, h3 and h4 must have the same operating system. An entry of destination storage system is created on source storage system. It is treated as a Linux host. Source volumes are made available to this host representing the 3PAR destination. Source host information is captured from source storage system and accordingly created at destination storage system. During the admit phase, a virtual volume corresponding to the volume being migrated from the host is created on the destination 3PAR. If the migration is online, the virtual volume is exported to the source host. If the migration is MDM, the export is deferred until after the host is rebooted. The persona setting for the host definition created at the destination will be identical to the persona setting on the 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 355

356 source system. The virtual volume will be in peer mode. The volume on the destination 3PAR acts as a proxy for I/O to the source volume during the admit phase. If a source volume is in an implicit relationship with other source volumes, these volumes are autoselected, and applicable source volume CPG and provisioning type are applied to these implicit selections as well. The RAID level at the destination, once migrated, will be dependent on the CPG chosen and can be different from the RAID level in the source system. NOTE: If a destination storage system has dedupe capability, you can reduce its storage space by eliminating redundant data during migration at the 3PAR destination. To do this in the createmigration command, specify the destprov parameter as dedupe. You can then specify those CPGs that support the dedupe feature as destcpg. You can trigger multiple migrations at a time for each source and destination pair. You can migrate more than one host using a single createmigration command. The createmigration command may take several minutes to complete. The volumes or host specified in the createmigration command are mapped to a storage group on the associated source storage system. All LUNs and all hosts in the mapped storage group will be migrated even if only a subset are entered in the createmigration command. With the HPE 3PAR Online Import Utility, either the -cluster or -persona option is mandatory. Example command This example command uses the srchost with destcpg, destprov, migtype, and persona parameters. > createmigration -sourceuid XXXXXXXXXXXXXXXX -srchost "HPDL585-01" -destcpg FC_r5 -destprov thin -migtype MDM -persona "RHEL_5_6" > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: This example command uses the srcvolmap with destcpg, destprov, cgvolmap, and persona parameters. > createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg},{vol2,thin,testcpg},{vol3,thin,testcpg}]" -destcpg testcpg -destprov testcpg -cgvolmap{"values":{"cg1":["vol1","vol2"]}} - persona "RHEL_5_6" This example command uses the srchost with compress, destcpg, destprov, cgvolmap and persona parameters. > createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg,compress}, {vol2,thin,testcpg,compress},{vol3,thin,testcpg,compress}]" -destcpg testcpg -destprov testcpg - cgvolmap{"values":{"cg1":["vol1","vol2"]}} -persona "RHEL_5_6" This example command uses the srchost with compress, priority,destcpg, destprov, cgvolmap, and persona parameters. > createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap "[{vol1,thin,testcpg,compress,high}, {vol2,thin,testcpg,compress,low},{vol3,thin,testcpg,,medium}]" -destcpg testcpg -destprov testcpg - cgvolmap{"values":{"cg1":["vol1","vol2"]}} -persona "RHEL_5_6" 356 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

357 This example command uses the srchost with destcpg, destprov, migtype, and persona parameters. > createmigration -sourceuid srchost "HPDL585-01" -destcpg FC_r5 -destprov thin -migtype MDM -persona "RHEL_5_6" > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: This example command uses the srchost with destcpg, destprov, migtype, persona, and domain parameters. > createmigration -sourceuid srchost "HPDL585-01" -destcpg FC_r5 - destprov thin -domain "domainname" -migtype online -persona "RHEL_5_6" This example command uses the srcvolmap with destcpg, destprov, cgvolmap, and persona parameters. > createmigration -sourceuid srcvolmap "[{"00:03:01",thin,testcpg}, {"00:03:02",thin,testcpg},{"00:03:03",thin,testcpg}]" -destcpg testcpg - destprov testcpg -cgvolmap{"values":{"cg1":["00:03:01","00:03:02"]}} - persona "RHEL_5_6" To create a migration with mixed provisioning (for example, with a single LUN or LDEV to be thin, but others to be fully provisioned), issue the following command: LUN EMC 1 thin, other LUNs fully provisioned > createmigration -sourceuid XXXXXXXXXXXXXXXX -srcvolmap [{"EMC_1",thin,FC_r5}] -destcpg FC_r5 -destprov full -migtype MDM -persona "WINDOWS_2008_R2" > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: LDEV 00:1A:34 thin, other LDEVs fully provisioned > createmigration -sourceuid srcvolmap [{"00:1A:34",thin,FC_r5}] - destcpg FC_r5 -destprov full -migtype MDM -persona "WINDOWS_2008_R2" > SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: Migrating to multiple destinations using the destinationuid parameter. > createmigration -sourceuid XXX srchost win_host1 -destcpg FC_r1 -destprov thin -migtype mdm -destinationuid XX Migrating a host to a Windows Server 2008 cluster. > createmigration -sourceuid YYY srchost 254 -destcpg FC_r1 - destprov thin -migtype online -destinationuid XXXX cluster Win2008_SFHA >SUCCESS: Migration job submitted successfully. Please check status/details using showmigration command. Migration id: Migrating multiple hosts using a single createmigration parameter. > createmigration -sourceuid xxxxxx x -srchost host1,host2 -destcpg SSD_R5 -destprov thin -migtype online Migrating one of the exported volumes from each host using the singlevv parameter. In this example, volume V1 is exported to host 1 and volume V2 is exported to host 2. createmigration -sourceuid srcvolmap [{V1,full,cpg1}, {V2,thin,cpg2}] -destprov full -destcpg xxxx -migtype online -singlevv 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 357

Migrating multiple hosts using the priorityvolmap parameter.
> createmigration -sourceuid xxxxxxx -srchost "host1,host2" -migtype online -priorityvolmap {"values":{"low":["v1_1.0"],"high":["V3_1.0"]}} -destcpg FC_r1 -destprov thin
Migrating multiple hosts using the cgvolmap parameter. In this example, vol.0 and vol.1 are exported to host1, and vol.2 and vol.3 are exported to host2.
> createmigration -sourceuid xxxxxxx -srchost "host1,host2" -migtype online -cgvolmap {"values":{"cg1":["vol.0","vol.1","vol.2","vol.3"]}} -destcpg FC_r1 -destprov thin
help
Syntax
-help
Description
Global help command for the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility CLI commands. Displays a list of available HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility CLI commands and short descriptions. Help on a specific <command> can also be obtained by typing <command> -help.
Parameters
None
adddestination - Adds a destination storage system that is intended to be migrated.
addsource - Adds a source storage system that is intended to be migrated.
createmigration - Performs the preparation phase of a migration. Actual data transfer will not be triggered by this command.
help - Global Help Command for CLI.
installcertificate - Installs certificate from given server IP.
removedestination - Removes already added destination storage system and clears all related migrations.
removemigration - Removes the migration identified by the Migration ID specified.
removesource - Removes already added source storage system and clears all related migrations.
showcluster - Displays all supported clusters for the supported destination array firmware version and os mode.
showconnection - This command displays connection between all the configured source and destination storage systems. -source or -destination can't be used together.
showdestination - Lists the destination storage systems that are already added to be treated as destination for a migration job. It shows an empty table if no destination storage systems are found.
showmigration - Displays the list of available migrations.
showmigrationdetails - Displays the list of volumes associated with the specified migration and their corresponding status.
showmigrationhosts - Displays the list of hosts associated with migrations.
showmigrationhostsdetails - Displays the list of status, hosts, and volumes associated with migrations.

359 showpersona showpremigration showsource startmigration updatedestination updatesource associated with migrations. - Displays all supported personas for the supporting firmware version and os mode. - Displays a premigration checklist. - Lists the source storage systems that are already added to be treated as source for a migration job. It shows an empty table if no source storage systems found. - Starts or restarts a prepared migration. - Updates a destination storage system that is intended to be migrated. - Updates a source storage system that is intended to be migrated. Example input Help on a specific command can also be obtained by typing <command> -help. help -help installcertificate Syntax For the HPE 3PAR Peer Motion Utility > installcertificate -<mgmtip> -[force] -<source> -[port] For the HPE 3PAR Online Import Utility > installcertificate -<mgmtip> -[force] -[source] -[port] Description Installs certificates from the specified storage system. This command installs a certificate to enable secure communication with the source and/or destination 3PAR StoreServ Storage system. The command fetches and displays certificate details for the specified array. Parameters force (Optional) Use this parameter to skip displaying the certificate details. mgmtip port (Mandatory) IP address of the management station to connect to in order to get details. (Optional) Port number on which the management application accepts a request to connect and provide source storage system details. If not found, the default port based on the storage system type will be used. source Mandatory (for the HPE 3PAR Peer Motion Utility; use 3PAR as the source type) Optional (for the HPE 3PAR Online Import Utility) Use this parameter to determine whether the certificate is installed for the source array or the destination array. If used, this parameter requires an argument to decide the type of array. installcertificate 359

360 Usage NOTE: Follow the screen instructions to accept or reject the certificate. If you accept the certificate, the certificate downloads in the InFormMC/security folder of the user directory and then installs. After the certificate is installed successfully, you can add the storage system in secure mode. By default, the command is interactive and cannot be used for scripting. In the scripting mode, you can choose the force option to render the confirmation certificate screen as non-interactive. Thereafter, the certificate will not appear, and the prompt to confirm or reject the certificate will not appear. Example command This example command installs a certificate. > installcertificate -mgmtip XX.XX.XX.XX Certificate details: Issue by: InServ F Issue to: InServ F Valid from: 11/26/2014 Valid to: 11/25/2017 SHA-1 Fingerprint: 29:31:F2:D8:D2:36:C5:3B:DD:7B:18:9F:48:2B:FC:39:27:63:07:0F Version: v3 Do you accept the certificate? Y/N > Y SUCCESS: Installed certificate successfully. This example command installs a certificate using the -force parameters. > installcertificate -mgmtip XX.XX.XX.XX -force Certificate details: Issue by: InServ F Issue to: InServ F Valid from: 11/26/2014 Valid to: 11/25/2017 SHA-1 Fingerprint: 29:31:F2:D8:D2:36:C5:3B:DD:7B:18:9F:48:2B:FC:39:27:63:07:0F Version: v3 Do you accept the certificate? Y/N > Y SUCCESS: Installed certificate successfully. This example command installs a certificate using the -force and -source parameters. > installcertificate -mgmtip XX.XX.XX.XX -force -source 3par Certificate details: Issue by: InServ F Issue to: InServ F Valid from: 11/26/2014 Valid to: 11/25/2017 SHA-1 Fingerprint: 29:31:F2:D8:D2:36:C5:3B:DD:7B:18:9F:48:2B:FC:39:27:63:07:0F Version: v3 Do you accept the certificate? Y/N > Y SUCCESS: Installed certificate successfully PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

removedestination
Syntax
removedestination -<uid> -[type]
Description
Removes an already added destination storage system and clears all related migrations.
NOTE: When the removedestination command is used, historic information about previous completed migrations related to the source and destination storage systems will be removed.
Parameters
type (Optional) Type of destination storage system.
uid (Mandatory) Unique number that represents a destination in the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility.
Example command
> removedestination -uid 2FF70002AC003F8E
> SUCCESS: Removed destination storage system.
removemigration
Syntax
> removemigration -<migrationid>
Description
Clears completed migrations or aborts prepared migrations. Active or incomplete migrations cannot be removed.
Parameters
migrationid (Mandatory) Number that represents a migration in the completed or aborted state.
Example command
> removemigration -migrationid
> SUCCESS: The specified migration is successfully removed.
removesource
Syntax
> removesource -<type> -<uid>
Description
Removes already added source storage systems. A source cannot be removed if there is an active migration.

362 Parameters type uid (Mandatory) Type of storage system: For 3PAR StoreServ Storage: 3PAR For EMC Storage: VNX CX VMAX DMX4 For HDS Storage: HDS (Mandatory) Unique number that represents a source storage system in the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility. Restrictions If the state of the source storage system is: No migration created: The removesource command deletes the source storage system from the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility. Active migration (prepared or data transfer in progress): The removesource command yields an output error. Completed migrations: The removesource command removes already added source storage system and clears all related migrations. NOTE: When the removesource command is used, historic information about previous completed migrations related to the source storage systems will be removed. Example command The removesource command for 3PAR StoreServ Storage > removesource -uid XXXXXXXXXXXXXXXX -type 3PAR >SUCCESS: Removed source storage system. The removesource command for EMC Storage source array. > removesource -uid BEE0177F -type VNX The removesource command for HDS Storage source array. > removesource -uid type HDS 362 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands
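A source with an active migration cannot be removed, so it can help to confirm first that nothing is still running against it. The following is a minimal sketch that reuses the EMC source name and UID from the examples above as placeholders; substitute your own values.
> showmigration -source_name CLARiiON+APM
> removesource -uid BEE0177F -type VNX
> SUCCESS: Removed source storage system.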

showcluster
Syntax
> showcluster
Description
Displays all the supported clusters for the supporting firmware version and operating system (OS) mode.
Parameters
SOURCE_HOST_TYPE    OS_VERSION    DESTINATION_HOST_PERSONA    ALUA_SUPPORT
windows2012    >=3.2.2    Win2012_SFHA or Win2012_MSFC    NON_ALUA
ibmaix    >=3.2.2    AIX_Power_HA    NON_ALUA
hpux    >=3.2.2    HPUX_SFHA or HPUX_SG    NON_ALUA
windows2008    >=3.2.2    Win2008_SFHA or Win2008_MSFC    NON_ALUA
linux    >=3.2.2    RHEL_Native_Cluster or Linux_Oracle_RAC    NON_ALUA
linux    >=3.2.2    RHEL_SFHA    NON_ALUA
linux    >=3.2.2    SUSE_Native_Cluster    NON_ALUA
linux    >=3.2.2    SUSE_SFHA    NON_ALUA
vmware    >=3.2.2    ESX_Native_Cluster    NON_ALUA
mswindows    >=3.2.2    Win2003_MSCS or Win2003_MSFC    NON_ALUA
windows2012    >=3.1.3    Win2012_SFHA or Win2012_MSFC    NON_ALUA
ibmaix    >=3.1.3    AIX_Power_HA    NON_ALUA
hpux    >=3.1.3    HPUX_SFHA or HPUX_SG    NON_ALUA
windows2008    >=3.1.3    Win2008_SFHA or Win2008_MSFC    NON_ALUA
vmware    >=3.1.3    ESX_Native_Cluster    NON_ALUA
linux    >=3.1.3    RHEL_Native_Cluster or Linux_Oracle_RAC    NON_ALUA
linux    >=3.1.3    RHEL_SFHA    NON_ALUA
linux    >=3.1.3    SUSE_Native_Cluster    NON_ALUA
linux    >=3.1.3    SUSE_SFHA    NON_ALUA
mswindows    >=3.1.3    Win2003_MSCS or Win2003_MSFC    NON_ALUA
windows2008    <=3.1.2    WINDOWS_2012    NON_ALUA
ibmaix    <=3.1.2    AIX_Power_HA    NON_ALUA
hpux    <=3.1.2    HPUX_SG or HPUX_SFHA    NON_ALUA
windows2008    <=3.1.2    Win2008_SFHA or Win2008_MSFC    NON_ALUA
vmware    <=3.1.2    ESX_Native_Cluster    NON_ALUA
linux    <=3.1.2    RHEL_Native_Cluster or Linux_Oracle_RAC    NON_ALUA
linux    <=3.1.2    SUSE_SFHA    NON_ALUA
mswindows    <=3.1.2    Win2003_MSCS or Win2003_SFHA    NON_ALUA
showconnection
Syntax
> showconnection -[all] -[csvtable] -[destination] -[destination_unique_id] -[filewrite] -[source] -[source_unique_id] -[showcols] -[listcols] -[source_name] -[destination_name] -[source_host_port] -[destination_peer_port]
Description
Lists connected destination and source storage systems. By default, the showconnection command displays a map of all connected source and destination storage systems. There should be 1:1 mapping between the source and destination storage systems over the peer port connections. Given a source storage system, this command displays all the configured destination storage systems it can see and vice versa.
Parameters
all (Optional) Displays all details with headings.
csvtable (Optional) This parameter can be used to print delimited output.
destination (Optional) Destination UID. Displays the connection between this destination and the source storage systems connected to it.

365 NOTE: The -source and -destination parameters cannot be used together in a single command. destination_unique_id (Optional) UID of the destination storage system. Displays all peer connections for the given destination storage system. filewrite (Optional) Redirects the output of the command to a file. source (Optional) Source UID. Displays the connection between this source and the destination storage systems connected to it. NOTE: The -source and -destination parameters cannot be used together in a single command. source_unique_id (Optional) UID of the source storage system. Displays all peer connections for a given source storage system. showcols (Optional) Any argument that follows showcols depicts fields (columns) to display. Accepts a comma-separated list of column names. listcols (Optional) Display the list of column names applicable to the command. These column names can be used for filtering using the showcols command. source_name (Optional) Name of the source storage system. destination_name (Optional) Name of the destination storage system. source_host_port (Optional) Host port WWPN (name) of the source which is connected to a corresponding peer port. destination_peer_port (Optional) Peer port WWPN (WWNN) (N:S:P) to which the source is connected. Example command Example of the showconnection command for EMC Storage. > showconnection -source BEE0177F SOURCE_NAME SOURCE_UNIQUE_ID DESTINATION_NAME DESTINATION_UNIQUE_ID DESTINATION_PEER_PORT SOURCE_HOST_PORT CLARiiON+APM BEE0177F 3PAR_Array_1 2FF70002DY005F DY00-5F92(0:2:4) EE0-177F(Port SP_B:0) CLARiiON+APM BEE0177F 3PAR_Array_1 2FF70002DY005F DY00-5F92(0:2:4) EE0-177F(Port SP_A:0) Example of the showconnection command for HDS Storage. > showconnection -source SOURCE_NAME SOURCE_UNIQUE_ID DESTINATION_NAME DESTINATION_UNIQUE_ID DESTINATION_PEER_PORT SOURCE_HOST_PORT 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 365

366 USP_V PAR_Array_1 2FF70002DY005F DY00-5F92(0:2:4) E E62(CL7-C) USP_V PAR_Array_1 2FF70002DY005F DY00-5F92(0:2:4) E E72(CL8-C) showdestination Syntax > showdestination -[all] -[csvtable] -[filewrite] -[firmware] -[listcols] - [management_server] -[name] -[operational_state] -[peer_ports] -[showcols] - [type] -[unique_id] Description Lists the storage systems that are already added to the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility, to be treated as the destination of a migration job. The command displays an empty table if no destination storage systems are found. Parameters all (Optional) Displays all details with headings. csvtable (Optional) This parameter can be used to print delimited output. filewrite (Optional) Redirects the output of the command to a file. firmware (Optional) Firmware version. listcols (Optional) Displays the list of column names applicable to the command. management_server name (Optional) The IP address/fqdn of the management server that manages this storage system. (Optional) Name of the storage system. operational_state (Optional) Operational state of the storage system: Good Failed Attention peer_ports (Optional) List of host ports hyphenated 64 bit WWN (N:S:P). Peer ports will be marked as peer showcols (Optional) Any argument that follows showcols depicts fields (columns) to display. Accepts a comma-separated list of column names. 366 showdestination

367 type (Optional) This is storage system family name (for example, 3PAR). unique_id (Optional) 3PAR StoreServ Storage: Serial number or WWN or hyphenated WWN. EMC Storage: Serial number or WWN or hyphenated WWN of the destination EMC Storage system. HDS Storage: Five-digit serial number of the destination HDS Storage system. Example command > showdestination NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE PEER_PORTS DMMT3PAR02 3PAR 2FF70002AC003F9C Normal AC00-3F9C(2:1:1) AC00-3F9C(3:1:1) showmigration Syntax > showmigration -[all] -[csvtable] -[destination_name] -[end_time] - [filewrite] -[listcols] -[migrationid] -[showcols] -[source_name] - [start_time] -[status] -[type] Description Lists active migrations in preparation or data transfer phase as well as the ones that are successful or aborted. By default, all migrations are listed. The status of a migration will show the current state as well as the progress percentage whenever applicable and failure reason whenever applicable. Parameters all (Optional) Displays all details with headings. csvtable (Optional) This parameter can be used to print delimited output. destination_name (Optional) Destination storage system name. Displays migrations created between this destination and source storage systems connected to it. end_time (Optional) Displays the time at which the migration was successfully completed. filewrite (Optional) Redirects the output of the command to a file. listcols (Optional) Displays the list of column names applicable to the command. migrationid (Optional) Unique number that identifies a migration after createmigration is complete. If this is specified, there is no need to provide source and destination details. showmigration 367

368 showcols (Optional) Any argument that follows showcols depicts field to display (columns). Accepts a comma separated list of column names. source_name (Optional) Source storage system name. Displays migrations created between this source and destination storage systems connected to it. start_time (Optional) Displays the time at which the migration was added using the createmigration command. status type (Optional) Displays migrations that are in the specified state. (Optional) Displays all migrations that are of the type specified: Online MDM Offline Usage NOTE: If the showmigration command displays the migration status as failed, at least one volume migration has failed. To get details for individual volumes, use the showmigrationdetails command. To restart the failed migration, use the startmigration command again with the same migration ID. Example command Example of the showmigration command for 3PAR StoreServ Storage. > showmigration >MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME END_TIME STATUS(PROGRESS)(MESSAGE) > offline PMM3PAR DMMT3PAR02 Sat Apr 12 00:52:38 IST NA- preparationcomplete(100%)(-na-) Example of the showmigration command for EMC Storage. > showmigration MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME END_TIME STATUS(PROGRESS)(MESSAGE) offline CLARiiON+APM DMMT3PAR02 Sat Apr 12 00:52:38 IST NA- preparationcomplete(100%)(-na-) Example of the showmigration command for HDS Storage. > showmigration MIGRATION_ID TYPE SOURCE_NAME DESTINATION_NAME START_TIME END_TIME STATUS(PROGRESS)(MESSAGE) offline USP_V DMMT3PAR02 Sat Apr 12 00:52:38 IST NA- preparationcomplete(100%)(-na-) 368 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands
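As the usage note above indicates, a failed status means that at least one volume import has failed. A typical follow-up, with the migration ID shown as a placeholder, is to inspect the individual volumes and then restart the same migration:
> showmigrationdetails -migrationid <migration_ID>
> startmigration -migrationid <migration_ID>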

369 showmigrationdetails Syntax > showmigrationdetails -[all] -[csvtable] -[consistencygroupname] - [destination_volume] -[filewrite] -<migrationid> -[priority] -[progress] - [showcols] -[source_volume] -[task_id] Description Use the showmigrationdetails command to view details at volume level for a specific migration. Task ID is the corresponding 3PAR OS task ID, if found. Parameters all (Optional) Displays all details with headings. csvtable (Optional) This parameter can be used to print delimited output. consistencygroupname (Optional) Displays the consistency group name of the volumes during migration. destination_volume (Optional) Destination volume name. Displays information corresponding to this volume filewrite (Optional) Redirects the output of the command to a file. migrationid (Mandatory) Unique ID that gets generated after the createmigration operation. Displays the migration corresponding to the ID specified. priority (Optional) Displays the priority of the volumes. Even when the priority is set at the vvset level, the showmigrationdetails command shows the priority of the volumes in the vvset. progress (Optional) Displays the migration status (complete, running, failed) of individual volumes. showcols (Optional) Any argument that follows showcols depicts fields to display (columns). Accepts a comma separated list of column names. source_volume (Optional) Source volume name. Displays information corresponding to this volume task_id (Optional) Integer number for the ID of the import task being performed on the destination 3PAR StoreServ Storage. showmigrationdetails 369

370 Example command Example of the showmigration command for 3PAR StoreServ Storage > showmigrationdetails -migrationid SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS PRIORITY CONSISTENCYGROUPNAME test test 6134 Completed HIGH Not Assigned Example of the showmigration command for EMC Storage > showmigrationdetails -migrationid SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS PRIORITY CONSISTENCYGROUPNAME test test 6134 Completed HIGH Not Assigned Example of the showmigration command for HDS Storage > showmigrationdetails -migrationid SOURCE_VOLUME DESTINATION_VOLUME TASK_ID PROGRESS PRIORITY CONSISTENCYGROUPNAME 00:07:00 00_07_ Completed HIGH Not Assigned showmigrationhosts Syntax > showmigrationhosts -[all] -[csvtable] -[filewrite] -[host] -[listcols] -[migrationid] -[showcols] -[type] Description Lists the hosts associated with all or specified migrations in the OIU database. Parameters all (Optional) Displays all details with headings. csvtable (Optional) Used to print delimited output. filewrite host (Optional) Redirects the output of the command to a file. (Optional) The host name for the migration details to be displayed. listcols (Optional) Displays the list of column names applicable to the command. migrationid (Optional) Unique ID that gets generated after CreateMigration operation. Displays the migration corresponding to the ID specified. showcols (Optional) Specifies which columns to display. Accepts a comma-separated list of column names after -showcols. 370 showmigrationhosts

371 type (Optional) Displays all migrations of the type specified; values include online and MDM. Example command > showmigrationhosts MIGRATIONID HOST test-host showpersona Description Displays the supported destination HPE 3PAR host entry persona values for all the specified host operating systems. Parameters SOURCE_HOST_TYPE OS_VERSION DESTINATION_HOST_PERS ONA ALUA_SUPPORT windows2012 >=3.1.2 WINDOWS_2012 ALUA sunsolaris >=3.1.2 SOLARIS_9_10 NON_ALUA sunsolaris >=3.1.2 SOLARIS_11 ALUA ibmaix >=3.1.2 AIX NON_ALUA hpux >=3.1.2 HPUX NON_ALUA windows2008 >=3.1.2 WINDOWS_2008_R2 ALUA vmware >=3.1.2 ESX_4_5 ALUA linux (for RHEL 5, 6, 7) >=3.1.2 RHEL NON_ALUA linux (for SLES) >=3.1.2 SUSE NON_ALUA openvms >=3.1.2 OPENVMS NON_ALUA mswindows >=3.1.2 WINDOWS_2003 NON_ALUA windows2012 >=3.2.2 WINDOWS_2012_R2 ALUA sunsolaris >=3.2.2 SOLARIS_9_10 NON_ALUA sunsolaris >=3.2.2 SOLARIS_11 ALUA ibmaix >=3.2.2 AIX NON_ALUA hpux >=3.2.2 HPUX_11_v1_v2 NON_ALUA hpux >=3.2.2 HPUX_11_v3 ALUA Table Continued showpersona 371

372 windows2008 >=3.2.2 WINDOWS_2008_R2 ALUA vmware >=3.2.2 VMWARE_ESXI ALUA linux (for RHEL) >=3.2.2 RHEL_5_6 ALUA linux (for SLES) >=3.2.2 SUSE_10_11 ALUA openvms >=3.2.2 OPENVMS NON_ALUA mswindows >=3.2.2 WINDOWS_2003 NON_ALUA windows2012 >=3.1.3 WINDOWS_2012_R2 ALUA sunsolaris >=3.1.3 SOLARIS_9_10 NON_ALUA sunsolaris >=3.1.3 SOLARIS_11 ALUA ibmaix >=3.1.3 AIX NON_ALUA hpux >=3.1.3 HPUX_11_v1_v2 NON_ALUA hpux >=3.1.3 HPUX_11_v3 ALUA windows2008 >=3.1.3 WINDOWS_2008_R2 ALUA vmware >=3.1.3 ESX_4_5 ALUA linux (for RHEL) >=3.1.3 RHEL_5_6 ALUA linux (for SLES) >=3.1.3 SUSE_10_11 ALUA openvms >=3.1.3 OPENVMS NON_ALUA mswindows >=3.1.3 WINDOWS_2003 NON_ALUA showpremigration Syntax > showpremigration [-type <type>] [-help] Description This command displays a premigration checklist that lists both common and source storage systemspecific prerequisites for performing migrations through the 3PAR Peer Motion Utility or 3PAR Online Import Utility. Parameters type (Optional) Storage system family name: 372 showpremigration

373 For 3PAR: 3PAR For EMC Storage: EMC VNX CX VMAX DMX For HDS Storage: HDS For IBM XIV Storage: XIV IBM help Describes the usage of showpremigration command. Example command > showpremigration type EMC showsource Syntax > showsource -[all] -[csvtable] -[filewrite] -[firmware] -[listcols] - [management_server] -[name] -[operational_state] -[showcols]-[type] - [unique_id] Description Lists the source storage systems that are already added to the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility, to be treated as source for a migration job. Parameters all (Optional) Displays all details with headings. csvtable (Optional) This parameter can be used to print delimited output. filewrite (Optional) Redirects the output of the command to a file. firmware (Optional) Firmware version of the source storage system. showsource 373

374 listcols (Optional) Displays the list of column names applicable to the command. management_server name (Optional) IP address of the controller management application. (Optional) Storage system family or serial number. operational_state (Optional) Operational state of the storage system. showcols type (Optional) Any argument that follows showcols depicts fields to display (columns). Accepts a comma separated list of column names. (Optional) Storage system family type: For 3PAR StoreServ Storage: 3PAR For EMC Storage: CX VNX VMAX DMX4 For HDS Storage: HDS unique_id (Optional) Controller UID Example command The showsource command for EMC Storage system > showsource -type VNX NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE CLARiiON+APM VNX BEE0177F XX.XX.XX.XX Good The showsource command for HDS Storage system > showsource -type HDS NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE USP_V HDS XX.XX.XX.XX Good startmigration Syntax startmigration -<migrationid> -[subsetvolmap] Description Starts or restarts a prepared migration to go through data transfer phase of migration. The data from source volumes is moved to destination volume over the peer ports. 374 startmigration

Parameters
migrationid (Mandatory) Unique number representing the migration that is in prepared or incomplete state.
subsetvolmap (Optional) Accepts the list of volumes for which data transfer will happen. Volumes that are part of the subsetvolmap should be part of the migration.
Usage
The host representing the destination storage system that was created during the migration preparation stage will remain at the source storage system. For another migration, this host will be reused. When you are finished with all the migrations related to the host, you may choose to delete this host.
To support migration of a subset of volumes, use the subsetvolmap parameter with the startmigration command. Note that the dash before the subsetvolmap parameter is required; without it, all LUNs are migrated rather than the specified subset.
NOTE:
More than one subset migration can be triggered simultaneously using the startmigration command. Subset migration cannot be triggered once the data transfer is initiated for all volumes in the migration using the startmigration command.
For MDM, the host export will take place for all the volumes, even if the startmigration command is triggered for a subset of volumes.
A new migration cannot be triggered until migration of all the ongoing volumes is complete. The peer host will be deleted only after all volumes are migrated.
Example command
Example of the startmigration command.
> startmigration -migrationid
SUCCESS: Data transfer started successfully.
Example of the startmigration command using the subsetvolmap parameter.
> startmigration -migrationid -subsetvolmap {V1,V2,V3}
SUCCESS: Data transfer started successfully.
updatedestination
Syntax
> updatedestination -<mgmtip> -<user> -<password> -[port] -[name] -[secure {true false}] -[type] -[uid]
Description
Updates an already added destination storage system in the HPE 3PAR Peer Motion Utility or the HPE 3PAR Online Import Utility.

376 Parameters mgmtip (Mandatory) Management port IP address of the HPE 3PAR controller or the IP address of the thirdparty SMI-S server. name (Optional) Name of the storage system, or serial number, or 64-bit hyphenated/non-hyphenated WWN. This is required especially when multiple storage systems are managed under same IP address. password port (Mandatory) Plain text password to be used to connect to the management application. (Optional) Port number on which the management application accepts the request to connect and provide source storage system details. If not supplied, the default port number, based on the storage system type, is used. secure type uid user (Optional) This enables or disables secure channel communication with the source storage system, wherever applicable. Default value will be true or the default used by the source storage system communication layer. Options are: true false (Optional) Storage system family type, such as 3PAR. (Optional) Unique number that represents a source in the HPE 3PAR Peer Motion Utility or HPE 3PAR Online Import Utility. (Mandatory) User name to be used to connect to the storage system. Usage To update the destination storage system, follow these steps: 1. From the 3PAR Online Import Utility, issue the updatedestination command. Where XX.XX.XX.XX is the HPE 3PAR management port IP address. See the updatedestination example. If a certificate validation error occurs on the updatedestination command, (see the certificate validation error example), first run the installcertificate command (see the installcertificate example), and then run the updatedestination command again. 2. Issue the showdestination command to verify the destination storage system information. See showdestination example PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

377 Example command This example command updates the destination storage system. > updatedestination -mgmtip XX.XX.XX.XX -user 3paradm -password Password > SUCCESS: Updated destination storage system This example shows a certificate validation error. > updatedestinaton -mgmtip xx.x.xx.xx -user 3paradm -password ******* -port 5783 > ERROR: OIUERRDST0010 Unable to validate certificate for HP 3PAR Storage System. C:\\InFormMC\security\HP-3PAR-MC-TrustStore This example command installs a certificate. > installcertificate -mgmtip xx.xx.xx.xx TParCertifacteVO [issuedto=hp 3PAR HP_3PAR , commonname=null, issuedbyorganization=null, issuedtoorganization=null, serialno=null, issedby=hp 3PAR HP_3PAR , fingerprint=89:e5:d0:13:6f:d1:07:80:70:76:5c:fe:5b:65:e5:54:c0:18:21:2f, signaturealgo=sha1withrsa, version=v1,validfrom=08/14/2014, validto=08/11/2024. issuedon=null, expireson=null, validdaterange=true] Do you accept the certificate? Y/YES Y > SUCCESS: Installed certificate successfully. This example uses the showdestination command to verify the destination storage system information. > showdestination NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE PEER_PORTS 3par_7200_DCB_01 3PAR 2FF70002AC005F (MU3) XX.XX.XX.XX Normal AC00-5F91(0:2:1) AC00-5F91(1:2:1) updatesource Syntax > updatesource -<mgmtip> -<user> -<uid> -<password> -[name] -[port] -[secure] -[type] Description Updates an already added source storage system in the HPE 3PAR Peer Motion Utility or the HPE 3PAR Online Import Utility Parameters mgmtip (Mandatory) Management port IP address of the HPE 3PAR controller. name (Optional) Name of the storage system, or serial number, or 64-bit hyphenated/non-hyphenated WWN. This is required especially when multiple storage systems are managed under same IP address. password (Mandatory) Plain text password to be used to connect to the management application. updatesource 377

378 port (Optional) Port number on which the management application accepts the request to connect and provide source storage system details. If not supplied, the default port number, based on the storage system type, is used. secure type uid user (Optional) This enables or disables secure channel communication with the source storage system, wherever applicable. Default value will be true or the default used by the source storage system communication layer. Options are: true false (Optional) Storage system family type: For 3PAR StoreServ Storage: 3PAR EMC Storage: CX VNX VMAX DMX4 HDS Storage: HDS (Mandatory) For EMC Storage: Name of the storage system or serial number or 64-bit hyphenated/nonhyphenated WWN to identify the storage system. For HDS Storage: Five-digit serial number of the HDS Storage source array. (Mandatory) User name to be used to connect to the storage system. Usage To update the source storage system, follow these steps: 1. From the HPE 3PAR Online Import Utility, issue the updatesource command. See the updatesource command example. If a certificate validation error occurs on the updatesource command, (see the certificate validation error example), first run the installcertificate command (see the installcertificate example), and then run the updatesource command again. 2. Issue the showsource command to verify the destination storage system information. See the showsource command example PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands

379 Example command This example command updates the source storage system. > updatesource -mgmtip XX.XX.XX.XX -user 3paradm -password Password > SUCCESS: Updated source storage system This example shows a certificate validation error. > updatesource -mgmtip xx.x.xx.xx -user 3paradm -password ******* -port 5783 > ERROR: OIUERRDST0010 Unable to validate certificate for HP 3PAR Storage System. C:\\InFormMC\security\HP-3PAR-MC-TrustStore This example command installs a certificate. > installcertificate -mgmtip xx.xx.xx.xx TParCertifacteVO [issuedto=hp 3PAR HP_3PAR , commonname=null, issuedbyorganization=null, issuedtoorganization=null, serialno=null, issedby=hp 3PAR HP_3PAR , fingerprint=89:e5:d0:13:6f:d1:07:80:70:76:5c:fe:5b:65:e5:54:c0:18:21:2f, signaturealgo=sha1withrsa, version=v1,validfrom=08/14/2014, validto=08/11/2024. issuedon=null, expireson=null, validdaterange=true] Do you accept the certificate? Y/YES Y > SUCCESS: Installed certificate successfully. This example uses the showsource command to verify the destination storage system information. > showsource NAME TYPE UNIQUE_ID FIRMWARE MANAGEMENT_SERVER OPERATIONAL_STATE PEER_PORTS 3par_7200_DCB_01 3PAR 2FF70002AC005F (MU3) XX.XX.XX.XX Normal AC00-5F91(0:2:1) AC00-5F91(1:2:1) 3PAR Peer Motion Utility and 3PAR Online Import Utility CLI commands 379
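Taken together, the commands in this chapter follow a consistent sequence once certificates are installed and the source and destination systems have been added (addsource and adddestination). The sketch below is illustrative only; the UID, host name, CPG, persona, and migration ID are placeholders, and the exact parameters depend on your source array type and on whether you use the HPE 3PAR Peer Motion Utility or the HPE 3PAR Online Import Utility (see the individual command sections above).
> createmigration -sourceuid <source_UID> -srchost <host_name> -destcpg <CPG> -destprov thin -migtype online -persona <persona>
> showmigration -migrationid <migration_ID>
> startmigration -migrationid <migration_ID>
> showmigrationdetails -migrationid <migration_ID>
> removemigration -migrationid <migration_ID>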

Data migration with 3PAR Remote Copy group
3PAR Peer Motion allows the seamless migration of data from one 3PAR StoreServ Storage system to another. However, when the arrays involved are also using 3PAR Remote Copy for data replication, special considerations apply. Use the following procedures when migrating remote copy volumes to a new system using peer motion. The migration procedures differ, depending on whether the volumes to be migrated serve as remote copy primary or secondary storage. These procedures do not use either the GUI or the 3PAR Online Import Utility, but are executed directly from the 3PAR CLI.
3PAR Peer Motion requirements
The destination/new system to which the remote copy groups will be migrated must be in the same FC SAN fabric as the primary or secondary system from which the groups are to be migrated.
Additional ports to be configured in peer, RCFC, or RCIP modes will be required during the period of migration in both the destination system and the migrating remote copy system (primary or secondary).
All storage systems involved in a migration, playing the role of either remote copy primary/secondary and/or migration source/destination, must be running 3PAR OS or later.
The hosts to which the remote copy volumes are exported must be defined using an ALUA persona on both source and destination storage systems, and the hosts must be configured to use ALUA. See host-specific documentation for enabling ALUA.
For migration of primary remote copy volumes, only host FC SAN connectivity is supported for online migration.
Migration must be done at the granularity of one remote copy group, which should be imported as a consistent group to maintain group cohesiveness.
Volume migration in a Remote Copy Primary group
Initial setup consists of establishing peer motion connectivity between the old primary and new primary systems and remote copy connectivity between the secondary and new primary systems (see Peer motion setup). The volumes in the remote copy group to be migrated will be admitted on the new primary system and exported to the host. Once the import tasks are started, the host I/O will traverse through the new primary system to the old primary system until the migration of the volume is completed.
While the import is in progress:
In a synchronous-mode configuration, a host write is acknowledged only after the write is committed in all three systems: new primary, old primary, and secondary.
In a periodic-mode configuration, a host write is acknowledged after the write is committed in the new primary system and the old primary system.
In both the synchronous and periodic modes, a host read will be serviced either from the new primary system or the old primary system.
Remote Copy replication between the old primary system and the secondary system will continue during the period of migration.

381 Once the import of the entire remote copy group has completed, I/O forwarding to the old primary system will cease and a snapshot of each of the imported volumes will be taken on the new primary system. Similarly, replication between the old primary system and the secondary system will be stopped. In the new primary system, the imported volumes must be admitted to the RC group with the secondary system as the target. The snapshots will be used to resynchronize the RC group between the new primary system and the secondary system to avoid a full synchronization. Restriction The total exposure time, during which replication is halted for the migrating volumes, is the time between the end of import and the restart of remote copy replication on the new primary system. Initial Remote Copy setup Figure 103: Initial remote copy setup on page 381 shows the initial remote copy setup: Array A and array B are in unidirectional remote copy relationship. NOTE: For bidirectional remote copy configurations, each remote copy group must be migrated one at a time, using either the primary or secondary migration procedures described in this guide, one after another: Migrating the Volumes on page 382 Volume migration in a Remote Copy secondary group on page 386 Ports a1 and a2 are the RCFC/RCIP ports on primary array A, linked to the RCFC/RCIP ports b1 and b2, respectively, on secondary array B. Host initiator ports h1 and h2 are linked to FC target ports a5 and a6, respectively, on primary array A. Figure 103: Initial remote copy setup Peer Motion setup To move a remote copy group from a given primary storage system and restore host I/O and replication, new links must be created, as shown in Figure 104: Peer motion setup on page 382: The primary remote copy group from old primary array A will be migrated to new primary array C. After migration, the migrated group will be restored in a remote copy relationship between arrays C and B. Additional RCFC/RCIP ports on array B, ports b3 and b4, are configured and zoned to RCFC/RCIP ports c3 and c4 on array C. For peer motion, peer (initiator) ports c1 and c2 are configured and zoned to FC target ports a3 and a4 on array A. For the peer motion admit stage, host initiator ports h1 and h2 and target ports c5 and c6 on array C are zoned. Initial Remote Copy setup 381

382 Figure 104: Peer motion setup Migrating the Volumes Procedure 1. Create a remote copy connection between the new primary array and the secondary array on page Configure peer motion on page Reestablishing remote copy functionality on page 385 Create a remote copy connection between the new primary array and the secondary array Procedure 1. Configure new remote copy ports (either RCFC or RCIP) on both the new primary array (called System C) and the secondary array (called System B): For RCFC: cli% controlport offline -f <n:s:p> cli% controlport config rcfc -ct point -f <n:s:p> For RCIP: cli% controlport rcip addr <port_ip> <netmask> <n:s:p> 2. For RCFC, zone the newly configured RCFC ports between System C and System B. Use the showport ns <n:s:p> command to verify the zoning on the arrays. 3. Start remote copy on System C: cli% startrcopy cli% showrcopy 4. Verify connectivity on both System C and System B: 382 Migrating the Volumes

383 For RCFC: cli% showrctransport -rcfc For RCIP: cli% controlport rcip ping <IP_address_of_remote_port> <n:s:p> <n:s:p_of_local_port> Additionally, you can issue the following command: cli% showrctransport -rcip 5. Create remote copy targets on both System C and System B: For RCFC: cli% creatercopytarget [options] <target_name> FC <node_wwn> [node:slot:port:remote_port_wwn...] For RCIP: cli% creatercopytarget [options] <target_name> IP [n:s:p:remote_port_ip_address...] 6. Admit links on both System C and System B: For RCFC: cli% admitrcopylink <target_name> <n:s:p:remote_port_wwn>... For RCIP: cli% admitrcopylink <target_name> <n:s:p:remote_port_ip_address> Verify the links setup on both System C and System B: cli% showrcopy links 8. Create a remote copy volume group on System C with System B as the target: cli% creatercopygroup [options] <group_name> <target_name_for_b>:<mode> [<target_name for B>:<mode>...] cli% showrcopy groups Configure peer motion In this procedure, the peer motion source is the old remote copy primary array (System A). The peer motion destination is the new remote copy primary array (System C). Procedure 1. On System A, find the VV set associated with the remote copy group to be migrated: cli% showvvset 2. Connect the peer motion destination (System C) to the peer motion source (System A): Configure peer motion 383

384 a. On the peer motion destination (System C), configure peer ports: cli% controlport offline -f <n:s:p> cli% controlport config peer -f <n:s:p> cli% showport -peer b. On the peer motion source (System A), configure the peer motion destination (System C) as a host to the peer motion source (System A): cli% createhost <destination_system_c> <WWN1> <WWN2> where <WWN1> and <WWN2> are the WWNs of the peer ports configured on System C. c. On the peer motion source (System A), export the remote copy volume group to the host representing the peer motion destination (System C): cli% createvlun set:<rcopygrp_vv_set_name> auto <destination_system_c> d. On the peer motion destination (System C), rescan and admit each of the source remote copy volumes exported from System A: cli% showtarget -rescan cli% admitvv [-domain <domain>]<vvname>:<wwn> [<vvname>:<wwn>...] cli% showvv 3. Create SAN zones to connect the hosts that the remote copy volumes are currently exported to on System A to System C, which is now the new primary array: a. Configure the host to which the migrating volumes on System A (old primary array): cli% createhost [options] <hostname> <WWN...> b. Export the admitted volumes on System C to this host: cli% createvlun [options] <admitted_vv_name> auto <hostname> c. Make sure that the host discovers additional paths for each of the migrating remote copy volumes coming from System C. 4. For Windows and ESX hosts, perform the tasks in this step to pause the migration workflow and unexport volumes being migrated. (This step is optional for other OSs.) Remove the volume exports to the host on the source array. This should be done after the corresponding peer volume is exported to the host (and the host is actively using those destination paths) and before actual data migration is started. On System A, remove all the volume exposures associated with the remote copy group to be migrated from hosts -- except for the exports created in step 2c -- as follows: cli % removevlun [options] <VV VVSet> <LUN> <host hostset> 384 Data migration with 3PAR Remote Copy group

NOTE: If you do not remove zoning for the host from the source system, online migration requires that you:
Configure the host with an ALUA-capable host persona on both the source and destination systems.
Enable ALUA on the host operating system.
5. Start peer motion import on System C:
a. Create user and snap CPGs for the volumes to be migrated, if they do not already exist.
b. Issue the importvv command for each of the admitted volumes, using the -snap option:
cli% importvv [options] <usrcpg> <VV_name pattern vv_set>
Example: importvv command with -snap option
cli% importvv -tpvv -snap -snp_cpg cpg2 cpg1 vv*
c. Monitor the import tasks:
cli% showtask
Reestablishing remote copy functionality
Remove the old remote copy group definition from System A and restore the remote copy relationship for the migrated volumes between arrays C and B.
Procedure
1. Remove the remote copy volume group from System A (the old remote copy primary array):
a. If the remote copy group is using periodic mode remote copy, manually synchronize the remote copy volume group to be migrated:
cli% syncrcopy [options] [<group_name> <pattern>...]
b. Stop the remote copy group:
cli% stoprcopygroup [option] [<group_name> <pattern>...]
c. Remove the remote copy volume group. This step also dismisses the volumes from the group:
cli% removercopygroup [options] {<group_name>... <pattern>...}
d. If the array is being replaced, clear the remote copy configuration:
cli% stoprcopy -clear -keepalua
e. If you use any of the following optional remote copy commands, make sure the -keepalua option is used as follows to preserve the target port group ID and ALUA state of remote copy volumes:
cli% dismissrcopyvv -keepalua {<pattern> <vv_name>} <group_name>
cli% removercopygroup -keepalua {<group_name>... <pattern>...}
cli% removercopytarget -cleargroups -keepalua <target_name>
cli% stoprcopy -clear -keepalua
2. Admit the imported volumes into the new remote copy volume group on System C with System B (secondary array) as the target.

386 These are the remote copy groups created in step 8 in Create a remote copy connection between the new primary array and the secondary array on page 382. cli% admitrcopyvv [options] <VV_name>[:<snapname>] <group_name> <target_name>:<sec_vv_name>... NOTE: Specify the read-only snapshots created at the end of importvv as starting snapshots for each of the volumes being admitted. 3. Restore the remote copy group characteristics as they were on System A. On System C (the new remote copy primary array), issue the following commands: cli% setrcopygroup pol [option] [<pattern>] <policy> [<group_name>] cli% setrcopygroup period [option] [<pattern>]<period_value> <target_name> [<group_name>] cli% setrcopygroup mode [option] [<pattern>]<mode_value> <target_name> [<group_name>] cli% setrcopygroup snap_freq [options] [<pattern>] [<target_name group_name>...] cli% setrcopygroup <dr_operation> [options][<pattern>][<target_name group_name>...] 4. Start the remote copy group: cli% startrcopygroup [options] <group_name> [<vv>...][options]<pattern> Volume migration in a Remote Copy secondary group Initial setup consists of establishing peer motion connectivity between the old secondary and new secondary systems and remote copy connectivity between the primary and the new secondary system. Coordinated snapshots will be taken for the remote copy group to be migrated. Export remote copy group secondary volume snapshots (created as coordinated snapshots) to the new secondary storage system. On the new secondary storage system, rescan and admit the remote copy group volume snapshots exported from the old secondary system. NOTE: Admit the snapshots with the same names and WWNs as their parent volumes on the old secondary system. Group the admitted volumes into a VV Set and start consistent peer motion imports for this set on the new secondary system. Once the import is completed, a new remote copy group will be created on the primary system with the new secondary system as the target. Existing primary remote copy volumes and the newly imported secondary volumes are admitted into this new remote copy group. The coordinated snapshot taken on the primary system before starting the migration will be used to perform a faster synchronization with the new secondary system. The synchronization delta will be between the time when the coordinated snapshots were taken and when resynchronization is started after volume imports complete. Restriction Replication will be ongoing between the primary array and the old secondary throughout the duration of import until the replication is stopped post-import. When the remote copy replication is restarted on the primary array with the new secondary array, the data needed to be resynchronized ranges from the time when the coordinated snapshots were taken until the time when the resynchronization begins. 386 Volume migration in a Remote Copy secondary group

387 Initial Remote Copy setup Figure 105: Initial remote copy setup on page 387 shows the initial remote copy setup: Array A and array B are in a unidirectional remote copy relationship. Ports a1 and a2 are the RCFC/RCIP ports on primary array A, zoned to the RCFC/RCIP ports b1 and b2 on secondary array B. Host initiator ports h1 and h2 are the FC host initiator ports zoned to the FC target ports a5 and a6 on primary array A. Figure 105: Initial remote copy setup Peer Motion setup To move a remote copy group from a given secondary storage system and restore replication, new links must be created between the old primary array and the new secondary array, as shown in Figure 106: Setting new links between old primary array and new secondary array on page 387: The remote copy group from secondary array B will be migrated to new secondary array C. After migration, the migrated group will be restarted in a remote copy relationship between arrays A and C. Additional RCFC/RCIP ports on array A, ports a3 and a4, are configured and zoned to RCFC/RCIP ports c3 and c4 on array C. For peer motion, peer (initiator) ports c1 and c2 are configured and zoned to FC target ports b3 and b4 on array B. Figure 106: Setting new links between old primary array and new secondary array Initial Remote Copy setup 387

388 Migrating the volumes Procedure 1. Create a remote copy connection between the primary array and the new secondary array on page Configure peer motion on page Reestablishing remote copy functionality on page 390 Create a remote copy connection between the primary array and the new secondary array Procedure 1. Configure new remote copy ports (either RCFC or RCIP) on both the primary array (called System A) and the new secondary array (called System C): For RCFC: cli% controlport offline -f <n:s:p> cli% controlport config rcfc -ct point -f <n:s:p> For RCIP: cli% controlport rcip addr <port_ip> <netmask> <n:s:p> 2. For RCFC, zone the RCFC ports to be used from both arrays. Use the showportdev ns <n:s:p> command to verify the zoning on the arrays: cli% controlport rcfc init -f <n:s:p> cli% showportdev ns <n:s:p> 3. Start remote copy on System C: cli% startrcopy cli% showrcopy 4. Verify connectivity on both System A and System C: For RCFC: cli% showrctransport -rcfc For RCIP: cli% controlport rcip ping <IP_address_of_remote_port><n:s:p_of_local_port> Additionally, you can issue the following command: cli% showrctransport -rcip 5. Create remote copy targets on both System A and System C: 388 Migrating the volumes

389 For RCFC: cli% creatercopytarget [options] <target_name> FC <node_wwn> [node:slot:port:remote_port_wwn...] For RCIP: cli% creatercopytarget [options] <target_name> IP [n:s:p:remote_port_ip_address...] 6. Admit links on both System A and System C. For RCFC: cli% admitrcopylink <target_name> <n:s:p:remote_port_wwn>... For RCIP: cli% admitrcopylink <target_name> <n:s:p:remote_port_ip_address> Verify the links setup on both System A and System C: cli% showrcopy links Configure peer motion NOTE: The peer motion source is the old remote copy secondary array (System B). The peer motion destination is the new remote copy secondary array (System C). Procedure 1. On System A (the primary array), take coordinated snapshots for the remote copy group to be migrated: cli% createsv -rcopy -ro sv_@vvname@_1 rcgroup:group1 NOTE: The same createsv command syntax applies to both synchronous and asynchronous periodic remote copy modes on storage systems running 3PAR OS or later. 2. Connect the peer motion destination (System C) to the peer motion source ( System B): a. On the peer motion destination (System C), configure peer ports: cli% controlport offline -f <n:s:p> cli% controlport config peer -f <n:s:p> cli% showport -peer b. On the peer motion source (System B), configure the peer motion destination (System C) as a host to the peer motion source (System B): cli% createhost <destination_system_c> <WWN1> < WWN2> where <WWN1> and <WWN2> are the WWNs of the peer ports configured on System C. Configure peer motion 389

390 c. Export the remote copy secondary volume snapshots (created when coordinated snapshots were taken in step 2a) to the peer motion destination (System C): cli% createvlun <sec_vv_snap> auto <destination_system_c> d. On the peer motion destination (System C), rescan and admit each of the source snapshot volumes exported from System B: NOTE: Admit the snapshot volume with the same name and WWN as that of its parent secondary volume on the source (System B). cli% admitvv <parent_secondary_vv_name>:<snapshot_wwn>:<parent_secondary_vv_wwn> cli% showvv 3. Start peer motion import on System C: a. Create user CPGs and snap CPGs for the volumes to be migrated, if they do not already exist. b. Issue the importvv command for each of the admitted volumes: cli% importvv [options] <usrcpg> <VV_name pattern VV_set>... c. Monitor the import tasks: cli% showtask Reestablishing remote copy functionality Remove the old remote copy group definition from System A and restore the remote copy relationship for the migrated volumes between arrays A and C. Procedure 1. Remove the old remote copy volume group from System A (the primary array): a. Issue the following command on System A: cli% stoprcopygroup [option][<group_name> <pattern>...] b. Remove the remote copy volume group. This step also dismisses the volumes from the group: cli% removercopygroup [options]{<group_name>... <pattern>...} 2. On System A, admit the primary volumes into a new remote copy volume group with the imported secondary volumes: cli% admitrcopyvv [options] <VV_name>[:<snapname>]<group_name> <target_name>:<sec_vv_name>... Where sec_vv_name is the name of the imported secondary volume on the new secondary array (System C). Specify the primary side coordinated snapshots created before the importvv operation as starting snapshots for each of the volumes being admitted. 3. Restore the remote copy group characteristics as they were in the old remote copy group: cli% setrcopygroup pol [option][<pattern>] <policy> [<group_name>] cli% setrcopygroup period[option] [<pattern>]<period_value> <target_name> [<group_name>] cli% setrcopygroup mode [option][<pattern>]<mode_value> <target_name> [<group_name>] 390 Reestablishing remote copy functionality

391 cli% setrcopygroup snap_freq[options] [<pattern>] [<target_name group_name>...] cli% setrcopygroup <dr_operation>[options][<pattern>][<target_name group_name>...] 4. On System A, start the remote copy group (using the coordinated snapshots created at the beginning of imports as starting snapshots): cli% startrcopygroup [options]<group_name> [<vv>...][options]<pattern> NOTE: If the old secondary array is being replaced, use the stoprcopy -clear command to clear the copy configuration on the array. Data migration with 3PAR Remote Copy group 391
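Before decommissioning the old secondary array, it can be worth confirming that replication between the primary array and the new secondary array is up and synchronizing. A brief check using commands already shown in this chapter:
cli% showrcopy links
cli% showrcopy groups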

Performing data migration with 3PAR Peer Persistence relationship
Peer Persistence is an extension of Remote Copy functionality. The procedure for using Peer Motion to migrate volumes from an array that is in a Peer Persistence relationship is similar to the procedure for migrating Remote Copy groups; however, there are some significant differences.
The requirements for using Peer Motion to migrate volumes from an array that is in a Peer Persistence relationship include:
There may be a window of exposure during which already-migrated volumes are protected by Remote Copy but not by Peer Persistence while other volumes are still in the process of being migrated.
All 3PAR StoreServ arrays involved (primary, secondary, and new secondary) must be running 3PAR OS or later.
You must migrate all volumes from the secondary array in the Peer Persistence relationship if you intend to re-establish a Peer Persistence relationship between the primary array and the new secondary array.
Only secondary arrays can be migrated to a new array using Peer Motion. If using Peer Persistence in a bi-directional manner, you must perform a Manual Failover Operation (see the Remote Copy guide) so that all volumes to be migrated are secondary from a Peer Persistence perspective.
The Target Port Group ID must be set on the new secondary array prior to establishing the Peer Persistence relationship between the primary and the new secondary array.
The host and host configuration involved in the migration must adhere to both Peer Persistence and Peer Motion requirements and limitations.
Figure 107: Example of Peer Motion with Peer Persistence links
Procedure
1. Establish a remote copy connection between the primary and the new secondary array.

For information on how to set up Remote Copy between two arrays, see the HPE 3PAR Remote Copy Software User Guide.
2. Set up peer motion between the old peer persistence secondary array and the new array.
NOTE: The Peer Motion source is the old Peer Persistence secondary array. The Peer Motion destination is the new Remote Copy secondary array.
3. On the primary array, take coordinated snapshots for the Remote Copy group to be migrated. This command creates snapshots on the primary array and the secondary array:
cli% createsv -rcopy -ro sv_@vvname@_1 rcgroup:group1
4. Take note of the target port group IDs in use on the source array (old secondary array) for the volumes to be migrated:
cli% showvlun
5. On the Peer Motion source array (old secondary array), export the Remote Copy secondary volume snapshots (created in step 3) to the destination array:
cli% createvlun <sec_vv_snap> auto <destination_system_c>
6. On the Peer Motion destination array (new secondary array), rescan and admit each of the source snapshot volumes exported from the old secondary array:
NOTE: Admit the snapshot volume with the same name and WWN as that of its parent secondary volume on the source (old secondary array).
cli% admitvv <parent_secondary_vv_name>:<snapshot_wwn>:<parent_secondary_vv_wwn>
7. Create user CPGs and snap CPGs (if they do not already exist) for the volumes to be migrated, and start importing the admitted volumes, by issuing the following command:
cli% importvv [options] <usrcpg> <VV_name pattern VV_set>
8. Monitor the import tasks:
cli% showtask
9. After all importvv tasks have finished, unzone the host from the old secondary array and set the target port group IDs on the migrated volumes to match what was captured in step 4:
cli% setvv -settpgid <TPG ID> <vvname>
10. Remove the remote copy volume group from the primary array and admit the volumes into a new Remote Copy group with the new secondary array. On the primary array, admit the primary volumes into a new Remote Copy volume group with the imported secondary volumes, using the snapshots created earlier as starting snapshots:
cli% admitrcopyvv [options] <VV_name>[:<snapname>]<group_name> <target_name>:<sec_vv_name>
11. Start the Remote Copy group. On the primary array, issue the following command:
cli% startrcopygroup [options] group_name [vv...][options]pattern
NOTE: At this point the configuration will be under Remote Copy protection but not under Peer Persistence protection.
12. Repeat the procedure above for the remaining hosts/remote copy groups to be migrated.

NOTE: All hosts/Remote Copy groups need to be migrated off the old source array before the Peer Persistence relationship between the primary array and the old secondary array is removed. Therefore, hosts migrated first will incur a longer duration without Peer Persistence protection than will hosts migrated later.
13. After all Peer Persistence Remote Copy groups have been migrated, remove the Peer Persistence relationship with the old secondary array (including unconfiguring the quorum witness) and establish a Peer Persistence relationship between the primary array and the new secondary array.
14. Export the migrated volumes to the hosts and zone the hosts back into the new secondary array.
15. If needed, perform a manual failover for the Remote Copy groups that should be primary on the new secondary array, re-establishing a bidirectional Peer Persistence relationship.
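After step 15, a final check along the following lines can confirm that the new configuration is healthy before the hosts are returned to production (an illustrative sketch only; host1 is a placeholder host name, and the exact output columns depend on the 3PAR OS version):

cli% showrcopy
cli% showvlun -host host1

Verify that the new Remote Copy groups list the new secondary array as their target and report their volumes as synchronized, and that the migrated volumes are exported to the host with the expected LUN IDs and target port group IDs (compare with the values captured in step 4).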

Identifying and deleting source array LUN paths

Use this procedure to identify and delete source array LUN paths during online migration. In the examples, LUN paths on an EMC Storage array and on an HDS Storage array are identified and deleted.

Identifying and deleting source array LUN paths with Linux Native Device-Mapper Multipath on page 395
Identifying and deleting source array LUN paths with HP-UX 11 v3 on HDS Storage on page 397
Identifying and deleting source array LUN paths with ESX 5.5 on page 400

Identifying and deleting source array LUN paths with Linux Native Device-Mapper Multipath

After the createmigration task is completed successfully, rescan the HBAs on the Linux host.

Procedure

1. Issue the multipath -ll command:

EMC Storage Rescanning HBAs
# ls /sys/class/fc_host
host4 host5
# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "- - -" > /sys/class/scsi_host/host5/scan
# multipath -ll
mpathd (360060e80045be be ) dm-5 DGC, VRAID
size=14g features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 5:0:1:2 sde 8:64 active ready running
- 4:0:2:2 sdg 8:96 active ready running
- 5:0:0:2 sdp 8:240 active ready running
`- 4:0:0:2 sdl 8:176 active ready running
mpathc (360060e80045be be ) dm-3 DGC, VRAID
size=244g features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 5:0:1:1 sdd 8:48 active ready running
- 4:0:2:1 sdf 8:80 active ready running
- 5:0:0:1 sdo 8:224 active ready running
`- 4:0:0:1 sdk 8:160 active ready running

HDS Storage Rescanning HBAs
# ls /sys/class/fc_host
host4 host5

# echo "1" > /sys/class/fc_host/host4/issue_lip
# echo "1" > /sys/class/fc_host/host5/issue_lip
# echo "- - -" > /sys/class/scsi_host/host4/scan
# echo "- - -" > /sys/class/scsi_host/host5/scan
# multipath -ll
mpathd (360060e80045be be ) dm-5 HITACHI,OPEN-V
size=14g features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 5:0:1:2 sde 8:64 active ready running
- 4:0:2:2 sdg 8:96 active ready running
- 5:0:0:2 sdp 8:240 active ready running
`- 4:0:0:2 sdl 8:176 active ready running
mpathc (360060e80045be be ) dm-3 HITACHI,OPEN-V
size=244g features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 5:0:1:1 sdd 8:48 active ready running
- 4:0:2:1 sdf 8:80 active ready running
- 5:0:0:1 sdo 8:224 active ready running
`- 4:0:0:1 sdk 8:160 active ready running

HDS Storage Rescanning HBAs and listing the updated multipath mapping for SUSE
# ls /sys/class/fc_host
host2 host3
# echo "- - -" > /sys/class/scsi_host/host2/scan
# echo "1" > /sys/class/fc_host/host3/issue_lip
# echo "- - -" > /sys/class/scsi_host/host3/scan
# multipath -ll
e8005bc1f000000bc1f dm-11 HP,OPEN-V
size=200g features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 2:0:0:0 sda 8:0 active ready running
`- 3:0:1:0 sdt 65:48 active ready running
e8006cf cf f26 dm-2 HP,OPEN-V
size=10g features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
- 2:0:2:5 sdk 8:160 active ready running
`- 3:0:2:5 sdad 65:208 active ready running

2. In the output, identify the LUN and its WWN. For example, assume that the LUN with WWN e80045be be is being migrated. The LUN with WWN e80045be be has four paths: sde, sdg, sdp, and sdl.
a. To identify the paths that pertain to the source array model, issue the following command on each associated path:

EMC Storage Identifying paths that pertain to the source array model
# cat /sys/block/sde/device/model
DGC, VRAID
# cat /sys/block/sdg/device/model
DGC, VRAID

# cat /sys/block/sdp/device/model
VV
# cat /sys/block/sdl/device/model
VV

HDS Storage Identifying paths that pertain to the source array model
# cat /sys/block/sde/device/model
OPEN-E
# cat /sys/block/sdg/device/model
OPEN-E
# cat /sys/block/sdp/device/model
VV
# cat /sys/block/sdl/device/model
VV

The output shows that paths sde and sdg belong to the source storage array. Paths sdp and sdl belong to the destination 3PAR StoreServ Storage array.
b. Issue the following command for each identified source path to delete it from the operating system:
# echo "1" > /sys/block/sde/device/delete
# echo "1" > /sys/block/sdg/device/delete
3. When more than one LUN is being migrated, repeat step 2 for each LUN.
4. Repeat steps 1 through 3 for all the nodes in the cluster.

Identifying and deleting source array LUN paths with HP-UX 11 v3 on HDS Storage

After the createmigration task is completed successfully, unzone the source array from the host.

NOTE: If legacy DSF paths are used, then before continuing with these steps, clean up stale paths from the volume group using the pvchange(1m) and vgreduce(1m) commands.

Procedure

1. Using the HP-UX CLI, issue the ioscan -fnn command:

HDS Storage ioscan -fnn command
# ioscan -fnn
slot 2 0/0/0/9/0/0 pci_slot CLAIMED SLOT PCI Slot
fc 2 0/0/0/9/0/0/0 fcd CLAIMED INTERFACE HP B21 8Gb Dual Port PCIe Fibre Channel Mezzanine (FC Port 1) /dev/fcd2
tgtpath 9 0/0/0/9/0/0/0.0x ac001abc estp CLAIMED TGT_PATH fibre_channel target served by fcd driver, target port id 0x10c00
lunpath /0/0/9/0/0/0.0x ac001abc.0x0 eslpt CLAIMED LUN_PATH LUN path for ctl30
lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2153
lunpath /0/0/9/0/0/0.0x ac001abc.0x

398 eslpt CLAIMED LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2149 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2154 lunpath /0/0/9/0/0/0.0x ac001abc.0x40fe eslpt CLAIMED LUN_PATH LUN path for ctl31 tgtpath 4 0/0/0/9/0/0/0.0x50060e8006cf4923 estp NO_HW TGT_PATH fibre_channel target served by fcd driver, target port id 0x10100 lunpath /0/0/9/0/0/0.0x50060e8006cf4923.0x0 eslpt NO_HW LUN_PATH LUN path for ctl13 lunpath /0/0/9/0/0/0.0x50060e8006cf4923.0x eslpt NO_HW LUN_PATH LUN path for disk2149 lunpath /0/0/9/0/0/0.0x50060e8006cf4923.0x eslpt NO_HW LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/0.0x50060e8006cf4923.0x eslpt NO_HW LUN_PATH LUN path for disk2153 lunpath /0/0/9/0/0/0.0x50060e8006cf4923.0x eslpt NO_HW LUN_PATH LUN path for disk2154 fc 3 0/0/0/9/0/0/1 fcd CLAIMED INTERFACE HP B21 8Gb Dual Port PCIe Fibre Channel Mezzanine (FC Port 2) /dev/fcd3 tgtpath 10 0/0/0/9/0/0/1.0x ac001abc estp CLAIMED TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0b00 lunpath /0/0/9/0/0/1.0x ac001abc.0x0 eslpt CLAIMED LUN_PATH LUN path for ctl32 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2153 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2149 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2154 lunpath /0/0/9/0/0/1.0x ac001abc.0x40fe eslpt CLAIMED LUN_PATH LUN path for ctl31 tgtpath 5 0/0/0/9/0/0/1.0x50060e8006cf4933 estp NO_HW TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0100 lunpath /0/0/9/0/0/1.0x50060e8006cf4933.0x0 eslpt NO_HW LUN_PATH LUN path for ctl13 lunpath /0/0/9/0/0/1.0x50060e8006cf4933.0x eslpt NO_HW LUN_PATH LUN path for disk2149 lunpath /0/0/9/0/0/1.0x50060e8006cf4933.0x eslpt NO_HW LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/1.0x50060e8006cf4933.0x eslpt NO_HW LUN_PATH LUN path for disk2153 lunpath /0/0/9/0/0/1.0x50060e8006cf4933.0x eslpt NO_HW LUN_PATH LUN path for disk2154 usb 4 0/0/0/29/0 uhci CLAIMED INTERFACE Intel UHCI Controller 398 Identifying and deleting source array LUN paths

399 usb 5 0/0/0/29/1 uhci CLAIMED INTERFACE Intel UHCI Controller 2. Remove the paths from the host to the source, using the following commands: HDS Storage ioscan -f command # rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x # rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x # rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x # rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf4933.0x # rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x # rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x # rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x # rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x # rmsf -H 0/0/0/9/0/0/0.0x50060e8006cf4923.0x0 # rmsf -H 0/0/0/9/0/0/1.0x50060e8006cf Issue the ioscan -fnn command once more to verify that all paths of the host to the source that were in the NO_HW state have been removed: HDS Storage ioscan -fnn command output after migration # ioscan -fnn slot 2 0/0/0/9/0/0 pci_slot CLAIMED SLOT PCI Slot fc 2 0/0/0/9/0/0/0 fcd CLAIMED INTERFACE HP B21 8Gb Dual Port PCIe Fibre Channel Mezzanine (FC Port 1) /dev/fcd2 tgtpath 9 0/0/0/9/0/0/0.0x ac001abc estp CLAIMED TGT_PATH fibre_channel target served by fcd driver, target port id 0x10c00 lunpath /0/0/9/0/0/0.0x ac001abc.0x0 eslpt CLAIMED LUN_PATH LUN path for ctl30 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2153 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2149 lunpath /0/0/9/0/0/0.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2154 lunpath /0/0/9/0/0/0.0x ac001abc.0x40fe eslpt CLAIMED LUN_PATH LUN path for ctl31 fc 3 0/0/0/9/0/0/1 fcd CLAIMED INTERFACE HP B21 8Gb Dual Port PCIe Fibre Channel Mezzanine (FC Port 2) /dev/fcd3 tgtpath 10 0/0/0/9/0/0/1.0x ac001abc estp CLAIMED TGT_PATH fibre_channel target served by fcd driver, target port id 0x1b0b00 lunpath /0/0/9/0/0/1.0x ac001abc.0x0 eslpt CLAIMED LUN_PATH LUN path for ctl32 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2153 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2150 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2149 Identifying and deleting source array LUN paths 399

400 lunpath /0/0/9/0/0/1.0x ac001abc.0x eslpt CLAIMED LUN_PATH LUN path for disk2154 lunpath /0/0/9/0/0/1.0x ac001abc.0x40fe eslpt CLAIMED LUN_PATH LUN path for ctl31 usb 4 0/0/0/29/0 uhci CLAIMED INTERFACE Intel UHCI Controller usb 5 0/0/0/29/1 uhci CLAIMED INTERFACE Intel UHCI Controller usb 6 0/0/0/29/7 ehci CLAIMED INTERFACE Intel EHCI 64-bit Controller Identifying and deleting source array LUN paths with ESX 5.5 After the createmigration task is completed successfully, log on to the ESX host and rescan HBAs in the ESX host. Procedure 1. Using the ESX CLI, issue the following command to rescan HBAs in the ESX host: In this example, vmhba2 and vmhba3 are the FC HBAs. # esxcfg-rescan vmhba3 # esxcfg-rescan vmhba2 The output shows LUNs with their source and destination array paths. 2. Issue the following command to list all LUNs and their corresponding paths: # esxcfg-mpath -b naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09 naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 400 Identifying and deleting source array LUN paths with ESX 5.5

401 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09 naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09 naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 50:06:0e:80:05:bc:1f:09 WWPN: 50:06:0e:80:05:bc:1f:09 The output shows LUNs with their source and destination array paths. 3. Remove the source array from the host zone, and issue the following command. The host will show the status of the source path as Target: Unavailable. # esxcfg-mpath -b naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l2 LUN:2 state:dead fc Adapter: Unavailable Target: Unavailable naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) Identifying and deleting source array LUN paths 401

402 vmhba3:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l3 LUN:3 state:dead fc Adapter: Unavailable Target: Unavailable naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l0 LUN:0 state:dead fc Adapter: Unavailable Target: Unavailable naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc vmhba3:c0:t2:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 50:06:0e:80:05:bc:1f:21 WWPN: 50:06:0e:80:05:bc:1f:21 vmhba2:c0:t0:l1 LUN:1 state:dead fc Adapter: Unavailable Target: Unavailable 4. Rescan the HBAs. In this example, the FC HBAs are vmhba2 and vmhba3. # esxcfg-rescan vmhba3 # esxcfg-rescan vmhba2 5. Verify the LUN paths, issuing the following command: # esxcfg-mpath -b naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l2 LUN:2 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc 402 Identifying and deleting source array LUN paths

403 naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l3 LUN:3 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l0 LUN:0 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc naa.60060e8005bc1f000000bc1f : HP Fibre Channel Disk (naa e8005bc1f000000bc1f ) vmhba3:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7f WWPN: 10:00:00:00:c9:71:bc:7f Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 20:22:00:02:ac:00:1a:bc vmhba2:c0:t3:l1 LUN:1 state:active fc Adapter: WWNN: 20:00:00:00:c9:71:bc:7e WWPN: 10:00:00:00:c9:71:bc:7e Target: WWNN: 2f:f7:00:02:ac:00:1a:bc WWPN: 21:22:00:02:ac:00:1a:bc Identifying and deleting source array LUN paths 403
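After the rescan in step 4 and the verification in step 5, a quick filter on the path listing (an illustrative check only; the exact output format can vary between ESXi builds) confirms that no dead paths to the source array remain on this host:

# esxcfg-mpath -b | grep -i dead

If the command returns no output, only active paths remain; if dead paths are still listed, repeat the rescan of the affected vmhba adapters before proceeding.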

Guidelines for rolling back to the original source array

This appendix provides guidelines for rolling back data migration in the event that I/O must resume on the original array.

Hewlett Packard Enterprise recommends that, before implementing any migration on live or production data, you include and test a migration fail-back policy as part of your premigration strategy. Typical considerations include:

What if a migration fails?
What will your process be to roll back to the original array?
What needs to be done to facilitate the rollback?

Thoroughly understanding and documenting the rollback process can help ensure that, if you need to revert to your original storage configuration, downtime is nonexistent or minimal. The steps you take will vary, depending on the specific hardware and software configurations involved in your migration.

Figure 108: EMC Storage rollback process for a failed online migration on page 405, Figure 109: HDS Storage rollback process for a failed online migration on page 406, and Figure 110: IBM Storage rollback process for a failed online migration on page 407 show a general overview of the rollback process for a failed online migration.

Figure 111: EMC Storage rollback process for a failed MDM on page 408, Figure 112: HDS Storage rollback process for a failed MDM on page 409, and Figure 113: IBM Storage rollback process for a failed MDM on page 410 show a general overview of the rollback process for a failed MDM.

Figure 108: EMC Storage rollback process for a failed online migration
For 3PAR OS MU3 or later, see Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411. For 3PAR OS MU2 or earlier, see Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage on page 412.

Figure 109: HDS Storage rollback process for a failed online migration
See Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411.

Figure 110: IBM Storage rollback process for a failed online migration
See Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411.

Figure 111: EMC Storage rollback process for a failed MDM
For 3PAR OS MU3 or later, see Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411. For 3PAR OS MU2 or earlier, see Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage on page 412.

Figure 112: HDS Storage rollback process for a failed MDM
See Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411.

Figure 113: IBM Storage rollback process for a failed MDM
See Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411.

Clearing a SCSI reservation

Procedure

1. Clearing a SCSI reservation with 3PAR OS MU3 or later on page 411
2. Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage on page 412
3. Clearing a SCSI reservation after an incomplete migration with 3PAR OS MU1 or MU2 on HDS Storage on page 416

Clearing a SCSI reservation with 3PAR OS MU3 or later

NOTE: Clear the SCSI reservation only if you plan to return to the original source volumes, as described in Guidelines for rolling back to the original source array on page 404. After the SCSI reservation is cleared on these volumes, the startmigration command should not be re-issued on these volumes unless you start the migration over again.

NOTE: Repeat this procedure for each virtual volume that has a SCSI reservation.

To clear the reservation, follow these steps:

Procedure

1. Connect to the 3PAR StoreServ Storage:
a. Open the HPE 3PAR CLI.
b. Enter the IP address of the 3PAR StoreServ Storage, your user name, and your password.
2. Issue the showvv command to find the name of the volume for which a reservation must be removed.

showvv command
# showvv
Id Name Prov Type CopyOf BsId Rd -Detailed_State- Adm Snp Usr VSize
0 admin full base RW normal
vol0 peer base RW exclusive
vol1 peer base RW exclusive
vol2 peer base RW exclusive
total

Identify the name of the virtual volume for which the reservation must be removed. The Prov (provisioning) value will be peer and the Detailed_State value will be exclusive.
3. Using the 3PAR CLI, clear the reservation on the virtual volume by issuing the setvv -clrrsv <virtual volume name> command.
# setvv -clrrsv vol0
Following successful execution of the setvv command, the reservation on the virtual volume will be cleared.
4. After the reservation is cleared, delete the volumes on the 3PAR StoreServ Storage.
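For step 4, a minimal cleanup sketch follows, using the example volume name from the showvv output above (illustrative only; removevv permanently deletes the volume on the destination array, so verify the volume name before running it, and confirm the removal if the CLI prompts you):

# removevv vol0

Repeat this for each peer volume that was created for the failed import.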

Clearing a SCSI reservation with 3PAR OS MU2 or earlier on EMC Storage

With 3PAR OS MU2 or earlier, the SCSI reservation is not automatically cleared after an HPE 3PAR copy task is completed. As part of the startmigration command, the 3PAR StoreServ Storage applies a SCSI-3 reservation on EMC Storage source storage devices (LUNs) during import to the 3PAR StoreServ Storage to ensure exclusive access by the HPE 3PAR peer ports.

With 3PAR OS MU3 or later, the SCSI reservation is automatically cleared by 3PAR Peer Motion after the HPE 3PAR copy task is successfully completed. However, the SCSI reservation might not be cleared automatically in some scenarios, such as when an earlier version of 3PAR OS is on the destination array, or if an import fails. If a migration fails, use this procedure to clear the SCSI reservation in order to return to the original source volumes.

Clearing a persistent SCSI reservation from an EMC CX4 or VNX device

To clear a SCSI reservation on an EMC CX4 or VNX array, remove the HPE 3PAR peer port initiator WWPNs from the storage group used for migration. The HPE 3PAR peer port initiator WWPNs can be removed by using the EMC Navisphere CLI (NAVISECCLI) commands. For more information, see the latest edition of the EMC Navisphere Command Line Interface (CLI) Reference, available on the EMC Storage website.

In the following procedure, HPE 3PAR peer port initiators 21:11:02:02:AC:00:73:62 and 20:11:02:02:AC:00:73:62 are removed from the EMC storage group R115-S09:

Procedure

1. Issue the naviseccli command to find the EMC storage group that contains the HPE 3PAR peer port WWPNs:
# naviseccli -address storagegroup -list
Storage Group Name: R115-S09 Storage Group
Storage Group UID: C6:7F:A9:19:DD:E9:E3:11:98:C0:00:60:16:5F:8A:FF
HBA/SP Pairs:
HBA UID SP Name SPPort
50:01:43:80:02:3C:BE:E9:50:01:43:80:02:3C:BE:E8 SP B 0
50:01:43:80:02:3C:BE:EB:50:01:43:80:02:3C:BE:EA SP A 1
2F:F7:02:02:AC:00:73:62:21:11:02:02:AC:00:73:62 SP A 0
50:01:43:80:02:3C:BE:EB:50:01:43:80:02:3C:BE:EA SP B 1
2F:F7:02:02:AC:00:73:62:20:11:02:02:AC:00:73:62 SP B 1
50:01:43:80:02:3C:BE:E9:50:01:43:80:02:3C:BE:E8 SP A 0
HLU/ALU Pairs:
HLU Number ALU Number

413 Shareable: YES 2. Remove both HPE 3PAR peer port initiator WWPNs from the EMC storage group: # naviseccli -address storagegroup -disconnecthost -host 2F:F7:02:02:AC:00:73:62:21:11:02:02:AC:00:73:62 -gname "R115-S09 Storage Group" -o # naviseccli -address storagegroup -disconnecthost -host 2F:F7:02:02:AC:00:73:62:20:11:02:02:AC:00:73:62 -gname "R115-S09 Storage Group" -o Clearing a persistent SCSI reservation from an EMC VMAX or DMX4 device Use EMC Solutions Enabler Symmetrix CLI (SYMCLI) commands to clear the SCSI reservation from a LUN. See the EMC Solutions Enabler Symmetrix CLI Command Reference, which is available on the following website: EMC Support In the following procedure, a SCSI reservation is removed from the VMAX or DMX4 device LUN 0C4. Procedure 1. Issue the symdev command to verify that there is a persistent reservation on the device: symdev command # symdev -sid 1234 list -pgr Symmetrix ID : Device Reservation Sym Config Init SA :P WWN Key A9 TDEV 00 01F: AC00B08A b08a C4 TDEV 00 01E: AC00B08A b08a C5 TDEV 00 01E: AC00B08A b08a List the storage groups containing the device: # symaccess -sid 1234 list -type storage -dev 0c4 Symmetrix ID : Storage Group Name OIU_ _SG R65-S02-SG 3. If the device is part of an HPE 3PAR Online Import Utility storage group, delete the entire HPE 3PAR Online Import Utility masking view. # symaccess -sid 1234 delete view -noprompt -name OIU_ _MV # symaccess -sid 1234 delete -type initiator -force -noprompt -name OIU_ _IG # symaccess -sid 1234 delete -type initiator -force -noprompt -name HOST_FOR R1231_ # symaccess -sid 1234 delete -type initiator -force -noprompt -name HOST_FOR R1232_ # symaccess -sid 1234 delete -type storage -force -noprompt -name OIU_ _SG # symaccess -sid 1234 delete -type port -force -noprompt -name OIU_ _PG Clearing a persistent SCSI reservation from an EMC VMAX or DMX4 device 413

414 4. Remove the device from the server storage group. NOTE: A storage group associated with a masking view cannot be empty. If multiple devices in the storage group have SCSI reservations, then these steps must be run against each device one at a time. # symaccess -sid name R65-S02-SG -type storage remove dev 0C4 5. Modify the device that was removed from the storage group to be in the not_ready state: # symdev -sid noprompt not_ready 0C4 'Not Ready' Device operation successfully completed for the device. 6. Unmap the device using the three symconfigure commands shown in the following example: # symconfigure -sid noprompt -cmd "unmap dev 0C4 from dir ALL:ALL;" preview A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix Performing Access checks...allowed. Checking Device Reservations...Allowed. Locking devices...locked. Validating configuration changes...validated. Closing configuration change request...closed. Terminating the configuration change session...done. The configuration change session has completed successfully. # symconfigure -sid noprompt -cmd "unmap dev 0C4 from dir ALL:ALL;" prepare A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix Performing Access checks...allowed. Checking Device Reservations...Allowed. Locking devices...locked. Validating configuration changes...validated. Closing configuration change request...closed. Terminating the configuration change session...done. The configuration change session has completed successfully. # symconfigure -sid noprompt -cmd "unmap dev 0C4 from dir ALL:ALL;" commit A Configuration Change operation is in progress. Please wait... Establishing a configuration change session...established. Processing symmetrix Performing Access checks...allowed. Checking Device Reservations...Allowed. Locking devices...locked. Initiating COMMIT of configuration changes...queued. COMMIT requesting required resources...obtained. Step 004 of 028 steps...executing. 414 Clearing a SCSI reservation

415 Local: COMMIT...Done. Terminating the configuration change session...done. The configuration change session has successfully completed. 7. Confirm that the device is no longer listed as a device with a reservation: # symdev -sid 1234 list -pgr Symmetrix ID : Device Reservation Sym Config Init SA :P WWN Key A9 TDEV 00 01F: AC00B08A b08a C5 TDEV 00 01E: AC00B08A b08a Confirm that the device is unmapped: # symdev -sid 1234 list -noport Symmetrix ID: Device Name Directors Device Cap Sym Physical SA :P DA :IT Config Attribute Sts (MB) F5 Not Visible???:? 08C:D0 2-Way Mir N/Grp'd (M) RW FD Not Visible???:? 08C:D2 2-Way Mir N/Grp'd (M) RW C4 Not Visible???:? 08C:D4 2-Way Mir N/Grp'd (M) RW D Not Visible???:? 08A:C1 2-Way Mir N/Grp'd (M) RW Not Visible???:? 08A:C3 2-Way Mir N/Grp'd (M) RW Modify the device to be in the ready state: # symdev -sid noprompt ready 0C4 The device is already in the requested state 10. Add the device and verify that it has been added back into the storage group: # symaccess -sid name R65-S02-SG -type storage -dev 0C4 add # symaccess -sid 1234 show R65-S02-SG -type storage Symmetrix ID : Storage Group Name : R65-S02-SG Last updated at : 05:10:54 PM on Tue Aug 26,2014 Devices : 00C3,00C4 Clearing a SCSI reservation 415

Masking View Names
{
HPDL MV
}
11. Repeat steps 4 through 10 for each device for which a reservation must be removed.

Clearing a SCSI reservation after an incomplete migration with 3PAR OS MU1 or MU2 on HDS Storage

If the migration of a LUN is not successfully completed, the SCSI reservation on the LUN on the HDS Storage source system is not cleared. Because the LUN has only partially migrated to the destination 3PAR StoreServ Storage, applications cannot start from the new storage system. To roll back to the initial situation and resume business using the HDS Storage system as the storage array, first remove the pending SCSI reservation by following these steps on a Windows host. Repeat this procedure for every volume whose migration was not completed.

For HP-UX, see the SCSI Persistent Reservation Utilities Release Notes. Under Manuals, click General reference to find the latest release notes.

Procedure

1. Stop all applications that are using the LUN whose migration did not complete.
2. Remove the zoning from the host to the 3PAR StoreServ Storage; restore the zoning from the host to the HDS Storage system if the migration was of the MDM type.
3. Using Storage Navigator or an equivalent tool, present the LUN whose migration did not complete to the host again.
4. Rescan the Windows host for new volumes.
5. Remove the reservation, using the Windows port of the sg3_utils package that was developed originally for Linux. The sg3_utils package is either part of the original Linux distribution or can be downloaded and installed separately. The Windows port is available for download from the sg3_utils website.
6. Install the utility on the host by extracting the archive into an empty directory. In the example below, the default directory is: C:\Temp\sg3_utils-1.37exe
a. Open the utility and scan for the names of the volumes that are presented to the host:
# sg_scan
PD0 [C] HP LOGICAL VOLUME A9CC50
PD1 HDS OPEN-V

Volume PD0 is a host-local drive. Volume PD1 is a volume on the HDS Storage system (identified by its serial number in the sg_scan output).
b. Check whether volume PD1 has a SCSI reservation:
# sg_persist --in --no-inquiry --read-reservation --device PD1
PR generation=0x8, Reservation follows:
Key=0x9ad
scope: LU_SCOPE, type: Exclusive Access, registrants only
In the output above, the volume has a reservation key with value 0x9ad.
c. Prepare for the removal of the reservation:
# sg_persist --out --no-inquiry --register --param-sark 0x9ad --device PD1
HP OPEN-V 5001
Peripheral device type: disk
d. Next, cancel the reservation:
# sg_persist --out --no-inquiry --clear --param-rk 0x9ad --device PD1
This command produces no output if the reservation was cleared.
e. Check to make sure that the reservation was indeed removed:
# sg_persist --in --no-inquiry --read-reservation --device PD1
PR generation=0xa, there is NO reservation held
7. Remove the presentation of the LUN to HCMDxxxxx hosts.
8. Rescan for new disks in Windows and map the disk to a drive letter.
9. Resume business by restarting the application.
10. Refresh the HDS Storage database to prepare for the next migration operation using the HPE 3PAR Online Import Utility.

Data migration for an Oracle RAC cluster use case

This use case describes data migration with the HPE 3PAR Online Import Utility for an Oracle RAC cluster configured with the cluster registry (CRS), voting disks, and data disks distributed across multiple arrays belonging to the same storage vendor (for example, across the HDS NSC, HDS USP, HDS USP_V, HDS USP_VM, or HDS VSP arrays, or across the EMC VMAX, DMX4, CX4, or VNX arrays).

Oracle supports increases to database capacity by adding more arrays. The key restriction is that if ASM is enabled, disks in an ASM disk group that are included from different arrays should all have the same performance characteristics and size. Data disks can be distributed across multiple arrays. Because the CRS and voting disks are used for cluster configuration and integrity, not for storage integrity or availability, they do not have to be configured on every array.

Data migration using the HPE 3PAR Online Import Utility for Oracle RAC clusters has been validated, and is documented here, for two specific configuration scenarios, which cover all ASM-based Oracle RAC deployments:

Oracle Database deployments before 11gR2, with the CRS and voting disks residing outside the ASM, and data disks included in the ASM disk group
Oracle Database 11gR2 deployments with the CRS, voting disks, and data disks included in the ASM disk group

Because the HPE 3PAR Online Import Utility can be used to migrate data from a single source array at a time, and because, for an Oracle RAC migration, all volumes must be included in a consistency group, the number of migrations made by the HPE 3PAR Online Import Utility for an Oracle RAC configuration distributed across multiple source arrays is equal to the number of source arrays deployed. For example, if the Oracle RAC database is distributed across three arrays (with the CRS and voting disks configured on one of them), then three migrations must be performed with the HPE 3PAR Online Import Utility, one for each array.

The order in which the migrations are executed by the HPE 3PAR Online Import Utility does not matter. For this use case, the following sequences were verified:

Completely migrating the source array with the CRS and voting disks, before migrating the source arrays with the data disks
Completely migrating a source array with the data disks, before migrating the source array with the CRS and voting disks

IMPORTANT: All of these migrations are performed using the online migration procedure, as described in this guide. During the migration, there will be instances when the Oracle RAC cluster is distributed across, and the CRS, voting, and data disks coexist on, the deployed source arrays and the destination 3PAR StoreServ Storage. This coexistence is currently supported across the 3PAR StoreServ Storage and the source storage arrays listed on the SPOCK website.

Figure 114: Oracle RAC data migration from multiple arrays

Figure 114: Oracle RAC data migration from multiple arrays on page 419 shows two source arrays: one with CRS, voting, and data disks, and the other with data disks only. The objective is to migrate the disks online from the two source arrays to a single destination 3PAR StoreServ Storage system, using the HPE 3PAR Online Import Utility. This is done in two serially implemented phases.

Configuration details:

The deployment uses Oracle Database 11gR2 RAC, with ASM enabled and the CRS, voting disk, and data disks on both source arrays included in the ASM. (Implementation would not be affected even if the CRS and voting disks were excluded from the ASM, as is required by Oracle Database implementations with releases earlier than 11gR2.)
There are two source arrays: source array 1 and source array 2.
Node 1 and node 2 are RHEL 6.4 servers.

NOTE: More than two nodes in a cluster and/or more than two source arrays are also supported for this implementation.

Because, in this example, there are two source arrays, there are two phases, or two HPE 3PAR Online Import Utility migrations, that must be deployed for a complete migration:

Migration 1: The CRS, voting disks, and data disks are migrated from source array 1 to the destination 3PAR StoreServ Storage.
Migration 2: Data disks from source array 2 are migrated to the destination 3PAR StoreServ Storage.
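Before starting the two migrations, it can help to confirm which ASM disks are backed by which source array, so that each migration's consistency group contains all of the volumes from exactly one array. The following is a minimal sketch run from one of the RAC nodes (illustrative only; the disk group name DATA, the grid user, and the use of asmcmd together with multipath for this correlation are assumptions, not part of the validated procedure):

# su - grid -c "asmcmd lsdsk -G DATA"
Path
/dev/mapper/mpathc
/dev/mapper/mpathd
# multipath -ll mpathc

The vendor and product strings in the multipath -ll output (for example, HITACHI,OPEN-V or DGC, VRAID) indicate which source array backs each ASM disk, so the disks can be grouped by array before the createmigration operations are defined.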


More information

HPE 3PAR Service Processor Software 5.0.x User Guide

HPE 3PAR Service Processor Software 5.0.x User Guide HPE 3PAR Service Processor Software 5.0.x User Guide Abstract This user guide provides information on using HPE 3PAR Service Processor software 5.0.x and the HPE 3PAR StoreServ Service Console. Part Number:

More information

Technical Support Matrix

Technical Support Matrix Technical Support Matrix Serviceguard Disaster Recovery Products Compatibility and Feature Matrix (Metrocluster with 3PAR Remote Copy) - Linux and HPUX Version 3.14, Nov 25, 2016 1 Serviceguard Disaster

More information

HP 3PAR StoreServ Storage PowerShell Toolkit v1.0 User Guide

HP 3PAR StoreServ Storage PowerShell Toolkit v1.0 User Guide HP 3PAR StoreServ Storage PowerShell Toolkit v1.0 User Guide Abstract This document contains detailed instructions on the HP 3PAR StoreServ Storage PowerShell Toolkit v1.0 installation, features, and PowerShell

More information

HP Operations Orchestration

HP Operations Orchestration HP Operations Orchestration For Windows and Linux operating systems Software Version: 9.07.0006 System Requirements Document Release Date: April 2014 Software Release Date: February 2014 Legal Notices

More information

HPE Storage Optimizer Software Version: 5.4. Best Practices Guide

HPE Storage Optimizer Software Version: 5.4. Best Practices Guide HPE Storage Optimizer Software Version: 5.4 Best Practices Guide Document Release Date: November 2016 Software Release Date: November 2016 Legal Notices Warranty The only warranties for Hewlett Packard

More information

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP Services Technical data The HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service provides the necessary

More information

Release Notes. Operations Smart Plug-in for Virtualization Infrastructure

Release Notes. Operations Smart Plug-in for Virtualization Infrastructure Operations Smart Plug-in for Virtualization Infrastructure Software Version: 12.04 Operations Manager for Windows, HP-UX, Linux, and Solaris operating systems Release Notes Document Release Date: August

More information

MSA Event Descriptions Reference Guide

MSA Event Descriptions Reference Guide MSA Event Descriptions Reference Guide Abstract This guide is for reference by storage administrators to help troubleshoot storage-system issues. It describes event messages that may be reported during

More information

HP Storage Provisioning Manager (SPM) Version 1.3 User Guide

HP Storage Provisioning Manager (SPM) Version 1.3 User Guide HP Storage Provisioning Manager (SPM) Version 1.3 User Guide Abstract This guide provides information to successfully install, configure, and manage the HP Storage Provisioning Manager (SPM). It is intended

More information

HP LeftHand SAN Solutions

HP LeftHand SAN Solutions HP LeftHand SAN Solutions Support Document Installation Manuals VSA 8.0 Quick Start - Demo Version Legal Notices Warranty The only warranties for HP products and services are set forth in the express warranty

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information

Dell PowerVault MD Storage Array VMware Storage Replication Adapter (SRA) Installation and Configuration Manual

Dell PowerVault MD Storage Array VMware Storage Replication Adapter (SRA) Installation and Configuration Manual Dell PowerVault MD Storage Array VMware Storage Replication Adapter (SRA) Installation and Configuration Manual Regulatory Model: E16S Series Regulatory Type: E16S001 Notes, Cautions, and Warnings NOTE:

More information

HPE 3PAR Service Processor Software 5.x Release Notes

HPE 3PAR Service Processor Software 5.x Release Notes HPE 3PAR Service Processor Software 5.x Release Notes Abstract The information in this document is intended for use by Hewlett Packard Enterprise customers, partners, and HPE field representatives. These

More information

HP MSA Family Installation and Startup Service

HP MSA Family Installation and Startup Service Technical data HP MSA Family Installation and HP Services Service benefits Allows your IT resources to stay focused on their core tasks and priorities Reduces implementation time, impact, and risk to your

More information

HPE OneView for VMware vcenter User Guide

HPE OneView for VMware vcenter User Guide HPE OneView for VMware vcenter User Guide Abstract This document contains detailed instructions for configuring and using HPE OneView for VMware vcenter. It is intended for system administrators who are

More information

HP 3PAR Recovery Manager Software for Oracle

HP 3PAR Recovery Manager Software for Oracle HP 3PAR Recovery Manager 4.2.0 Software for Oracle User s Guide Abstract This document provides the information needed to install, configure, and use the HP 3PAR Recovery Manager 4.2.0 Software for Oracle

More information

HP 3PAR Storage Replication Adapter 5.0 for VMware vcenter Site Recovery Manager

HP 3PAR Storage Replication Adapter 5.0 for VMware vcenter Site Recovery Manager HP 3PAR Storage Replication Adapter 5.0 for VMware vcenter Site Recovery Manager Troubleshooting Guide Abstract This document provides troubleshooting and workflow information for the HP 3PAR Storage Replication

More information

HP Universal CMDB. Software Version: DDMI to Universal Discovery Migration Walkthrough Guide

HP Universal CMDB. Software Version: DDMI to Universal Discovery Migration Walkthrough Guide HP Universal CMDB Software Version: 10.22 DDMI to Universal Discovery Migration Walkthrough Guide Document Release Date: December 2015 Software Release Date: December 2015 Legal Notices Warranty The only

More information

HP StorageWorks. P4000 SAN Solution user guide

HP StorageWorks. P4000 SAN Solution user guide HP StorageWorks P4000 SAN Solution user guide This guide provides information for configuring and using the HP StorageWorks SAN Solution. It includes hardware configuration and information about designing

More information

Disaster Recovery-to-the- Cloud Best Practices

Disaster Recovery-to-the- Cloud Best Practices Disaster Recovery-to-the- Cloud Best Practices HOW TO EFFECTIVELY CONFIGURE YOUR OWN SELF-MANAGED RECOVERY PLANS AND THE REPLICATION OF CRITICAL VMWARE VIRTUAL MACHINES FROM ON-PREMISES TO A CLOUD SERVICE

More information

HP UFT Connection Agent

HP UFT Connection Agent HP UFT Connection Agent Software Version: For UFT 12.53 User Guide Document Release Date: June 2016 Software Release Date: June 2016 Legal Notices Warranty The only warranties for Hewlett Packard Enterprise

More information

HPE Intelligent Management Center v7.3

HPE Intelligent Management Center v7.3 HPE Intelligent Management Center v7.3 Service Operation Manager Administrator Guide Abstract This guide contains comprehensive conceptual information for network administrators and other personnel who

More information

HP Intelligent Management Center Remote Site Management User Guide

HP Intelligent Management Center Remote Site Management User Guide HP Intelligent Management Center Remote Site Management User Guide Abstract This book provides overview and procedural information for Remote Site Management, an add-on service module to the Intelligent

More information

HP P6000 Enterprise Virtual Array Compatibility Reference

HP P6000 Enterprise Virtual Array Compatibility Reference HP P6000 Enterprise Virtual Array Compatibility Reference 1.0 HP P6000 software solution compatibility 2.0 HP P6000 Command View Software interoperability support 2.1 HP P6000 Command View Software upgrade

More information

HP P4000 SAN Solution User Guide

HP P4000 SAN Solution User Guide HP P4000 SAN Solution User Guide Abstract This guide provides information for configuring and using the HP SAN Solution. It includes hardware configuration and information about designing and implementing

More information

HP OpenView Storage Data Protector A.05.10

HP OpenView Storage Data Protector A.05.10 HP OpenView Storage Data Protector A.05.10 ZDB for HP StorageWorks Enterprise Virtual Array (EVA) in the CA Configuration White Paper Edition: August 2004 Manufacturing Part Number: n/a August 2004 Copyright

More information

HP StorageWorks Enterprise Virtual Array

HP StorageWorks Enterprise Virtual Array Release Notes HP StorageWorks Enterprise Virtual Array Product Version: v3.025 First Edition March, 2005 Part Number: 5697 5237 *5697-5237* This document contains the most recent product information about

More information

HPE Intelligent Management Center

HPE Intelligent Management Center HPE Intelligent Management Center Service Health Manager Administrator Guide Abstract This guide provides introductory, configuration, and usage information for Service Health Manager (SHM). It is for

More information

HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family

HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family Data sheet HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family HPE Lifecycle Event Services HPE Data Replication Solution Service provides implementation of the HPE

More information

HP 3PAR Remote Copy Software User s Guide

HP 3PAR Remote Copy Software User s Guide HP 3PAR Remote Copy 3.1.1 Software User s Guide This guide is for System and Storage Administrators who monitor and direct system configurations and resource allocation for HP 3PAR Storage Systems. HP

More information

Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring

Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring HP StorageWorks Guidelines for using Internet Information Server with HP StorageWorks Storage Mirroring Application Note doc-number Part number: T2558-96338 First edition: June 2009 Legal and notice information

More information

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:

More information

HPE LTO Ultrium 30750, 15000, 6250, 3000, 1760, and 920 External Tape Drives Start Here

HPE LTO Ultrium 30750, 15000, 6250, 3000, 1760, and 920 External Tape Drives Start Here HPE LTO Ultrium 30750, 15000, 6250, 3000, 1760, and 920 External Tape Drives Start Here Abstract This document describes how to connect a StoreEver LTO Ultrium SAS external tape drive to an external high-density

More information

HP XP P9000 Remote Web Console Messages

HP XP P9000 Remote Web Console Messages HP XP P9000 Remote eb Console Messages Abstract This document lists the error codes and error messages for HP XP P9000 Remote eb Console for HP XP P9000 disk arrays, and provides recommended action for

More information

RAID-01 (ciss) B Mass Storage Driver Release Notes

RAID-01 (ciss) B Mass Storage Driver Release Notes RAID-01 (ciss) B.11.31.1705 Mass Storage Driver Release Notes HP-UX 11i v3 Abstract This document contains specific information that is intended for users of this HPE product. Part Number: Published:

More information

HP 3PAR Storage System Installation and Startup Service

HP 3PAR Storage System Installation and Startup Service HP 3PAR Storage System Installation and Startup Service HP Care Pack Services Technical data For smooth startup, the HP 3PAR Storage System Installation and Startup Service provides deployment of your

More information

HPE 3PAR StoreServ Storage Concepts Guide

HPE 3PAR StoreServ Storage Concepts Guide HPE 3PAR StoreServ Storage Concepts Guide Abstract This Hewlett Packard Enterprise (HPE) concepts guide is for all levels of system and storage administrators who plan storage policies, configure storage

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information