HP P6000 Continuous Access Implementation Guide


1 HP P6000 Continuous Access Implementation Guide Abstract This guide explains the major factors in designing a successful disaster recovery solution using HP P6000 Continuous Access. In addition to explaining how distance and bandwidth affect performance and cost, this guide describes optional configurations and key planning for your operating systems, applications, and arrays. This guide is intended for IT managers, business managers, and storage area network (SAN) architects working in environments that include any EVA model (EVA3000/5000, EVA4x00/6x00/8x00, EVA4400, EVA6400/8400, P6300/P6500). IMPORTANT: General references to HP P6000 Continuous Access may also refer to earlier versions of HP Continuous Access EVA. P6000 is the new branding for the Enterprise Virtual Array product family. HP Part Number: T Published: July 2012 Edition: 10

2 Copyright 2008, 2012 Hewlett Packard Development Company, L.P. Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR and , Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. Warranty The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Acknowledgements Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. UNIX is a registered trademark of The Open Group. Linux is a registered trademark of Linus Torvalds in the U.S. and other countries.

3 Contents 1 HP P6000 Continuous Access...7 Features...7 Remote replication concepts...8 Write modes...8 DR groups...8 DR group write history log...9 Managed sets...9 Failover...10 Failsafe mode...10 Failsafe on Link-down/Power-up...11 Implementation checklist Designing a remote application solution...14 Tools for gathering SAN data...14 Choosing the remote site...14 High availability...15 Recovery time objective...15 Disaster tolerance...15 Disaster tolerance and distance...15 Determining the minimum separation distance...15 Distance and performance...16 Latency factor...16 Determining intersite latency...17 Evaluating intersite latency...17 Cost...17 Choosing the intersite link...17 Distance...17 Recovery point objective...18 Bandwidth...18 Bandwidth capacity and peak loads...18 Determining the critical sample period...19 Sizing bandwidth for synchronous replication...19 Sizing bandwidth for asynchronous replication...20 Evaluating bandwidth capacity...20 Choosing a write mode...20 Asynchronous write mode...21 Basic asynchronous mode...21 Enhanced asynchronous mode...21 Synchronous mode...22 Maintaining DR group I/O consistency Planning the remote replication fabric...23 Basic dual-fabric configuration...23 Basic configuration limits...23 Basic configuration rules...24 Extended fabric using long-distance GBICs and SFPs...24 Extended fabric using WDM...24 Fabric to IP...25 Fibre Channel-to-IP...25 FC-to-IP configuration limits...25 FC-to-IP configuration rules...25 Configurations with application failover...26 Contents 3

4 HP Cluster Extension...26 HP Metrocluster Continuous Access...27 HP Continentalcluster...27 Reduced-availability configurations...27 Single-fabric configuration...27 Single-switch configuration...28 Single-HBA configuration...29 Advanced configurations...29 Five-fabric configuration...29 Six-fabric configuration Planning the array configuration...35 Planning disk groups...35 Determining the number of disk groups...35 Specifying disk group properties...35 Planning DR groups...36 DR group guidelines...36 Implicit LUN transition and HP P6000 Continuous Access...37 DR group name guideline...37 Increasing the size of the write history log file in enhanced or basic asynchronous mode...38 DR groups with FATA or SAS Midline drives...38 Planning the data replication protocol...38 Selecting the data replication protocol...39 Data replication protocol performance considerations...40 Tunnel thrash...41 Planning for DR group write history logs...42 Logging in synchronous or basic asynchronous mode...42 Logging in enhanced asynchronous mode...42 Normalization...43 DR group write history log size...43 Write history log size in synchronous or basic asynchronous mode...44 Write history log file size in enhanced asynchronous mode...44 Incorrect error message for minimum asynchronous replication log size...44 Log size displayed incorrectly when creating DR groups in a mixed controller software environment...44 DR group write history log location...44 Planning replication relationships...46 Bidirectional replication...46 System fan-out replication...47 Fan-in replication...48 Cascaded replication Planning the solution...50 Operating system considerations...50 Supported operating systems...50 Operating system capabilities...50 Boot from SAN...50 Bootless failover Implementing remote replication...51 Remote replication configurations...51 Verifying array setup...51 Installation checklist...51 Verifying Fibre Channel switch configuration...51 B-series switch configuration...52 C-series switch configuration Contents

5 H-series switch configuration...54 M-series switch configuration...54 Verifying cabling...54 Changing host port data replication settings...56 Verifying path status...57 Installing replication licenses...58 Installing HP P6000 Replication Solutions Manager (optional)...58 DC-Management and HP P6000 Continuous Access...58 Creating fabrics and zones...58 Fabric configuration drawings...58 Two-fabric configuration...59 HP P6000 Command View management connections in five-fabric and six-fabric configurations...61 A single physical fabric...66 Dual physical fabric with six zones...67 Best practices for using zones with HP P6000 Continuous Access...69 Zoning management servers...70 Zoning best practices for traffic and fault isolation...70 Recommended single-fabric zoning configurations...71 Recommended dual-fabric zoning configurations...80 FCIP gateway zoning configurations...95 Configuring hosts...96 Configuring disk groups for remote replication...96 Creating and presenting source virtual disks...96 Selecting a preferred controller...97 Using the failover/failback setting...97 Using the failover only setting...98 Presenting virtual disks...98 Adding hosts...98 Creating DR groups...98 Specifying virtual disks...98 Adding members to a DR group...99 Selecting replication mode...99 Specifying DR group write history log location and size...99 Presenting destination virtual disks Backing up the configuration Setting up remote and standby management servers Testing failover Failover and recovery Failover example Planning for a disaster Failover and recovery procedures Performing failover and recovery Choosing a failover procedure Planned failover Planned Failover Procedure Unplanned failover Recover from failsafe-locked after destination loss Failback to the original source following a planned or unplanned failover Return operations to new hardware Recovering from a disk group hardware failure Failed disk group hardware indicators Disk group hardware failure on the source array Disk group hardware failure on the destination array Contents 5

6 Protecting data from a site failure Operating system procedures Resuming host I/O after failover HP OpenVMS HP Tru64 UNIX HP-UX IBM AIX Linux Novell NetWare Sun Solaris VMware Windows Red Hat and SUSE Linux LifeKeeper clusters Bootless failover using LVM with Linux Source host procedure Destination host procedure Managing remote replication Using remote replication in a mixed array environment Managing merges and normalization Throttling a merge I/O after logging Maintaining I/O performance while merging Preparing for a normalization Optimizing performance Load balancing Backing up replication configuration Using HP P6000 Replication Solutions Manager for backups Using HP Storage System Scripting Utility to capture your configuration Keeping a written record of your configuration Upgrading controller software Support and other resources Contacting HP HP technical support Subscription service Documentation feedback Product feedback Related information Documentation HP websites Typographical conventions Glossary Index Contents

1 HP P6000 Continuous Access

HP P6000 Continuous Access is the remote replication component of HP controller software. When this component is licensed and configured, the controller copies data online, in real time, to a remote array over a SAN. Properly configured, HP P6000 Continuous Access provides a disaster-tolerant storage solution that ensures data integrity and, optionally, data currency (RPO) if an array or site fails. Figure 1 (page 7) shows a typical remote replication setup with arrays on local and remote sites connected by two linked fabrics. Two ISLs connect the local and remote fabrics.

NOTE: In SAN design terminology, an ISL is also referred to as an interswitch link.

Figure 1 Basic HP P6000 Continuous Access configuration: 1. Local site 2. Remote site 3. LAN connection 4. Management server 5. Hosts 6. Local/remote fabric blue 7. ISL blue 8. Local/remote fabric gold 9. ISL gold 10. Arrays

Features

HP P6000 Continuous Access features include:
Continuous replication of local virtual disks on remote virtual disks
Synchronous, basic asynchronous, and enhanced asynchronous replication modes
Automated failover when used with other cluster software
Failsafe data protection
Ability to suspend and resume replication
Bidirectional replication
Graphical and command line user interfaces (GUI and CLUI) to simplify replication management
Automatic suspension of replication if the link between arrays is down

Support for array-to-array fan-in and fan-out
HP SCSI FC Compliant Data Replication Protocol (HP SCSI-FCP), a full SCSI protocol implementation that takes advantage of the exchange-based routing available in fabric switches. For more information, see Planning the data replication protocol (page 38).

See the HP P6000 Enterprise Virtual Array Compatibility Reference for more information on remote replication support by controller software version. See Documentation (page 128) for the link to this document.

NOTE: HP P6000 Continuous Access interacts with HP P6000 Command View or HP P6000 Replication Solutions Manager to manage remote replication. To perform replication tasks, HP P6000 Command View and HP P6000 Replication Solutions Manager must be installed on a management server. If you are using the array-based management version of HP P6000 Command View, you cannot perform remote replication tasks.

Remote replication concepts

Remote replication is the continuous copying of data from selected virtual disks on a source (local) array to related virtual disks on a destination (remote) array. Applications continue to run while data is replicated in the background. Remote replication requires a fabric connection between the source and destination arrays and a software connection (DR group) between source virtual disks and destination virtual disks.

Write modes

The remote replication write modes are as follows:
Asynchronous - The array acknowledges I/O completion before data is replicated on the destination array. Asynchronous write mode can be basic or enhanced, depending on the software version of the controller.
Synchronous - The array acknowledges I/O completion after the data is cached on both the local and destination arrays.
For more information on write modes, see Choosing a write mode (page 20).

DR groups

A DR group is a logical group of virtual disks in a remote replication relationship between two arrays. Hosts write data to the virtual disks in the source array, and the array copies the data to the virtual disks in the destination array. I/O ordering is maintained across the virtual disks in a DR group, ensuring I/O consistency on the destination array in the event of a failure of the source array. The virtual disks in a DR group fail over together, share a write history log (DR group log), and preserve write order within the group. Figure 2 (page 9) illustrates the replication of one DR group between a source array and a destination array. For more information, see Planning DR groups (page 36).

9 Figure 2 DR group replication 1. Host server 2. Fibre Channel switch 3. Host I/O 4. Replication writes 5. Source array 6. Destination array 7. Source virtual disk 8. Destination virtual disk 9. DR group DR group write history log The DR group write history log is a virtual disk that stores a DR group's host write data. The log is created when you create the DR group. Once the log is created, it cannot be moved. For more information, see Planning for DR group write history logs (page 42). Managed sets Managed sets are a feature of HP P6000 Replication Solutions Manager. A managed set is a named collection of resources banded together for convenient management. A managed set can contain DR groups, enabled hosts, host volumes, storage systems, or virtual disks. Performing an action on a managed set performs the action on all members of the set. For example, the managed set Sales_Disks might include two virtual disks, West_Sales and East_Sales. If you perform the New Snapshot action on the managed set Sales_Disks, the interface creates a new snapshot of West_Sales and a new snapshot of East_Sales. Remote replication concepts 9

NOTE: Managed sets are simply a feature that enables you to manage multiple resources easily. They do not contribute to the data consistency of a DR group. Write order consistency is maintained at the DR group level.

In managed sets:
All resources, or members, in a single managed set must be of the same type (for example, all virtual disks).
You can add a specific resource to more than one managed set.
You can add resources on more than one array to a managed set.
You should create separate managed sets for DR groups so that if a failover occurs, you can perform the actions that correspond to the changed source/destination role of the managed set members.

Failover

In HP P6000 Continuous Access replication, failover reverses replication direction for a DR group. The destination array assumes the role of the source, and the source array assumes the role of the destination. For example, if a DR group on array A is replicating to array B, a failover would cause data for the DR group to be replicated from array B to array A. You can fail over a single DR group or you can fail over multiple DR groups with a single command using a managed set. When you specify a failover action for a specific managed set, the failover occurs for all DR groups contained in the specified managed set. Without managed sets, you must fail over each DR group individually. For more information on failover settings, see Creating and presenting source virtual disks (page 96).

NOTE: Failover can take other forms:
Controller failover - The process that occurs when one controller in a pair assumes the workload of a failed or redirected controller in the same array.
Fabric or path failover - I/O operations transfer from one fabric or path to another.
This guide describes the failover of DR groups and managed sets. It does not address controller failover within a cabinet, or path or fabric failover, because redundancy is assumed.

Failsafe mode

Failsafe mode is only available when a DR group is being replicated in synchronous mode and specifies how host I/O is handled if data cannot be replicated between the source and destination array. The failsafe mode can be one of the following:
Failsafe enabled - All host I/O to the DR group is stopped if data cannot be replicated between the source array and destination array. This ensures that both arrays will always contain the same data (RPO of zero). A failsafe-enabled DR group can be in one of two states:
Locked (failsafe-locked) - Host I/O and remote replication have stopped because data cannot be replicated between the source and destination array.
Unlocked (failsafe-unlocked) - Host I/O and remote replication resume once replication between the arrays is re-established.
Failsafe disabled - If replication of data between the source and destination array is interrupted, the host continues writes to the source array, but all remote replication to the destination array stops and I/Os are put into the DR group write history log until remote replication is re-established.

11 NOTE: Failsafe mode is available only in synchronous write mode. Host I/O can be recovered by changing affected DR groups from failsafe-enabled mode to failsafe-disabled mode. This action will begin logging of all incoming writes to the source member of the Data Replication group. Failsafe on Link-down/Power-up Failsafe on Link-down/Power-up is a setting that specifies whether or not virtual disks in a DR group are automatically presented to hosts after a power-up (reboot) of the source array when the links to the destination array are down and the DR group is not suspended. This prevents a situation where the virtual disks in a DR group are presented to servers on the destination array following a failover and then the virtual disks on the source array are also presented when it reboots. Values for Failsafe on Link-down/Power-up are as follows: Enabled Virtual disks in a source DR group are not automatically presented to hosts. This is the default value assigned to a DR group when it is created. This behavior is called presentation blocking and provides data protection under several circumstances. Host presentation remains blocked until the destination array becomes available (and can communicate with the source array) or until the DR group is suspended. Disabled Virtual disks in a source DR group are automatically presented to hosts after a controller reboot. This feature can be disabled only after the DR group is created. See the HP P6000 Enterprise Virtual Array Compatibility Reference to determine if your controller software supports disabling this feature. Implementation checklist Table 1 (page 11) provides an overview and checklist of the primary tasks involved in planning and implementing an HP P6000 Continuous Access environment. Table 1 (page 11) also provides a link from each task to more detail elsewhere in this guide. Use the checklist to record your progress as you perform each task. For links to the documentation identified, see Documentation (page 128). IMPORTANT: Table 1 (page 11) is provided as an aid for implementing HP P6000 Continuous Access. It should be used in conjunction with the remaining content of this guide. If you are installing your first HP P6000 Continuous Access environment, before starting the installation read through this entire guide to ensure that you implement HP P6000 Continuous Access successfully. Table 1 HP P6000 Continuous Access implementation checklist Planning tasks (Perform before implementing HP P6000 Continuous Access) Designing a remote application solution (page 14) Things you will need: Appropriate tool for gathering SAN latency data. See Tools for gathering SAN data (page 14). Evaluate all factors impacting selection of a remote site. Define the RTO. Consider the impact of intersite latency on all applications. Bandwidth capacity and peak loads (page 18) Things you will need: Appropriate tool for gathering I/O data. See Tools for gathering SAN data (page 14). Monitor and record sustained I/O load. Monitor and record burst I/O load. Choosing the intersite link (page 17) Things you will need: Implementation checklist 11

12 Table 1 HP P6000 Continuous Access implementation checklist (continued) Appropriate tool for gathering bandwidth data. See Tools for gathering SAN data (page 14). Evaluate all factors impacting selection of the ISL. Define the RPO. Select a write mode that supports the RPO. Verify that the ISL meets bandwidth and Quality of Service (QoS) requirements for HP P6000 Continuous Access. Planning the remote replication fabric (page 23) Things you will need: HP SAN Design Reference Guide Appropriate switch best practices guide Select the desired fabric configuration. Ensure all SAN switches are compatible. Observe any switch-specific requirements for HP P6000 Continuous Access. Planning the array configuration (page 35) Things you will need: HP Enterprise Virtual Array Configuration Best Practices White Paper HP SAN Design Reference Guide HP P6000 Enterprise Virtual Array Compatibility Reference Ensure that the array configuration meets best practices guidelines. Ensure that the array has adequate capacity for DR group write history logs. Ensure that only supported arrays are used in each HP P6000 Continuous Access relationship. Implementation tasks Implementing remote replication (page 51) Verify array setup. Ensure all cabling requirements are met when connecting each array to the fabric. Install replication licenses and replication management software. Verifying Fibre Channel switch configuration (page 51) Check current switch configuration settings. Change settings as required. See switch documentation for procedures. Creating fabrics and zones (page 58) Things you will need: HP SAN Design Reference Guide Review fabric and zoning requirements for HP P6000 Continuous Access. Create the desired fabrics and zones. Configuring disk groups for remote replication (page 96) Determine the number of disk groups required to support application data being replicated. Create and present the virtual disks. Creating DR groups (page 98) Create the necessary DR groups, ensuring that all DR group guidelines are met. Select the desired replication mode. Select the DR group write history log location and size. Present the destination virtual disks. Failover and recovery (page 101) Create a disaster plan. 12 HP P6000 Continuous Access

13 Table 1 HP P6000 Continuous Access implementation checklist (continued) Select a failover and recovery procedure. Observe all operating system-specific failover and recovery procedures. Test failover and recovery. Implementation checklist 13

14 2 Designing a remote application solution This chapter describes important factors involved in designing a successful remote replication solution using HP P6000 Continuous Access: Choosing the remote site (page 14) Choosing the intersite link (page 17) Choosing a write mode (page 20) Tools for gathering SAN data A critical task in designing your HP P6000 Continuous Access implementation is gathering and analyzing the characteristics of your current SAN operating environment. This includes parameters such as I/O patterns, bandwidth, and latency. This data enables you to design a replication configuration that matches your SAN environment. NOTE: Before implementing an HP P6000 Continuous Access replication solution, contact your authorized HP account representative for assistance. They can provide you with access to the tools mentioned in this section. The following tools are available to assist you in gathering SAN data: HP Essentials Data Replication Designer A GUI-based automated tool for gathering SAN data. The Data Replication Designer gathers the SAN data and copies it to the Replication Workload Profiler (RWP), an interactive spreadsheet used by HP to evaluate how well HP P6000 Continuous Access will work within your current SAN environment. Complete instructions for using the designer are included in the HP Essentials Data Replication Designer 1.0 User Guide. NOTE: For more information on support for Data Replication Designer, go to the HP SPOCK website: Replication Workload Profiler An interactive spreadsheet that calculates replication requirements based on current SAN data. On Windows, the Data Replication Designer automatically populates the RWP with SAN data. On other operating systems, it is necessary to populate the RWP manually with the SAN data. The RWP and other spreadsheets used for gathering SAN data are available from your authorized HP account representative. HP Command View EVAPerf Gathers performance data on the array. Complete instructions for using HP Command View EVAPerf are included in the HP P6000 Command View User Guide. For more information, see Documentation (page 128). Choosing the remote site The location of the remote site can be the most important and the most complex decision in implementing a remote replication solution. It requires an analysis of competing objectives: High availability (page 15) The business need for continuous access to data without downtime Disaster tolerance (page 15) The business need for data to survive a site disaster Distance and performance (page 16) The effect of distance on replication throughput and data currency Cost (page 17) The cost of data transmission lines between sites This section examines these objectives in detail. 14 Designing a remote application solution

High availability

High availability reduces the risk of downtime through redundant systems, software, and IT processes with no SPOF. HP P6000 Continuous Access contributes to high availability by providing redundant data. Products such as HP Metrocluster and HP Cluster Extension provide highly available applications. For more information about these and other high-availability products, see the HP website:

If your business needs high availability, but does not require disaster tolerance, local and remote sites can be in the same room, building, or city. Distance and its effect on cost and performance are not important issues.

Recovery time objective

The RTO is a measure of high availability; it is the length of time the business can afford to spend returning an application to operation. It includes the time required to detect the failure, to fail over the storage, and to restart the application on a new server. RTO is usually measured in minutes or hours, and, occasionally, in days. A shorter RTO increases the need for products that automatically fail over applications and data. HP Metrocluster and HP Continentalcluster work with HP P6000 Command View to provide application and data failover in ServiceGuard HP-UX environments. HP Cluster Extension provides similar functionality for Microsoft Windows clusters and ServiceGuard on Linux environments. For more information on remote replication configurations with cluster software, see Configurations with application failover (page 26).

Disaster tolerance

If your business requires disaster tolerance, the location of the remote site is critical. Distance and its relationship to cost and performance are major concerns.

Disaster tolerance and distance

Disaster tolerance uses redundant components to enable the continued operation of critical applications during a site disaster. When two sites are separated by a distance greater than the potential size and scope of a disaster, each site is protected from a disaster on or near the other site. HP P6000 Continuous Access enables applications to build two copies of application data at sites that are far enough apart to provide disaster tolerance.

Determining the minimum separation distance

In disaster-tolerant solutions, the size of the threat to each site determines the required distance between local and remote sites. The threat radius is the distance from the center of a threat to the outside perimeter of that threat. Figure 3 (page 16) shows sample threat radii for common threat classifications. The required distance between the two sites is the sum of the maximum threat radius for each site when subjected to a common threat.

Figure 3 Threat categories based on radius: 1. Regional threat (radius between 10 and 100 kilometers, affecting up to 314,000 square kilometers) 2. Metropolitan threat (radius between 1 and 10 kilometers, affecting up to 314 square kilometers) 3. Local threat (radius less than 1 kilometer, affecting up to 3 square kilometers)

When determining the threat radius, identify the threats to both sites and the specific threat range. Sample threats include tornados, fires, floods, power loss, chemical incidents, earthquakes, hurricanes, and typhoons. Consider the shape, center, and direction of each threat. For example, if severe storms tend to travel in a specific direction, you can place the second site perpendicular to the expected route of travel from the first site. If multiple threats range in size and severity, develop your solution for the largest threat.

Distance and performance

Latency factor

HP P6000 Continuous Access can move data at extreme distances. However, the speed of light in fiber optic cables (1 millisecond per 100 kilometers, round trip) causes inherent delays, called latency. At extreme distances, latency is the limiting factor in replication performance, regardless of bandwidth. Latency becomes an even larger factor when switched IP networks are used to replicate data.

IMPORTANT: Intersite latency may impact your applications. You must ensure that all applications can accommodate the intersite latency interval. This is important when using synchronous write mode because every write I/O will incur the intersite latency.

The greater the distance, the greater the impact intersite latency has on replication performance. For example, a 1-block write that completes in 0.25 milliseconds without replication takes 0.35 milliseconds to complete with synchronous replication and zero distance between copies. Add 100 kilometers of cable and the replication of that block takes 1.35 milliseconds. The additional millisecond is the time it takes for the data to travel 100 kilometers to the destination array and for the acknowledgement to travel 100 kilometers back to the source array. Add another 100 kilometers of cable, and the same write requires 2.35 milliseconds.

NOTE: When replicating synchronously, the total write I/O service time latency is the sum of the write I/O service time on the local array, plus the round-trip network latency, plus the write I/O service time on the destination array. On slow links, queuing effects (I/Os waiting in the queue for transfer over the network) can introduce additional latency, extending the total write I/O service time far beyond that imposed by the other factors mentioned.
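The arithmetic in the example and note above can be captured in a small model. The sketch below is an illustration only (not HP-provided tooling): the 0.35 ms zero-distance service time and the 1 ms round trip per 100 km come from the example above, and the remote service time and queuing term are optional inputs you would substitute from your own measurements.

```python
# Rough model of synchronous replication write latency, based on the
# figures quoted above: ~1 ms of round-trip propagation per 100 km of
# fiber, added to the local write service time and replication overhead.
# Illustrative only; real service times depend on workload, link quality,
# and queuing effects.

LOCAL_WRITE_MS = 0.25            # local write service time without replication (example above)
REPLICATION_OVERHEAD_MS = 0.10   # zero-distance replication overhead (0.35 ms total in the example)
ROUND_TRIP_MS_PER_100KM = 1.0    # speed of light in fiber, round trip

def sync_write_latency_ms(distance_km, remote_write_ms=0.0, queue_delay_ms=0.0):
    """Estimated total service time for one synchronous replicated write."""
    propagation = (distance_km / 100.0) * ROUND_TRIP_MS_PER_100KM
    return (LOCAL_WRITE_MS + REPLICATION_OVERHEAD_MS
            + propagation + remote_write_ms + queue_delay_ms)

for km in (0, 100, 200, 500, 1000):
    print(f"{km:>5} km: {sync_write_latency_ms(km):.2f} ms")
# 0 km -> 0.35 ms, 100 km -> 1.35 ms, 200 km -> 2.35 ms, matching the example above.
```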

Table 2 (page 17) lists the intersite one-way latency inherent to other distances (approximately 0.5 ms of one-way latency per 100 kilometers of cable).

Table 2 Sample one-way latencies

One-way latency (ms)    Point-to-point cable distance in km (miles)
1                       200 (125)
3                       600 (375)
9                       1,800 (1,125)
18                      3,600 (2,250)
36                      7,200 (4,500)
60                      12,000 (7,500)
100                     20,000 (12,500) current maximum limit

Determining intersite latency

To determine intersite latency on an existing network, use network utilities such as the ping command. Obtain a 24-hour average. See the HP SAN Design Reference Guide for more information about network utilities.

Evaluating intersite latency

The Data Replication Designer can be used to gather intersite latency data. The designer is supported on Windows only. See Tools for gathering SAN data (page 14) for more information on the Data Replication Designer and other tools available for evaluating intersite latency.

Cost

The cost associated with transmission lines increases with distance and bandwidth requirements. The cost may prohibit the selection of a favorite site. If the required bandwidth proves too costly, consider moving the remote site closer to the local site or replicating only the most critical data, such as transaction or retransmission logs.

Choosing the intersite link

The location of the remote site determines the intersite link technologies that meet your distance and performance requirements. Including distance, the following factors affect bandwidth:
Distance (page 17) - The transmission distance supported by specific link technologies
Recovery point objective (page 18) - The acceptable difference between local and remote copies of data
Bandwidth capacity and peak loads (page 18) - The effect of application peak loads on bandwidth

Distance

HP P6000 Continuous Access supports direct Fibre Channel (FC) and extended Fibre Channel-to-IP (FC-to-IP) links ranging in bandwidth from Mb/s to more than 4 Gb/s. The supported transmission distance varies with the technology:
Basic fiber supports a maximum of 500 meters at 1 Gb/s; shorter lengths are supported at higher bandwidths. The distance varies with the speed of the link. For more information, see the HP SAN Design Reference Guide.
Fiber with long-distance and very-long-distance GBICs and SFPs can support up to 200 times the basic fiber distance.
Fiber with WDM supports up to 500 kilometers.
FC-to-IP gateways support the longest distances.
For detailed descriptions of supported link technologies, see Planning the remote replication fabric (page 23).
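As a rough way to obtain the 24-hour average mentioned under Determining intersite latency above, round-trip times can be sampled periodically and averaged. The sketch below is an illustration only, not an HP tool: it assumes a POSIX-style ping that accepts the -c flag (Windows uses -n), and the target address is a placeholder for a device at the remote site. Dedicated network measurement tools or the Data Replication Designer remain the recommended approach.

```python
# Illustrative sketch: sample intersite round-trip time with ping and
# average the samples over 24 hours. Assumes a POSIX ping ("-c 1");
# adjust the command for your operating system. The target address is
# a placeholder for a remote-site gateway or switch.
import re
import subprocess
import time

REMOTE_HOST = "192.0.2.10"      # placeholder address at the remote site
SAMPLE_INTERVAL_S = 60          # one sample per minute
DURATION_S = 24 * 60 * 60       # 24 hours

def ping_rtt_ms(host):
    """Return one round-trip time in ms, or None if the ping fails."""
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True)
    match = re.search(r"time[=<]([\d.]+)\s*ms", out.stdout)
    return float(match.group(1)) if match else None

samples = []
end = time.time() + DURATION_S
while time.time() < end:
    rtt = ping_rtt_ms(REMOTE_HOST)
    if rtt is not None:
        samples.append(rtt)
    time.sleep(SAMPLE_INTERVAL_S)

if samples:
    avg_rtt = sum(samples) / len(samples)
    print(f"samples: {len(samples)}  average RTT: {avg_rtt:.2f} ms  "
          f"one-way estimate: {avg_rtt / 2:.2f} ms")
```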

NOTE: Regardless of the transmission technology, HP P6000 Continuous Access does not take into account the type of media used for the intersite network connection. Acceptable media is determined by the quality of the link as described in Part IV, SAN extension and bridging, of the HP SAN Design Reference Guide.

Recovery point objective

The RPO is the amount of data loss that the business can tolerate as a result of a disaster or other unplanned event requiring failover. RPO is measured in time, and ranges from no time (zero) to hours, or in some instances even days. An RPO of zero means no completed transaction can be lost and requires synchronous replication. Note that synchronous replication mode may require more bandwidth than asynchronous. For descriptions of synchronous and asynchronous write modes, see Choosing a write mode (page 20).

Bandwidth

When you cannot adjust the distance between sites, you may be able to improve performance by increasing bandwidth. Consider a low-bandwidth link and a high-bandwidth link that are moving writes containing identical amounts of data from site A to site B. See Figure 4 (page 18). The writes move through low- and high-bandwidth links at the same speed, so the leading edges of both writes arrive simultaneously at site B. The upper link in Figure 4 (page 18) is one-third the bandwidth of the lower link, so the bits are three times longer. Because the bits are longer in the low-bandwidth link, the data takes more time to unload than the data in the high-bandwidth link. The same advantage applies to loading data into a high-bandwidth link compared to a low-bandwidth link.

Figure 4 Bandwidth and I/O rate: 1. Site A 2. T3 link (44.5 Mb/s) 3. OC3 link (155 Mb/s) 4. Site B

Bandwidth capacity and peak loads

With synchronous replication, the intersite link must accommodate the peak write rate of your applications. With asynchronous replication, the intersite link must support the peak average write rate based on the RPO (or over the RPO interval). Insufficient replication bandwidth impacts user response time, RPO, or both.
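The "longer bits" comparison in Figure 4 comes down to serialization time: the same write takes longer to clock onto a slower link. The sketch below is an illustration only (not part of the guide's tooling); the 64 KB payload is an assumed example, and the link rates are the nominal values shown in the figure.

```python
# Serialization ("unload") time for the same write on two link speeds.
# Uses the nominal rates from Figure 4; propagation time is identical on
# both links, so only the serialization time differs.
WRITE_BYTES = 64 * 1024          # example payload: one 64 KB replication write

def unload_time_ms(payload_bytes, link_mbps):
    """Time to clock the payload onto the link, in milliseconds."""
    bits = payload_bytes * 8
    return bits / (link_mbps * 1_000_000) * 1000

for name, mbps in (("T3 (44.5 Mb/s)", 44.5), ("OC3 (155 Mb/s)", 155)):
    print(f"{name}: {unload_time_ms(WRITE_BYTES, mbps):.2f} ms per {WRITE_BYTES // 1024} KB write")
# The T3 link takes roughly 3.5x longer to unload the same write, which is
# the "longer bits" effect described above.
```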

Determining the critical sample period

Working with large measurement samples can be tedious and problematic. A two-stage approach to data collection generally helps to reduce the effort. In the first stage, the historical write byte rate trends are analyzed to determine peak periods that can occur during monthly or yearly business cycles and daily usage cycles. Once a peak period is identified, a more granular measurement (the second stage in the analysis) can be made to collect detailed one-second measurements of the I/O write profile. A 1- to 8-hour interval is ideal because the measurements can be easily imported into a Microsoft Excel worksheet and charted for reduction and analysis. If you have a good understanding of your organization's business cycles, the critical sample period can be selected with very little additional data collection or reduction. If the write profile is unknown, then the critical sample period can generally be identified from daily incremental backup volumes or transaction rates from application logs. Setting up a long-term collection for trending is generally impractical as this could delay the sizing process by several weeks or more. It is imperative that measurement data for all volumes sharing the intersite replication bandwidth is collected over a common time frame so that the aggregate peak can be determined. This is especially important when selecting the critical sample period.

Table 3 (page 19) shows recommended sample rate intervals for various RPOs. Remember that the shorter the sample rate interval, the closer the solution will be to meeting your desired RPO.

Table 3 RPO sample rate intervals

Desired RPO       Sample rate interval
0-60 minutes      1 second
1-2 hours         30 seconds
2-3 hours         1 minute
3-4 hours         2 minutes
> 4 hours         Up to 5 minutes

Sizing bandwidth for synchronous replication

Application data that is replicated synchronously is highly dependent on link latency because write requests must be received at the recovery site before the application receives a completion. Write response time in a replicated environment is greatly affected by propagation delays and queuing effects. Latencies due to propagation delays can generally be measured and are typically fixed for a given configuration. Latencies due to queuing effects at the link are more difficult to estimate. Propagation delays due to distance can be estimated at 1 millisecond per 100 kilometers to account for the round-trip exchange through dark fiber. Most applications easily accommodate an additional 1 millisecond of latency when DR sites are separated by a distance of 100 kilometers, a typical metro-replication distance. At 500 to 1,000 kilometers, the 5- to 10-millisecond propagation latency accounts for 25% to 50% of the 20-millisecond average latency budgeted to applications such as . This puts a practical cap for synchronous replication at about 100 kilometers. This is also the distance that Fibre Channel data can be transmitted on single-mode 9-µm fiber at 1 Gb/s with long-distance SFPs. Congestion delays on the interconnect are another source of replication latency. For example, a 4-KB write packet routed onto an IP link operating at 44 Mb/s (T3) incurs approximately 1 millisecond of latency as the Fibre Channel packets are serialized onto the slower link.
A burst of 10 writes means the last write queued to the IP link experiences a 10-millisecond delay as it waits for the previous 9 writes to be transmitted. This sample congestion delay also consumes 50% of a 20-millisecond average response time budget for latency-sensitive applications such as Microsoft Exchange.
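The 1-millisecond serialization figure and the 10-write burst delay above follow from simple arithmetic, sketched below. This is an illustration only; framing and protocol overhead and gateway buffering are ignored, so the results approximate, rather than reproduce exactly, the figures quoted in the text.

```python
# Approximate congestion delay for a burst of writes queued onto a slow
# IP link, as in the T3 example above. Ignores protocol overhead and
# gateway buffering, so treat the results as rough estimates.
def serialization_ms(write_kb, link_mbps):
    """Time to serialize one write onto the link, in milliseconds."""
    return (write_kb * 1024 * 8) / (link_mbps * 1_000_000) * 1000

def burst_queue_delay_ms(write_kb, link_mbps, burst_size):
    """Delay seen by the last write in a burst (it waits for the others)."""
    return (burst_size - 1) * serialization_ms(write_kb, link_mbps)

per_write = serialization_ms(4, 44)            # 4-KB write on a 44 Mb/s T3 link
last_in_burst = burst_queue_delay_ms(4, 44, 10)
print(f"per write: {per_write:.2f} ms, last of a 10-write burst waits {last_in_burst:.1f} ms")
# Roughly 0.7 ms per write and ~7 ms for the last write in a burst of 10;
# with framing overhead this approaches the ~1 ms and ~10 ms figures above.
```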

Sizing bandwidth for asynchronous replication

HP P6000 Continuous Access includes an enhanced buffering technique for asynchronous replication solutions that utilizes the DR group write history log. This disk-based journaling insulates application users from latency delays caused by propagation delays and intermittent congestion on the link. However, enhanced asynchronous replication can leave user data at risk from a site disaster. The capacity of the link to move data from the log file determines the amount of exposure. Establishing an optimal balance between the cost of bandwidth and the value of the data being protected requires accurate sizing. It may appear that sizing link capacity equal to the average write byte rate is optimal. After all, whatever data goes into the log must be replicated. However, there are several problems in using averages. The primary issue is that averaging fails to take into account the tolerance to data loss as specified in the RPO. The second is a practical matter when computing averages: what time interval is meaningful for the averaging? While it may be convenient to look at the average change rate because this information is often readily available, using averages will usually lead to a sub-optimal or undersized bandwidth capacity.

Evaluating bandwidth capacity

Compare the average load and peak write rate of your applications with the capacity of intersite link technologies and determine which technology is most effective. With XCS and later, the maximum capacity measure is averaged over the RPO interval. This limitation allows I/O from a failed link or fabric to run on the active link or fabric without additional failures caused by overloading the surviving fabric. You can use the Data Replication Designer to analyze bandwidth data. The Data Replication Designer is supported on Windows only. See Tools for gathering SAN data (page 14) for more information on Data Replication Designer and other tools available for evaluating bandwidth.
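One way to avoid the averaging pitfalls described above is to size the link against the worst-case average write rate over any window equal to the RPO, rather than against the overall average. The sketch below illustrates that idea only; it is not an HP sizing tool. It assumes evenly spaced write-byte samples, for example collected with HP Command View EVAPerf or the Replication Workload Profiler over the critical sample period, and the sample values shown are hypothetical.

```python
# Illustrative sizing helper: find the worst-case average write rate over
# any sliding window of length RPO. That rate is the minimum sustained
# link capacity needed to keep the replication backlog within the RPO.
# Assumes evenly spaced samples of bytes written per interval.
def required_link_mbps(write_bytes_per_interval, interval_s, rpo_s):
    """Peak RPO-window average write rate, in megabits per second."""
    window = max(1, int(rpo_s / interval_s))          # samples per RPO window
    running = sum(write_bytes_per_interval[:window])  # bytes in the first window
    peak = running
    for i in range(window, len(write_bytes_per_interval)):
        running += write_bytes_per_interval[i] - write_bytes_per_interval[i - window]
        peak = max(peak, running)
    window_s = min(window, len(write_bytes_per_interval)) * interval_s
    return peak / window_s * 8 / 1_000_000

# Hypothetical 1-minute samples over a 12-hour busy period, sized for a 1-hour RPO.
samples = [120e6, 80e6, 300e6, 450e6, 90e6, 60e6] * 120   # bytes written per minute
print(f"required sustained link capacity: {required_link_mbps(samples, 60, 3600):.0f} Mb/s")
```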

Choosing a write mode

You specify the replication write mode when you create DR groups. The choice of write mode, which is a business decision, has implications for bandwidth requirements and RPO. Synchronous mode provides greater data currency because RPO will be zero. Asynchronous mode provides faster response to server I/O, but at the risk of losing data queued at the source side if a site disaster occurs. Asynchronous write mode can be basic or enhanced, depending on the software version of the controller. The write mode selection has implications for the bandwidth required for the intersite link. In general, synchronous mode (and shorter RPOs) requires higher bandwidth and smaller network latencies. For instance, synchronous mode can require twice the bandwidth during average workloads and ten times the bandwidth during peak loads. For complete information on which write modes are supported on each version of controller software, see the HP P6000 Enterprise Virtual Array Compatibility Reference. For more information about RPO, see Recovery point objective (page 18).

NOTE: A request to convert from synchronous mode to asynchronous mode is executed immediately. A request to convert from asynchronous mode to synchronous mode is executed after the data in the DR group write history log or the I/Os in the write pending queue are merged to the destination array. During the conversion from enhanced asynchronous mode to synchronous mode, an I/O throttling mechanism allows one new write to a DR group for every two DR group write history log entry merges.

Asynchronous write mode

In asynchronous write mode, the source array acknowledges I/O completion after the data is mirrored across both controllers on the source array. Asynchronous replication prioritizes response time over data currency. The asynchronous replication sequence depends on the version of controller software running on the arrays. For more information, see the HP P6000 Enterprise Virtual Array Compatibility Reference. XCS supports the selection of either basic asynchronous mode or enhanced asynchronous mode. The asynchronous write mode behavior for both basic and enhanced is unchanged from earlier controller software. The user-selectable asynchronous functionality is available only when both the local and remote arrays are running XCS or later.

Basic asynchronous mode

The basic asynchronous sequence is as follows:
1. A source array controller receives data from a host and stores it in cache.
2. The source array controller acknowledges I/O completion to the host.
3. The source array controller sends the data to the destination array controller.
4. The destination controller stores the data in cache.
5. The destination array controller mirrors data in its write cache and acknowledges I/O completion to the source controller.
6. When the source array receives the acknowledgment from the target array, it removes the data from the write history log.
The maximum size of the write pending queue limits asynchronous performance. VCS 3.xxx supports 64 outstanding host writes. All other VCS versions support 128 outstanding host writes. With a small write pending queue, lower bandwidths struggle to support applications with erratic or high peak load rates.

Enhanced asynchronous mode

XCS 6.xxx and later supports an enhanced asynchronous write mode, which uses the write history log for the DR group as a place to hold I/Os waiting to be replicated to the remote array. You can select the size of the log file up to a maximum of 2 TB.

NOTE: Enhanced asynchronous mode requires XCS 6.xxx or later on both arrays.

The enhanced asynchronous write sequence is as follows:
1. A source array controller receives data from a host and stores it in the cache of the local LUN and the write history log cache.
2. The source array controller acknowledges I/O completion to the host.
3. The source controller takes data from the write history log and sends the data to the remote array.
4. The destination array controller stores the data in cache and acknowledges I/O completion to the source controller.
5. When the source array receives the acknowledgment from the target array, it removes the data from the write history log.

Synchronous mode

In synchronous write mode, the source array acknowledges I/O completion after replicating the data on the destination array. Synchronous replication prioritizes data currency over response time.
1. A source array controller receives data from a host and stores it in cache.
2. The source array controller replicates the data to the destination array controller.
3. The destination array controller stores the data in cache and acknowledges I/O completion to the source controller.
4. The source array controller acknowledges I/O completion to the host.
Synchronous replication has no need for a write pending queue.

Maintaining DR group I/O consistency

HP P6000 Continuous Access maintains write order across all members of the DR group. This is done by adding a DR group-specific sequence number to each write as the data is replicated from the source array to the destination array. Before processing a write, the destination array verifies that it was received in the correct order. If any write is received out of order, processing for all writes to any member of the DR group stops. Once write order is re-established, the processing continues.

NOTE: If the write history log overflows for any reason, an out-of-order normalization occurs to resynchronize the DR group. The destination virtual disks will be I/O inconsistent until normalization completes.
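The sequence-number check described under Maintaining DR group I/O consistency can be pictured with a minimal sketch. This is a conceptual illustration, not the controller firmware's implementation; the class and method names are invented for the example. The destination applies writes only in sequence order and halts the whole DR group when it detects a gap.

```python
# Minimal illustration of per-DR-group write ordering on the destination.
# Each replicated write carries a DR-group sequence number; the destination
# applies writes only in order and stops processing the group on a gap.
# Conceptual sketch only, not the array firmware's implementation.
class DrGroupDestination:
    def __init__(self):
        self.next_seq = 1        # next sequence number expected for this DR group
        self.halted = False      # set when an out-of-order write is detected

    def receive(self, seq, member_lun, data):
        if self.halted:
            return "held"                    # group waits until order is re-established
        if seq != self.next_seq:
            self.halted = True               # gap detected: stop all writes for the group
            return "halted"
        self.apply(member_lun, data)         # in order: safe to apply to the member disk
        self.next_seq += 1
        return "applied"

    def apply(self, member_lun, data):
        pass  # placeholder for writing to the destination virtual disk

dest = DrGroupDestination()
print(dest.receive(1, "vdisk_A", b"..."))   # applied
print(dest.receive(3, "vdisk_B", b"..."))   # halted (sequence 2 missing)
print(dest.receive(2, "vdisk_A", b"..."))   # held until order is re-established
```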

3 Planning the remote replication fabric

This chapter describes the supported HP P6000 Continuous Access configurations.

Basic dual-fabric configuration

Figure 5 (page 23) shows the basic HP P6000 Continuous Access configuration with Fibre Channel links. Hosts (5) have two HBAs, one connected to the blue fabric (6) and the other connected to a redundant gold fabric (8). Arrays have two controllers, each connected to both fabrics for a total of four connections. (For configurations with more than two ports per controller, see Advanced configurations (page 29).) Local and remote fabrics are connected by intersite links (7 and 9).

Figure 5 Basic configuration over fiber: 1. Data center 1 2. Data center 2 3. LAN connection 4. Management server 5. Hosts 6. Host I/O and replication fabric blue 7. Intersite link blue 8. Host I/O and replication fabric gold 9. Intersite link gold 10. Dual-controller arrays

This configuration provides no SPOF at the fabric level. If broken cables, switch updates, or an error in switch zoning cause one fabric to fail, the other fabric can temporarily carry the entire workload.

Basic configuration limits

For fabric rules and size limits, see Part II of the HP SAN Design Reference Guide. Limits apply to the combined local-remote fabric in HP P6000 Continuous Access configurations. The number of supported arrays and hosts in an HP P6000 Continuous Access configuration depends on the array. This information is available in Part III of the HP SAN Design Reference Guide.

24 You can use switches to create zones and work around some fabric limitations. For example: Unsupported hosts and incompatible arrays can be on a fabric with HP P6000 Continuous Access if they are in independent zones. For compatibility with operating systems and other software, see the HP P6000 Enterprise Virtual Array Compatibility Reference. A fabric can include multiple HP P6000 Continuous Access solutions. For more information about zoning, see Creating fabrics and zones (page 58). Basic configuration rules The following rules apply to the basic HP P6000 Continuous Access configuration: Each array must have dual array controllers. Host operating systems should implement native or installed multipath software. For compatible multipath solutions, see the HP P6000 Enterprise Virtual Array Compatibility Reference. Local and remote arrays must be running compatible controller software. For information about supported replication relationships between arrays with VCS and XCS controller software, see the HP P6000 Enterprise Virtual Array Compatibility Reference. A minimum of two HBAs (or one dual-port HBA) is recommended for each host to ensure no SPOF between the host and the array. For maximum HBA ports, see the HP SAN Design Reference Guide. All virtual disks used by a single application must be in a single DR group; only one application per DR group is recommended. All members of a DR group will be assigned to the same array controller. Each site must have at least one management server. Two management servers are recommended for high availability after a disaster. It is highly recommended that you use dedicated HP P6000 Command View management servers with HP P6000 Continuous Access. Although you can install HP P6000 Command View on an application server that is accessing the array, care should be taken if you decide to do so. If LUN presentation is not managed properly, a shared HP P6000 Command View management/application server may have access to LUNs on both the source array and the destination array. This is undesirable and could result in two servers simultaneously performing I/O to a common LUN, resulting in an undesirable data state. In addition, copies of HP P6000 Continuous Access and array/operating system support documentation should be accessible at each site. The documentation facilitates disaster recovery, rebuilding, or repair of the surviving system if access to the other site is lost. Extended fabric using long-distance GBICs and SFPs Adding long-distance and very-long-distance GBICs and SFPs to simple Fibre Channel links increases the possible distance between sites. For more information about the use of long-distance fiber and supported GBICs and SFPs, see the HP SAN Design Reference Guide. Extended fabric using WDM Adding dense or coarse WDM to basic Fibre Channel allows greater distances between sites than long-distance GBICs and SFPs. The difference between WDM and basic fiber configurations is the addition of a multiplex unit on both sides of the intersite link. 24 Planning the remote replication fabric

When using WDM, consider the following:
WDM installation must conform to vendor specifications.
Performance is affected by extreme distance and/or limited buffer-to-buffer credits on the Fibre Channel switch.
Some switch vendors may limit the maximum distance between sites.
Additional configuration rules apply to WDM configurations:
Connecting the switch to the WDM unit typically requires one switch-to-WDM interface cable per wavelength of multimode fiber.
Switches may require an Extended Fabric license.
For more information about using WDM with fiber, see the HP SAN Design Reference Guide.

Fabric to IP

Extended fabrics convert Fibre Channel to IP to maximize the separation distance.

NOTE: Many solutions include dual fabrics between data centers to ensure there is no SPOF in the event of a fabric failure, but redundant fabrics are not required to run HP P6000 Continuous Access. Some best practice cabling solutions, such as the five-fabric configuration, do not support dual-fabric solutions. See Five-fabric configuration (page 29).

Fibre Channel-to-IP

The remote replication configuration over IP is similar to the basic remote replication configuration over fiber with the addition of Fibre Channel-to-IP (FC-to-IP) gateways. Dual fabrics require two dedicated gateways at each site, one per fabric for a total of four per solution, to eliminate SPOFs. See Figure 10 (page 31) and Figure 12 (page 34). For a current list of supported gateways and network specifications, see the HP SAN Design Reference Guide.

FC-to-IP configuration limits

Remote replication over IP has the same configuration limits as described in Basic configuration limits (page 23). Multiple instances can share the fabric if the network bandwidth is sufficient for all traffic flowing between sites in a worst-case scenario. In addition, some gateways do not support combining the data from two FC ports into one IP port. For details, see the HP SAN Design Reference Guide and vendor documentation. For information about different requirements for single intersite (interswitch) links versus shared or dual intersite (interswitch) links, see the HP SAN Design Reference Guide.

FC-to-IP configuration rules

In addition to the Basic configuration rules (page 24), consider the following specific requirements for IP configurations:
Some FC-to-IP gateways are supported only on older B-series switches and require the Remote Switch Key (vendor-dependent). For gateways requiring the Remote Switch Key, and on switches where the Remote Switch Key is installed, do not enable suppression of F-Class frames. Doing so limits the supported configuration to one switch per fabric at each site. See the HP SAN Design Reference Guide for more information.
The FC-to-IP gateways should be the same model and software version on each side of the IP network in the fabric.
The first Fibre Channel switch at each end of an FC-to-IP gateway (the first hop) should be the same model and software version. This avoids interoperability issues because the MPX will merge the fabrics.

26 Contact a third-party vendor to acquire and install all SMF optic cables, any MMF optic cables longer than 50 meters, and the FC-to-IP interface boxes. Configurations with application failover The configurations in this section include host software that works with HP P6000 Continuous Access to provide application and data failover capability. HP Cluster Extension HP Cluster Extension offers protection against application downtime due to a fault, failure, or site disaster by extending a local cluster between data centers over metropolitan distance. HP Cluster Extension reinstates critical applications at a remote site within minutes of an adverse event, integrating your open-system clustering software and HP P6000 Continuous Access to automate failover and failback between sites. This dual integration enables the cluster software to verify the status of the storage and the server cluster. The cluster software can then make correct failover and failback decisions, thus minimizing downtime and accelerating recovery. For more information, download the HP Cluster Extension documentation from the Manuals page of the HP Business Support Center website: In the Storage section, click Storage Software and then select HP Cluster Extension Software in the Storage Replication Software section. The HP P6000 Continuous Access links must have redundant, separately routed links for each fabric. The cluster network must have redundant, separately routed links. Cluster networks and HP P6000 Continuous Access can share the same links if the link technology is protocol-independent (for example, WDM) or if the Fibre Channel protocol is transformed to IP. Figure 6 (page 26) shows an example of a cluster configuration. NOTE: Most of the configurations shown in this guide will work with HP Cluster Extension provided the configuration used meets all HP Cluster Extension and any underlying cluster software requirements. Figure 6 HP Cluster Extension configuration 1. Data center 1 2. Data center 2 3. LAN connection 7. Intersite link blue fabric 8. Host I/O and replication fabric gold 9. Intersite link gold fabric 26 Planning the remote replication fabric

27 4. Management server 5. Hosts 6. Host I/O and replication fabric blue 10. Dual-controller arrays 11. Cluster site HP Metrocluster Continuous Access HP P6000 Continuous Access supports a ServiceGuard cluster running on HP-UX 11i v1 or HP-UX 11i v2 Update 2. Also known as HP-UX Metrocluster Continuous Access, this configuration has half the cluster in each of two data centers and uses HP P6000 Continuous Access to replicate data between the data centers. In the event of a fault, failure, or disaster, HP-UX Metrocluster Continuous Access automatically reconfigures the destination DR group. This allows automatic failover of ServiceGuard application packages between local and remote data centers. For more information about HP-UX Metrocluster Continuous Access EVA, see the HP website: HP Continentalcluster HP P6000 Continuous Access supports HP Continentalcluster on HP-UX 11i v1 and HP-UX 11i v2 Update 2 that is spread across separate data centers at unlimited distances. In this configuration, HP P6000 Continuous Access is used to replicate data from one site (where the primary cluster resides) to the other site (where the recovery cluster resides). Upon primary cluster failure, HP Continentalcluster fails over the ServiceGuard application packages from the primary cluster to the recovery cluster. Reduced-availability configurations IMPORTANT: The following reduced-availability configurations are not recommended for production environments. The following configurations are supported primarily to reduce the cost of test and development configurations. They are not recommended for production environments. Because they have one or more SPOFs, they do not offer the same level of disaster tolerance and/or high availability as described in Basic dual-fabric configuration (page 23). Single-fabric configuration The single-fabric HP P6000 Continuous Access solution is designed for small, entry-level tests or proof-of-concept demonstrations where some distance is needed between each of the two switches in the solution. This solution can also be used for producing copies of data needed for data migration or data mining, but is not recommended for ongoing production due to multiple SPOFs. Fabric zoning is required to isolate hosts, as documented in the HP SAN Design Reference Guide. The two switches share one intersite link, leaving the remaining ports for hosts, array controllers, and a management server. For example, if a 16-port switch is being used, the remaining 15 ports support up to: Four hosts, one array, and one management server Two hosts, two arrays, and one management server Figure 7 (page 28) shows a single-fabric configuration. Any supported fabric topology described in the HP SAN Design Reference Guide can be used. All intersite links supported in the basic remote replication configuration are also supported in the single-fabric configuration. This means that the intersite link can be direct fiber, a single WDM wavelength, or a single FCIP link. Reduced-availability configurations 27

NOTE: When creating an intersite FCIP link using B-series or C-series routers, the respective LSAN and IVR functionality can provide SAN traffic routing over the FCIP connection while preventing the merging of the two sites' fabrics into a single fabric. LSANs and IVR enable logical fabric separation of the two sites, ensuring that a change on one site's fabric does not affect the other site. The HP FCIP Distance Gateways (MPX110) will allow the fabrics on both sites to merge into a single large fabric. SAN traffic isolation can still be accomplished with the FCIP Distance Gateways using SAN zoning, but this will not provide fabric separation. When using the FCIP Distance Gateways, make fabric configuration changes carefully; it may be desirable to disable the ISL until all changes are complete and the fabric is stable.

Figure 7 Single-fabric configuration
1. Data center 1   2. Data center 2   3. LAN connection   4. Management server   5. Hosts   6. Host I/O and replication fabric   7. Intersite link   8. Dual-controller arrays

Single-switch configuration

The single-switch HP P6000 Continuous Access configuration is designed for small, single-site, entry-level tests or proof-of-concept demonstrations. This non-disaster-tolerant solution can also be used for producing copies of data needed for data migration or data mining. A 16-port switch can support a maximum of three hosts, two arrays, and one management server. Larger switches support more hosts and/or storage arrays if all HBA and array ports are connected to the same switch. Fabric zoning is required to isolate servers, as defined in the HP SAN Design Reference Guide. The fabric can be any supported fabric topology described in the HP SAN Design Reference Guide. An example of the single-switch configuration is shown in Figure 8 (page 29).

NOTE: This solution can also be used for producing copies needed for data migration or data mining, but is not recommended for ongoing production due to multiple SPOFs.

Figure 8 Single-switch configuration
1. LAN connection   2. Switch   3. Management server   4. Hosts   5. Dual-controller arrays

In this example, two hosts might be clustered together using a supported cluster technology for the operating system. The third host would be a single server running the same operating system as the clustered hosts, and therefore available as a backup to the cluster. In another example, the third host could have a different operating system and be a standalone server used for training on storage failover.

Single-HBA configuration

A host containing a single HBA can be attached to any of the following configurations:
• Basic HP P6000 Continuous Access and optional links
• Single fabric
• Single switch

This option allows the use of hosts that support only one HBA. Availability is reduced due to the SPOF in connecting the server to the array.

Advanced configurations

The following configurations use separate controller host ports for host I/O and replication I/O. This enhances security and reduces contention between host I/O and replication I/O on a single controller host port. Any HP P6000 Continuous Access configuration using FC-to-IP gateways to communicate with the destination site should adhere to this recommendation.

Five-fabric configuration

The five-fabric solution shown in Figure 9 (page 30) consists of one fabric (8) that is dedicated to replication and four fabrics (blue and gold at each site) that are dedicated to I/O between hosts and arrays. Figure 10 (page 31) shows the same configuration using FC-to-IP for the replication fabric. In the five-fabric configuration, blue and gold fabrics (6 and 7) are dedicated to host I/O. A separate black fabric, consisting of the switches (8) and the single intersite link (9), passes all replication I/O. The dedicated switch (8) combines the data into one intersite link.

NOTE: For more information about the connections used to implement this configuration, see A single physical fabric (page 66).

The five-fabric configuration can use physically separate fabrics (see Figure 24 (page 62)) or a single physical fabric zoned into five logical fabrics using switch zoning (see Figure 27 (page 66)).

When creating an intersite FCIP link using B-series or C-series routers, the respective LSAN and IVR functionality can provide SAN traffic routing over the FCIP connection while preventing the merging of the two sites' fabrics into a single fabric. LSANs and IVR enable logical fabric separation of the two sites, ensuring that a change on one site's fabric does not affect the other site. The HP FCIP Distance Gateways (MPX110) will allow the fabrics on both sites to merge into a single large fabric. SAN traffic isolation can still be accomplished with the FCIP Distance Gateways using SAN zoning, but this will not provide fabric separation. When using the FCIP Distance Gateways, make fabric configuration changes carefully; it may be desirable to disable the ISL until all changes are complete and the fabric is stable.

If you use FCIP, use the same model FCIP gateway at each location. To compensate for the SPOF in a five-fabric solution (the intersite link), HP recommends that availability be a QoS metric on the intersite link. The first Fibre Channel switch at each end of an FC-to-IP gateway (the first hop) should be the same model and software version. This avoids interoperability issues because the MPX will merge the fabrics.

Figure 9 Five-fabric configuration
1. Data center 1   2. Data center 2   3. LAN connection   4. Management server   5. Hosts   6. Host I/O fabric blue   7. Host I/O fabric gold   8. Dedicated replication fabric   9. Intersite link   10. Dual-controller arrays

Figure 10 Five-fabric configuration with FC-to-IP
1. Data center 1   2. Data center 2   3. LAN connection   4. Management server   5. Hosts   6. Host I/O fabric blue   7. Host I/O fabric gold   8. Dedicated replication fabric   9. Intersite link   10. Dual-controller arrays   11. FC-to-IP

Six-fabric configuration

The six-fabric configuration shown in Figure 11 (page 33) consists of two fabrics that are dedicated to replication and four fabrics that are dedicated to I/O between the hosts and arrays. Figure 12 (page 34) shows the same configuration using FC-to-IP for the replication fabrics.

NOTE: For more information about the connections used to implement this configuration, see Dual physical fabric with six zones (page 67).

The six-fabric configuration can use physically separate fabrics (see Figure 25 (page 63) and Figure 26 (page 64)) or two physical fabrics zoned into six logical fabrics using switch zoning (see Figure 28 (page 68) and Figure 29 (page 69)).

When creating an intersite FCIP link using B-series or C-series routers, the respective LSAN and IVR functionality can provide SAN traffic routing over the FCIP connection while preventing the merging of the two sites' fabrics into a single fabric. LSANs and IVR enable logical fabric separation of the two sites, ensuring that a change on one site's fabric does not affect the other site. The HP FCIP Distance Gateways (MPX110) will allow the fabrics on both sites to merge into a single large fabric. SAN traffic isolation can still be accomplished with the FCIP Distance Gateways using SAN zoning, but this will not provide fabric separation. When using the FCIP Distance Gateways, make fabric configuration changes carefully; it may be desirable to disable the ISL until all changes are complete and the fabric is stable.

In a six-fabric configuration, each HP P6000 Continuous Access relationship must include at least one eight-port controller pair. The first Fibre Channel switch at each end of an FC-to-IP gateway (the first hop) should be the same model and software version. This avoids interoperability issues because the MPX will merge the fabrics. If you use FCIP, use the same model FCIP gateway at each location.

In this example, four local and remote fabrics (6 and 7 at each site) are dedicated to host I/O. At the same time, there are separate redundant replication fabrics made up of switches (8 and 10) and two intersite links (9 and 11).

Figure 11 Six-fabric configuration
1. Data center 1   2. Data center 2   3. LAN connection   4. Management server   5. Hosts   6. Host I/O fabric blue   7. Host I/O fabric gold   8. Dedicated replication fabric gold   9. Intersite link gold fabric   10. Dedicated replication fabric blue   11. Intersite link blue fabric   12. Dual-controller arrays

Figure 12 Six-fabric configuration with FC-to-IP
1. Data center 1   2. Data center 2   3. LAN connection   4. Management server   5. Hosts   6. Host I/O fabric blue   7. Host I/O fabric gold   8. Dedicated replication fabric gold   9. Intersite link gold fabric   10. Dedicated replication fabric blue   11. Intersite link blue fabric   12. Dual-controller arrays   13. FC-to-IP gold replication fabric   14. FC-to-IP blue replication fabric

4 Planning the array configuration

This chapter provides an overview of factors to consider when planning an array configuration for remote replication. Many remote replication features depend on the array controller software. For more information about planning, see the HP Enterprise Virtual Array Configuration Best Practices White Paper for your array model.

Planning disk groups

Plan the disk groups needed to meet your I/O requirements when you configure the array. See the HP Enterprise Virtual Array Configuration Best Practices White Paper for recommendations on configuring the array properly.

When data is replicated remotely, application performance is not necessarily improved by increasing the number of disks in a disk group, because response time for application writes includes the time for replication. In addition, sequential access (read or write) is limited by per-disk performance rather than by the number of disks in the disk group. In synchronous mode, performance will likely be limited by replication before it is limited by the number of disks. When using enhanced asynchronous write mode, DR group logging will add to the write workload imposed on the array.

Determining the number of disk groups

To determine if the default disk group will meet your remote replication needs, consider the following:
• Separate disk groups can help ensure that data is recoverable if a disk group fails. However, multiple disk groups result in a slightly higher cost of ownership and potentially lower performance.
• In general, distributing the workload across the greatest number of disks in a single disk group provides the best performance.
• Disk groups must provide sufficient free space for snapshots and snapclones (if used), and for DR group write history logs.
• A DR group can contain virtual disks from multiple disk groups, but all DR group member virtual disks must be in the same array and must be set to use the same preferred controller on the array.

For specific guidelines on choosing the number and size of disk groups, see the HP Enterprise Virtual Array Configuration Best Practices White Paper for your array model.

Specifying disk group properties

Use HP P6000 Command View to initialize each local and remote array and create additional disk groups as needed. When configuring disk groups for remote replication, consider the following:
• Assign different names to local and remote arrays to ensure the ability to fail over DR groups.
• Select disk protection level double for disk groups that will contain DR group members. (Note that this differs from the best practice in nonreplicating configurations.) The protection level determines the capacity reserved for reconstructing disks after a disk failure. Selecting double disk protection reserves the largest amount of disk capacity and provides the most data protection.
• Calculate the disk group occupancy alarm setting according to the array best practice, and ensure that you include the total maximum capacity for all DR group write history logs (a rough planning sketch follows this section). See the HP Enterprise Virtual Array Configuration Best Practices White Paper for your array model.
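As a rough aid for the occupancy alarm item, the following sketch estimates how much of a disk group the reserved items consume so the alarm can be set below that point. It is illustrative only; the function, the reserve figures, and the resulting threshold are assumptions, not HP's published formula, which is defined in the best practices white paper.

    # Rough disk group occupancy planning. All figures are example assumptions;
    # follow the HP Enterprise Virtual Array Configuration Best Practices White
    # Paper for the authoritative calculation.

    def occupancy_alarm_percent(group_tb, protection_reserve_tb,
                                snapshot_reserve_tb, total_max_log_tb):
        # Write history logs are Vraid1, so they consume twice the requested capacity.
        reserved_tb = (protection_reserve_tb + snapshot_reserve_tb
                       + 2 * total_max_log_tb)
        return max(0, int(100 * (1 - reserved_tb / group_tb)))

    # Example: 20 TB disk group, 1.2 TB protection reserve, 2 TB snapshot space,
    # 1 TB total maximum DR group write history log size
    print(occupancy_alarm_percent(20, 1.2, 2, 1), "%")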

Planning DR groups

Virtual disks that contain the data for one application must be in one DR group. For optimum failover performance, limit the virtual disks in a DR group to as few as possible.

DR group guidelines

The following guidelines apply to DR groups:
• Source and destination arrays must have remote replication licenses.
• The maximum number of virtual disks in a DR group and the maximum number of DR groups per array vary with controller software versions. For current supported limits, see the HP P6000 Enterprise Virtual Array Compatibility Reference.
• The array selected as the destination is specified by the administrator when setting up replication and creating the DR group.
• All virtual disks that contain data for a common application must be in a common DR group. HP recommends one application per DR group.
• All interdependent virtual disks, such as host-based logical volumes, must be in a common DR group.
• DR groups can contain members from multiple disk groups.
• DR group members can exist in any supported redundancy (Vraid) level.
• DR groups that include FATA drives must meet the necessary configuration requirements. For more information on FATA drives, see DR groups with FATA or SAS Midline drives (page 38).
• All members of a DR group have a common preferred controller.
• Virtual disks added to a DR group take on the presentation and failover status of the first virtual disk that was added to the DR group. The preferred mode setting is failover/failback.
• With some versions of controller software, virtual disks cannot be added to or deleted from a DR group when operating in enhanced asynchronous or basic asynchronous write mode. You must change the DR group to synchronous write mode and wait for the write history log or the write pending queue to drain before you can add or remove members. This restriction is controller software version dependent. To determine if your array has this restriction, see the HP P6000 Enterprise Virtual Array Compatibility Reference.
• Failover is permitted only from the destination array.
• When failsafe mode is enabled, a DR group cannot be suspended.
• A suspended DR group cannot be failed over and its members cannot be removed. Exception: if ISLs are broken, a suspended DR group can be failed over.
• A DR group cannot be deleted if a member of the DR group is presented on the destination array.
• LUN shrink of virtual disks in a DR group is not supported for any VCS or XCS controller software version. LUN expansion is not supported on VCS 3.xxx. See the HP P6000 Enterprise Virtual Array Compatibility Reference to determine if your array supports this feature.
• A DR group can be failed over or deleted while it is normalizing in controller software version 0952x000 or later.
• Some versions of controller software support auto suspend when a full copy of the DR group is required. This feature can be used to protect the data at the destination site by delaying the full copy operation until a snapshot or snapclone of the data has been made. See the HP P6000 Enterprise Virtual Array Compatibility Reference to determine if your array supports this feature.

To be added to a DR group, a virtual disk (see the illustrative pre-check sketch at the end of this section):
• Cannot be a member of another DR group
• Cannot be a snapshot
• Cannot be a mirrorclone
• Must be in a normal operational state
• Must use mirrored cache

CAUTION: Before replicating a Vraid0 source Vdisk or creating a Vraid0 remote copy, consider the limitations of Vraid0. Although Vraid0 offers the most compact storage of your data, it carries no data redundancy. If you select a Vraid0 source Vdisk, the failure of a single drive in its disk group will cause the complete loss of its Vraid0 data, and replication from it will stop. If you choose to create a Vraid0 destination Vdisk, the failure of a single drive in its disk group will cause the complete loss of its Vraid0 data, and replication to it will stop.

Implicit LUN transition and HP P6000 Continuous Access

Implicit LUN transition automatically transfers management of a virtual disk to the array controller receiving the most read requests for that virtual disk. This feature improves performance by reducing the overhead incurred when servicing read I/Os on the non-managing controller. Implicit LUN transition is enabled in VCS 4.xxx and all versions of XCS.

When creating a virtual disk, one of the array controllers is selected to manage the virtual disk. Only the managing controller can issue I/Os to a virtual disk in response to host read and write requests. If a read I/O request arrives on the non-managing controller, the read request must be transferred to the managing controller for servicing. The managing controller issues the I/O request, caches the read data, and mirrors that data to the cache on the non-managing controller, which then transfers the read data to the host. Because this type of transaction, referred to as a proxy read, requires additional overhead, it provides less than optimal performance. (There is little impact on a write request because all writes are mirrored in both controllers' caches for fault protection.) With implicit LUN transition, if the array detects that a majority of read requests for a virtual disk are proxy reads, management of the virtual disk is transitioned to the non-managing controller. This improves performance by making the controller that is receiving most of the read requests the managing controller, which reduces the proxy read overhead for subsequent I/Os.

On XCS and later, implicit LUN transition is disabled for all members of an HP P6000 Continuous Access DR group. Because HP P6000 Continuous Access requires that all members of a DR group be managed by the same controller, it would be necessary to move all members of the DR group if excessive proxy reads were detected on any virtual disk in the group. This would impact performance and create a proxy read situation for the other virtual disks in the DR group. Not implementing implicit LUN transition on a DR group does create the possibility that a virtual disk in the DR group may be experiencing excessive proxy reads.

DR group name guideline

HP recommends that you assign unique names to DR groups. Duplicate names are supported for DR groups on different arrays, but they are not supported for DR groups on the same array. DR group names are case sensitive in HP P6000 Command View but are not case sensitive in HP P6000 Replication Solutions Manager. In rare cases, this can lead to issues in HP P6000 Replication Solutions Manager.
For example, if you use HP P6000 Command View to create two DR groups, DRgroupA and drgroupa, on the same array, there will be no problem in HP P6000 Command View; however, actions and jobs in HP P6000 Replication Solutions Manager involving either of these DR groups may fail or result in unexpected behavior.
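The membership rules listed earlier in this section can be summarized as a simple pre-check. The sketch below is illustrative only; the VirtualDisk structure and its field names are assumptions made for the example, not an HP API, and the real checks are enforced by the management and controller software.

    # Illustrative pre-check of the DR group membership rules listed above.
    # The VirtualDisk fields are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class VirtualDisk:
        in_dr_group: bool
        is_snapshot: bool
        is_mirrorclone: bool
        operational_state: str      # for example, "normal"
        mirrored_cache: bool

    def eligible_for_dr_group(vdisk: VirtualDisk) -> bool:
        return (not vdisk.in_dr_group
                and not vdisk.is_snapshot
                and not vdisk.is_mirrorclone
                and vdisk.operational_state == "normal"
                and vdisk.mirrored_cache)

    print(eligible_for_dr_group(VirtualDisk(False, False, False, "normal", True)))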

Increasing the size of the write history log file in enhanced or basic asynchronous mode

You can expand the size of a DR group member (virtual disk) whether the DR group is in synchronous or asynchronous mode. However, when you expand the size of a virtual disk in a DR group operating in enhanced asynchronous or basic asynchronous mode, the write history log file size does not increase. See the HP P6000 Enterprise Virtual Array Compatibility Reference.

To increase the log file size in enhanced asynchronous mode, you must first set the DR group to synchronous mode as follows:

NOTE: The following procedure requires you to temporarily change the write mode of the DR group to synchronous. While host I/O is being replicated synchronously, server performance will be negatively impacted.

1. Change the write mode from asynchronous to synchronous.
2. Allow the write history log to drain.
3. Increase the size of the log.
4. Change the write mode back to asynchronous.

DR groups with FATA or SAS Midline drives

HP P6000 Continuous Access supports the use of FATA or SAS Midline drives on an array. However, you must ensure that I/O activity to the source array DR group members does not exceed the reduced duty cycle requirements for FATA or SAS Midline drives. Before using FATA or SAS Midline drives in an HP P6000 Continuous Access environment, consider the following factors:
• FATA drive duty cycle — FATA or SAS Midline drives have a significantly reduced duty cycle relative to Fibre Channel drives.
• Write I/O rate on the source array — Evaluate the level of write activity to the DR group.
• Type of write I/Os (sequential or random) — Evaluate the type of write I/O to the DR group.

When using FATA or SAS Midline drives on the destination array, you must ensure that the drive duty cycle will not be exceeded if a DR group failover occurs. In the event of a failover, write and read I/O is directed to the new source (previously the destination) DR group members. If the read and write rates and the types of writes to the source array do not meet the reduced duty cycle requirements of the FATA drives, the drives should not be used for DR group destination virtual disks.

IMPORTANT: If your environment requires higher performance and reliability, consider using Fibre Channel drives rather than FATA drives for the destination array DR group. This will ensure a consistent level of operation in the event of a failover. When using FATA drives, make sure they have the latest drive firmware installed.

Planning the data replication protocol

Fibre Channel switches typically offer two types of frame routing between N_Ports:
• Source ID/Destination ID (SID/DID) routing — Routes all exchanges between a port pair over the same path through the fabric.
• Exchange-based routing — Transfers all frames within a SCSI exchange using the same path. Other SCSI exchanges can use alternate paths.

NOTE: With HP B-series switches, the SID/DID protocol is known as port-based routing; with HP C-series switches, it is known as flow-based load balancing. Both B-series and C-series switches use the term exchange-based routing for SID/DID plus originator exchange ID (OXID) routing.

The original replication protocol used with HP P6000 Continuous Access is HP FC Data Replication Protocol (HP-FC). This protocol uses multiple exchanges for each data replication transfer. One or more exchanges are used to transfer data, and an additional exchange is used for command information. HP-FC requires that all frames in the transfer be delivered in order using the same path, even if they are for different exchanges. Therefore, if HP-FC is enabled, the fabric must use SID/DID routing.

The newer HP SCSI FC Compliant Data Replication Protocol (HP SCSI-FCP) is a full SCSI protocol implementation and can take advantage of the exchange-based routing available in fabric switches. Replication data transfer SCSI exchanges can use alternate paths. HP SCSI-FCP and HP-FC protocols are supported on controller software versions 0953xxxx or later; earlier versions of the controller software support HP-FC only. The successful creation of DR groups requires that both the source and destination arrays be configured for compatible replication protocols.

NOTE: H-series FC switches are only supported with XCS or later and the HP SCSI-FCP protocol. For the latest information about supported firmware, see the P6000/EVA Array Streams and the H-series FC Switch Connectivity Stream on the HP Single Point of Connectivity Knowledge (SPOCK) website. You must sign up for an HP Passport to enable access.

Selecting the data replication protocol

When selecting the data replication protocol for the array, three options are available. The three options are shown in Data replication protocol selection (page 40); the currently active protocol is indicated by the selected option.

• HP FC Data Replication Protocol (HP-FC) — Choosing this option requires that all transfers be completed in order, which is accomplished by the proper configuration of the SAN fabric switches for SID/DID. For more information on the required switch settings, see Verifying Fibre Channel switch configuration (page 51).

NOTE: The HP-FC protocol should not be used with a SAN configured for exchange-based routing.

An array configured for HP-FC will successfully create a DR group only with another array using HP-FC. HP-FC is the default protocol for controller software versions prior to XCS 0953xxxx and can be selected as a protocol in later versions. Use of HP-FC with an improper fabric configuration will result in significant performance degradation.

• HP SCSI FC Compliant Data Replication Protocol (HP SCSI-FCP) — Choosing this option supports the transfer of data replication traffic in a fabric configured for exchange-based (SID/DID OXID) routing. This protocol takes advantage of the exchange-based fabric setting, but is not dependent on it; exchange-based routing is not required when using this protocol. For more information on the required switch settings, see Verifying Fibre Channel switch configuration (page 51).

• Either — Choose this option when the SAN contains arrays running both HP-FC and HP SCSI-FCP protocols. This option enables arrays capable of HP-FC only to successfully create DR groups with arrays running controller software version 0953xxxx or later.
This facilitates the migration of data from older arrays to newer arrays. The same fabric considerations for HP FC Data Replication Protocol also apply to the Either option.

NOTE: An array running XCS version or later with the protocol selection set for Either may have DR groups created with arrays configured for HP-FC or HP SCSI-FCP. However, the fabric must be set to SID/DID routing only.

The three available options are shown in Figure 13, Data replication protocol selection (page 40). The window indicates the protocol currently selected. The following guidelines apply when selecting the replication protocol for each indicated task:

• Installing a new array — When initializing a new array with controller software version or later installed, the replication protocol defaults to HP SCSI-FCP. The protocol can be changed to HP-FC if necessary.
• Upgrading an array — When upgrading an array to or later from an earlier version that does not support HP SCSI-FCP, the replication protocol remains at HP-FC, regardless of whether DR groups are present.
• Downgrading an array — When downgrading an array from controller software version or later to an earlier version that does not support HP SCSI-FCP, you must configure the fabric for SID/DID routing and then change the data replication protocol to HP-FC before downgrading. These configuration changes must be completed before downgrading the array and bringing the DR groups back online. When changing the protocol, data replication stops until both arrays in the HP P6000 Continuous Access relationship are set to a compatible protocol.

Figure 13 Data replication protocol selection

Data replication protocol performance considerations

The HP-FC modified SCSI protocol replicates data using a single round trip, compared to the two round trips used by the standard SCSI protocol. HP SCSI-FCP follows the standard SCSI protocol, requiring two round trips to complete a data transfer. The following considerations can minimize the additional link overhead introduced by the dual round trip required by HP SCSI-FCP.

NOTE: The term IP acceleration is used here to collectively refer to the Brocade FCIP FastWrite feature and the Cisco FCIP Write Acceleration feature. IP acceleration can only be used if the replication network between two arrays is using the HP SCSI-FCP protocol. IP acceleration MUST NOT be enabled when using the HP-FC protocol.

The following information assumes that, in both single-network and dual-network link environments, each individual network link is capable of maintaining the solution's defined RPO. If this is not the case, each network link is considered a SPOF because the failure of one network link prevents the solution from meeting its defined RPO. If each network link cannot maintain the solution's defined RPO, you must not enable the IP acceleration feature of the Brocade or Cisco FC-IP routers. It is also assumed that the packet loss and network latency jitter on the networks fall within HP's defined acceptable range for HP P6000 Continuous Access replication.
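As a rough planning aid for the assumption above, the following sketch checks whether a single long-distance link can keep up with the peak replication workload and therefore sustain the solution's RPO on its own. It is illustrative only; the function and the workload figures are assumptions, not values from this guide, and real sizing should follow the bandwidth-planning guidance elsewhere in this document.

    # Rough check: can one link sustain the defined RPO by itself?
    # All inputs are example assumptions; substitute measured values.

    def link_meets_rpo(peak_write_mbps, link_mbps, peak_duration_s, rpo_s):
        # True if the backlog built during the peak can be drained within the RPO.
        # Assumes the write rate drops well below the link rate after the peak.
        if link_mbps >= peak_write_mbps:
            return True
        backlog_mbits = (peak_write_mbps - link_mbps) * peak_duration_s
        drain_s = backlog_mbits / link_mbps
        return drain_s <= rpo_s

    # Example: 400 Mb/s write peak for 10 minutes over a 155 Mb/s (OC-3) link
    print(link_meets_rpo(peak_write_mbps=400, link_mbps=155,
                         peak_duration_s=600, rpo_s=300))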

HP currently supports three FCIP router families: B-series, C-series, and the HP IP Distance Gateway. The FCIP configuration can have either single or dual long-distance network links between the local and remote gateways. Both the Brocade and the Cisco FC-IP routers support IP acceleration, which optimizes the SCSI protocol across the IP network by reducing the number of round trips. The HP IP Distance Gateway does not currently support IP acceleration. Data compression remains supported on all of these products. For detailed information on FCIP router support, see the HP SAN Design Reference Guide.

In a single long-distance IP network link solution, consider enabling the FCIP router IP acceleration capability to optimize the SCSI protocol on the IP network. If IP acceleration is not enabled in a dual-network environment, the FCIP router load-balances I/O equally across the two links. If IP acceleration is enabled in a dual-network environment, both the Brocade and Cisco FCIP routers force all replication traffic to use a single network path, even if redundant long-distance IP networks are available. This behavior is characteristic of these routers and cannot be changed. In a redundant replication network solution, enable IP acceleration only if both links are capable of independently meeting the solution's required RPO and provide equivalent quality of service (QoS); do not enable it if either network cannot meet the solution's RPO.

HP SCSI-FCP also allows a tunnel to use any or all of the available paths to the remote controller, which may improve overall performance. To enable the simultaneous use of more than one array host port for tunnel traffic, the ports used for data replication on both the source and destination arrays should be set to an equivalent priority.

NOTE: These priority settings should only be applied to ports running the HP P6000 Continuous Access protocol. If it is necessary to change the data replication protocol setting with existing DR groups, data replication will stop until both arrays in the HP P6000 Continuous Access relationship are set to a compatible protocol.

Tunnel thrash

Tunnel thrash is the frequent closing and opening of a tunnel while holding host I/O in the transition. This occurs when peer controllers can see each other but cannot sustain replication on any path. Tunnel thrash can be caused by the following conditions:
• High volumes of packet loss
• Incorrectly configured routers
• Rerouted IP circuits
• Oversubscribed circuits

Although tunnel thrash is rare, if it occurs a critical event is placed in the controller event log and displayed in HP Command View EVA. An event will be logged for each DR group that shares the affected tunnel.

If tunnel thrash occurs, perform the following tasks to resolve the situation and return to normal operation:
• Check all switches and routers to determine if there are high volumes of packet loss.
• Ensure that all switches and routers are configured correctly.
• Contact your service provider to determine if the circuit routing has been changed.
• Determine if tunnel thrash occurs only during periods of peak activity. If it does, the circuit may be oversubscribed and you may need to increase the bandwidth.

Tunnel thrash can occur during normalization in a configuration with two separate IP paths (two-fabric or six-fabric). During normalization, the process may detect high latency on the link being used but low latency on the unused link. This can cause the normalization process to switch to the unused link. This pattern can repeat itself, causing thrashing. To avoid this situation, disable one link until the normalization is complete.

NOTE: An informational event is generated when a tunnel opens or closes. Excessive numbers of these events are an indication of tunnel thrashing, which may lead to DR group forced logging to maintain host accessibility.

Planning for DR group write history logs

The DR group write history log is a virtual disk that stores a DR group's host write data. The log is created when you create the DR group. Once the log is created, it cannot be moved. You must plan for the additional disk capacity required for each DR group write history log. For more information on DR group log size, see DR group write history log size (page 43).

NOTE: You must plan for the consumption of additional capacity before implementing HP P6000 Continuous Access. If insufficient capacity is encountered, the request to enable asynchronous replication will fail. This need for sufficient capacity applies to both the source and destination arrays.

In all write modes, the DR group write history log is structured as Vraid1, which consumes twice the capacity requested for the log. Although Vraid1 consumes more capacity, it provides the highest level of protection for the log content. A portion of the write history log is used for metadata. On the EVA3000/5000 and EVA4x00/6x00/8x00, 3.125% of the log is reserved for metadata. On the EVA4400, EVA6400/8400, and the P6300/P6500, 6.24% of the log is reserved.

Logging in synchronous or basic asynchronous mode

In synchronous mode or basic asynchronous mode, the DR group write history log stores data when replication to the destination DR group is stopped because the destination DR group is unavailable or suspended. This process is called logging. When replication resumes, the contents of the log are sent to the destination virtual disks in the DR group. This process of sending I/Os contained in the write history log to the destination array is called merging. Because the data is written to the destination in the order that it was written to the log, merging maintains an I/O-consistent copy of the DR group's data at the destination.

Logging in enhanced asynchronous mode

In enhanced asynchronous mode, the DR group write history log acts as a buffer and stores the data until it can be replicated. The consumption of the additional capacity required for the log should not be viewed as missing capacity; it is capacity used to create the log. If necessary, you can reclaim allocated log disk space from a DR group in enhanced asynchronous mode. You must first change the write mode to synchronous and then use the log control feature to reduce the log size.

When the log content has been drained, you can return the DR group to enhanced asynchronous mode. Until the DR group is returned to enhanced asynchronous mode, it operates in synchronous mode, which may impact performance. Allocated log file space is not decreased when DR group members are removed. Log space usage will increase when members are added to an existing DR group, unless the size of the log disk has reached the maximum of 2 TB or has been fixed to a user-defined value. For the default maximum size, see the HP P6000 Enterprise Virtual Array Compatibility Reference.

NOTE: For XCS 6.0xx and 6.1xx, asynchronous replication mode is not available during the creation of a DR group. When creating a DR group in either of these versions of XCS, you must wait for the completion of the initial normalization before changing the replication mode to asynchronous. For XCS or later managed using HP P6000 Command View versions or later, the creation of DR groups in asynchronous mode is allowed. These combinations of XCS controller software and management software also enable the addition or removal of DR group members while in asynchronous replication mode. See the HP P6000 Enterprise Virtual Array Compatibility Reference for details.

Normalization

The method of synchronizing source and destination virtual disks is called normalization. A normalization can occur whenever the source and destination arrays need to be brought back into synchronization. When a DR group is first created, a full-copy normalization occurs to copy all the data in the DR group from the source array to the destination array, bringing the two arrays into synchronization. A normalization also occurs if the write history log used by a DR group overflows or is invalidated by the storage administrator. Normalizations copy data from the source array to the destination array in 128 KB blocks (a rough duration estimate follows this section). When a write history log overflows, the controller invalidates the log contents and marks the DR group for normalization. In some cases, the normalization is optimized to copy only the blocks that were written before the write history log overflowed, rather than all the data in the DR group.

DR group write history log size

You can set the maximum size for the DR group write history log while in synchronous mode. The minimum size of the log depends on the replication mode. The default maximum value for the log differs for each replication mode and is based on the controller software version. For details on maximum and default log sizes, see the HP P6000 Enterprise Virtual Array Compatibility Reference.

NOTE: If you are using XCS or later and you choose enhanced asynchronous mode, the same amount of space must be available for the DR group write history log at both the source and destination sites when specifying the maximum log size.

With XCS and higher, you can specify the size of the DR group write history log. It is important to ensure that the write history log is large enough that, under normal operating circumstances, it will not overflow and result in an unexpected normalization. For XCS 6.2xx and later, space for the log can be de-allocated by converting the DR group to synchronous mode, waiting for the write history log to drain, and then specifying the new size.
Adding or removing members can be done in either synchronous or asynchronous mode of operation.
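Because a full-copy normalization transfers the entire DR group across the intersite link, it can take many hours on low-bandwidth links. The sketch below is illustrative only; the function, the 70% efficiency factor, and the example sizes are assumptions rather than figures from this guide.

    # Rough full-copy normalization time estimate.
    # All inputs are example assumptions; substitute your own values.

    def normalization_hours(dr_group_gb, link_mbps, link_efficiency=0.7):
        # link_efficiency approximates protocol overhead and competing traffic.
        data_bits = dr_group_gb * 8 * 1000**3          # decimal GB to bits
        effective_bps = link_mbps * 1_000_000 * link_efficiency
        return data_bits / effective_bps / 3600

    # Example: 2 TB DR group over a 155 Mb/s (OC-3) link
    print(f"{normalization_hours(2000, 155):.1f} hours")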

Write history log size in synchronous or basic asynchronous mode

When using synchronous mode or basic asynchronous mode, if logging occurs because replication has been suspended or the replication links have failed, the size of the log file expands in proportion to the amount of writes. The size of the log file can increase only up to the user-specified maximum value or to the controller software default maximum value. The size of the log cannot be changed while in basic asynchronous mode. You must change the write mode to synchronous, change the log file size, and then return to basic asynchronous mode.

In synchronous mode and basic asynchronous mode, the log grows as needed when the DR group is logging, and it shrinks as entries in the log are merged to the remote array. The controller considers the log disk full when one of the following occurs:
• No free space remains in the disk group that contains the log disk.
• The log disk reaches 2 TB of Vraid1 space.
• The log reaches the default or user-specified maximum log disk size.

Write history log file size in enhanced asynchronous mode

The DR group write history log file size is set when you transition the DR group to enhanced asynchronous mode. The space for the DR group write history log must be available on both the source and destination arrays before the DR group is transitioned to enhanced asynchronous mode. Once set, the space is reserved for the DR group write history log and cannot be reduced without first transitioning to synchronous mode and allowing the log to drain. Note that running in synchronous mode will probably have a large impact on server write I/O response times.

For known link outage periods, the size of the log required is directly related to the length of time the link is down and the data generation rate. For planned periods of link down time, you can calculate the log size required (see the sizing sketch at the end of this section). Any write history log sizing done in anticipation of planned or unplanned link outages is in addition to the sizing done to ensure the log does not overflow during normal operation.

Incorrect error message for minimum asynchronous replication log size

If you change the write mode of a DR group to enhanced asynchronous (between arrays running XCS 6.0xx or later), you may see the following error message: Invalid user defined log size. If this occurs, check the size of the log. The log size must be a minimum of 1624 MB before you can change the mode to enhanced asynchronous replication.

Log size displayed incorrectly when creating DR groups in a mixed controller software environment

When creating a DR group in an environment that includes arrays running different versions of controller software, the requested and actual log size for the DR group may not match. This is due to the mixed controller software environment and to the limited available space for the log. This is an HP P6000 Command View display issue only and does not indicate a problem with the DR group, which will function normally.

DR group write history log location

The array controller software chooses a default location for the DR group write history log if one is not specified. The type of disks (online or near-online) used as the default location is determined by the version of controller software. If you want to override the default, you can specify the location of the DR group write history log. Table 4 (page 46) identifies the default process for selecting a disk group for the log. Once the log is created, it cannot be moved.
Most arrays allow you to create DR group write history logs using economical near-online disks. If the DR pair includes an array running VCS 3.xxx, the DR group write history log is automatically created using a near-online disk group if one is available when the DR group is created. If the array is running VCS 4.xxx, XCS 6.xxx, or XCS 09xxxxxx or later, you can override the automatic log group assignment and specify the location of the log when you create the DR group.

For version-specific features, see the HP P6000 Enterprise Virtual Array Compatibility Reference.

IMPORTANT: When using XCS or later, create the DR group write history log using online Fibre Channel disks, not near-online disks. Constant writes to the DR group write history log in enhanced asynchronous mode significantly shorten the expected lifetime of near-online disks. Using online disks for the DR group log also improves I/O performance. XCS or later selects online disks for the DR group write history log by default. The ability to specify DR group write history log location and size is not supported with VCS 3.xxx.
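The following sketch ties the earlier sizing statements together for a planned link outage: required log capacity grows with the write rate and outage duration, the log reserves a metadata portion, and Vraid1 doubles the raw capacity consumed. It is illustrative only; the functions, the 40 MB/s drain rate, and the workload figures are assumptions, and the results must be checked against the limits in the HP P6000 Enterprise Virtual Array Compatibility Reference.

    # Rough write history log sizing for a planned link outage.
    # All inputs are example assumptions; substitute measured values.

    def log_plan(write_mb_per_s, outage_hours, metadata_fraction=0.0624):
        # metadata_fraction: 6.24% on EVA4400/6400/8400 and P6300/P6500,
        # 3.125% on EVA3000/5000 and EVA4x00/6x00/8x00.
        data_gb = write_mb_per_s * outage_hours * 3600 / 1024
        requested_gb = data_gb / (1 - metadata_fraction)  # leave room for metadata
        raw_gb = requested_gb * 2                         # Vraid1 doubles capacity
        return requested_gb, raw_gb

    def drain_hours(log_content_gb, replication_mb_per_s):
        # Ignores the ongoing write workload the link must also carry.
        return log_content_gb * 1024 / replication_mb_per_s / 3600

    requested, raw = log_plan(write_mb_per_s=20, outage_hours=4)
    print(f"request ~{requested:.0f} GB; plan ~{raw:.0f} GB of raw Vraid1 capacity")
    print(f"drain at 40 MB/s: ~{drain_hours(requested, 40):.1f} hours")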

Table 4 Default DR group write history log placement

• The array contains one defined disk group.
  VCS, XCS or earlier: Use the defined disk group.
  XCS or later: Use the defined disk group.
• The array contains one near-online disk group and one online disk group.
  VCS, XCS or earlier: Use the near-online disk group.
  XCS or later: Use the online disk group.
• The array contains only multiple near-online disk groups.
  All versions: Use the near-online disk group containing the most average free space, based on the number of DR group logs assigned to the disk groups.
• The array contains only multiple online disk groups.
  All versions: Use the online disk group containing the most average free space, based on the number of DR group logs assigned to the disk groups.
• The array contains one or more near-online disk groups and one or more online disk groups.
  VCS, XCS or earlier: Use the near-online disk group containing the most average free space, based on the number of DR group logs assigned to the disk groups. If all near-online disk groups are full or inoperative, use an online disk group based on the same space criteria.
  XCS or later: Use the online disk group containing the most average free space, based on the number of DR group logs assigned to the disk groups. If all online disk groups are full or inoperative, use a near-online disk group based on the same space criteria.

Planning replication relationships

One array can have replication relationships with multiple arrays. This section describes ways to optimize your remote replication resources.

Bidirectional replication

In bidirectional replication, an array can have both source and destination virtual disks, which reside in separate DR groups. (One virtual disk cannot be both a source and a destination simultaneously.) For example, one DR group can replicate data from array A to array B, and another DR group can replicate data from array B to array A. Bidirectional replication enables you to use both arrays for primary storage while each provides disaster protection for the other site.

When using bidirectional replication, size the disk groups on both arrays appropriately for the load they will carry. Consider the bandwidth as two unidirectional flows, and add the two flows together to determine the bandwidth requirements (a short example follows this section). For other considerations related to bidirectional replication, see the HP Enterprise Virtual Array Configuration Best Practices White Paper for your array model.
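A minimal sketch of the bandwidth rule above; the peak rates and the 20% headroom factor are example assumptions, not values from this guide.

    # Bidirectional replication: size the intersite link as the sum of the
    # two unidirectional peak write rates, plus some headroom.
    peak_a_to_b_mbps = 120   # peak replication write rate, site A to site B
    peak_b_to_a_mbps = 80    # peak replication write rate, site B to site A
    headroom = 1.2           # spare capacity for bursts and protocol overhead

    required_link_mbps = (peak_a_to_b_mbps + peak_b_to_a_mbps) * headroom
    print(f"Plan for at least {required_link_mbps:.0f} Mb/s of intersite bandwidth")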

System fan-out replication

In the system fan-out replication shown in Figure 14 (page 47), one DR group is replicated from array A to array B, and another DR group is replicated from array A to array C.

CAUTION: In a mixed array environment that includes an EVA3000/5000 array, the host ports of the fan-in target or the fan-out source should be isolated. This is necessary because an EVA3000/5000 has fewer resources to handle inter-array replication traffic. To accommodate the reduced number of available resources on the EVA3000/5000, an EVAx100 or EVAx400 array limits its resource allocation to match the lower protocol level of the EVA3000/5000. This may result in reduced replication traffic performance between arrays normally capable of higher performance. This situation occurs if the same host port (shared port) is used to connect to both an EVA3000/5000 and an EVA4000/6000/8000, EVAx100, or EVAx400. (For information on displaying array-to-array connections, see Changing host port data replication settings (page 56).) If the shared host port configuration is temporary, after the EVA3000/5000 is removed the shared port must be disabled and then enabled, which forces the connection to the remaining arrays to close and reopen. This ensures that the remaining arrays use the higher level of available resources within the controller software. If the shared host port configuration is not temporary, the connections must use isolated host ports for EVA3000/5000 data replication connections to eliminate the possibility of reduced replication performance. For details on creating port isolation using zoning, see Dual-fabric replication zone fan-in/fan-out for port isolation: sheet 1 (page 91).

Figure 14 System fan-out replication
1. Array A   2. Array B   3. Array C

Fan-in replication

In the fan-in replication shown in Figure 15 (page 48), one DR group is replicated from array A to array C, and another DR group is replicated from array B to array C.

Figure 15 Fan-in replication
1. Array A   2. Array B   3. Array C

Cascaded replication

In cascaded replication, one DR group is replicated from array A to array B, and another DR group is replicated from array B to array C. In this configuration, the source disk for replication from array B to array C is a snapclone of the destination disk in the replication from array A to array B. See Figure 16 (page 49). Snapclone normalization must complete on array B before the new snapclone can be put in a new DR group.

Figure 16 Cascaded replication
1. Array A   2. Array B   3. Array C

NOTE: Using a mirrorclone instead of a snapclone to make the point-in-time copy of the destination on array B is not supported.

5 Planning the solution

This chapter describes general design considerations for the different operating systems, applications, and storage management components that you can use when planning a remote replication solution.

Operating system considerations

This section describes the operating systems supported in remote replication solutions. It also describes the operating system capabilities that are available in an HP P6000 Continuous Access environment.

NOTE: These capabilities are not always available in non-HP P6000 Continuous Access environments.

Supported operating systems

The ability of HP P6000 Continuous Access to work with an operating system is determined by the operating system support documents posted on the SPOCK website at storage/spock. If an operating system is supported with a particular array model and controller software version, then HP P6000 Continuous Access is supported with that operating system.

Operating system capabilities

This section describes two operating system capabilities that are available in an HP P6000 Continuous Access solution: boot from SAN and bootless failover.

Boot from SAN

With HP P6000 Continuous Access, you can replicate boot disks to a remote array and use them to recover the host and applications. Refer to the OS-specific documentation on the SPOCK website for operating systems that support boot from SAN and boot disk failover.

IMPORTANT: Do not use HP P6000 Continuous Access to replicate swap files.

Bootless failover

Bootless failover allows destination servers to find the new source (after failover of the storage) without rebooting the server. This capability also includes failing back to the original source without rebooting.

NOTE: For any operating system you use, refer to the OS-specific documentation on the SPOCK website to ensure that you use compatible versions of multipath drivers and HBAs. Each boot disk must be in its own DR group.

6 Implementing remote replication

This chapter describes the basic steps for setting up HP P6000 Continuous Access.

Remote replication configurations

There are a number of options for configuring your solution to support remote replication. For detailed information on remote replication configurations, see Planning the remote replication fabric (page 23).

Verifying array setup

Your array purchase may include installation services provided by HP-authorized service representatives at the local and remote sites. You should, however, verify that the arrays are set up and cabled properly for remote replication.

Installation checklist

Verify that the following items are installed or configured:
• External connections from the array controllers to two (or for some controllers, four) fabrics that are also connected to application servers (hosts)
• Internal connections from array controllers to disk enclosures and loop switches
• (Optional) HP P6000 Command View password on the array
• HP P6000 Command View on a management server connected to the fabrics that connect the local array to the remote array
• Storage system data replication protocol set properly

NOTE: Make sure Fibre Channel switches are configured properly as described in Verifying Fibre Channel switch configuration (page 51). HP P6000 Continuous Access is not supported when the management server is attached directly to the array.

Verifying Fibre Channel switch configuration

To ensure proper operation in an HP P6000 Continuous Access environment, the necessary FC switches must be configured properly. Make sure the FC switches meet the following requirements. For more information on FC switch operation and the procedures and commands used to configure the switch, see the following:
• HP SAN Design Reference Guide
• Documentation for the model and version of the FC switch(es) you are using

You can find FC switch documentation on the Manuals page of the HP Business Support Center website. In the Storage section, click Storage Networking, and then select your switch product.

B-series switch configuration

The following routing policies are available on B-series switches:
• Exchange-based routing — The routing path is based on the SID, DID, and OXID, optimizing path utilization for the best performance. Each SCSI exchange can take a different path through the fabric. Exchange-based routing requires using the Dynamic Load Sharing (DLS) feature. When this policy is in effect, you cannot disable DLS. Exchange-based routing also supports the following AP policies:
  – AP shared link policy (default)
  – AP dedicated link policy — This policy dedicates links to egress traffic and links to ingress traffic.
• Port-based routing — The routing path is based only on the incoming port and the destination domain.

B-series switch settings for the HP-FC protocol

NOTE: The following configuration must be used in an HP P6000 Continuous Access environment for all Fibre Channel switches in the path from the source array to the destination array. This includes FCIP routers that are connected to the fabric.

Execute the following commands to establish the required switch settings:
1. switchdisable
2. aptpolicy 1 (enable port-based routing)
3. dlsreset (disable DLS)
4. iodset (enable in-order delivery)
5. switchenable

B-series switch settings for the HP SCSI-FCP protocol

NOTE: The HP SCSI-FCP protocol supports an aptpolicy of 1 or 3, and iod enabled or disabled. The best performance in a configuration with multiple paths between source and destination is achieved using an aptpolicy of 3 and iod disabled (iodreset). Review the Brocade switch documentation for a detailed discussion of these settings.

Example: The following commands set the switch settings for best performance.
1. switchdisable
2. aptpolicy 3 (enable exchange-based routing; DLS is enabled when aptpolicy is set to 3)
3. aptpolicy -ap 0 (enable the AP shared link policy)
4. iodreset (disable in-order delivery)
5. switchenable

C-series switch configuration

The following routing policies are available on C-series switches:
• Exchange-based routing — The first frame in the exchange chooses a link, and subsequent frames in the exchange use the same link. However, subsequent exchanges can use a different link. This provides more granularity to load balancing while preserving the order of frames for each exchange.
• Flow-based routing — All frames between the source array and the destination array follow the same links for a given flow. The link selected for the first exchange of the flow is used for all subsequent exchanges.

C-series switch settings for the HP-FC protocol

NOTE: The following configuration must be used in an HP P6000 Continuous Access environment for all Fibre Channel switches in the path from the source array to the destination array. This includes FCIP routers that are connected to the fabric.

Execute the following commands to establish the required switch settings:
1. config terminal
2. (config)# in-order-guarantee (enable in-order delivery)
3. (config)# vsan database
4. (config-vsan-db)# vsan x loadbalancing src-dst-id (load balancing policy set to Src-ID/D-ID)
5. (config-vsan-db)# end
6. copy running-config startup-config
7. config terminal
8. (config)# interface fcip (fcip #)
9. (config-if)# no write-accelerator (set write accelerator to off)
10. (config-if)# end
11. copy running-config startup-config
12. show fcip summary (review your settings)

When using FCIP, the TCP send buffers must be set to 4,096 K on all FCIP profiles on both the source and destination sites. One exception is a solution using a link slower than OC-3 (155 Mb/s), in which case buffers must be set to 8,192 K (tcp send-buffer-size 8192). In-order delivery guarantees that frames are delivered to the destination in the same order in which they were sent by the source.

C-series switch settings for the HP SCSI-FCP protocol

NOTE: The HP SCSI-FCP protocol supports the load balancing options src-dst-id or src-dst-ox-id, and iod enabled or disabled. The best performance in a configuration with multiple paths between source and destination is achieved using the load balancing option src-dst-ox-id and iod disabled (no in-order-guarantee). Review the Cisco switch documentation for a detailed discussion of these settings.

Example: The following commands set the switch settings for best performance.
1. config terminal
2. (config)# no in-order-guarantee (disable in-order delivery)
3. (config)# vsan database
4. (config-vsan-db)# vsan x loadbalancing src-dst-ox-id (load balancing policy set to Src-OX/ID)
5. (config-vsan-db)# end
6. copy running-config startup-config
7. config terminal
8. (config)# interface fcip (fcip #)
9. (config-if)# no write-accelerator (set write accelerator to off)
10. (config-if)# end
11. copy running-config startup-config

H-series switch configuration

H-series switches support only the HP IP Distance Gateway or the MPX200 Multifunction Router. The FC-to-IP gateways should be the same model and software version on each side of the IP network in the fabric.

NOTE: H-series FC switches are only supported with XCS or later and the HP SCSI-FCP protocol. For the latest information about supported firmware, see the P6000/EVA Array Streams and the H-series FC Switch Connectivity Stream on the HP Single Point of Connectivity Knowledge (SPOCK) website. You must sign up for an HP Passport to enable access.

M-series switch configuration

Both the HP-FC protocol and the HP SCSI-FCP protocol can be used with M-series switches. If the M-series switch is installed in an environment with other model switches (for example, B-series), follow the configuration settings for the other (non-M-series) switch.

M-series switch settings for the HP-FC and HP SCSI-FCP protocols

NOTE: The following configuration must be used in an HP P6000 Continuous Access environment for all Fibre Channel switches in the path from the source array to the destination array. The Reroute Delay (in-order delivery) must be enabled on all M-series switches (reroutedelay 1).

Verifying cabling

Verify that the cabling between the arrays and Fibre Channel switches meets remote replication requirements. The supported cabling scheme depends on the array controller hardware and software features, as shown in Figure 17 (page 55) through Figure 19 (page 55). In mixed array configurations, use the cabling scheme specific to each controller. For a description of all cabling options, as well as best practices for cabling, see the HP SAN Design Reference Guide.

NOTE: For low-bandwidth intersite links, HP recommends using separate array ports for host I/O and replication I/O.

When connecting the array, the even-numbered controller ports should be connected to one fabric, and the odd-numbered controller ports should be connected to a different fabric. On arrays running VCS 3.xxx, straight or cross-cabled connections are supported with HP P6000 Continuous Access.

Figure 17 (page 55) shows the standard cabling scheme for remote replication on arrays with two-port controllers. The even-numbered ports on each controller are connected to one fabric and the odd-numbered ports on each controller are connected to the other fabric. Figure 18 (page 55) shows the same connections for the EVA4400 single-enclosure controllers.

55 Figure 17 Cabling for arrays with two-port controllers Figure 18 Cabling for EVA4400 with two-port controllers Figure 19 (page 55) shows the standard cabling scheme for remote replication on arrays with four-port controllers. Each controller has redundant connections to both fabrics. Even-numbered ports are connected to one fabric and odd-numbered ports are connected to the other fabric. Figure 19 Cabling for arrays with four-port controllers Verifying array setup 55

Changing host port data replication settings

NOTE: Manually setting host port preferences should be done carefully and only when absolutely necessary. Using this feature requires advanced knowledge of array operation to ensure the expected results are achieved.

Host port replication settings can be managed using HP P6000 Command View, HP P6000 Replication Solutions Manager, or the HP Storage System Scripting Utility. The following information assumes the use of HP P6000 Command View, which provides more capability when performing this task. The host port preferences for EVA3000 and EVA5000 arrays should be left at the predefined defaults.

HP P6000 Command View 9.0 or later enables the monitoring and setting of controller host port preferences for data replication traffic. This functionality is supported on XCS 6.xxx or later and XCS 09xxxxxx or later. Host port replication settings are managed from the Data Replication Folder Properties page within HP P6000 Command View. See Figure 20 (page 57).

Host port preferences are set to defaults during controller initialization. The port settings can be changed whether or not there are active DR groups on the array. This enables a source array to establish a unique host port priority sequence for each destination array visible to the source: 1st through 4th for 8-port controller pairs, and 1st or 2nd for 4-port controller pairs. There is also a No DR host port option, which blocks the creation of tunnels on the specified port, disabling all remote replication traffic for that port.

The port preferences are automatically checked by the controller software at the Check interval setting, one minute after a change is sent to the controller software by the management software, and when a controller state change results in a data replication tunnel opening (for example, a controller resync or link state change). The Check interval setting can be changed or disabled. If port checking is disabled and changes to the port preferences are saved, the changes will not be invoked until port checking is re-enabled or a controller state change occurs.

IMPORTANT: Even when port checking is not enabled, saved port preferences will become active with controller state changes.

A port actively forwarding data replication traffic may be changed to No DR. This change moves the tunnel to the next highest priority port when the next port preference check occurs. Changing a host port preference to No DR does not cause the remaining host port values to be reset. For example, if the host port with 1st priority is set to No DR, the preferences for the 2nd, 3rd, and 4th host ports will remain in effect and will not change value. In this example, the controller software would make the host port with a value of 2nd the highest priority host port available on the controller.

Purging a node removes it from the Remote Nodes list. A node can only be purged when there are no DR groups between the selected source array and the array to be purged and the array is unreachable. It may be necessary to purge a node when it is removed from an environment, or when remote replication is no longer in use. Purging a node deletes it from the Remote System list on the Host port data replication settings property page and the Remote System Connection status properties page on the View remote connections tab.

The Reset to defaults button returns all host port priority values and the port checking values to their defaults.

57 Figure 20 Host port data replication settings Verifying path status The current remote connection path status can be checked from HP P6000 Command View. In the navigation pane, select the Data Replication folder for the storage system. The Data Replication Folder Properties page opens. Click View remote connections to view the Remote System Connections Status page. To view additional information, click the path details icon. A window opens showing local and remote controller host ports, paths and path status. A host port can have one of the following path status values: Available. The host port has an available path to the target storage system; and, the host port can create a tunnel to the target storage system. Disabled. One of the following conditions exists: The host port has been intentionally disabled. The port preference has been set to No DR. No DR tunnels can be created on the host port. There is a remote replication protocol mismatch. One storage system is set to HP FC Data Replication Protocol and the other storage system is set to HP SCSI FC Compliant Data Replication Protocol. For more information, see Planning the data replication protocol (page 38). DR active. The host port has a remote replication tunnel. No Path. No path exists between the host ports on the storage system pair, either due to the host ports not being connected to the fabric, improper zoning, or a hardware failure. Unavailable. The host port has reached the limit for tunnels on the port. The existing tunnels are operating normally but no additional tunnels can be created. This can be resolved by ensuring that additional host ports are available for tunnel creation. Consider changing host port preferences by setting multiple host ports to the same priority, or by zoning changes. NOTE: There is no host port tunnel limit for storage system pairs running XCS 0953x000 or later with the remote data replication protocol set to HP SCSI FC Compliant Data Replication Protocol. Verifying array setup 57

Installing replication licenses

When you purchase HP P6000 Continuous Access, you receive a replication license for each array (local and remote) in a remote replication relationship. Replication licenses are based on the amount (in TB) of replicated data on each array. License kits include instructions for retrieving electronic license keys and for installing the keys in HP P6000 Command View. Entering the keys on the HP P6000 Command View license page activates remote replication features on the specified arrays. For license offerings, see the product QuickSpecs on the HP P6000 Continuous Access website.

Follow the instructions provided in the license kit for retrieving electronic license keys from the HP License Key Renewal website and for installing the keys using HP P6000 Command View. License keys arrive within 48 hours after you submit the credentials from the license kit. Install the license keys on each active and standby management server at the local and remote sites.

Installing HP P6000 Replication Solutions Manager (optional)

For additional replication capabilities, install HP P6000 Replication Solutions Manager on the local and remote management servers. For more information on the unique features of this software, as well as installation requirements and instructions, see the HP P6000 Replication Solutions Manager Installation Guide.

DC-Management and HP P6000 Continuous Access

When using DC-Management (dynamic capacity management) in HP P6000 Continuous Access environments, you must ensure that DC-Management is installed on each management server and that a DC-Management license is installed on both the local and remote management server running HP P6000 Replication Solutions Manager. For more information on DC-Management, see the HP P6000 Replication Solutions Manager User Guide or the HP P6000 Replication Solutions Manager online help.

Creating fabrics and zones

This section identifies the recommended fabric configurations and zoning for HP P6000 Continuous Access environments. Switch zoning allows incompatible resources to coexist in a heterogeneous SAN. Use switch management software to create separate zones for incompatible hosts in the SAN. In each zone, include the arrays that the host will access before and after a failover. Array controller ports can be in overlapping zones. For instructions for creating zones, see your switch user guide and follow the best practices in Part II of the HP SAN Design Reference Guide.

Fabric configuration drawings

The drawings in this section illustrate the preferred method for implementing remote replication using different fabric configurations. These configurations optimize replication and facilitate supportability. The drawings represent physical configurations created using discrete hardware components. However, you can create the same configurations by using zoning to create logical fabrics.

59 Two-fabric configuration Figure 21 (page 59) through Figure 23 (page 61) show two-fabric connection configurations. In these configurations each fabric is used for both host I/O and replication. Figure 21 (page 59) shows the basic two-fabric connections. Figure 22 (page 60) shows the two-fabric connection used between an EVA3000/5000 (four port controller pair) and an EVA8x00 (eight port controller pair). These connections are also used for the EVA4400 controller without the embedded FC switch. Figure 23 (page 61) shows the two-fabric connection used with the embedded switch EVA4400 controllers. The Fibre Channel switch functionality is integrated into the EVA4400 controllers. Figure 21 Two-fabric configuration 1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 6. Fibre Channel switches 7. Host I/O and replication fabric 8. Intersite link 9. Host I/O and replication fabric 10. Intersite link Creating fabrics and zones 59

60 Figure 22 Two-fabric configuration (EVA3000/5000 to EVA8x00) 1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 6. Fibre Channel switches 7. Host I/O and replication fabric 8. Intersite link 9. Host I/O and replication fabric 10. Intersite link 60 Implementing remote replication

61 Figure 23 Two-fabric configuration (EVA4400 with embedded Fibre Channel switch) 1. Data center 1 2. Data center 2 3. Array controller pair with embedded Fibre Channel switches 4. Management server 5. Hosts 6. Replication fabric 7. Intersite link 8. Replication fabric 9. Intersite link NOTE: On an EVA4400 with the embedded switch option, the connections between controller host ports FP1 and FP2 and controller switch ports 0 and 11 are internal to the controller and cannot be changed. The World Wide ID (WWID) (for example XXX8) associated with each controller host port is shown. HP P6000 Command View management connections in five-fabric and six-fabric configurations The connections for HP P6000 Command View management servers have been modified to separate HP P6000 Command View management traffic from HP P6000 Continuous Access replication traffic on the EVA controller host port (see Figure 27 (page 66), Figure 28 (page 68), and Figure 29 (page 69)). In the original connection configuration, the HP P6000 Command View server HBA ports are connected directly to the Fibre Channel switches in the replication fabrics. HP still supports this connection but the HP P6000 Command View response times may be slower. This occurs because the HP P6000 Command View server traffic must compete with HP P6000 Continuous Access traffic using the same host port on the array. The original configuration connections are shown in Figure 24 (page 62), Figure 25 (page 63), and Figure 26 (page 64). These configurations ensure the fabrics will not merge and fabric changes are isolated. Creating fabrics and zones 61

62 Figure 24 Five-fabric configuration 1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 6. Fibre Channel switches 7. Host I/O fabric 8. Host I/O fabric 9. Replication fabric 10. Intersite link 62 Implementing remote replication

63 Figure 25 Six-fabric configuration (eight port controller pairs) 1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 6. Fibre Channel switches 7. Host I/O fabric 8. Replication fabric 9. Intersite link 10. Host I/O fabric 11. Replication fabric 12. Intersite link Creating fabrics and zones 63

Figure 26 Six-fabric configuration (four-port controller pair to eight-port controller pair)
1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 6. Fibre Channel switches 7. Host I/O fabric 8. Replication fabric 9. Intersite link 10. Host I/O fabric 11. Replication fabric 12. Intersite link

The new configuration connections are shown in Figure 27 (page 66), Figure 28 (page 68), and Figure 29 (page 69). For additional details describing these zoned solutions, see Zoning best practices for traffic and fault isolation (page 70).

NOTE: These configurations can cause the fabrics to merge into a single fabric unless the necessary steps are taken. When creating an intersite FCIP link using B-series or C-series routers, the respective LSAN and IVR functionality can provide SAN traffic routing over the FCIP connection while preventing the merging of the two sites' fabrics into a single fabric. LSANs and IVR enable logical fabric separation of the two sites, ensuring that a change on one site's fabric does not affect the other site. The HP FCIP Distance Gateways (MPX110) will allow the fabrics on both sites to merge into a single large fabric. SAN traffic isolation can still be accomplished with the FCIP Distance Gateways using SAN zoning, but this will not provide fabric separation. When using the FCIP Distance Gateways, fabric configuration changes should be made carefully, and it may be desirable to disable the ISL until all changes are complete and the fabric is stable.

The HP P6000 Command View management server in each of the five-fabric and six-fabric configurations now connects directly to the Fibre Channel switches used for host I/O. The host I/O fabric switch is also connected to the appropriate replication fabric switches, providing a communication path from the management server to the remote array. The management server HBA ports are now zoned to use the same ports on the array controller as other server traffic, and are also zoned to see the remote array across the replication fabric. Using switch zoning, you can separate host I/O traffic from HP P6000 Continuous Access traffic in each data center. If FCIP communication devices are used to create the intersite link and merging of fabrics is not desired, the Fibre Channel Routing capability is necessary to ensure that the fabrics on each side of the replication intersite link do not merge. See the HP SAN Design Reference Guide for the latest supported Fibre Channel routers.
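On B-series routers, logical sharing across the FCIP link is expressed by defining LSAN zones, which are ordinary zones whose names begin with the LSAN_ prefix. The fragment below is a minimal sketch; the zone name, configuration name, and WWPNs are placeholders, and a corresponding LSAN zone containing the same device WWPNs must also be defined in the fabric at the other site. C-series switches achieve the equivalent result with IVR zones.

zonecreate "LSAN_CA_Array1_Array2","50:00:1f:e1:00:aa:bb:0a; 50:00:1f:e1:00:cc:dd:0a"
cfgadd "site1_cfg","LSAN_CA_Array1_Array2"
cfgsave
cfgenable "site1_cfg"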

66 A single physical fabric Figure 27 (page 66) shows a single physical fabric zoned into five logical fabrics. Four zones are dedicated to host I/O, and a single zone is used for replication. NOTE: To compensate for the SPOF in a five-zone solution (the intersite link [10]), HP recommends that availability be a QoS metric on the intersite link. Figure 27 Single physical fabric with five zones 1. Data center 1 6. Fibre Channel switches 2. Data center 2 7. Host I/O zone 1 3. Array controller pair 8. Host I/O zone 1 4. Management server 9. Replication zone 5. Hosts 10. Intersite link 1 The zones depicted in the figure above as yellow or green should not be zoned across the sites. Each site should have a unique yellow and green zone. 66 Implementing remote replication

Dual physical fabric with six zones

Figure 28 (page 68) shows a dual physical fabric with six zones used between eight-port controller pairs. Four zones are dedicated to host I/O and two zones are dedicated to replication. Figure 29 (page 69) shows the connections used when one four-port controller pair is used.

NOTE: In a dual physical fabric with a six-zone configuration, each HP P6000 Continuous Access relationship must include at least one EVA8x00 or one EVA6400. When using a dual physical fabric with a six-zone configuration with fabric zoning, ISLs cannot be included in the zones. See Figure 28 (page 68).

Replication can be load balanced across ISLs due to the preferred port algorithm and by preferring DR groups to both controllers on the array. Controller software releases prior to XCS 6.xxx do not implement the preferred port algorithm for tunnel creation and cannot guarantee proper I/O load balancing, regardless of which controller the DR groups are preferred to. Not properly load balancing DR groups across controllers may result in one of the ISLs being overloaded, negatively impacting replication performance.

68 Figure 28 Dual physical fabric with six zones (eight-port controller pairs) 1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 7. Host I/O zone 1 8. Replication zone 9. Intersite link 10. Host I/O zone 1 68 Implementing remote replication

5. Hosts 11. Replication zone 6. Fibre Channel switches 12. Intersite link
1 The zones depicted in the figure above as yellow or green should not be zoned across the sites. Each site should have a unique yellow and green zone.

Figure 29 Dual physical fabric with six zones (four-port controller pair to eight-port controller pair)
1. Data center 1 2. Data center 2 3. Array controller pair 4. Management server 5. Hosts 7. Host I/O zone 1 8. Replication zone 9. Intersite link 10. Host I/O zone 1 11. Replication zone 6. Fibre Channel switches 12. Intersite link
1 The zones depicted in the figure above as yellow or green should not be zoned across the sites. Each site should have a unique yellow and green zone.

When adding array controllers to a zone, use the controller port World Wide Port Name (WWPN), not the node WWN. For example, use the WWPN FE1-001A-012A, not the WWN FE1-001A.
When creating a replication zone, include only arrays that have an HP P6000 Continuous Access relationship with each other. Avoid including other arrays in the zone.
Zoning allows you to enforce supported maximum visible connections. Create separate zones for resources that exceed the following:
Maximum number of arrays in the SAN
Maximum number of HBAs per array controller (see Part III of the HP SAN Design Reference Guide)
Maximum number of switches in the SAN (see Part II of the HP SAN Design Reference Guide)
When setting up specific HP P6000 Continuous Access replication zones, HP recommends that you exclude all servers from these zones.

Zoning management servers

Management servers must be in the same zones as the local and remote array host ports that are used for the intersite links. Only one server at a time can be used to manage an array. However, you should include one active management server and one standby management server in each array management/intersite link zone.

Zoning best practices for traffic and fault isolation

This section describes how to use switch zoning to achieve two important benefits when using HP P6000 Continuous Access: traffic isolation and fault isolation. Traffic isolation separates host I/O traffic from HP P6000 Continuous Access replication traffic. This prevents excessive latency, caused by insufficient ISL bandwidth or line quality issues, from negatively impacting host I/O performance. Fault isolation is achieved through the use of multiple zones, which facilitates the quick isolation of SAN components when a disruption occurs. This zoning technique also supports the recommended practice of isolating an initiator with its targets; for example, zoning a host's HBA ports with the host ports of a target array. The initiator-to-target port zoning is applicable to all initiator-target relationships, including the HP P6000 Command View server-to-array and source array-to-destination array communications.

NOTE: This information is applicable to all HP P6000 Continuous Access implementations but is particularly important in configurations using FCIP intersite links. See FCIP gateway zoning configurations (page 95) for more information on FCIP configurations.

Three types of zones are used for traffic and fault isolation.

Host I/O zones: The I/O hosts are each zoned separately to the host ports of the targeted array. If the configuration includes multiple arrays, additional zones are created to isolate the I/O hosts' HBA ports with the additional arrays. Host fan-out configurations are shown in the following sections.

Replication zone: The replication zone includes only the array host ports intended to provide HP P6000 Continuous Access replication traffic. The ports selected from each array should be similar. For example, use ports 1 and 3 from a pair of 8-port controllers, or use FP1 from a 4-port to an 8-port relationship. This zone limits the FP ports that an array can access for creation of HP P6000 Continuous Access communication paths. When additional array pairs are placed into the configuration, separate replication zones should be created for those pairs of arrays. If array system fan-out is required, then the additional array should be added to a new replication zone. Only arrays that will participate in data replication relationships should be zoned together. Additional arrays (up to supported limits) can be added to either site. See the HP P6000 Enterprise Virtual Array Compatibility Reference for the number of arrays supported for system fan-out.
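As a concrete illustration of a WWPN-based replication zone on a B-series switch, the commands below create a zone containing one replication port from each controller of the two arrays. This is a minimal sketch only: the alias names, configuration name, and WWPNs are placeholders, and C-series switches accomplish the same thing with the equivalent zone and zoneset commands.

alicreate "Array1_CtrlA_FP3","50:00:1f:e1:00:aa:bb:08"
alicreate "Array1_CtrlB_FP3","50:00:1f:e1:00:aa:bb:0c"
alicreate "Array2_CtrlA_FP3","50:00:1f:e1:00:cc:dd:08"
alicreate "Array2_CtrlB_FP3","50:00:1f:e1:00:cc:dd:0c"
zonecreate "CA_replication_zone","Array1_CtrlA_FP3; Array1_CtrlB_FP3; Array2_CtrlA_FP3; Array2_CtrlB_FP3"
cfgadd "production_cfg","CA_replication_zone"
cfgsave
cfgenable "production_cfg"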

HP P6000 Command View management zones: The zones for an HP P6000 Command View management server must include both the source and destination arrays to enable the creation of DR groups and proper management of the arrays. The management server must also have at least one host port from each controller of the arrays zoned with the HP P6000 Command View management server HBA. Additional array pairs are zoned in a similar manner for each site. If an additional array is added to either site and it will participate in a DR relationship, then two additional zones should be created for each of the HP P6000 Command View management server HBA ports and the additional array's host ports.

NOTE: The configurations shown assume HP P6000 Command View is installed on a standalone server. HP P6000 Command View could also be installed on an application server running a supported version of Windows. When a server is used as both an application server and an HP P6000 Command View management server, virtual disks must be properly presented from the arrays to the intended hosts. Unintended cross-site virtual disk presentations should be avoided. A multipathing solution must be implemented that isolates the host I/O traffic to the targeted array host port paths, which maintains the desired traffic isolation. See the appropriate multipathing documentation for path selection setup. Using a server for both host I/O and HP P6000 Command View traffic should be avoided if traffic isolation is not available.

These zoning techniques provide a lower cost option than using discrete components to create the multiple fabrics shown in Fabric configuration drawings (page 58). Zoning does not provide physical separation of replication traffic over the intersite communication paths. These paths may use dark fiber or FCIP connections between the local and remote sites. Because the configurations included here lack physical separation, the zoning configuration must ensure that the intersite links are used to service only HP P6000 Continuous Access replication and HP P6000 Command View management traffic. Application I/O should be limited to local arrays. Cross-site presentation of virtual disks will result in host I/O traffic across the ISLs, which may impact HP P6000 Continuous Access performance. Cross-site presentation of virtual disks should be avoided unless a careful historical analysis of the utilization rate on the intersite communication path has been performed.

Understanding the zoning drawings

The following sections illustrate the zones used for single-fabric and dual-fabric configurations. A topology drawing showing the physical components for each configuration is shown first. The zoning drawings use red and blue shading to identify the ports that comprise each zone. The ports included in a zone are further identified using the same colors. Any port that is not colored is not included in a zone.

Recommended single-fabric zoning configurations

The following zoning can be accomplished using a single fabric.
This is a reduced availability configuration consisting of a single switch at both the local and remote sites. Creating fabrics and zones 71

72 IMPORTANT: This reduced availability configuration is not recommended for production environments requiring high availability. The single switch on each site represents an SPOF for the host I/O and the HP P6000 Continuous Access replication traffic. Single-fabric components Figure 30 (page 73) shows the various components that comprise the single-fabric topology. Host I/O zones Figure 31 (page 74) and Figure 32 (page 75) show the ports used to create the host I/O zones. HP P6000 Command View management zones Figure 33 (page 76) and Figure 34 (page 77) show the ports used to create the HP P6000 Command View management zones. Replication zone Figure 35 (page 78) shows the ports used to create the HP P6000 Continuous Access replication zone. Figure 36 (page 79) and Figure 37 (page 80) show replication zones when array system fan-out is used. 72 Implementing remote replication

73 Figure 30 Single-fabric zoning components 1. Site 1 Array controller pair 2. Site 1 Fibre Channel switch 3. Site 1 host 4. Site 1 management server 5. Site 2 Array controller pair 6. Site 2 Fibre Channel switch 7. Site 2 management serve 8. Site 2 host 9. Intersite link Creating fabrics and zones 73

74 Figure 31 Single-fabric host I/O zones: sheet 1 Zone 1 ports Site 1 host port HBA1 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 2 ports Site 2 host port HBA1 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 74 Implementing remote replication

75 Figure 32 Single-fabric host I/O zones: sheet 2 Zone 3 ports Site 1 host port HBA2 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 4 ports Site 2 host port HBA2 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 Creating fabrics and zones 75

76 Figure 33 Single-fabric HP P6000 Command View local management zones Zone 5 ports Site 1 management server port HBA1 Site 1 management server port HBA2 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 6 ports Site 2 management server port HBA1 Site 2 management server port HBA2 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 76 Implementing remote replication

77 Figure 34 Single-fabric HP P6000 Command View remote management zones Zone 7 ports Site 1 management server port HBA1 Site 1 management server port HBA2 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 Zone 8 ports Site 2 management server port HBA1 Site 2 management server port HBA2 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Creating fabrics and zones 77

78 Figure 35 Single-fabric replication zone Zone 9 ports Site 1 Array controller A port FP2 Site 1 Array controller B port FP2 Site 2 Array controller A port FP2 Site 2 Array controller B port FP2 78 Implementing remote replication

79 Figure 36 Single-fabric replication zone fan-out: sheet 1 Zone 10 ports Site 1 Array (1) controller A port FP2 Site 1 Array (1) controller B port FP2 Site 1 Array (10) controller A port FP2 Site 1 Array (10) controller B port FP2 Creating fabrics and zones 79

80 Figure 37 Single-fabric replication zone fan-out: sheet 2 Zone 11 ports Site 1 Array (10) controller A port FP2 Site 1 Array (10) controller B port FP2 Site 2 Array (5) controller A port FP2 Site 2 Array (5) controller B port FP2 Recommended dual-fabric zoning configurations This is a high availability configuration consisting of two switches per site, which creates two distinct extended fabrics that span the two sites. This configuration eliminates the SPOF for host I/O traffic. Dual-fabric components Figure 38 (page 81) shows the various components that comprise the dual-fabric topology. Host I/O zones Figure 39 (page 82) and Figure 40 (page 83) show the ports used to create the host I/O zones. HP P6000 Command View management zones Figure 41 (page 84) through Figure 44 (page 87) show the ports used to create the HP P6000 Command View management zones. Replication zones Figure 45 (page 88) shows the ports used to create the HP P6000 Continuous Access replication zones. Figure 46 (page 89) and Figure 47 (page 90) show replication zones when array system fan-out is used. Figure 48 (page 91) and Figure 49 (page 92) show replication zones used to create port isolation when array system fan-in or fan-out is used in a configuration that includes an EVA3000/5000. Figure 50 (page 93) and Figure 51 (page 94) show the replication zones when using a 4-port controller pair and an 8-port controller pair. NOTE: See FCIP gateway zoning configurations (page 95) if you are creating a dual-fabric configuration using FCIP gateways. 80 Implementing remote replication

81 Figure 38 Dual-fabric zoning components 1. Site 1 Array controller pair 2. Site 1 host 3. Site 1 management server 4. Site 1 Fibre Channel switch 5. Site 1 Fibre Channel switch 6. Site 2 Array controller pair 7. Site 2 host 8. Site 2 management server 9. Site 2 Fibre Channel switch 10. Site 2 Fibre Channel switch 11. Intersite link 12. Intersite link Creating fabrics and zones 81

82 Figure 39 Dual-fabric host I/O zones: sheet 1 Zone 1 ports Site 1 host port HBA1 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 2 ports Site 2 host port HBA1 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 82 Implementing remote replication

83 Figure 40 Dual-fabric host I/O zones: sheet 2 Zone 3 ports Site 1 host port HBA2 Site 1 Array controller A port FP2 Site 1 Array controller B port FP2 Zone 4 ports Site 2 host port HBA2 Site 2 Array controller A port FP2 Site 2 Array controller B port FP2 Creating fabrics and zones 83

84 Figure 41 Dual-fabric HP P6000 Command View local management zones: sheet 1 Zone 5 ports Site 1 management server port HBA1 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 6 ports Site 2 management server port HBA1 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 84 Implementing remote replication

85 Figure 42 Dual-fabric HP P6000 Command View local management zones: sheet 2 Zone 7 ports Site 1 management server port HBA2 Site 1 Array controller A port FP2 Site 1 Array controller B port FP2 Zone 8 ports Site 2 management server port HBA2 Site 2 Array controller A port FP2 Site 2 Array controller B port FP2 Creating fabrics and zones 85

86 Figure 43 Dual-fabric HP P6000 Command View remote management zones: sheet 1 Zone 9 ports Site 2 management server port HBA1 Site 1 Array controller A port FP1 Site 1 Array controller B port FP1 Zone 10 ports Site 1 management server port HBA1 Site 2 Array controller A port FP1 Site 2 Array controller B port FP1 86 Implementing remote replication

87 Figure 44 Dual-fabric HP P6000 Command View remote management zones: sheet 2 Zone 11 ports Site 2 management server port HBA2 Site 1 Array controller A port FP2 Site 1 Array controller B port FP2 Zone 12 ports Site 1 management server port HBA2 Site 2 Array controller A port FP2 Site 2 Array controller B port FP2 Creating fabrics and zones 87

88 Figure 45 Dual-fabric replication zone Zone 13 ports Site 1 Array controller A port FP3 Site 1 Array controller B port FP3 Site 2 Array controller A port FP3 Site 2 Array controller B port FP3 Zone 14 ports Site 1 Array controller A port FP4 Site 1 Array controller B port FP4 Site 2 Array controller A port FP4 Site 2 Array controller B port FP4 88 Implementing remote replication

89 Figure 46 Dual-fabric replication zone fan-out: sheet 1 Zone 15 ports Site 1 Array (1) controller A port FP3 Site 1 Array (1) controller B port FP3 Site 1 Array (13) controller A port FP3 Site 1 Array (13) controller B port FP3 Zone 16 ports Site 1 Array controller A port FP4 Site 1 Array (1) controller B port FP4 Site 1 Array (13) controller A port FP4 Site 1 Array (13) controller B port FP4 Creating fabrics and zones 89

90 Figure 47 Dual-fabric replication zone fan-out: sheet 2 Zone 17 ports Site 1 Array (13) controller A port FP3 Site 1 Array (13) controller B port FP3 Site 2 Array (6) controller A port FP3 Site 2 Array (6) controller B port FP3 Zone 18 ports Site 1 Array (13) controller A port FP4 Site 1 Array (13) controller B port FP4 Site 2 Array (6) controller A port FP4 Site 2 Array (6) controller B port FP4 90 Implementing remote replication

91 NOTE: Dual-fabric replication zone fan-in/fan-out for port isolation: sheet 1 (page 91) and Dual-fabric replication zone fan-in/fan-out for port isolation: sheet 2 (page 92) show a mixed array environment that includes an EVA3000/5000 (item 1). In this type of configuration, changes should be made to isolate host ports to eliminate the possibility of reduced replication performance caused by the reduction in resource utilization. Prior to creation of DR groups, the host ports should be isolated using the zoning shown. This zoning configuration isolates host ports on arrays 1 and 6 from creating connections on the same ports of array 13. Figure 48 Dual-fabric replication zone fan-in/fan-out for port isolation: sheet 1 Zone 19 ports Site 1 EVA3000/5000 (1) controller A port FP1 Site 1 EVA3000/5000 (1) controller B port FP1 Site 1 Array (13) controller A port FP3 Site 1 Array (13) controller B port FP3 Creating fabrics and zones 91

92 Figure 49 Dual-fabric replication zone fan-in/fan-out for port isolation: sheet 2 Zone 20 ports Site 1 Array (13) controller A port FP4 Site 1 Array (13) controller B port FP4 Site 2 Array (6) controller A port FP4 Site 2 Array (6) controller B port FP4 92 Implementing remote replication

93 Figure 50 Dual-fabric replication zone 4-port to 8-port controllers (straight-cabled) Zone 21 ports Site 1 Array controller A port FP1 Site 2 Array controller A port FP3 Site 2 Array controller B port FP3 Zone 22 ports Site 1 Array controller B port FP2 Site 2 Array controller A port FP4 Site 2 Array controller B port FP4 Creating fabrics and zones 93

94 Figure 51 Dual-fabric replication zone 4-port to 8-port controllers (cross-cabled) Zone 23 ports Site 1 Array controller A port FP2 Site 2 Array controller A port FP3 Site 2 Array controller B port FP3 Zone 24 ports Site 1 Array controller B port FP2 Site 2 Array controller A port FP4 Site 2 Array controller B port FP4 NOTE: This zoning is used when the 4-port controller pair is configured with FP1 cross-cabled as shown. The cross cabling configuration shown is applicable to EVA3000/5000 array models running VCS version 3.xxx. If the cabling configuration for these models is cross-cabled, the zoning shown is applicable. EVA3000/5000 arrays that use straight cabling should follow the zoning recommendation shown in Figure 50 (page 93). More recent 4- port controllers (EVA4400) do not require cross-cabling. Use the zoning applicable for your cabling configuration. It is not necessary to change an existing straight-cable configuration to a cross-cabled configuration for HP P6000 Continuous Access. 94 Implementing remote replication

FCIP gateway zoning configurations

When using FCIP gateways as the interconnect between sites, consider the following:

HP B-series and HP C-series switch products can be used to create the configuration shown in Figure 38 (page 81). The zoning options shown in Recommended dual-fabric zoning configurations (page 80) can be used with B-series and C-series switch products. These products allow independent fabrics on each site to access objects on the other site. To enable this functionality, B-series switches use LSANs and C-series switches use VSANs. For more information, see the HP SAN Design Reference Guide.

NOTE: B-series or C-series FCIP-capable switches must have the LSAN or VSAN capability explicitly configured to prevent fabrics from merging.

The array may be connected to the router's FC ports directly or to a fabric switch that is connected to the router.

The B-series and C-series switches that are LSAN or VSAN capable may be connected to the HP FCIP Distance Gateway. The Fibre Channel routing functionality in the B-series and C-series switches is supported across the FCIP link provided by the HP FCIP Distance Gateway. The array may not be connected directly to the FC ports of an HP FCIP Distance Gateway product.

Additional information is available in the HP SAN Design Reference Guide or in the Continuous Access section of the Application Notes on the HP SPOCK website. Review the applicable documents listed under the Remote Replication section of the SPOCK website.

Figure 52 (page 96) shows a data replication environment using mpx110 FCIP gateways as the interconnect between sites. This configuration differs from the standard dual-fabric configuration in that the mpx110 creates two merged fabrics. For more information on the zones used for this configuration, see Recommended dual-fabric zoning configurations (page 80). Disruptions in the WAN can cause a merged fabric to split into individual fabrics and then re-merge into a single fabric when the connection is reestablished. See the HP SAN Design Reference Guide for more information on HP FCIP Distance Gateway functionality. Use of the HP IP Distance Gateway (mpx110) or the MPX200 Multifunction Router as shown in Figure 52 (page 96) is the recommended supported configuration for H-series switches.

Figure 52 FCIP gateway zoning configurations using the mpx110
1. Site 1 Array controller pair 2. Site 1 host 3. Site 1 management server 4. Site 1 Fibre Channel switch 5. Site 1 Fibre Channel switch 6. Site 2 Array controller pair 7. Site 2 host 8. Site 2 management server 9. Site 2 Fibre Channel switch 10. Site 2 Fibre Channel switch 11. Intersite link 12. Intersite link 13. mpx110 gateways

Configuring hosts

Configure native or installed multipathing software on all hosts in remote replication configurations. Multipathing software redirects I/O requests from a failed path to the alternate path, preventing disruptions in data access if the path to the array fails. See your multipathing documentation for installation and configuration information.

Configuring disk groups for remote replication

Disk groups are created to meet performance and single-array availability needs. However, you must consider HP P6000 Continuous Access requirements when choosing a disk group for the write history log. There must be enough space in the disk group for the log. Of additional concern may be the destination array disk group, which must contain enough disk drives to satisfy the write I/O load imposed by HP P6000 Continuous Access. HP P6000 Continuous Access should be taken into account when evaluating the overall performance of the array. For more information, see Planning disk groups (page 35).

Creating and presenting source virtual disks

Virtual disks, created using HP P6000 Command View, are the primary storage objects on the array. When creating source virtual disks for a DR group, select the preferred controller carefully. Balancing DR groups across controllers contributes to the efficient utilization of remote replication resources and balances the I/O load. You should ensure that the DR groups imposing the heaviest workloads are distributed across the controllers. For information on creating virtual disks, see the HP P6000 Command View Online Help.

97 Selecting a preferred controller The following are important points to remember when selecting a preferred controller for DR group members: The same preferred controller is used for all members of a DR group. The preferred controller value used for each virtual disk is set automatically based on the value assigned to the first DR group member. Ensure that the first member you add to the DR group is set to the desired preferred controller. On XCS or later and or later, the preferred controller value used for the DR group source virtual disks is automatically used for all destination virtual disks. On earlier versions of controller software, you should manually set the DR group destination virtual disks to the same preferred controller value used for the source. If the preferred controller value for the DR group source is set to No preference, different preferred controllers may be used for the DR group source and destination. If the DR group source and destination use different preferred controllers, the DR relationship will function properly but the load may not be balanced for optimal performance. Changing the preferred controller for an existing DR group member automatically changes all DR group members to the new value. On XCS or later and or later, changes made to the DR group source members are automatically propagated to the DR group destination members. However, changes made to the DR group destination members are not propagated to the source members. NOTE: HP P6000 Command View refers to the preferred controller as Preferred path/mode and the controller that currently owns the virtual disk as Managing Controller. The HP Storage System Scripting Utility refers to the controller that currently owns the virtual disk as online controller. Setting the Preferred path/mode for a virtual disk enables transition of controller ownership. To transition a virtual disk to the other controller, select the appropriate Preferred path/mode for the controller Path A (or B)-Failover/failback and save the change. The presented virtual disk will transition to the new controller after the change is saved. For the EVA4400 controller, HP P6000 Command View displays Controller 1 or Controller 2 as the Managing Controller for the selection of Path A or Path B respectively. Because of the effort required to transition a DR group and associated members between controllers, HP recommends that the Preferred path/mode of a DR group member LUN only be changed during light host loads. The Preferred path/mode can also be changed during a copy or when a merge is in progress. Using the failover/failback setting The recommended setting for virtual disks that are members of a DR group is failover/failback. This setting ensures that following a controller state change that results in a LUN transition, the original managing controller regains ownership of the LUN. This behavior maintains the DR group load balancing configured by the user. If the failover/failback setting is not used for DR group member virtual disks, the user must manually transition the virtual disks back to the original managing controller to re-establish DR group load balancing. HP Storage System Scripting Utility scripts can also be created for rebalancing the DR groups. The user must determine if the multipathing solution on the host the virtual disk is presented to has the ability to react to Unit Attentions generated by a LUN transition. There are some instances of LUN transition that do not generate a Unit Attention. 
The multipathing solution should provide a mechanism to handle this occurrence. Consult the appropriate operating system multipathing documentation to determine if this functionality is available. Creating and presenting source virtual disks 97

98 Using the failover only setting The failover only setting should be used for virtual disks that are presented to a host using HP Secure Path. This setting automatically redirects I/O to the alternate controller if the preferred controller becomes unavailable, but enables the host server to use HP Secure Path settings or commands to control the return to the preferred controller when it becomes available. For virtual disks that are presented to a host using other multipathing software, select either failover only or failover/failback for non-dr group member virtual disks. Presenting virtual disks Presenting virtual disks is required to allow host access. Use HP P6000 Command View or HP P6000 Replication Solutions Manager to present source virtual disks to the host servers that will use the storage. For the procedure for presenting virtual disks, see the application online help. Presenting virtual disks to hosts is an array requirement. You must also make virtual disks accessible from the host's perspective. Use host operating system tools to discover and mount the source virtual disks (disk devices) as required by the host. Adding hosts Adding a host makes it easier to define a path between the host HBAs and the virtual disks in an array. Use HP P6000 Command View to add each host that needs access to the source array. For convenience, you can perform this and subsequent HP P6000 Command View procedures on the local management server and copy the finished configuration to the standby and remote management servers. To present a virtual disk to a server so the server can perform I/O to that disk, map the virtual disk to the server WWN using HP P6000 Command View. For more information on presenting virtual disks, see the HP P6000 Command View Online Help. Creating DR groups Before you create DR groups, ensure that: Local and remote arrays are online and accessible to the management server. ISLs are operating properly. HP P6000 Command View on the active local management server has control of the local and remote arrays. After you create the DR groups, you can divide the management responsibilities for multiple arrays between the local and remote management servers. For instructions on changing the management server that is managing an array, see the HP P6000 Command View User Guide. Use HP P6000 Replication Solutions Manager or HP P6000 Command View to create DR groups on the source array. At a minimum, you must specify a source virtual disk and a destination array. The array software creates a corresponding DR group and virtual disk on the destination array. For more information, see Planning DR groups (page 36). Specifying virtual disks Select one virtual disk to create the DR group. Add other virtual disks as needed to ensure that all data for one application is in one DR group. For optimum failover performance, limit the virtual disks in a DR group to as few as possible, and do not group virtual disks that are assigned to separate applications. To be eligible for inclusion in a DR group, a source virtual disk: Cannot be a member of another DR group Cannot be added to a destination DR group 98 Implementing remote replication

99 Cannot be a snapshot Cannot be a mirrorclone Must be in normal operational state Must use mirrored write cache (the default) The maximum number of virtual disks in a DR group and the maximum number of DR groups per array vary with controller software versions. For current supported limits, see the HP P6000 Enterprise Virtual Array Compatibility Reference. Adding members to a DR group When adding new members to an existing DR group, it is important to ensure that I/O consistency of the destination virtual disks in the DR group is maintained at all times. This ensures data consistency should it become necessary to fail over to the destination array. To ensure I/O consistency of the destination array, the addition of new DR group members and the accompanying normalization should occur before the new members are presented to an application server for use. If the new virtual disks are used by an application server before they are added to the DR group, when normalization begins the destination DR group members will be in a data-inconsistent state until the new members have completed normalization. By adding the new members to the DR group and allowing normalization to complete before presenting them to the application server, I/O consistency is maintained at all times on the destination volumes in the DR group. Use the following steps when adding virtual disks to a DR group to ensure that the data on the destination array remains I/O consistent at all times. These steps apply to all synchronous and enhanced or basic asynchronous DR groups using XCS versions or later. 1. Add the new virtual disks to the DR group. The normalization process begins and should be allowed to complete before presenting the new members to the application server for use. During normalization, the virtual disk members on the destination array will be I/O consistent to the application. Because the application server has not yet had the new members presented to it, it cannot write to the new members. Consequently, the application server does not need the new members for an I/O consistent view of application data if a failover to the destination array is required. 2. When normalization of the new members is complete, present the virtual disks to the application server. Data now written to the new virtual disks will be I/O consistent within the DR group because of the guaranteed write ordering for multiple member DR groups. Selecting replication mode Select the replication mode when you create a DR group. Synchronous mode is the default, providing identical copies on local and remote arrays as long as live replication is occurring. Asynchronous mode may provide faster response to host server requests, but it does not ensure that data has been replicated to the remote array before the host is told that the I/O is complete. The choice of write mode has design implications and depends on business requirements. For detailed descriptions of synchronous and asynchronous modes, see Choosing a write mode (page 20). Specifying DR group write history log location and size You can specify the size and location of the DR group write history log when you create a DR group. The log size and location are dependent on the controller software version. For more information on DR group write history log, see Planning for DR group write history logs (page 42). Creating DR groups 99
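For environments that script array configuration, DR group creation can also be performed with the HP Storage System Scripting Utility. The fragment below is a minimal sketch under stated assumptions: the manager host, credentials, array names, group name, and virtual disk path are all placeholders, and the exact command parameters and defaults should be confirmed against the SSSU reference for your controller software version.

SELECT MANAGER localhost USERNAME=administrator PASSWORD=password
SELECT SYSTEM "Array1"
ADD DR_GROUP "DRG_Payroll" VDISK="\Virtual Disks\payroll_db\ACTIVE" DESTINATION_SYSTEM="Array2"
LS DR_GROUP "DRG_Payroll" (confirm the new group and its normalization state)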

100 Presenting destination virtual disks After creating the DR groups, you can present the destination virtual disks to the remote hosts in any of the following modes. You can use HP P6000 Command View, HP P6000 Replication Solutions Manager, or HP Storage System Scripting Utility to present destination virtual disks to remote hosts. None The virtual disk cannot be presented to any hosts. (HP Storage System Scripting Utility uses the term disable for this mode.) Read Only The virtual disk can be presented to hosts for read only. Inquiry Only The virtual disk can be presented to hosts for SCSI inquiry commands only. No reads or writes are allowed. (HP P6000 Replication Solutions Manager uses the term No Read for this mode.) NOTE: A destination DR group member should not be presented to the same host that is accessing the source member of the DR group if the presentation mode is Read Only or Inquiry Only. Backing up the configuration Back up your storage and replication configuration now and whenever it changes. Regular backups are essential for effective disaster recovery. You can use the initial backup to re-create the configuration on remote and standby management servers. For backup procedures, see Backing up replication configuration (page 126). Setting up remote and standby management servers Use the backup from the local management server to duplicate the replication configuration on remote and standby management servers. Before importing the configuration from the local instance of HP P6000 Replication Solutions Manager, you must assume active management of the arrays in the configuration using HP P6000 Command View on the remote or standby management server. For the procedure for acquiring control of the arrays, see the HP P6000 Command View User Guide. After setting up all management servers, acquire control of the arrays on the local management server. Only one management server at a time can manage an array. Testing failover Before you use the new DR groups, practice both planned and unplanned failovers. For failover procedures, see Planned failover (page 105) and Unplanned failover (page 108). 100 Implementing remote replication

101 7 Failover and recovery This chapter provides information about failing over and resuming operations after a planned or unplanned loss of operation. The several scenarios describe situations you may encounter, with procedures for handling each scenario. Failover example Figure 53 (page 102) shows data replication among DR groups at three locations. Arrays 1 and 4 are at the local site, and arrays 2 and 5 are at a remote site. On the local site, array 1 contains source virtual disks in a DR group (replicating to array 2), and array 4 contains destination virtual disks (replicated from array 5). If the arrays at the local site become unavailable, the DR group on array 2 fails over and becomes source virtual disks (in failover state), making the destination volumes in the DR group available to hosts connected to array 2, so that processing can resume at the remote site using array 2. On array 5, the DR group begins logging until the failed site is re-established or replaced with another destination. Array 2 will also start to log if the replication mode being used is synchronous. Failover example 101

Figure 53 DR groups before and after failover

1. Source array before failover
2. Destination array before failover
3. Replication
4. Destination array
5. Source array
6. Local site
7. Remote site
8. Failover
9. Logging

Planning for a disaster

Planning helps to minimize downtime caused by a disaster. When planning for disaster recovery, include the following:
• Ensure that you have a supported disaster-tolerant solution.
  NOTE: Not all supported cable configurations will provide for dual fabrics and ISLs.
• Have at least one management server available at every site in case of a hardware or communication failure.
• Verify that each destination virtual disk within a DR group has been presented to a host. This allows the host access to the virtual disk immediately after a failover.
• Ensure that local and remote hosts have the latest patches, virus protection, HP Storage System Scripting Utility, and multipathing software versions for the specific operating system.
• Keep your configuration current and documented at all sites. Install the latest versions of controller software, HP P6000 Command View, and HP P6000 Replication Solutions Manager. Keep a record of your virtual disks, DR groups, and host volume and volume group names. Capture the configuration information after each significant change or at scheduled intervals. See Backing up replication configuration (page 126).
• Keep HP P6000 Replication Solutions Manager on every management server up-to-date with configuration changes. See the HP P6000 Replication Solutions Manager online help for the procedure for exporting and importing the HP P6000 Replication Solutions Manager database.
• Back up the HP P6000 Replication Solutions Manager database. It contains managed set and job information that you can restore on another management server if necessary.
• Practice the recovery plan. Ensure that everyone in your storage administration is prepared for disaster recovery. Practice different failure scenarios and make decisions ahead of time. For example, if a controller fails, is it more important not to disrupt processing by doing a planned failover, or not to be at risk for a second controller failure that requires an unplanned failover? In the case of multiple sites, which site has precedence for troubleshooting? Simulated disaster recoveries are a good way to verify that your records are up-to-date and that all required patches are installed.

Failover and recovery procedures

The failover procedure depends on the severity of the failure or the reason for the failover. For example, the procedure for a planned failover applies to anticipated power disruptions, scheduled equipment maintenance at the local site, or a need to transfer operations to another array. A different procedure applies to unplanned events such as multiple controller failures, multiple host failures, or an unplanned power outage at the local site.

You may decide not to fail over in some situations. For example, if only one component fails, you can repair that component and avoid failing over an entire DR group. In the event of a data center failure, or if you are planning downtime with a local array, failing over to the remote array can ensure minimal interruption of data access.

IMPORTANT: Always verify that all components of the remote array are 100% operational before you fail over.

NOTE: HP recommends that you not fail over any DR group more than once every 15 minutes.
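One way to confirm that the remote array and its DR groups are healthy before you fail over is a quick check with the HP Storage System Scripting Utility. The sketch below is illustrative only: it assumes the sssu client accepts commands as quoted arguments, and the management server, credentials, array, and DR group names are placeholders. See the HP Storage System Scripting Utility Reference for exact syntax and output.

    # List the DR groups on the destination array and review their state
    # before a failover (names and credentials are placeholders).
    sssu 'SELECT MANAGER remotemgmt1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM DestArray01' \
         'LS DR_GROUP'

    # Show details for one DR group, including its role, write mode,
    # log state, and member virtual disks.
    sssu 'SELECT MANAGER remotemgmt1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM DestArray01' \
         'LS DR_GROUP "\Data Replication\AppDRGroup"'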

Performing failover and recovery

Failover and recovery procedures include such actions as failover, suspend, resume, disable failsafe, mounting, and unmounting. You can perform these actions using the following interfaces and tools:
• HP P6000 Replication Solutions Manager GUI
• HP P6000 Replication Solutions Manager CLI
• HP P6000 Replication Solutions Manager jobs
• HP Storage System Scripting Utility
• HP P6000 Command View
For specific procedures, see the interface documentation.

Choosing a failover procedure

Table 5 (page 105) summarizes situations that require a failover and those that do not. Each recommended action corresponds to a procedure documented later in this chapter. Because replication can be bidirectional, your array may be a source and a destination for separate DR groups. Use this table to customize contingency plans for your environment.
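Regardless of the interface you use, the recommended actions in Table 5 (page 105) reduce to a small set of DR group operations. The following HP Storage System Scripting Utility commands sketch the most common ones; the invocation style, object names, and credentials are placeholder assumptions, and option names can vary by SSSU and controller software version, so verify them in the HP Storage System Scripting Utility Reference before use.

    # Fail over a DR group; run this against the array that will become
    # the new source (the current destination).
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM DestArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" FAILOVER'

    # Suspend replication for a DR group (the source begins logging).
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" SUSPEND'

    # Resume replication later so the logged writes merge to the destination.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" RESUME'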

Table 5 When to fail over a DR group, managed set, or array

The recommended HP P6000 Continuous Access action is shown for each failure situation; where the action differs for a DR group in normal mode and a DR group in failsafe mode, both actions are listed.

Failure situation: Maintenance preventing access to source array
Recommended action: Perform a planned failover on the destination array. See Planned failover (page 105).

Failure situation: Total loss of source array; loss of both source controllers
Recommended action: Manually intervene to fail over data on the destination array, and then restart processing at the destination array. Perform an unplanned failover. See Unplanned failover (page 108).

Failure situation: Loss of single source controller
Recommended action: None.

Failure situation: Total destination array loss; loss of both destination controllers; loss of all ISLs
Recommended action (DR group in normal mode): None.
Recommended action (DR group in failsafe mode): Manually intervene to continue processing at the source array. See Recover from failsafe-locked after destination loss (page 108).

Failure situation: Loss of SAN connectivity from the server to the source array
Recommended action: Investigate to determine the reason for the outage and, if appropriate, manually intervene to fail over data on the destination array, and then restart processing at the destination array. Perform an unplanned failover. See Unplanned failover (page 108).

Failure situation: Loss of single source intersite switch
Recommended action: None.

Failure situation: Extended power outage at primary site
Recommended action: Manually intervene to fail over data to the destination array, and then restart processing at the destination array. Perform an unplanned failover. See Unplanned failover (page 108).

Failure situation: Loss of managing server
Recommended action: Failover not necessary. Browse to the standby managing server.

Failure situation: Loss of a single disk in redundant storage
Recommended action: None.

Failure situation: Loss of single host of a cluster
Recommended action: None.

Failure situation: Disk group hardware failure (loss of redundancy) on the source array
Recommended action: Failover may not be necessary. Any data generated is being replicated to the destination array. If the virtual disk fails completely, fail over to the destination. See Disk group hardware failure on the source array (page 114).

Failure situation: Disk group hardware failure (loss of redundancy) on the destination array
Recommended action: Failover not necessary. See Disk group hardware failure on the destination array (page 115).

Planned failover

Scenario: Due to scheduled maintenance at the local site, you need to move operations from the local array to the remote array.

Action summary: Stop processing in the local data center and allow any data in the write history log to drain, and then begin the failover. When the failover is complete, you can continue to operate from the new source and enable failsafe mode if desired. When the planned maintenance is complete, you can fail back to the original source (this is another planned failover event).

Figure 54 (page 106) shows a planned transfer of operations to a remote site.

Figure 54 Planned and unplanned failover

TIP: With HP P6000 Replication Solutions Manager, you can designate a Home DR group to identify the preferred source. By default, the source at the time the DR group is created is Home. As the role of the array changes after multiple failover and failback events, the Home designation persists.
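As a preview of the procedure that follows, the write-mode transition in step 3 can be scripted with the HP Storage System Scripting Utility. The sketch below is illustrative only: the invocation style, names, and credentials are placeholders, and the WRITEMODE option value should be verified in the HP Storage System Scripting Utility Reference for your SSSU and controller software version.

    # Before a planned failover, switch the DR group to synchronous write
    # mode on the source array.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" WRITEMODE=SYNCHRONOUS'

    # Monitor the DR group until the write history log has drained and the
    # reported write mode is synchronous before continuing with the failover.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'LS DR_GROUP "\Data Replication\AppDRGroup"'

The host-side steps (quiescing applications, flushing caches, and resuming I/O on the new source) still follow the numbered procedure below.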

Planned failover procedure

To execute a planned failover:
1. Optionally, move storage management to another management server. For instructions, see the HP P6000 Command View User Guide.
2. Ensure that all DR groups have resumed and are fully normalized. Check the DR group status using HP P6000 Command View or HP P6000 Replication Solutions Manager. If a DR group is merging or a normalization is in progress, wait for the process to complete.
3. If the write mode is currently set to asynchronous, set it to synchronous. If you are running XCS or later, consider the following:
   • You must wait for the log to merge after changing the write mode to synchronous, and then verify that the actual write mode has changed to synchronous.
   • Transitioning to synchronous write mode will impact server I/O performance. For environments running in asynchronous mode on XCS or later, it is best to halt all server processing while data in the write history log is draining, because server I/Os will be treated as synchronous replication, resulting in a severe performance impact on server write I/Os.
   • When performing a planned site failover, you must put DR groups in synchronous mode before performing the actual failover activity. Omitting this mode change before performing the site failover results in an unplanned site failover (and the resulting normalization). Once the site failover has finished, you can change the replication mode back to asynchronous.
   • If the intersite link (or links) is broken and the DR group (or groups) has entered logging but has not yet been marked for a normalization (as occurs when the log reaches 100% capacity), re-establish a link and perform a log merge before performing the failover.
   • If the source site requires maintenance, put the DR groups into synchronous mode, wait for the write history logs to drain, and perform the failover. Once failover occurs, suspend replication; the destination array will start logging. When the primary array returns, re-establish an HP P6000 Continuous Access link and take one of the following actions:
     • Wait for the log to drain.
     • Invalidate the log and start a normalization.
4. Properly halt all applications, and then shut down the servers. Ensure that the server has properly flushed all internally cached data. Failure to do this will result in the loss of data cached on the server.
5. Fail over the DR groups.
6. Issue operating system commands to resume host I/O to the new source disks. For operating-system specifics, see Resuming host I/O after failover (page 118).
7. If you plan to operate for an extended time at the remote site, you can change the write mode to asynchronous after the original primary array is back up and running.
   a. If the DR group is currently suspended, resume it and wait for the log to finish merging.
   b. Once the log has finished merging, set the write mode to asynchronous.
   NOTE: Remember to set the write mode back to synchronous and allow the write history log to completely drain before you fail back to the original source.

8. If you plan to operate for an extended time at the remote site and need to enable failsafe mode on a DR group, make sure the new destination (previous source) and the Fibre Channel links are functioning, and then perform the following steps:
   a. If the DR group is suspended, resume it and wait for the log disk to finish merging.
   b. Once the log has finished merging, change the DR group to failsafe mode.
   NOTE: You can enable failsafe mode at the destination array during a merge or normalization.

After resolving the cause of a failover, you have three options:
• Remain failed over on the remote array.
• Return operations to the local array. See Failback to the original source following a planned or unplanned failover (page 110).
• Return operations to new hardware at the local site.

Unplanned failover

Scenario: You have experienced an unplanned loss of the local site or the source array. The duration of the outage is unknown. The hardware components (for example, hosts, array controllers, and switches) at the local site may or may not still remain intact.

Action summary: Fail over the DR groups on the destination array. When the local site is back online, you can fail back to the previous source array or to a replacement array. Figure 54 (page 106) shows an unplanned transfer of operations to a remote site.

When an unplanned site failover occurs while the DR group is in asynchronous mode, the DR group is put in synchronous mode and a normalization is initiated when the destination array becomes available. The replication mode returns to asynchronous once the normalization completes.

Procedure: To resolve an unplanned failover:
1. If you cannot access the management server that is managing the arrays, establish management control with another management server. For instructions, see the HP P6000 Command View User Guide.
2. Fail over the DR groups. Using HP P6000 Replication Solutions Manager, select the destination DR group, and then select Actions > Failover. See the online help for additional information.
3. Issue operating system commands to resume host I/O to the new source. See Resuming host I/O after failover (page 118).

After resolving the cause of a failover, you have three options:
• Remain failed over on the remote array.
• Return operations to the local array. See Failback to the original source following a planned or unplanned failover (page 110).
• Return operations to a replacement array at the local site.

Recover from failsafe-locked after destination loss

Scenario: You have experienced an unplanned loss of the remote array or a loss of the connection to the remote array, due to failure of the ISLs, loss of power at the remote site, loss of remote switches, or other similar events. The duration of the outage is unknown. The DR groups are failsafe-locked and host I/O is paused.

Action summary: Change from failsafe-enabled mode to normal mode and resume host I/O until the connection to the remote array is re-established. When the connection is stable, change back to failsafe-enabled mode. Figure 55 (page 109) illustrates the steps required to resume operations if you cannot access the destination while in a failsafe-locked state.

Figure 55 Resumption of operation if unable to access destination in failsafe mode
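If you script this recovery with the HP Storage System Scripting Utility instead of the GUI, the failsafe-mode changes in steps 1 and 4 of the procedure that follows might look like the sketch below. The invocation style, names, credentials, and in particular the FAILSAFE option values are assumptions; verify them against the HP Storage System Scripting Utility Reference for your version.

    # Step 1 of the procedure below: return the affected DR group from
    # failsafe-enabled mode to normal mode so host I/O can resume.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" FAILSAFE=DISABLE'

    # Step 4: after the merge completes, re-enable failsafe mode if desired.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'SET DR_GROUP "\Data Replication\AppDRGroup" FAILSAFE=ENABLE'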

Procedure: To resume operation if you are unable to access the destination in failsafe mode:
1. Change affected DR groups from failsafe-enabled mode to normal mode.
2. If necessary, issue operating system commands to the local hosts to restart I/O on the virtual disks that were failsafe-locked. See Resuming host I/O after failover (page 118).
3. When connections to the destination are re-established, control the merging of data at the destination by suspending replication on the less important DR groups. This forces the controllers to replicate the most important data first when the links to the destination are re-established. For more information, see Throttling a merge I/O after logging (page 124).
   NOTE: If DR groups are suspended for an extended amount of time, the log can run out of space. Restoring the connection to the destination array initiates a normalization of these DR groups. During the normalization operation, the data is inconsistent at the destination.
4. When merging is complete, change DR groups from normal mode to failsafe-enabled mode, if desired.
   NOTE: Once a DR group starts a normalization, you can enable failsafe mode for that DR group.

Failback to the original source following a planned or unplanned failover

Scenario: You are operating from an array that is not the original source (it is not designated as Home in HP P6000 Replication Solutions Manager). You need to move operations from the destination array back to the source array.

Action summary: Prepare the source array for the failover and fail over the DR group. Failback (also known as reverting to Home) is identical to a planned failover. Fifteen minutes after failing over from a source to a destination array, you can fail back in the other direction.

Procedure: To fail back to the original source:
1. If desired, move storage management to another management server. For instructions, see the HP P6000 Command View User Guide.
2. Properly shut down all applications, and then shut down the servers. Ensure that the server has properly flushed all internally cached data. Failure to do this will result in a loss of data cached on the server.
3. Ensure that all DR groups have resumed and are fully normalized or merged. If a DR group is merging or a normalization is in progress, wait for the process to complete.
4. Fail over the DR groups (or revert to Home using HP P6000 Replication Solutions Manager).
5. Issue operating system commands to resume I/O to the new (original) source. See Resuming host I/O after failover (page 118).

Return operations to new hardware

NOTE: This section assumes a failure of the array hardware only and does not provide information for recovering from a site disaster that includes the failure and replacement of other hardware such as servers.

Scenario: Some type of disaster damaged local equipment and forced a failover to a remote site. Hardware on the local source array was replaced. The new hardware now acts as the destination array; you are operating from an array that is not the original source (the original source is designated as Home).

Action summary: When the site is back online, fail over to new hardware at the local site. Figure 56 (page 111) illustrates the steps to return operations to new hardware.

Figure 56 Returning operations to replaced hardware

Procedure: This procedure does not include steps for rebuilding servers (this should be part of your disaster plan). For more information about the steps in this procedure, see the online help for HP P6000 Command View or HP P6000 Replication Solutions Manager.

Table 6 Sample array log

Array with failed or new hardware | Current source array
Array name                        | Array name
Array name                        | Array name
Array name                        | Array name
Array name                        | Array name

1. Record the name of the array with failed or new hardware (current destination) and the name of the current source array in a log such as the one shown in Table 6 (page 112). For example, the array with new hardware is named HSV01 and the current source array is named HSV02. Refer to this table during the procedure as needed.
2. If running controller software versions prior to 6.1xx on the current source array, resume all DR groups.
3. Delete all DR groups that have had a relationship with the failed hardware.
4. Install the replacement array and configure it as necessary (for example, disk groups).
5. Re-establish communication between the source and new destination arrays. Add the new array to the SAN, enable the ISLs, or place the arrays into the same zone.
6. Perform one of the following:
   • If the replaced array configuration was captured with HP Storage System Scripting Utility (called the utility), execute the script ConfigName_step1A on the new hardware, and then proceed to Step 11. See the HP Storage System Scripting Utility Reference for instructions. ConfigName is a user-assigned name given to the utility script at the time of creation. See Backing up replication configuration (page 126).
   • If you are not using a utility script for recovery, initialize the repaired or replaced array using the information you recorded in Table 6 (page 112). See the HP P6000 Command View User Guide for initialization instructions.
   NOTE: To preserve existing zoning, assign the new hardware the WWNs of the failed hardware.
7. Add the disk groups on the new hardware.
8. Add the hosts for the system with new hardware.
9. Create the non-DR group virtual disks.
10. Present all non-DR group virtual disks to their hosts.
11. Perform one of the following:
   • If the source array configuration was captured with the utility, execute ConfigName_step2 on the source array. ConfigName is a user-assigned name given to the utility script at the time of creation. DR groups are re-created with the utility if they were performing as the source when the configuration was captured. This step may take some time to complete. This action includes the following assumptions:
     • The new array has the same type of hardware as the array that failed.
     • All host WWNs are still the same.

     You also have to take into account that the DR groups may have changed membership (or new ones have been created) since the time the original source array was destroyed.
   • If you are not using a utility script for recovery, re-create all DR groups on the source array using the information recorded in Configuration form (page 126). Specify the replaced array for the destination.
12. If you used the utility to re-create DR groups on the source array, you must manually re-create any DR groups that had their source on the failed hardware. The utility will not re-create the DR groups on the source array if they performed as the destination when the configuration was captured. After you perform this step, all DR groups reside on the source array.
13. If desired, set all affected DR groups from normal mode to failsafe-enabled mode.
14. Perform one of the following:
   • If the original array configuration was captured with the utility, execute ConfigName_step3 on the new hardware. ConfigName is a user-assigned name given to the utility script at the time of creation.
   • If you are not using a utility script for recovery, present the destination virtual disks on the array with new hardware to the appropriate hosts using the information you recorded in Table 8 (page 126).
15. If you used the utility to present destination virtual disks to their hosts, you must manually present any additional virtual disks that originally had their sources on the failed hardware to their hosts on the array with new hardware. The utility will not present virtual disks whose destination was the current source array when the configuration was captured. After performing this step, all destination virtual disks are presented to hosts.
16. If the replaced array is to be the source for the DR groups, fail over any DR groups. See Planned failover (page 105).
17. Issue operating system commands to restart host I/O on the source array. For more information, see Resuming host I/O after failover (page 118).
18. (Optional) Set the DR groups to the desired Home setting.

Recovering from a disk group hardware failure

Disk group hardware failure occurs when a Vraid cannot be used because there are too many HDD failures in a disk group. The failure results in an inoperative disk group. This condition is the result of the loss of one disk for Vraid0, or the loss of two disks for Vraid1 and Vraid5. In each case, the hardware must be replaced and the disk group data rebuilt. (For a complete description of disk group failures, see the HP Enterprise Virtual Array Configuration Best Practices White Paper for your array model.)

This section describes the symptoms and recovery of an inoperative disk group at either the source or destination array. If an array has only one disk group and that disk group fails, the array becomes inoperative. To manage the array, you must reinitialize it. Follow the procedure Disk group hardware failure on the source array (page 114) or Disk group hardware failure on the destination array (page 115).

Failed disk group hardware indicators

If disk group hardware fails, HP P6000 Replication Solutions Manager displays the icons described in Table 7 (page 114).

Table 7 HP P6000 Replication Solutions Manager display icons

• Array: The icon indicates that the array is in an abnormal state and requires attention.
• Virtual disks: The icon indicates a catastrophic failure and requires immediate action.
• DR groups: A red icon indicates a failure; a yellow icon indicates that the DR group is in a degraded state. Either condition requires immediate attention.

Disk group hardware failure on the source array

Scenario: A hardware failure on a source array causes a DR group to become inoperative.

NOTE: The operational state of the DR group at the source array will show as Failed; on the destination array, the DR group will show as Good.

Action summary: If you plan to recover using data on the destination array, fail over the destination array (unplanned failover) and delete DR groups and virtual disks on the failed array. Repair the failed disk group. Re-create DR groups, virtual disks, and host presentations. If the failed source array was logging at the time of the hardware failure, you must recover using data at the destination array (if you are running HP P6000 Continuous Access) or using a backup.

There are two ways to recover from a disk group hardware failure on the source array:
• If data replication was occurring synchronously when the source disk group became inoperative, the data at the destination array is current and I/O consistent. Fail over on the destination array after performing the proper resolution process at the failed array as described in the following procedure. Repair the inoperative disk group and re-create the DR groups. Copy data from the destination array to the repaired source.
• If your disk group becomes inoperative when the DR groups are logging or while in enhanced asynchronous write mode, the data is not current, but still I/O consistent on the destination array. Stale data is not as current as the data on the source array. If you prefer to use stale data for recovery, the steps are the same as if replication were occurring normally.

Procedure: Perform the following steps when a disk group hardware failure occurs on the source array and the data on the destination array is current:
1. Check to determine if the DR groups were logging or merging.
2. From HP P6000 Command View, navigate to each DR group on the destination array and, if immediate access to the data is required, fail over if possible. See Unplanned failover (page 108).
3. Using HP P6000 Command View to manage the failed (previous source) array, navigate to the failed disk group. A list of failed virtual disks and DR groups is displayed.

Figure 57 Disk Group Hardware Failure window

4. Click Start Resolution Process. After a prompt for confirmation, a list of failed DR groups is displayed.
5. One at a time, select the affected DR groups and click Delete, then OK to confirm the deletion. Deleting a DR group removes the relationship between the virtual disk members; it does not delete data from the virtual disks, which remain intact on the source and destination systems.
6. If the failover has not occurred, use HP P6000 Command View to navigate to each DR group on the destination array and fail over if possible. See Unplanned failover (page 108). Once a DR group is failed over, the DR group is deleted automatically. If failover occurred during Step 2, the DR group must be deleted manually.
7. (Optional) Repair the hard drives on the failed array. Delete the virtual disks that were part of the original source DR group. For more information, see the EVA User Guide for your array model and the HP P6000 Command View Software Suite User Guide.
8. Refresh the new source array and re-create the DR groups.
9. After normalization occurs between the source and destination arrays, fail over the DR groups using the procedure in Planned failover (page 105).

Disk group hardware failure on the destination array

A hardware failure on a destination array causes the DR group to become inoperative.

Action summary: Delete the DR groups on the source array that replicated to the failed disk group, repair the failed disk group on the destination array, re-create your DR groups on the source array, and make host presentations at the destination array.
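If you script the delete and re-create actions described in this action summary with the HP Storage System Scripting Utility rather than HP P6000 Command View, the sequence might resemble the following sketch. All names are placeholders, and the DELETE DR_GROUP and ADD DR_GROUP parameters shown are assumptions; verify the exact syntax in the HP Storage System Scripting Utility Reference before use.

    # Remove the failed replication relationship on the source array
    # (data on the member virtual disks is not deleted).
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'DELETE DR_GROUP "\Data Replication\AppDRGroup"'

    # After the destination disk group is repaired, re-create the DR group
    # from the source array, pointing at the repaired array as destination.
    sssu 'SELECT MANAGER mgmtserver1 USERNAME=admin PASSWORD=password' \
         'SELECT SYSTEM SourceArray01' \
         'ADD DR_GROUP AppDRGroup VDISK="\Virtual Disks\app_vdisk1" DESTINATION_SYSTEM=RepairedArray01'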


More information

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service

HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service HP Services Technical data The HP StorageWorks MSA/P2000 Family Disk Array Installation and Startup Service provides the necessary

More information

IBM TotalStorage Enterprise Storage Server (ESS) Model 750

IBM TotalStorage Enterprise Storage Server (ESS) Model 750 A resilient enterprise disk storage system at midrange prices IBM TotalStorage Enterprise Storage Server (ESS) Model 750 Conducting business in the on demand era demands fast, reliable access to information

More information

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions:

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Currently shipping versions: Currently shipping versions: HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 HP Integrity VM (HP-UX 11i v3 VM Host) v4.0 Integrity Virtual Machines (Integrity VM) is a soft partitioning and virtualization

More information

HP EVA P6000 Storage performance

HP EVA P6000 Storage performance Technical white paper HP EVA P6000 Storage performance Table of contents Introduction 2 Sizing up performance numbers 2 End-to-end performance numbers 3 Cache performance numbers 4 Performance summary

More information

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Retired. Currently shipping versions:

QuickSpecs. HP Integrity Virtual Machines (Integrity VM) Overview. Retired. Currently shipping versions: Currently shipping versions: HP Integrity VM (HP-UX 11i v3 VM Host) v4.2 HP Integrity VM (HP-UX 11i v2 VM Host) v3.5 Integrity Virtual Machines (also called Integrity VM or HPVM) is a hypervisor product

More information

XP7 High Availability User Guide

XP7 High Availability User Guide XP7 High Availability User Guide Abstract HPE XP7 High Availability helps you create and maintain a synchronous copy of critical data in a remote location. This document describes and provides instructions

More information

QuickSpecs. Models. Overview

QuickSpecs. Models. Overview Overview The HP Smart Array P400 is HP's first PCI-Express (PCIe) serial attached SCSI (SAS) RAID controller and provides new levels of performance and reliability for HP servers, through its support of

More information