IBM Virtualization Engine TS7700 Series Best Practices
Copy Consistency Points V1.2


Takeshi Nohta, RMSS/SSD-VTS - Japan

Target Audience

This document provides the Best Practices for TS7700 Virtualization Engine Copy Consistency Points (CCP). This document is targeted for the following:
- IBM FTSSs
- Business Partners
- System Architects
- TS7700 Customers

Introduction

This document is one of a series of TS7700 Virtualization Engine Best Practices documents. Their purpose is to describe best practices for the TS7700 based on theoretical data and practical experience. Copy Consistency Point recommendations are described for various configurations of two and three cluster grids. An explanation of various CCP terms is also provided to aid in your understanding. Since the original writing of this document, the TS7720 has been introduced, as well as hybrid grids. All of the discussions apply to grids with TS7720, TS7740, and TS7720T (tape attach) clusters, including hybrid grids. Copy Consistency Points are used in the same manner for all TS7700 models.

Summary of Changes

April 2008 - Version 1.0
- Original Release

December 2009 - Version 1.1
- Update for 4 cluster grid
- Update for Hybrid grid
- Change panels to show TS7700 Management Interface instead of Library Manager Web Specialist panels
- Add brief discussion of Retain Copy Mode and Cluster Families, but refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance Version 1.3 white paper for a complete description

November 2014 - Version 1.2
- Update for Synchronous and Time Delayed copy policy
- Update for 6 cluster grid

Feedback Please!

Feel free to send your feedback on this document or ideas for other Best Practices documents to the author of this document.

Contents

TARGET AUDIENCE
INTRODUCTION
SUMMARY OF CHANGES
FEEDBACK PLEASE!
COPY CONSISTENCY POINTS
  DEFINITIONS
  PRECEDENCE OF COPY CONSISTENCY POINTS, OR WHICH CLUSTER'S CACHE WILL BE CHOSEN?
  DESCRIPTION OF AVAILABLE COPY POLICY CONTROLS
    Management Class, Copy Consistency Points
    Copy Policy Override
    Treat Copied Data According to the Action Defined in the Storage Class
HIGH AVAILABILITY GRID DISCUSSION
CCP AND OVERRIDE SETTINGS FOR GDPS
CCP AND HSM BACKUP OF ML2 DATA
CCPS AND BULK VOLUME INFORMATION RETRIEVAL (BVIR)
CCPS, COPY EXPORT, AND GRID
TWO CLUSTER GRID
  Disaster Recovery - Single Production site with Disaster Recovery site devices offline to host
  High Availability - Production Directed from Hosts to Both Clusters
  Dual Production Directed from Hosts to Different Clusters in Grid
THREE CLUSTER GRID
  Dual Independent Production Sites, Disaster Recovery site devices offline
  High Availability, Dual production, Disaster Recovery site devices offline
  Triple Production Sites, Round-Robin Backup
FOUR-CLUSTER GRIDS
  Two, Two-Cluster Grids in One!
  Hybrid Grid - Large Cache Front End, Tape Back End
  Cluster Families
SIX-CLUSTER GRIDS
  Dual production data centers with HA/DR configuration
REFERENCES
DISCLAIMERS

Copy Consistency Points

This document describes the best practices for setting the Copy Consistency Points in a multi-cluster grid. We start with definitions and descriptions of the various controls available to customize how and when data is copied between clusters. We then dive into the recommended settings of these controls for specific configurations.

Note: When referring to Copy Consistency Points (CCPs) in this document, a single letter will be used to refer to the CCP for a cluster. R is used to represent Rewind/Unload, D is used to represent Deferred, and N is used to represent No Copy. The first letter listed represents the lowest numbered cluster, the second represents the next higher numbered cluster, and so forth.

Definitions

Copy Consistency Point - This defines when the data for a virtual/logical volume is to be made valid or consistent at a cluster. The possible settings are Rewind/Unload, Deferred, No Copy, Synchronous, and Time Delayed.

Rewind/Unload - This Copy Consistency Point means the data must be valid on a cluster before the TS7700 indicates the Rewind/Unload (RUN) command is complete.

Deferred - This Copy Consistency Point means the data does not need to be valid on a cluster at RUN time, but will be copied to the cluster later, in a deferred manner.

No Copy - This Copy Consistency Point means a copy of the data will not be made on a cluster.

Synchronous - This Copy Consistency Point provides true synchronous copy through real-time duplexing up to any implicit or explicit sync point, achieving a zero recovery point objective (RPO). This copy mode has been supported since Release 2.1.

Time Delayed - This Copy Consistency Point means the data duplication occurs after the user-specified delay period passes, allowing only aged data to be migrated to archive clusters. This copy mode has been supported since Release 3.1.

Balanced Mode - This is a term from the VTS Peer-to-Peer (PtP) subsystem that indicates host I/O is directed, via the VTCs, to both VTSs. The workload is balanced between the two VTSs. In the TS7700 grid world, balanced mode means that virtual devices from two or more clusters are online to the host. Allocation at the host will select from all online virtual drives, meaning that host writes will occur to virtual devices on multiple clusters.

Preferred Mode - This is a term from the VTS PtP subsystem that indicates host I/O is preferred to one VTS over the other. In the TS7700 grid world, preferred mode means that one cluster's virtual devices are offline to the host, and therefore allocation will only pick virtual devices from the cluster whose devices are online.

Cluster Families - The concept of grouping the clusters in a grid, typically by geographic locality. Cluster families are used for efficient cooperative replication as well as preferred TVC selection.

Precedence of Copy Consistency Points, or Which Cluster's Cache Will Be Chosen?

When a mount request is sent to a virtual device in a multi-cluster grid, the TS7700 determines how a volume will be accessed using several factors. Below we discuss the most common factors used to determine which cluster's cache is used to access a logical volume.

When a mount request is received by a virtual device, the TS7700 determines which cluster's Tape Volume Cache (TVC) will be used for the mount. The Management Class is used to indicate which clusters are to get a copy of the data and when the copies are to be made. The Management Class as defined at the cluster of the virtual device that received the mount is used, even if the mount ends up using another cluster's TVC. In the following section we describe overrides that can be set to influence which cluster's TVC is used and when copies are made. These overrides are global to a cluster, meaning they apply to all Management Class Copy Consistency Point (CCP) definitions within a cluster. Each cluster can have its own set of overrides.

The following rules are used in determining the TVC selected. They are listed in order of precedence:
1. The TVC must be available on a cluster.
2. The tape library associated with a cluster must be available.
3. The "Force volumes mounted on this cluster to be copied to the local cache" override is set for the virtual device's cluster. This applies to non-fast ready mounts only.
4. Whether a cluster has a consistent copy of a logical volume or not. This is for non-fast ready mounts only.
5. The "Prefer local cache for non-fast ready mounts" override is set for the virtual device's cluster. This is for non-fast ready mounts only.
6. The "Prefer local cache for fast ready mount requests" override is set for the virtual device's cluster and the CCP for this cluster is RUN or Deferred.
7. CCP of Synchronous on the virtual device's cluster.
8. CCP of RUN on the virtual device's cluster.
9. CCP of Deferred on the virtual device's cluster.
10. CCP of Time Delayed on the virtual device's cluster.
11. Sort by cluster families within the same copy mode groups.
12. If all of the above are equal for one or more clusters, choose the TVC with the best performance. Performance includes cache residency and historical grid network performance.

Note: The Synchronous copy mode takes precedence over any copy override settings.

Below are generalized statements concerning the selection of a cluster's TVC. For a specific mount, an overriding rule for which cluster's cache is chosen is whether the cluster has a consistent copy of the volume or not. If a virtual device is selected for a non-fast ready mount on cluster 0 (CL0), the CCP is RD, and CL0 does not have a consistent copy of the data, the cache on CL1 will be selected, assuming that cluster has a consistent copy of the data. This is because the mount will complete faster. If the CL0 cache had been chosen, the volume's data would have to be copied from CL1's cache before the mount could be completed.

The Copy Consistency Point has one of the largest influences on which cluster's cache is used for a mount. The CCP of Synchronous (S) takes precedence over other CCPs. Rewind/Unload (R) takes precedence over a CCP of Deferred (D) and Time Delayed (T). Deferred (D) takes precedence over a CCP of Time Delayed (T). For example, assuming each cluster has a consistent copy of the data, if a virtual device on Cluster 0 is selected for a mount and the CCP is RD, then the CL0 cache will be selected for the mount. However, if the CCP is DR, then CL1's cache will be selected. If the CCPs for a cluster are DD, then other factors are used to determine which cluster's cache to use. The "Prefer local cache for fast ready mount requests" and "Prefer local cache for non-fast ready mounts" overrides will cause the cluster that received the mount request to be the cache used to access the volume.
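The precedence list above can be read as a tiered sort. The following Python sketch is a simplified, hypothetical model of that ordering (the data structures, field names such as perf_score, and the overrides dictionary are assumptions for illustration, not a TS7700 interface); it reproduces the RD example discussed above.

# Illustrative sketch only: a simplified model of the TVC selection precedence
# listed above, not the TS7700 microcode. The data structures and field names
# are hypothetical.

CCP_RANK = {"S": 0, "R": 1, "D": 2, "T": 3, "N": 4}   # lower value = higher precedence

def order_tvc_candidates(clusters, mount_cluster_id, fast_ready, overrides):
    """Return cluster ids ordered from most preferred to least preferred TVC.

    clusters: list of dicts with keys: id, tvc_available, library_available,
              has_consistent_copy, ccp (one of S/R/D/T/N), family, perf_score.
    overrides: the copy policy overrides set on the mounting cluster.
    """
    # Rules 1-2: the cluster's cache and its tape library must be available.
    candidates = [c for c in clusters if c["tvc_available"] and c["library_available"]]
    mount_family = next(c["family"] for c in clusters if c["id"] == mount_cluster_id)
    # Per the note above, Synchronous copy mode outranks the copy override settings.
    any_sync = any(c["ccp"] == "S" for c in candidates)

    def rank(c):
        local = c["id"] == mount_cluster_id
        # Rule 3: "Force volumes mounted on this cluster ..." (non-fast ready mounts only).
        r3 = 0 if (local and not fast_ready and overrides.get("force_local_copy") and not any_sync) else 1
        # Rule 4: for non-fast ready (private) mounts, a consistent copy is preferred.
        r4 = 0 if (fast_ready or c["has_consistent_copy"]) else 1
        # Rule 5: "Prefer local cache for non-fast ready mounts" on the mounting cluster.
        r5 = 0 if (local and not fast_ready and overrides.get("prefer_local_non_fast_ready") and not any_sync) else 1
        # Rule 6: "Prefer local cache for fast ready mount requests" and the CCP is R or D.
        r6 = 0 if (local and fast_ready and overrides.get("prefer_local_fast_ready")
                   and c["ccp"] in ("R", "D") and not any_sync) else 1
        # Rules 7-10: CCP precedence S > R > D > T (> N).
        # Rule 11: prefer clusters in the same family as the mounting cluster.
        # Rule 12: best performance (cache residency, historical grid network performance).
        return (r3, r4, r5, r6, CCP_RANK[c["ccp"]],
                0 if c["family"] == mount_family else 1, -c["perf_score"])

    return [c["id"] for c in sorted(candidates, key=rank)]

# Example from the text: a private (non-fast ready) mount on CL0 with CCP "RD"
# where only CL1 holds a consistent copy resolves to CL1's cache.
print(order_tvc_candidates(
    [{"id": 0, "tvc_available": True, "library_available": True,
      "has_consistent_copy": False, "ccp": "R", "family": "A", "perf_score": 5},
     {"id": 1, "tvc_available": True, "library_available": True,
      "has_consistent_copy": True, "ccp": "D", "family": "A", "perf_score": 4}],
    mount_cluster_id=0, fast_ready=False, overrides={}))   # -> [1, 0]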

Description of Available Copy Policy Controls

Management Class, Copy Consistency Points

The 3494 Library Manager (LM) (pre-R1.5), 3953 LM (pre-R1.5), or TS7700 (R1.5+) Web Specialist is where Management Class policies are defined, which include the TS7700 Virtualization Engine Copy Consistency Points (CCP). The same management classes should be defined at each cluster in the grid; however, the CCPs can be different at each cluster. The CCPs are set up from the point of view of the cluster they are defined at. It is possible that, through adding and removing clusters in a grid, there can be gaps in the cluster numbering. For instance, if cluster 0 is removed from a three-cluster grid, clusters 1 and 2 remain. The TS7700 Management Class panel indicates the cluster names and numbers in the column headers. In the screen shot below, the CCPs for Management Class TESTMC indicate Cluster 4 has a CCP of Deferred, Cluster 5 has a CCP of RUN, Cluster 6 has a CCP of No Copy, and Cluster 7 has a CCP of Time Delayed.

Figure 1 - Management Class - TS7700 Web Specialist

Copy Policy Override

The TS7700 Management Interface allows certain copy policy overrides to be set by an administrator or a custom role that allows access to this function. These overrides allow you to modify the behavior of the copy policies as defined by the Management Class constructs. This does not change the definition of the management class, but serves to influence the replication policy. Each of the overrides is described below along with the effect the override has.

Figure 2 - Copy Policy Override - TS7700 Management Interface

Prefer local cache for fast ready mount requests

Typically, when a virtual device on a cluster receives a fast-ready mount request, that cluster's cache will be used for the mount. However, under certain circumstances, such as when DD is defined for the CCP, without this override the TS7700 may select the remote cluster's cache to perform the mount. This override will force the fast-ready mount to use the cache of the cluster where the mount was received. This override does not apply if the CCP is set to No Copy for the cluster.

This override was originally implemented for the VTS Peer-to-Peer to handle Delete Expired logical volumes. Based on the settings at each VTS in the PtP, one VTS would delete a logical volume before the other. This override made sure a volume was mounted on a certain VTS even if a valid copy existed on the other VTS in the PtP.

If your TS7700 grid is operating in preferred mode, you should set this override. Preferred mode means that one cluster's virtual devices are offline to the host, and therefore allocation will only pick virtual devices from the cluster whose devices are online.

Prefer local cache for non-fast ready mounts

This override causes the local cluster to satisfy the mount request as long as the cluster is available and the cluster has a valid copy of the data, even if that data is only resident on physical tape. If the local cluster does not have a valid copy of the data, then default cluster selection criteria apply. Without this override, the TS7700 may select the remote cluster's cache for the non-fast ready mount if the local cache does not have a copy of the volume but the remote cache does.

One scenario where this override is useful is when there is a long distance between clusters, resulting in lower performance across the grid links. It may be faster to recall a volume to the local cluster's cache than to access the volume in the remote cluster's cache across the links. This override is also useful if you are testing on a certain cluster and do not want to access a volume using the remote cluster's copy.

Force volumes mounted on this cluster to be copied to the local cache

For a non-fast ready mount, this override causes a copy to be performed on the local cluster as part of mount processing. For a fast ready mount, this setting has the effect of overriding the specified Management Class with a copy consistency point of Rewind/Unload for the cluster. This does not change the definition of the Management Class, but serves to influence the replication policy. This also overrides a CCP of No Copy for the cluster where the override is set.

This override is useful to make sure a copy is on a cluster when it is being accessed at that cluster. The copy of the data may not have occurred yet due to a deferred copy, or may never have been made because of a CCP of No Copy on the cluster the volume was previously mounted upon.

Allow fewer RUN consistent copies before reporting RUN command complete

If selected, the value entered for "Number of required RUN consistent copies including the source copy" determines how many consistent copies must exist before the Rewind/Unload operation is reported as complete. If this option is not selected, the management class definitions are used explicitly. When the option is selected, the required number of RUN copies can be set from one up to the number of clusters in the grid. For example, if, in a three cluster grid, the CCP is RRR and this override is selected with a value of 2 specified, RUN complete will be returned to the host after just 2 of the 3 clusters have a consistent copy of the volume. The two copies can be on any of the clusters. (A brief sketch of this calculation follows these override descriptions.)

Ignore cache Preference Groups for copy priority

Normally, when the copy source volume is assigned to Level 0 Tape Volume Cache Preference, its copy is processed with higher priority, because the sooner the copy completes, the sooner the copy source volume can be migrated out of cache. If this option is selected, the copy operation ignores the cache preference.
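The sketch below is a rough illustration of the "Allow fewer RUN consistent copies" behavior just described. It is an assumption-level model, not the TS7700's implementation; the function name and parameters are invented for this example.

# Illustrative sketch, not TS7700 microcode: when the "Allow fewer RUN consistent
# copies" override is selected, RUN command complete can be surfaced to the host
# before every RUN-consistency-point cluster has its copy.

def copies_needed_for_run_complete(ccps, override_selected, required_copies):
    """ccps: the Management Class CCP string, e.g. 'RRR' in a three cluster grid.
    required_copies: the 'Number of required RUN consistent copies including the
    source copy' value entered with the override."""
    run_clusters = sum(1 for ccp in ccps if ccp == "R")
    if not override_selected:
        return run_clusters                    # honor the Management Class exactly
    # With the override, the required count may be anywhere from 1 up to the number
    # of clusters; the consistent copies may reside on any clusters.
    return min(max(required_copies, 1), run_clusters)

# Example from the text: CCP 'RRR' with the override value set to 2 means RUN
# complete is returned after 2 of the 3 clusters hold a consistent copy.
assert copies_needed_for_run_complete("RRR", override_selected=True, required_copies=2) == 2
assert copies_needed_for_run_complete("RRR", override_selected=False, required_copies=2) == 3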

Treat Copied Data According to the Action Defined in the Storage Class

By default, data is managed in a cluster's cache based on the type of data it is: either host-written data or data copied from another cluster as part of a Sync, RUN, Deferred, or Time Delayed copy operation. The host-written data is treated based on the definitions associated with the Storage Class assigned to the volume, and the copied data is treated as Preference Group 0 (PG0). In a busy machine, PG0 data is removed as space is needed. In a low activity period, PG0 data is removed smallest first as a background task, independent of whether space is needed or not. When the machine is busy, the largest PG0 volumes are removed to quickly make space available. PG1 data is kept in cache as long as possible and then removed on an LRU (Least Recently Used) basis. Pre-migrated PG0 data is removed from cache before PG1 data.

On a pre-R1.5 TS7700, the SSR is able to enable an override to this behavior on a SMIT menu. The override is called Copy Files Preferenced to Reside in Cache. The panel description is misleading because the effect of this control is slightly different than how it is worded.

On an R1.5 or higher TS7700, the override is set using the Host Console Request command. The SETTING, CACHE, COPYFSC, ENABLE keywords are used to set the override. Refer to the IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide on Techdocs for complete details.

When enabled, copied data will be treated based on the Storage Class preference group setting for that volume. The policy defined in the receiving cluster is used to decide the action. For example, if a cluster is copying a volume from another cluster with a Storage Class of SCJIMBO, and the preference group specified in Storage Class SCJIMBO on the receiving cluster is set to PG1, the copied data will be treated as PG1 data on the receiving cluster. However, if the preference group for SCJIMBO on the receiving cluster is set to PG0, the copied data will be treated as PG0 on the receiving cluster. This setting does not affect other clusters in the grid. Each cluster can have its own setting.
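As a rough illustration of the cache-preference decision just described, the sketch below models it in Python. The function and parameter names are invented for this example, and the command shown in the comment is only indicative; take the exact operand form from the Host Command Line Request User's Guide referenced above.

# Illustrative sketch, not a TS7700 interface: how the COPYFSC setting changes the
# preference group assigned to data arriving in a cluster's cache. The command in
# the comment is the Host Console Request form referenced above; the library name
# placeholder is an example only.
#
#   LIBRARY REQUEST,<distributed_library>,SETTING,CACHE,COPYFSC,ENABLE

def preference_group(source, copyfsc_enabled, storage_class_pg):
    """source: 'host' for host-written data, 'copy' for data replicated from a peer.
    storage_class_pg: the preference group (0 or 1) that the receiving cluster's
    Storage Class assigns to this volume."""
    if source == "host":
        return storage_class_pg       # host-written data always follows the Storage Class
    if copyfsc_enabled:
        return storage_class_pg       # override enabled: copies also follow the Storage Class
    return 0                          # default: copied data is treated as PG0

# Example from the text: Storage Class SCJIMBO is defined as PG1 on the receiving cluster.
assert preference_group("copy", copyfsc_enabled=True, storage_class_pg=1) == 1   # kept in cache
assert preference_group("copy", copyfsc_enabled=False, storage_class_pg=1) == 0  # default PG0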

High Availability Grid Discussion

The TS7700 Grid provides configuration flexibility to meet a variety of customer needs. Those needs are both customer and application dependent. This section specifically addresses configuring a two cluster Grid to meet the needs for high availability. The discussion easily translates to the two production clusters of a high availability, disaster recovery, three or four cluster grid configuration where the third and fourth clusters are strictly a disaster recovery site.

High availability means being able to continue to provide access to logical volumes through planned and unplanned outages with as little customer impact or action as possible. It does not mean that all potential for customer impact or action is eliminated. The basic recommendations for establishing a Grid configuration for high availability are:

- The production systems (sysplexes, LPARs) have FICON channel connectivity to both clusters in the Grid. This means that DFSMS library definitions and IODF have been established and appropriate FICON directors, DWDM attachments or fibre are in place.
- Virtual tape devices in both clusters in a Grid configuration are varied online to the production systems. If virtual tape device addresses are not normally varied on to both clusters, the virtual tape devices on the other cluster would need to be varied on in the event of a planned or unplanned outage for production to continue.
- The workload placed on the Grid configuration should be such that when using only one of the clusters, performance throughput is sufficient to meet customer service level agreements. If both clusters are normally used by the production systems (the virtual devices in both clusters are varied online to production), then in the case where one of the clusters is unavailable, the available performance capacity of the Grid configuration is reduced to half.
- For all data that needs real time duplexing to achieve a zero recovery point objective (RPO), assign a management class whose copy consistency point definition has both clusters with a copy consistency point of Sync. This means that when an explicit or implicit tape sync operation occurs, the data written by the host up to that point is secured on the two clusters with Sync copy mode before the volume is closed and unloaded.
- For all data that is critical for high availability, assign a management class whose copy consistency point definition has both clusters with a copy consistency point of RUN. This means that each cluster is to have a copy of the data when the volume is closed and unloaded from the source cluster.
- The path length for grid links between the clusters should be no more than km, using low-latency directors/switches/DWDMs. This is to minimize the job time impact of copying the volume between clusters at volume close time. Network Quality of Service (QoS) or other network sharing methods should be avoided, as they can introduce packet loss that would greatly reduce the effective replication bandwidth between the clusters.
- For data that you want to have two copies of, but for which the copy can be deferred, the Copy Consistency Points for the two production clusters that have virtual devices from each of the two clusters online should be set to Deferred (DD) for both clusters. This, in concert with the "Prefer local cache for fast ready mount requests" override, will provide the best performance. In order to prevent remote tape volume cache accesses during scratch mounts, the "Prefer local cache for fast ready mount requests" copy override setting should be configured on both TS7700 clusters within the Grid.
- For data that you want to migrate to a cluster in the archive site only when it is aged, the Copy Consistency Point for the archive cluster can be set to Time Delayed (T) with the appropriate delay duration. This copy mode can be set to prevent the copying of data that expires quickly to the archive cluster.
- In order to improve performance and take advantage of cached versions of logical volumes, the "Prefer local cache for non-fast ready mounts" and "Force volumes mounted on this cluster to be copied to the local cache" override settings should not be configured in either cluster.
- To minimize operator actions when a failure has occurred in one of the clusters that makes it unavailable, the Autonomic Ownership Takeover Manager (AOTM) should be set up to automatically place the remaining cluster in at least the read ownership takeover mode. Read/write ownership takeover mode is recommended if the customer wants to write data to the remaining cluster. If AOTM is not used, or it cannot positively determine whether a cluster has failed, an operator will need to make that determination and, through the Management Interface on the remaining cluster, select one of the ownership takeover modes.
- If more than one Grid configuration is available for use by the production systems, on detection of a failure of a cluster in one of the Grid configurations, it is recommended that the state of the Storage Group(s) associated with that Grid configuration, i.e. the composite library, be changed to disallow scratch mounts. This allows all future write workloads to be directed to the other fully operational Grid configurations. If this is the case, using AOTM with Read ownership takeover is preferred and the impact to job performance may be minimized.

By following these recommendations, the TS7700 Grid configuration supports the ability to perform customer workloads through:

- Planned outages in a Grid configuration, such as microcode or hardware updates to a cluster. While one cluster is being serviced, production work continues with the other cluster in the Grid configuration once virtual tape device addresses are online to that cluster.
- Unplanned outage of a cluster. Because of the copy policy to make a copy of a volume when it is being closed, all jobs that completed prior to the outage will have a copy of their data available on the other cluster. Jobs that were in progress on the cluster that failed can be re-issued once virtual tape device addresses are online on the other cluster (if they were not already online) and an ownership takeover mode has been established (either manually or through AOTM). For jobs that were writing data, the written data is not accessible and the job must start again.
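The ownership takeover choices mentioned above can be summarized as a small decision, sketched below. This is an assumption-level illustration of the recommendations in this section, not AOTM's actual logic, and the function name is hypothetical.

# Illustrative sketch of the takeover recommendations above; this is not the AOTM
# implementation, just the decision described in the text.

def takeover_mode(peer_confirmed_failed, writes_needed_on_survivor):
    """peer_confirmed_failed: True if AOTM (or an operator via the Management
    Interface) has positively determined that the other cluster has failed."""
    if not peer_confirmed_failed:
        return "no takeover"                  # do not take over volume ownership
    if writes_needed_on_survivor:
        return "read/write ownership takeover"
    return "read ownership takeover"          # the minimum recommended automatic mode

print(takeover_mode(True, False))   # read ownership takeover
print(takeover_mode(True, True))    # read/write ownership takeover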

CCP and Override Settings for GDPS

In a Geographically Dispersed Parallel Sysplex (GDPS), the first three Copy Policy Override settings must be selected on each cluster to ensure that wherever the GDPS primary site is, that TS7700 cluster is preferred for all I/O operations. These three overrides are:
- Prefer local cache for fast ready mount requests
- Prefer local cache for non-fast ready mounts
- Force volumes mounted on this cluster to be copied to the local cache

Note: Currently there is no direct support of the TS7700 by GDPS, but the TS7700 can co-exist with GDPS managed DASD. To emulate the primary-site, secondary-site nature of GDPS DASD, these overrides can be used.

CCP and HSM Backup of ML2 Data

When HSM writes ML2 data to tape, it deletes the source data as it goes along, but before the RUN is issued to the TS7700. This means that when an immediate copy to another cluster is specified, for a period of time only one copy of the ML2 data exists, because the TS7700 grid, even with a CCP of RR, does not make the second copy until RUN time. To ensure that two copies of the ML2 data exist, we recommend that HSM duplexing be utilized. This creates two separate copies of the ML2 data before HSM deletes it. Ideally, with a multi cluster grid, you want one copy of the data in one cluster and the second copy in another to avoid loss of data if one of the clusters experiences a disaster. You can use the CCPs to ensure that each copy of the duplexed data is sent to a separate cluster.

At each cluster that the host will allocate a virtual device to, you want to create two management classes, one for the primary copy and one for the duplexed copy. In a two cluster grid you will want a management class with CCPs of RD and NR at Cluster 0 and DR and RN at Cluster 1. The RD and DR CCPs at each cluster are for the primary copy and must use the same management class name. The same is true for the NR and RN CCPs, which are used for the duplex copy. For a three cluster grid, where Clusters 0 and 1 are attached to the host backing up the ML2 data, you should create the primary copy management class with CCPs of RDN or RND for Cluster 0, and DRN or NRD for Cluster 1. The duplex copy management class CCPs should be NRN or NNR for Cluster 0, and RNN or NNR for Cluster 1. This same concept expands to a grid with more than three clusters.

HSM duplexing should use the primary copy management class name for the first copy and the duplex copy management class name for the duplex copy. This way, there will be a minimum of two copies of the ML2 data on two separate clusters at all times. If the ML2 data duplex completes, there will be three copies of the data. The question then arises: why should the primary copy management class specify two copies? The reason is that it is much easier to access the primary copy from HSM. Accessing the duplexed copy from HSM requires manual intervention. The duplexed copy is there to ensure there are two copies of the migrated data while HSM is creating the primary volume.
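A minimal sketch of the two-cluster Management Class pairing described above follows. The MC names (HSMPRIM/HSMDUPLX) are hypothetical examples, not required names.

# Illustrative sketch of the HSM ML2 Management Class pairing described above for a
# two cluster grid. The MC names (HSMPRIM/HSMDUPLX) are hypothetical examples.

ML2_MCS = {
    #            as defined at Cluster 0, as defined at Cluster 1
    "HSMPRIM":  {0: "RD", 1: "DR"},   # HSM primary copy
    "HSMDUPLX": {0: "NR", 1: "RN"},   # HSM duplex copy
}

def run_copy_location(mc_name, mount_cluster):
    """Return the cluster that holds the RUN-consistent copy when the volume is
    written through a virtual device on mount_cluster (0 or 1)."""
    ccps = ML2_MCS[mc_name][mount_cluster]
    return ccps.index("R")

# Whichever cluster HSM allocates on, the primary copy lands on one cluster and the
# duplex copy on the other, so two clusters hold the ML2 data while HSM writes it.
for cluster in (0, 1):
    assert run_copy_location("HSMPRIM", cluster) != run_copy_location("HSMDUPLX", cluster)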

With Sync copy mode, introduced in Release 2.1, HSM duplexing can be eliminated. With an explicit or implicit sync operation, the data is written to both S locations. HSM ML2 data therefore exists on two clusters in the same grid even though HSM deletes the source data as it completes the write, before the RUN is issued. In a two cluster grid, you will want a management class with CCPs of SS at both clusters. For a three cluster grid where Clusters 0 and 1 are attached to the host, you should create the management class with CCPs of SSN on Clusters 0 and 1. If a third copy of the data is required, CCPs of SSR or SSD can be used.

CCPs and Bulk Volume Information Retrieval (BVIR)

The BVIR function provides the ability to upload a variety of information from the TS7700 by creating a logical volume with the request on a target cluster. The requested information is written to that logical volume by the TS7700, then the host reads that logical volume to obtain the information. BVIR can obtain information from any of the clusters in a multi cluster grid. The information can be different depending upon which cluster is targeted. For example, if you want to get Physical Volume Status for a pool from cluster 1, you must direct the request to use cluster 1's cache to create the volume, and cluster 1 must be the only cluster to have a copy. The cluster 1 request can be driven using a virtual device on cluster 0, 1, 2, or 3 using a Management Class (MC) whose CCPs specify that cluster 1's cache be used. This is accomplished by creating an MC on all clusters that has the same CCPs that point to cluster 1. For this example you would create an MC with a name of BVIRCL1 (the name is arbitrary) and CCPs of NR for a two-cluster grid, NRN for a three-cluster grid, or NRNN for a four-cluster grid. This MC with the same CCPs should be defined at all clusters in the grid so that, no matter which cluster the virtual drive used belongs to, the BVIR request volume will only be written to cluster 1's cache.

Two Cluster Grid MCs and CCPs

  Name      Cluster 0   Cluster 1
  BVIRCL0   RN          RN
  BVIRCL1   NR          NR

Three Cluster Grid MCs and CCPs

  Name      Cluster 0   Cluster 1   Cluster 2
  BVIRCL0   RNN         RNN         RNN
  BVIRCL1   NRN         NRN         NRN
  BVIRCL2   NNR         NNR         NNR

Four Cluster Grid MCs and CCPs

  Name      Cluster 0   Cluster 1   Cluster 2   Cluster 3
  BVIRCL0   RNNN        RNNN        RNNN        RNNN
  BVIRCL1   NRNN        NRNN        NRNN        NRNN
  BVIRCL2   NNRN        NNRN        NNRN        NNRN
  BVIRCL3   NNNR        NNNR        NNNR        NNNR
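The tables above all follow the same pattern: RUN at the target cluster, No Copy everywhere else, with the identical definition at every cluster. The small Python helper below generates that pattern; it is an illustration only, not an IBM tool, and the BVIRCLn naming convention is just the arbitrary example used in this paper.

# Illustrative helper, not an IBM tool: build the CCP strings for a BVIR Management
# Class that targets a single cluster's cache, matching the tables above.

def bvir_ccp_definitions(target_cluster, total_clusters):
    """RUN at the target cluster, No Copy everywhere else; the identical CCP string
    is defined at every cluster in the grid."""
    ccp = "".join("R" if i == target_cluster else "N" for i in range(total_clusters))
    return {f"cluster {i}": ccp for i in range(total_clusters)}

# Example: BVIRCL1 in a three-cluster grid is NRN at clusters 0, 1 and 2, so the
# request volume is created only in cluster 1's cache regardless of which cluster's
# virtual device the host allocates.
print(bvir_ccp_definitions(target_cluster=1, total_clusters=3))
# {'cluster 0': 'NRN', 'cluster 1': 'NRN', 'cluster 2': 'NRN'}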

CCPs, Copy Export, and Grid

The Copy Export function allows secondary copies of logical volumes to be copy exported from the TS7700 by creating an Export List File Volume (a logical volume containing the request) on a target cluster. After completing the Copy Export, the results are written to that logical volume by the TS7700. This Copy Export volume can be directed to any virtual device on any cluster, but must only be written to the cache of the cluster the Copy Export is to be performed upon. For example, if you want to perform a Copy Export on cluster 1, you must direct the request to cluster 1's cache. The management class (MC) is used to direct the Copy Export request. You will need to create an MC on each cluster that points to cluster 1's cache. For example, you would create an MC with a name of CEXPTCL1 (the name is arbitrary) and CCPs of NR for a two-cluster grid, NRN for a three-cluster grid, or NRNN for a four-cluster grid. This MC with the same CCPs should be defined at all clusters in the grid so that, no matter which cluster the virtual drive used belongs to, the Copy Export request volume will be written to cluster 1's cache.

Note: A Copy Export will fail if the export list file volume has a copy on more than one cluster in the grid.

Two-Cluster Grid MCs and CCPs

  Name       Cluster 0   Cluster 1
  CEXPTCL0   RN          RN
  CEXPTCL1   NR          NR

Three-Cluster Grid MCs and CCPs

  Name       Cluster 0   Cluster 1   Cluster 2
  CEXPTCL0   RNN         RNN         RNN
  CEXPTCL1   NRN         NRN         NRN
  CEXPTCL2   NNR         NNR         NNR

Four-Cluster Grid MCs and CCPs

  Name       Cluster 0   Cluster 1   Cluster 2   Cluster 3
  CEXPTCL0   RNNN        RNNN        RNNN        RNNN
  CEXPTCL1   NRNN        NRNN        NRNN        NRNN
  CEXPTCL2   NNRN        NNRN        NNRN        NNRN
  CEXPTCL3   NNNR        NNNR        NNNR        NNNR
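Because a Copy Export fails if the export list file volume is consistent on more than one cluster, a simple sanity check on the Management Class used for it can be useful. The sketch below is an assumption-level illustration, not a TS7700 interface.

# Illustrative pre-flight check, an assumption rather than a TS7700 interface: the
# Management Class used for the export list file volume should give exactly one
# cluster (the one performing the export) a copy.

def export_list_mc_ok(ccp_string):
    """ccp_string: the CCP definition for the export list MC, e.g. 'NRN'."""
    return sum(1 for ccp in ccp_string if ccp != "N") == 1

assert export_list_mc_ok("NRN")          # CEXPTCL1 in a three-cluster grid
assert not export_list_mc_ok("RRN")      # two copies would cause the Copy Export to fail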

Two Cluster Grid

The following sections describe specific configurations and the recommended CCPs and copy override settings.

Disaster Recovery - Single Production site with Disaster Recovery site devices offline to host

In this configuration there is a production cluster (local) and a Disaster Recovery cluster (remote), where the production hosts have the local cluster virtual devices online. The Disaster Recovery site virtual devices are typically offline to any hosts.

Figure 3 - Two-Cluster DR Config, Local Connectivity Only

CCPs - The settings we describe are for both the local and remote clusters, even though the remote virtual devices are offline. This is so they are set up in the event the remote virtual devices are brought online.
  o For data that needs to be replicated, but can be a deferred copy, define a Management Class (MC) with the Cluster 0 CCP set to RD and the Cluster 1 CCP set to DR.
  o For data that needs to be replicated immediately, define an MC with the Cluster 0 and 1 CCPs set to RR.
  o For data that does not need a copy at the other cluster, define an MC with Cluster 0 set to RN and Cluster 1 set to NR.
  o For data that needs zero RPO, define an MC with the Cluster 0 and 1 CCPs set to SS.
  o For archive data to be replicated that is not quickly expired, define an MC with the Cluster 0 CCP set to RT and Cluster 1 set to TR.
  o For disaster recovery testing you may want to create an MC on the Disaster Recovery cluster only that has a CCP of NR. This will allow you to create volumes during disaster recovery testing without creating a copy of the test data on the production cluster.
  o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster from which the information is to be obtained. Also, the logical volume can only exist on that one cluster. You should create two management classes with CCPs for BVIR operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache.
  o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the Copy Export is to occur on. Also, the logical volume can only exist on that one cluster. You should create one or two management classes with CCPs for Copy Export operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache. If you want to copy export from cluster 1, you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1.

Recommended Management Class CCPs

  MC         Cluster 0   Cluster 1
  Default    RD          DR
  DEFERRED   RD          DR
  TWORUN     RR          RR
  ONECOPY    RN          NR
  DISREC     RN          NR
  BVIRCL0    RN          RN
  BVIRCL1    NR          NR
  CEXPTCL0   RN          RN
  CEXPTCL1   NR          NR
  SYNC       SS          SS
  ARCHIVE    RT          TR

Overrides
  o Prefer local cache for fast ready mount requests - There is no need to set this since, in the RD case, the R CCP takes precedence over the D CCP. In the RR case, both clusters' CCPs are equal, but the local cache will be chosen because it has a better performance rating than accessing the remote cluster.

  o Prefer local cache for non-fast ready mounts - There is no need to set this since, in the RD case, the R CCP takes precedence over the D CCP. In the RR case, both clusters' CCPs are equal, but the local cache will be chosen because it has a better performance rating than accessing the remote cluster.
  o Force volumes mounted on this cluster to be copied to the local cache - There is no need to set this.
  o Allow fewer RUN consistent copies before reporting RUN command complete - Typically there is no need to set this override. In a two cluster grid this override only applies if the CCP is set to RR. You could use this control to temporarily cause the RR CCPs to be treated as RD or DR.

Treat Copied Data According to the Action Defined in the Storage Class - In this configuration it is recommended that this override be set at both clusters. With this set and the Storage Class set to PG1, the data copied to the remote cluster will be retained in cache. In the event of a disaster, when the virtual devices at the remote cluster are brought online, the remote cache will be filled with the latest data. Without this override, the cache will be nearly empty since PG0 data is removed from cache after it is pre-migrated to tape. To take advantage of this override, be sure the remote cluster's Storage Class for the copied volumes is set to PG1.

High Availability - Production Directed from Hosts to Both Clusters

In this configuration the host has connectivity to both clusters and the virtual devices for each cluster are online to the host. Allocation will select from the online virtual devices in both clusters. This means that a volume may be mounted on one cluster one time, then the other cluster the next time.

Figure 4 - Two Cluster Grid, Host Online to Both Clusters

CCPs - The settings we describe are for both the local and remote clusters, since a mount request could be allocated to either cluster.
  o For data that needs to be replicated, but can be a deferred copy, define a Management Class (MC) with the Cluster 0 and 1 CCPs set to DD. Use these CCPs in conjunction with the "Prefer local cache for fast ready mount requests" override.
  o For data that needs to be replicated immediately, define an MC with the Cluster 0 and 1 CCPs set to RR.

  o For data that does not need a copy at the other cluster, define an MC with Cluster 0 set to RN and Cluster 1 set to NR.
  o For data that needs zero RPO, define an MC with the Cluster 0 and 1 CCPs set to SS.
  o For disaster recovery testing you may want to create an MC on one of the clusters, cluster 1 perhaps, that has a CCP of NR. This will allow you to create volumes during disaster recovery testing without creating a copy of the test data on the other cluster.
  o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster from which the information is to be obtained. Also, the logical volume can only exist on that one cluster. You should create two management classes with CCPs for BVIR operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache.
  o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the Copy Export is to occur on. Also, the logical volume can only exist on that one cluster. You should create one or two management classes with CCPs for Copy Export operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache. If you want to copy export from cluster 1, you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1.

Recommended Management Class CCPs

  MC         Cluster 0   Cluster 1
  Default    DD          DD
  DEFERRED   DD          DD
  TWORUN     RR          RR
  ONECOPY    RN          NR
  DISREC     RN          NR
  BVIRCL0    RN          RN
  BVIRCL1    NR          NR
  CEXPTCL0   RN          RN
  CEXPTCL1   NR          NR
  SYNC       SS          SS

Overrides
  o Prefer local cache for fast ready mount requests - With the CCP of DD, we recommend that you set this override. With a CCP of DD, the TS7700 may determine that the remote cluster's cache should be used instead of the local cache. For a fast ready mount this is not the most efficient method. The override will ensure the fast ready mount uses the local cluster's cache.
  o Prefer local cache for non-fast ready mounts - This override is not needed since the two clusters are a short distance from each other. If a non-fast ready mount is received at a cluster that has a valid copy of the data that is not in cache, but the other cluster has a valid copy in cache, it is faster to access the volume over the grid links than to perform a recall to the mounting cluster.
  o Force volumes mounted on this cluster to be copied to the local cache - There is no need to set this override for this scenario.

  o Allow fewer RUN consistent copies before reporting RUN command complete - Typically there is no need to set this override. In a two cluster grid this override only applies if the CCP is set to RR. You could use this control to temporarily cause the RR CCPs to be treated as RD.

Treat Copied Data According to the Action Defined in the Storage Class - In this configuration, using or not using this override can be useful, based on your needs. Without this override, the effective size of the cache is the combination of both clusters' caches. This is because data written to a cluster's cache from a host is treated as PG1 and copied data is treated as PG0, so each cluster's cache will mostly contain PG1 data. However, if a volume's data is, for example, in the cluster 0 cache but not in the cluster 1 cache, and a mount for the volume is issued to a virtual device in cluster 1, the data will most likely be accessed from the cluster 0 cache via the grid links. Accessing the data across the grid links is slower than directly accessing the data from the mounting cluster's cache, but is faster than performing a recall to the mounting cluster's cache.

With this override, data copied from another cluster via the grid links is treated as defined in the volume's Storage Class actions on the receiving cluster. If the actions indicate the data should be treated as PG1 data, it will be treated as PG1 data. When copied data is treated as PG1 data, it is kept in cache as long as possible and removed using an LRU algorithm. Also, the cache contents of both clusters will be approximately the same, assuming they have similarly sized caches. This increases the likelihood that data will still be in cache for a non-fast ready mount no matter which cluster it is received upon. However, the effective cache size will not be the combination of both clusters' caches, as it is when this override is not selected.

Device Allocation Assist (DAA), which was introduced in R1.5 and is supported by JES2 (JES3 supports DAA with z/OS V2R1 or later), will ask the TS7700 grid which cluster is the best cluster on which to mount a private volume. The TS7700 provides an ordered list of clusters indicating the best cluster on which to allocate a virtual device. The best cluster is typically the cluster that has the volume in cache.
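The sketch below is a simplified, hypothetical model of the ordering DAA works from, using only the factors mentioned above (cache residency, then copy validity); the real grid considers additional factors, and the field names are assumptions for illustration.

# Illustrative sketch only: Device Allocation Assist returns an ordered list of
# candidate clusters for a specific (private) mount. This hypothetical model orders
# by cache residency, then copy validity.

def daa_candidate_order(clusters):
    """clusters: list of dicts with keys id, volume_in_cache, has_valid_copy."""
    def rank(c):
        if c["volume_in_cache"]:
            return 0          # best: the volume is already resident in this cluster's TVC
        if c["has_valid_copy"]:
            return 1          # next: a valid copy exists but would need a recall or grid access
        return 2              # last: no copy of the volume at all
    return [c["id"] for c in sorted(clusters, key=rank)]

# Example: the volume sits in Cluster 0's cache and only on back-end tape at Cluster 1,
# so JES allocation is steered toward a virtual device on Cluster 0.
print(daa_candidate_order([
    {"id": 0, "volume_in_cache": True,  "has_valid_copy": True},
    {"id": 1, "volume_in_cache": False, "has_valid_copy": True},
]))   # -> [0, 1]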

Dual Production Directed from Hosts to Different Clusters in Grid

In this scenario there are two hosts, or sets of hosts, that each have their own workloads. One set of hosts is connected to cluster 0 virtual devices, and the other set of hosts is connected to cluster 1 virtual devices. The cluster 0 attached hosts only have cluster 0 virtual devices online. The cluster 1 attached hosts only have cluster 1 virtual devices online. Each cluster is the Disaster Recovery cluster for the other. In other words, host data written to cluster 0 is copied to cluster 1 for disaster recovery purposes, and vice versa.

Figure 5 - Two Cluster Grid, Hosts Online to Each Cluster

CCPs - The settings we describe are for both clusters. The hosts attached to each cluster have that cluster's virtual devices online, and the other cluster's virtual devices are offline or not configured.

  o For data that needs to be replicated, but can be a deferred copy, define a Management Class (MC) with the Cluster 0 CCP set to RD and the Cluster 1 CCP set to DR.
  o For data that needs to be replicated immediately, define an MC with the Cluster 0 and 1 CCPs set to RR.
  o For data that does not need a copy at the other cluster, define an MC with Cluster 0 set to RN and Cluster 1 set to NR.
  o For data that needs zero RPO, define an MC with the Cluster 0 and 1 CCPs set to SS.
  o For archive data to be replicated that is not quickly expired, define an MC with the Cluster 0 CCP set to RT and Cluster 1 set to TR.
  o For disaster recovery testing you may want to create an MC on the Disaster Recovery cluster only that has a CCP of RN for cluster 0 and NR for cluster 1. This will allow you to create volumes during disaster recovery testing without creating a copy of the test data on the production cluster.
  o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster from which the information is to be obtained. Also, the logical volume can only exist on that one cluster. You should create two management classes with CCPs for BVIR operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache.
  o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the Copy Export is to occur on. Also, the logical volume can only exist on that one cluster. You should create one or two management classes with CCPs for Copy Export operations. One management class should have a CCP that uses Cluster 0's cache and the other should use Cluster 1's cache. If you want to copy export from cluster 1, you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1.

Recommended Management Class CCPs

  MC         Cluster 0   Cluster 1
  Default    RD          DR
  DEFERRED   RD          DR
  TWORUN     RR          RR
  ONECOPY    RN          NR
  DISREC     RN          NR
  BVIRCL0    RN          RN
  BVIRCL1    NR          NR
  CEXPTCL0   RN          RN
  CEXPTCL1   NR          NR
  SYNC       SS          SS
  ARCHIVE    RT          TR

Overrides
  o Prefer local cache for fast ready mount requests - There is no need to set this since, in the RD case, the R CCP takes precedence over the D CCP. In the RR case, both clusters' CCPs are equal, but the local cache will be chosen because it has a better performance rating than accessing the remote cluster.

  o Prefer local cache for non-fast ready mounts - There is no need to set this since, in the RD case, the R CCP takes precedence over the D CCP. In the RR case, both clusters' CCPs are equal, but the local cache will be chosen because it has a better performance rating than accessing the remote cluster.
  o Force volumes mounted on this cluster to be copied to the local cache - There is no need to set this in this scenario.
  o Allow fewer RUN consistent copies before reporting RUN command complete - Typically there is no need to set this override. In a two cluster grid this override only applies if the CCP is set to RR. You could use this control to temporarily cause the RR CCPs to be treated as RD or DR.

Treat Copied Data According to the Action Defined in the Storage Class - In this configuration it is recommended that this override not be set at either cluster. Since the data copied across the grid links will not be accessed at the remote cluster, there is no need to treat the data as PG1. This will keep the host data written to each cluster in cache for as long as possible.

Three Cluster Grid

Dual Independent Production Sites, Disaster Recovery site devices offline

In this three cluster grid, there are two production clusters and a Disaster Recovery cluster. There are hosts with virtual devices online to cluster 0 but not cluster 1, and other hosts with virtual devices online to cluster 1 but not cluster 0. Optionally, there is host connectivity to cluster 2; however, no host has cluster 2's virtual devices online.

Figure 6 - Three Cluster Grid: Dual Production Sites, Shared Disaster Recovery Site, Devices Offline

CCPs - Host data written to cluster 0 is copied to cluster 2, and host data written to cluster 1 is copied to cluster 2. Data is not copied between clusters 0 and 1. The hosts attached to cluster 0 have just that cluster's virtual devices online, and the remote virtual devices are offline. The same is true for cluster 1.
  o For data that needs to be replicated, but can be a deferred copy, define a Management Class (MC) with the Cluster 0 CCP set to RND and the Cluster 1 CCP set to NRD. If the data is archive data that is not quickly expired, define an MC with the Cluster 0 CCP set to RNT and the Cluster 1 CCP set to NRT.

    Cluster 2's CCP has several options. If the virtual devices are brought online at cluster 2, you will need to decide if you want to attempt to create copies back to clusters 0 and 1. If you are bringing cluster 2 virtual devices online for a Disaster Recovery test, we recommend a CCP of NNR. This will keep Disaster Recovery test data from being copied back to the production clusters. You could set up an MC with a CCP of DNR for data that will be deferred copied to Cluster 0 and an MC with a CCP of NDR for data that will be deferred copied to Cluster 1. If you are bringing cluster 2 virtual devices online due to a real disaster and don't want to worry about which cluster, 0 or 1, is still active, we recommend you set the CCP to DDR. This will attempt to create a copy of all data on the production clusters, whichever cluster is still operational.
  o For data that needs to be replicated immediately, define an MC with the Cluster 0 CCP set to RNR and the Cluster 1 CCP set to NRR. If data needs zero RPO, define an MC with the Cluster 0 CCP set to SNS and the Cluster 1 CCP set to NSS. If the virtual devices are brought online at cluster 2, you will need to decide if you want to attempt to create copies back to clusters 0 and 1. If you are bringing cluster 2 virtual devices online for a Disaster Recovery test, we recommend a CCP of NNR. This will keep Disaster Recovery test data from being copied back to the production clusters. You could set up an MC with a CCP of RNR for data that will be immediately copied to Cluster 0 and an MC with a CCP of NRR for data that will be immediately copied to Cluster 1. If you are bringing cluster 2 virtual devices online due to a real disaster and don't want to worry about which cluster, 0 or 1, is still active, we recommend you set the CCP to RRR. This will attempt to create a copy of all data on the production clusters, whichever cluster is still operational.
  o For data that does not need a copy at any other cluster, define an MC with Cluster 0 set to RNN, Cluster 1 set to NRN, and Cluster 2 set to NNR.
  o For disaster recovery testing we recommend a CCP of NNR for cluster 2. This will allow you to write test data during disaster recovery testing without creating a copy of the test data on the production clusters.
  o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster from which the information is to be obtained. Also, the logical volume can only exist on that one cluster. You should create three management classes with CCPs for BVIR operations. One management class should have a CCP that uses Cluster 0's cache, one that uses Cluster 1's cache, and one that uses Cluster 2's cache.
  o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the Copy Export is to occur on. Also, the logical volume can only exist on that one cluster. You should create one to three management classes with CCPs for Copy Export operations. One management class should have a CCP that uses Cluster 0's cache, another that uses Cluster 1's cache, and one that uses Cluster 2's cache. If you want to copy export from cluster 1, you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1.

Recommended Management Class CCPs

  MC         Cluster 0   Cluster 1   Cluster 2
  Default    RND         NRD         NNR
  DEFERRED   RND         NRD         NNR*
  TWORUN     RNR         NRR         NNR**
  ONECOPY    RNN         NRN         NNR
  DISREC     RNN         NRN         NNR
  BVIRCL0    RNN         RNN         RNN
  BVIRCL1    NRN         NRN         NRN
  BVIRCL2    NNR         NNR         NNR
  CEXPTCL0   RNN         RNN         RNN
  CEXPTCL1   NRN         NRN         NRN
  CEXPTCL2   NNR         NNR         NNR
  SYNC       SNS         NSS         NNR
  ARCHIVE    RNT         NRT         NNR

* Depending upon your needs, the Cluster 2 CCPs could also be DNR, NDR, or DDR.
** Depending upon your needs, the Cluster 2 CCPs could also be RNR, NRR, or RRR with the "Allow fewer RUN" override set to 2.

Overrides
  o Prefer local cache for fast ready mount requests - There is no need to set this since, in the RND and NRD cases, the R CCP takes precedence over the D and N CCPs.
  o Prefer local cache for non-fast ready mounts - There is no need to set this since, in the RND and NRD cases, the R CCP takes precedence over the D and N CCPs.
  o Force volumes mounted on this cluster to be copied to the local cache - There is no need to set this override.
  o Allow fewer RUN consistent copies before reporting RUN command complete - If you chose to set a CCP of RRR on Cluster 2 as described above, you may want to limit the number of immediate copies to two.

Treat Copied Data According to the Action Defined in the Storage Class - In this configuration it is recommended that this override be set on Cluster 2. With this set, the data copied to Cluster 2 will be retained in cache. In the event of a disaster, when the virtual devices at Cluster 2 are brought online, the cache will be filled with the latest data from Clusters 0 and 1. Without this override, the cache will be nearly empty since PG0 data is removed from cache after it is pre-migrated to tape. To take advantage of this override, be sure the Cluster 2 Storage Class for the copied volumes is set to PG1.

High Availability, Dual production, Disaster Recovery site devices offline

In this three cluster grid, there are two production clusters and a Disaster Recovery cluster. There are hosts at both production sites with virtual devices online to cluster 0 and cluster 1. Clusters 0 and 1 are typically a short distance from each other. Optionally, there is host connectivity to cluster 2; however, no host has cluster 2's virtual devices online.

Figure 7 - Three Cluster Grid: High Availability, Dual Production, and Disaster Recovery Configuration

CCPs - Host data written to cluster 0 is copied to clusters 1 and 2, and host data written to cluster 1 is copied to clusters 0 and 2.
  o For high availability and disaster recovery, data should be copied between Clusters 0 and 1 at Rewind/Unload time, and deferred to Cluster 2. Set the CCPs for Clusters 0 and 1 to RRD. If data needs zero RPO, set the CCPs for Clusters 0 and 1 to SSD. If the virtual devices are brought online at Cluster 2, you will need to decide if you want to attempt to create copies back to clusters 0 and 1. If you are bringing Cluster 2 virtual devices online for a Disaster Recovery test, we recommend a CCP of NNR. This will keep Disaster Recovery test data from being copied back to the production clusters. You could set up an MC with a CCP of DDR for data that will be deferred copied to Clusters 0 and 1.

    If you want immediate copies to Clusters 0 and/or 1, set a CCP of RRR and limit the number of immediate copies to two using the "Allow fewer RUN consistent copies before reporting RUN command complete" override.
  o For data that needs to be replicated, but can be a deferred copy, define a Management Class (MC) with the Cluster 0 and 1 CCPs set to DDD, along with the "Prefer local cache for fast ready mount requests" override. If the data is archive data that is not quickly expired, define an MC with the Cluster 0 and 1 CCPs set to DDT. Cluster 2's CCP has several options. If the virtual devices are brought online at cluster 2, you will need to decide if you want to attempt to create copies back to clusters 0 and 1. If you are bringing Cluster 2 virtual devices online for a Disaster Recovery test, we recommend a CCP of NNR. This will keep Disaster Recovery test data from being copied back to the production clusters. You could set up an MC with a CCP of DDR for data that will be deferred copied to Clusters 0 and 1. If you want immediate copies to Clusters 0 and/or 1, set a CCP of RRR and limit the number of immediate copies to two using the "Allow fewer RUN consistent copies before reporting RUN command complete" override.
  o For data that needs to be replicated immediately but only once, define an MC with the Cluster 0 CCP set to RNR and the Cluster 1 CCP set to NRR. If data needs zero RPO, define an MC with the Cluster 0 CCP set to SNS and the Cluster 1 CCP set to NSS. If the virtual devices are brought online at Cluster 2, you will need to decide if you want to attempt to create copies back to clusters 0 and 1. If you are bringing Cluster 2 virtual devices online for a Disaster Recovery test, we recommend a CCP of NNR. This will keep Disaster Recovery test data from being copied back to the production clusters. You could set up an MC with a CCP of RNR for data that will be immediately copied to Cluster 0 and an MC with a CCP of NRR for data that will be immediately copied to Cluster 1. If you are bringing cluster 2 virtual devices online due to a real disaster and don't want to worry about which cluster, 0 or 1, is still active, we recommend you set the CCP to RRR. This will attempt to create a copy of all data on the production clusters, whichever cluster is still operational. Use the "Allow fewer RUN consistent copies before reporting RUN command complete" override to limit the number of RUN copies to one or two.
  o For data that does not need a copy at any other cluster, define an MC with Cluster 0 set to RNN, Cluster 1 set to NRN, and Cluster 2 set to NNR.
  o For disaster recovery testing we recommend a CCP of NNR for Cluster 2. This will allow you to write test data during disaster recovery testing without creating a copy of the test data on the production clusters.
  o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster from which the information is to be obtained. Also, the logical volume can only exist on that one cluster. You should create three management classes with CCPs for BVIR operations. One management class should have a CCP that uses Cluster 0's cache, one that uses Cluster 1's cache, and one that uses Cluster 2's cache.
  o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the Copy Export is to occur on. Also, the logical volume can only exist on that one cluster.

29 that one cluster. You should create one to three management classes with CCPs for Copy Export operations. One management class should have a CCP that uses Cluster 0 s cache, another that uses Cluster 1 s cache, and one that uses Cluster 2 s cache. If you want to copy export from cluster 1 you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1. Recommended Management Class CCPs MC Cluster 0 Cluster 1 Cluster 2 Default DDD DDD NNR DEFERRED DDD DDD NNR* 2RUN1DEF RRD RRD NNR* TWORUN RNR NRR NNR** DISREC RNN NRN NNR BVIRCL0 RNN RNN RNN BVIRCL1 NRN NRN NRN BVIRCL2 NNR NNR NNR CEXPTCL0 RNN RNN RNN CEXPTCL1 NRN NRN NRN CEXPTCL2 NNR NNR NNR 2S1DEF SSD SSD NNR ARCHIVE DDT DDT NNR TWOSYNC SNS NSS NNR * Depending upon your needs, Cluster 2 CCPs could also be DDR, RRR with the Allow fewer RUN override set to 2 ** Depending upon your needs, Cluster 2 CCPs could also be RNR or NRR. Overrides o Prefer local cache for fast ready mount requests If you have set the CCPs to DDD then you should set this override. Otherwise, there is no need to set this since the R CCP takes precedence over the D and N CCP. o Prefer local cache for non-fast ready mount requests - There is no need to set this since the R CCP takes precedence over the D and N CCPs. o Force volumes mounted on this cluster to be copied to the local cache There is no need to set this override. o Allow fewer RUN consistent copies before reporting RUN command complete There is no need to set this override in this configuration unless you want to limit the number of RUN copies to one or two. Treat Copied Data According to the Action Defined in the Storage Class In this configuration it is recommended that this override be set in Cluster 2. With this set, the data copied to Cluster 2 will be retained in cache. In the event of a disaster and the virtual devices at cluster 2 are brought online, the cache will be filled with the latest data. Without this override, the cache will be nearly empty since PG0 data is removed from cache after it is pre-migrated to tape. To take advantage of this override, be sure the cluster 2 Storage Class for the copied volumes is set to PG1. Page 29 of 36
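
To make the CCP notation above concrete, the following Python sketch is purely illustrative (it is not a TS7700 interface or shipped code): it models how a Management Class CCP string such as RRD or SSD from the table above maps to per-cluster behavior, and how the "Allow fewer RUN consistent copies before reporting RUN command complete" override can cap the number of RUN-consistent copies. The function names and the numeric override model are assumptions made for this example only.

# Illustrative model only -- not a TS7700 API.
# Position i of a CCP string is the Copy Consistency Point for cluster i.
# R = Rewind/Unload, D = Deferred, S = Synchronous, T = Time Delayed, N = No Copy.
CCP_MEANING = {
    "R": "copy must be consistent at Rewind/Unload",
    "D": "copy is made later, in a deferred manner",
    "S": "copy is duplexed synchronously (zero RPO)",
    "T": "copy is made after the time-delay period passes",
    "N": "no copy is made on this cluster",
}

def explain_ccp(mc_name, ccp):
    """Print what a CCP string such as 'RRD' means for each cluster."""
    for cluster, letter in enumerate(ccp):
        print(f"{mc_name}: cluster {cluster} -> {CCP_MEANING[letter]}")

def run_copies_at_unload(ccp, fewer_run_limit=None):
    """Count the clusters that must be consistent before RUN completes.
    fewer_run_limit models the 'Allow fewer RUN consistent copies' override
    (a simplified assumption for illustration)."""
    required = sum(1 for letter in ccp if letter == "R")
    return min(required, fewer_run_limit) if fewer_run_limit else required

explain_ccp("2RUN1DEF", "RRD")          # immediate copies on clusters 0 and 1, deferred to cluster 2
print(run_copies_at_unload("RRR", 2))   # RRR capped at two RUN-consistent copies -> 2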

Triple Production Sites, Round-Robin Backup

In this three-cluster configuration there are three production sites, each of which has the virtual devices of its local cluster online and the virtual devices of the other clusters offline. Each cluster is the backup of one of the other clusters. For example, Cluster 0 copies its data to Cluster 1, Cluster 1 copies its data to Cluster 2, and Cluster 2 copies its data to Cluster 0.

Figure 8 - Triple Production, Round Robin Backup

CCPs

For the round-robin copy scheme we recommend:

o For immediate copies, the CCP for Cluster 0 should be set to RRN, for Cluster 1 to NRR, and for Cluster 2 to RNR.
o If the data needs zero RPO, the CCP for Cluster 0 should be set to SSN, for Cluster 1 to NSS, and for Cluster 2 to SNS.
o For deferred copies, the CCP for Cluster 0 should be set to RDN, for Cluster 1 to NRD, and for Cluster 2 to DNR.
o For archive data that is not quickly expired, the CCP for Cluster 0 can be set to RTN, for Cluster 1 to NRT, and for Cluster 2 to TNR.
o For data that does not need a copy at any other cluster, define an MC with Cluster 0 set to RNN, Cluster 1 set to NRN, and Cluster 2 set to NNR.
o If you are using one of the clusters for Disaster Recovery testing, you will want to create a CCP that does not copy data to the other clusters. For Cluster 0 the CCP should be set to RNN, for Cluster 1 to NRN, and for Cluster 2 to NNR.
o For Bulk Volume Information requests, the logical volume used for the request must be written to the cache of the cluster the information is to be obtained from. Also, the logical volume can only exist on that one cluster. You should create three management classes with CCPs for BVIR operations: one whose CCP uses Cluster 0's cache, one that uses Cluster 1's cache, and one that uses Cluster 2's cache.
o For Copy Export operations, the logical volume used for the request must be written to the cache of the cluster the copy export is to occur on. Also, the logical volume can only exist on that one cluster. You should create one to three management classes with CCPs for Copy Export operations: one whose CCP uses Cluster 0's cache, another that uses Cluster 1's cache, and one that uses Cluster 2's cache. If you want to copy export from Cluster 1, you must use the management class with the CCPs that cause the logical volume to be created on Cluster 1.

Recommended Management Class CCPs

MC        Cluster 0  Cluster 1  Cluster 2
Default   RDN        NRD        DNR
DEFERRED  RDN        NRD        DNR
TWORUN    RRN        NRR        RNR
ONECOPY   RNN        NRN        NNR
DISREC    RNN        NRN        NNR
BVIRCL0   RNN        RNN        RNN
BVIRCL1   NRN        NRN        NRN
BVIRCL2   NNR        NNR        NNR
CEXPTCL0  RNN        RNN        RNN
CEXPTCL1  NRN        NRN        NRN
CEXPTCL2  NNR        NNR        NNR
SYNC      SSN        NSS        SNS
ARCHIVE   RTN        NRT        TNR

Overrides

o Prefer local cache for fast ready mount requests - There is no need to set this since the R CCP takes precedence over the D and N CCPs.
o Prefer local cache for non-fast ready mount requests - There is no need to set this since the R CCP takes precedence over the D and N CCPs.
o Force volumes mounted on this cluster to be copied to the local cache - There is no need to set this override.
o Allow fewer RUN consistent copies before reporting RUN command complete - There is no need to set this override in this configuration.

Treat Copied Data According to the Action Defined in the Storage Class

In this configuration it is recommended that this override not be set in any cluster. This effectively creates a cache size that is the combination of all three clusters' caches.
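
The round-robin pattern above generalizes in a mechanical way: each cluster keeps a local copy and replicates to the next cluster in the ring. The short Python sketch below is only an illustration of that pattern (the helper name and return shape are assumptions, not TS7700 code); it reproduces the DEFERRED, TWORUN, SYNC, and ARCHIVE rows of the table above.

# Illustrative sketch only -- generates round-robin CCP strings where
# cluster i keeps a local copy and replicates to cluster (i + 1) mod N.
def round_robin_ccps(num_clusters, copy_letter="D"):
    """Return {writing_cluster: CCP string}. copy_letter is 'D' for deferred,
    'R' for immediate, 'S' for synchronous, or 'T' for time-delayed copies."""
    local_letter = "S" if copy_letter == "S" else "R"   # synchronous mode duplexes both copies
    ccps = {}
    for cluster in range(num_clusters):
        letters = ["N"] * num_clusters
        letters[cluster] = local_letter                       # copy on the writing cluster
        letters[(cluster + 1) % num_clusters] = copy_letter   # copy on its backup cluster
        ccps[cluster] = "".join(letters)
    return ccps

print(round_robin_ccps(3))         # {0: 'RDN', 1: 'NRD', 2: 'DNR'}  -> DEFERRED row
print(round_robin_ccps(3, "R"))    # {0: 'RRN', 1: 'NRR', 2: 'RNR'}  -> TWORUN row
print(round_robin_ccps(3, "S"))    # {0: 'SSN', 1: 'NSS', 2: 'SNS'}  -> SYNC row
print(round_robin_ccps(3, "T"))    # {0: 'RTN', 1: 'NRT', 2: 'TNR'}  -> ARCHIVE row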

Four-Cluster Grids

Most of the recommendations for two- and three-cluster grids can easily be extended to a four-cluster grid and will not be expanded upon here. There are two interesting four-cluster grid configurations that will be discussed.

Two, Two-Cluster Grids in One!

A popular four-cluster grid configuration is the "two, two-cluster grids in one." Here there are two clusters at a production site attached to hosts and two clusters at a DR site. Copy Consistency Points are set up to create two copies of data. Volumes written to Cluster 0 are copied to Cluster 2, and volumes written to Cluster 1 are copied to Cluster 3. All data remains available when one cluster is not available.

Figure 9 - Two, Two-Cluster Grids in One! (Production Site: Clusters 0 and 1 with Copy Modes DNDN and NDND; Disaster Recovery Site: Clusters 2 and 3; sites connected by the WAN)

In the configuration above the CCPs are set so that Cluster 0 copies to Cluster 2 (DNDN), and Cluster 1 copies to Cluster 3 (NDND). The "Prefer local cache for fast ready mount requests" override should be set when these CCPs are used. For immediate copies the CCPs should be set to RNRN for Cluster 0 and NRNR for Cluster 1. For data that needs zero RPO, the CCPs should be set to SNSN for Cluster 0 and NSNS for Cluster 1. For archive data that is not quickly expired, the CCPs should be set to RNTN for Cluster 0 and NRNT for Cluster 1.

In a JES2 environment with Device Allocation Assist (APAR OA24966), host allocation helps to direct private mounts to the best cluster. Typically the best cluster is the one that has the volume in its cache. In a JES3 environment, DAA is supported only with z/OS V2R1, which means host allocation has a 50/50 chance of selecting the best cluster for a private mount. Retain Copy Mode was developed to assist in this situation. Retain Copy Mode will also prevent more than two copies from being made, in both a JES2 and JES3 environment, when one of the production TS7700s is not available. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance Version 1.3 or higher white paper on techdocs for a complete description of Retain Copy Mode.

Hybrid Grid - Large Cache Front End, Tape Back End

An exciting use of the four-cluster grid is three TS7720 production clusters sharing a single remote TS7740 cluster. By replicating only to the TS7740, the host has access to the cache of all three TS7720s, which provides a very deep cache at the production site. Device Allocation Assist will help the host direct a private mount to the TS7720 that has the volume in cache. If the volume is so old that it only exists in the TS7740, any of the TS7720s can access the volume via the grid links.

The CCPs for the configuration where only two copies exist, one in a TS7720 and the second in the TS7740, and the copies are deferred, are DNND for Cluster 0, NDND for Cluster 1, and NNDD for Cluster 2. The "Prefer local cache for fast ready mount requests" override should be set when these CCPs are used. For immediate copies the CCPs should be set to RNNR for Cluster 0, NRNR for Cluster 1, and NNRR for Cluster 2. For data that needs zero RPO, the CCPs should be set to SNNS for Cluster 0, NSNS for Cluster 1, and NNSS for Cluster 2. For archive data that is not quickly expired, the CCPs should be set to RNNT for Cluster 0, NRNT for Cluster 1, and NNRT for Cluster 2.

Figure 10 - Large Cache Front End, Tape Back End (TS7720 Clusters 0, 1, and 2 providing a production cache capacity of 210 TB; remote TS7740 Cluster 3 attached over the LAN/WAN)

Cluster Families

Prior to Release 1.6 and cluster families, only the Copy Consistency Points could be used to direct which clusters get a copy of data and when they get it. Decisions about where to source a volume from were left to each cluster in the grid, so two copies of the same data might be transmitted across the grid links to two distant clusters. With the introduction of cluster families in Release 1.6, you can make the copying of data to other clusters more efficient as well as influence where a copy of data is sourced from. This becomes very important with three- and four-cluster grids where the clusters may be geographically separated. For example, when two clusters are at one site, the other two are at a remote site, and the two remote clusters need a copy of the data, cluster families make it so only one copy of the data is sent across the long grid link. Also, when deciding where to source a volume, a cluster will give higher priority to a cluster in its own family over a cluster in another family. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance Version 1.3 or higher white paper on techdocs for a complete description of Cluster Families.

Figure 11 - Cluster Families (Family A at the Production Site with Copy Mode RRDD, Family B at the DR Site; copies flow within each family and family-to-family across the WAN)
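
As a minimal sketch of the family behavior just described, the Python fragment below models a grid in which a cluster prefers to source a copy from a member of its own family before reaching across the long family-to-family link. The family layout, function names, and selection rule are hypothetical simplifications, not the actual TS7700 selection algorithm.

# Hypothetical model only -- prefer an in-family copy source so a volume
# crosses the long family-to-family grid link at most once.
FAMILIES = {"A": {0, 1}, "B": {2, 3}}    # assumed layout: family A = production, family B = DR

def family_of(cluster):
    return next(name for name, members in FAMILIES.items() if cluster in members)

def pick_copy_source(requesting_cluster, clusters_with_copy):
    """Choose where to source a volume from: same-family clusters first."""
    same_family = [c for c in clusters_with_copy
                   if family_of(c) == family_of(requesting_cluster)]
    return same_family[0] if same_family else clusters_with_copy[0]

# Cluster 3 needs a copy; clusters 0 (remote family) and 2 (same family) already have one.
print(pick_copy_source(3, [0, 2]))   # -> 2: sourced within family B instead of across the WAN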

34 grid link. Also, when deciding where to source a volume, a cluster will give higher priority to a cluster in its family over another family. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance Version 1.3 or higher white paper on techdocs for a complete description of Cluster Families. Production Site Family to Family DR Site Within Family WAN Within Family Family A Copy Mode RRDD Figure 11 - Cluster Families Family B Six-Cluster Grids Up to six clusters are supported in a Grid (an RPQ is required for six and five cluster Grid). An interesting six-cluster grid configuration will be discussed. Dual production data centers with HA/DR configuation There are two data centers. Each data center has two TS7720 clusters which the host has connectivity to both clusters and one TS7740 for storing the archive data. Data written to a TS7720 is replicated to a TS7720 in another data center and the high availability as well as the disaster recovery can be configured with the two TS7720 clusters. TS7740 receives a copy from TS7720 clusters in the same data center and is used for cost effective long term archive storage. Define an MC with Cluster 0 and 3 set to DNTDNT and Cluster 1 and 4 set to NDTNDT. Given the most volumes are expired within 50 days and removal takes place at 70 days on TS7720 clusters, the time delay 60 days is set to TS7740 clusters. With the time delay setting, TS7740 clusters receive aged archival data out of TS7720 clusters and early expired volumes will not be replicated to TS7740 clusters. If TS7720 clusters reach full capacity, the oldest data which has already been replicated to TS7740 clusters will be removed from TS7720 clusters and the automatic data migration from TS7720 to TS7740 clusters for the archive data can be achieved. Page 34 of 36

35 Figure 11 Six clusters Grid Page 35 of 36
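
The 50/60/70-day interplay described above can be summarized with a small Python sketch. It is only an illustration built on the assumed values from the example (60-day Time Delayed setting, 70-day TS7720 removal); the function names are not a TS7700 interface.

# Illustrative timeline model only -- not a TS7700 API.
TIME_DELAY_DAYS = 60        # Time Delayed CCP setting for the TS7740 clusters
TS7720_REMOVAL_DAYS = 70    # age at which a TS7720 may remove its cache copy

def replicate_to_ts7740(volume_age_days, expired):
    """A volume is copied to a TS7740 only after the delay, and only if it is still valid."""
    return not expired and volume_age_days >= TIME_DELAY_DAYS

def removable_from_ts7720(volume_age_days, copied_to_ts7740):
    """A TS7720 frees cache only for aged volumes that already have an archive copy."""
    return copied_to_ts7740 and volume_age_days >= TS7720_REMOVAL_DAYS

print(replicate_to_ts7740(50, expired=True))              # False: expired before the 60-day delay, never replicated
print(replicate_to_ts7740(65, expired=False))             # True: aged data flows to the archive tier
print(removable_from_ts7720(75, copied_to_ts7740=True))   # True: safe to remove from the TS7720 cache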
