
IBM Virtualization Engine TS7700 Series Best Practices
TS7700 Hybrid Grid Usage V1.1

William Travis, STSM, TS7700 Development
Jim Fisher, IBM Advanced Technical Skills, Americas

Document Change History

Version 1.1: Added the enhanced removal policies for the TS7720 that were introduced with R1.7. Added a description of how Storage Class actions are handled in a hybrid grid.
Version 1.0 (March 26, 2010): Initial release.

Table of Contents

1 Introduction
2 TS7700 Hybrid Grids
2.1 Advantages of a Hybrid Configuration
2.2 TS7720 and TS7740 Cache Size Considerations
2.3 Automatic Removal Policy for the TS7720 Cache
2.4 Enhanced Removal Policies for the TS7720 Cache
2.5 Returning Removed Volumes to the TS7720 Cache
2.6 Keeping Volumes in the TS7720 Cache
2.7 Temporary Removal for Supporting Servicing a TS7740
2.8 Adding a TS7720 during Disaster Recovery
3 General TS7700 Operations
3.1 Host Software Using Device Allocation Assistance
3.2 Using the Retain Copy Mode Function
3.3 Using Cluster Families
4 Hybrid Performance Considerations
5 Example TS7720/TS7740 Hybrid Configurations
5.1 Two-Cluster Grid - TS7720/TS7740
5.2 Three-Cluster Grid - Production TS7720/TS7740 and Backup TS7740
5.3 Three-Cluster Grid - Production TS7720/TS7720 and Backup TS7740
5.4 Four-Cluster Grid - Production TS7720/TS7740 and Backup/Production TS7720/TS7740
5.5 Four-Cluster Grid - Production TS7720/TS7720 and Backup TS7740/TS7740
5.6 Four-Cluster Grid - Production TS7720/TS7720/TS7720 and Backup TS7740

1 Introduction

This white paper describes the usage of various hybrid TS7700 grid configurations, where TS7720s and TS7740s both exist in the same grid. Hybrid configurations became generally available in December 2009 with the Release 1.6 level of code. This document describes how hybrid configurations can be used to improve read hits for recently used volumes and how the TS7720 can be used for additional mount points and for high availability. It also discusses various considerations for the hybrid grid, such as Retain Copy Mode, Copy Consistency Points, Cluster Families, service outages and disaster recovery. Special operations associated with a hybrid grid, such as volume removal, volume pre-removal and returning volumes to TS7720 cache, are also described. The term hybrid used in this document indicates a grid configuration containing a combination of TS7720 (disk only) and TS7740 (tape back-end) clusters.

2 TS7700 Hybrid Grids

All clusters within a multi-cluster grid configuration have access to all the volumes defined in the composite library, regardless of cluster type. This allows a TS7720 to provide an additional 256 mount points to the volumes without requiring the addition of physical tape drives and a physical library. This can provide increased availability to customer data as well as additional performance.

The existing copy consistency point function works the same in the hybrid configuration as it does in a homogeneous configuration. Copies can be targeted to any cluster within the grid. See the latest IBM Virtualization Engine TS7700 Series Best Practices - Copy Consistency Points white paper on Techdocs for more details.

The TS7720's capacity is limited by its cache size. With Release 1.6 a new cache removal function is available for the TS7720 in the hybrid configuration. The new function removes logical volumes from the TS7720 cache to make space for new logical volumes. Logical volumes are removed only as long as a valid copy of the logical volume exists on another cluster in the grid. Logical volumes that have been returned to scratch are removed first, and then the least recently used (LRU) private volumes are removed from the TS7720 cache. This removes the customer requirement to closely manage the amount of data targeted for the TS7720, thus avoiding a full cache situation, which would stop operations.

With the host software support of device allocation assist (DAA) in z/OS V1R8 and above (OA24966), specific mounts are typically directed to devices on clusters that have the best access to a valid copy of the volume. This means that a cluster with the volume in cache is preferred over one with the volume only on tape, and both are preferred over clusters that have no valid copy. This directs mounts to a TS7720 that still has the volume in its larger cache and improves the read hit rate.

To facilitate service actions that take down the remaining TS7740 in a hybrid configuration, a pre-removal function has been added. This function frees up extra space in the TS7720 cache so that new volume production can continue without filling the TS7720 while the TS7740 is down. See the Temporary Removal section (2.7) below for determining how many terabytes (TB) of data should be removed to cover your service window.

If you want to manage the TS7720s in a hybrid grid so that volumes are never removed from the cache (because the cache never fills), refer to the IBM Virtualization Engine TS7700 Series Best Practices - Cache Management in the TS7720 white paper on Techdocs.

The following shows the TS7700 Management Interface (MI) Grid Summary panel for a 4-way hybrid configuration. Notice that the icons for the TS7740s have a representation of a tape library in the lower left corner; the TS7720 icons do not show a tape library. This example shows a four-cluster grid with clusters 0 and 1 being TS7740s, and clusters 2 and 3 being TS7720s.

Figure 1 - TS7700 MI System Summary Page

2.1 Advantages of a Hybrid Configuration

This section discusses various ways to take advantage of the hybrid grid. For example, you can add a TS7720 cluster to the production site of an existing TS7740 two-cluster, preferred mode grid. At the production site this provides increased performance, an increased number of mount points (256) and a higher read hit ratio without having to add a tape library, tape drives and tape cartridges. The TS7720 provides access to all data during a service outage of the production TS7740. During the service outage the TS7720 cluster will provide fast access to the most recent logical volumes. Older volumes will be accessible from the remote TS7740 cluster via the grid links. A TS7720 cluster could be added to the grid at the remote site to gain the same redundancy at the remote site.

The large cache in the TS7720 can provide improved read hit performance since volumes will reside longer in the cache. In the case where the production site has both a TS7720 and TS7740 in the grid, Device Allocation Assist will help the host direct specific mounts to a virtual device on the cluster with the quickest ability to recall the logical volume. Typically this is the cluster with the logical volume still in its cache. Mounts for volumes that reside in the TS7720 cache, but not in the smaller cache of the TS7740, will be directed to the TS7720. Refer to Section 3.1, Host Software Using Device Allocation Assistance, for more information.

The Copy Consistency Points (CCPs), as defined by the Management Class, can be used to direct which cluster's cache is used for a scratch mount. For example, imagine a three-cluster grid with a TS7740 (cluster 0) and TS7720 (cluster 1) at the production site, and a TS7740 (cluster 2) at the remote site. For volumes that are likely to be recalled, you can use a CCP of DRD. The host written copy will be written to the TS7720 no matter which cluster's virtual device is allocated. The TS7740s will receive deferred copies. The TS7720 will treat the logical volume as PG1 and keep it in its cache as long as possible. The TS7740s will treat their copies as PG0 and will remove the logical volume from cache soon after it is pre-migrated to tape.

The cache sizes in the TS7740s can remain small since they will remove the volume from their cache fairly quickly. Using this same three-cluster hybrid grid configuration example, when a logical volume will most likely not be accessed again, a CCP of DND would direct the logical volume to just the TS7740s and will leave the TS7720 cache available for the volumes that may be targets for recall.

2.2 TS7720 and TS7740 Cache Size Considerations

Sizing the cache for the clusters in a hybrid grid starts with determining how long you want a logical volume to reside in cache so that it is available for a recall. The larger cache sizes in the TS7720 allow for longer retention of logical volumes in cache. If a volume is periodically accessed it will continue to be held in cache since its access time has been updated. Once access stops for longer than the current cache residency, the volume will be removed.

Consider a two-cluster hybrid grid operating in balanced mode where there are online virtual devices in both clusters. The amount of time a logical volume remains in the caches of both clusters depends upon which cluster, the TS7740 or TS7720, is selected for the mount.

When a virtual device in the TS7740 is allocated, the logical volume will remain in cache until it is migrated out based on the tape migration algorithms. Typically the data is treated as PG1 and will remain in cache as long as possible. The Storage Class could be set up to use PG0 in the TS7740, thus removing it from cache soon after it is pre-migrated to tape. With Release 1.6, the copy to the TS7720 will remain in cache until it ages out of the cache according to the Automatic Removal Policy. With Release 1.7 there is a set of Enhanced Removal Policies that allow the logical volumes to be set to one of three policies: Pinned, Prefer Remove, or Prefer Keep.

When a virtual device in the TS7720 is allocated, the logical volume will remain in cache until it is removed according to the Automatic Removal Policy (R1.6) or the Enhanced Removal Policies (R1.7+). The copy to the TS7740 is typically treated as PG0 and will be removed from cache soon after it is pre-migrated to tape.

In both cases the logical volume can reside in the TS7720 cache longer than in the TS7740 cache. In the second case above, the logical volume spends very little time in the TS7740 cache since copied volumes are typically treated as PG0. Figure 2 illustrates the cache residency time line for both cases, comparing a mount on a TS7740 virtual device with a mount on a TS7720 virtual device.

Figure 2 - Time in Cache Time Line

2.3 Automatic Removal Policy for the TS7720 Cache

Release 1.6 introduces the Automatic Removal Policy for a TS7720 in a multi-cluster hybrid grid. When more space is required in the TS7720 cache, the policy removes the least recently used volume from the TS7720's cache, as long as another copy of the volume exists on another cluster in the grid. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Cache Management in the TS7720 white paper on Techdocs for more details.

2.4 Enhanced Removal Policies for the TS7720 Cache

With Release 1.7 the Automatic Removal Policy is replaced by the Enhanced Removal Policies. The Storage Class construct is used to define the removal policy for logical volumes in the TS7720. The TS7740 Storage Class actions of PG0 and PG1 remain unchanged for logical volumes in the TS7740 cache. For the TS7720, the Enhanced Removal Policies provide three retention options: Pinned, Prefer Remove, and Prefer Keep. Also, a minimum retention period can be associated with the Prefer Remove and Prefer Keep policies. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Cache Management in the TS7720 white paper on Techdocs for more details.

With the addition of the Enhanced Removal Policies for the TS7720, the Storage Class actions are different for the TS7720 and the TS7740. The TS7720 has the three removal policies listed above; the TS7740 has the existing PG0 and PG1 policies. In a hybrid grid, the actions defined at each cluster are used to determine removal. The Storage Class name used at the TS7740 is also bound to the volume at the TS7720. In other words, when a logical volume is mounted on a TS7740 cluster and subsequently copied to a TS7720, the Storage Class actions as defined on the TS7740 are followed for the TS7740 copy (PG0 or PG1) and the Storage Class actions as defined on the TS7720 are followed for the TS7720 copy (Pinned, Prefer Remove, Prefer Keep).

For example, suppose there are three Storage Class names, KEEPME, NORMAL and SACRFICE, on a two-cluster hybrid grid where Cluster 0 is a TS7740 and Cluster 1 is a TS7720.

On Cluster 0 (TS7740) the Storage Class actions are defined as follows:
KEEPME - PG1
NORMAL - PG1
SACRFICE - PG0

On Cluster 1 (TS7720) the Storage Class actions are defined as follows:
KEEPME - Pinned
NORMAL - Prefer Keep
SACRFICE - Prefer Remove

With the Storage Class definitions shown above:
Any job that uses the Storage Class KEEPME and writes to either TS7700 in the grid will be PG1 in the TS7740 and Pinned in the TS7720.
Any job that uses the Storage Class NORMAL and writes to either TS7700 in the grid will be PG1 in the TS7740 and Prefer Keep in the TS7720.
Any job that uses the Storage Class SACRFICE and writes to either TS7700 in the grid will be PG0 in the TS7740 and Prefer Remove in the TS7720.
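The way a single Storage Class name fans out into different actions per cluster type can be pictured with a short sketch. This is illustrative Python only, not TS7700 code; the dictionaries simply mirror the example definitions above.

```python
# Illustration only (not TS7700 code): the same Storage Class name binds to a
# different cache-management action depending on the type of cluster holding the copy.

# Actions as defined on Cluster 0 (TS7740) in the example above.
TS7740_ACTIONS = {
    "KEEPME":   "PG1",            # prefer to keep in the TS7740 cache
    "NORMAL":   "PG1",
    "SACRFICE": "PG0",            # remove from TS7740 cache soon after pre-migration
}

# Actions as defined on Cluster 1 (TS7720) in the example above.
TS7720_ACTIONS = {
    "KEEPME":   "Pinned",         # never auto-removed from the TS7720 cache
    "NORMAL":   "Prefer Keep",
    "SACRFICE": "Prefer Remove",  # first candidates when cache space is needed
}

def action_for_copy(storage_class, cluster_type):
    """Return the cache action applied to a volume copy on the given cluster type."""
    table = TS7740_ACTIONS if cluster_type == "TS7740" else TS7720_ACTIONS
    return table[storage_class]

# A job writes a volume with Storage Class NORMAL; each cluster applies its own definition.
for cluster in ("TS7740", "TS7720"):
    print(cluster, "copy ->", action_for_copy("NORMAL", cluster))
# TS7740 copy -> PG1
# TS7720 copy -> Prefer Keep
```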

When a logical volume is to be removed from the TS7720 cache, the removal process is fairly quick since a copy of the volume should already exist on another cluster in the grid. The TS7720 just needs to verify the existence of a consistent copy on another cluster before deleting the volume from its cache. If a copy cannot be verified within the peer TS7700s, the volume will not be removed. In the event that removal cannot keep up with inbound resident data, or there are simply no more volumes eligible for removal, the Cache Full state will be entered.

The removal policy overrides the copy consistency points established for the volume. Be aware that removal reduces the total number of consistency points within the configuration.

Removal of volumes from a TS7720's cache is suspended if Write Protect is enabled for the TS7720 cluster. This provides full access to all production host written volumes during disaster recovery testing. Removal is resumed when the write protect is turned off.

The host console request output for a logical volume (LVOL) shows the number of consistent copies in the grid and the number of removed copies. Refer to the IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide on Techdocs for the command syntax. The VOLUME STATUS request of the Bulk Volume Information Retrieval command provides a timestamp of when a logical volume was removed from a TS7720's cache. Refer to the IBM Virtualization Engine TS7700 Series Bulk Volume Information Retrieval Function User's Guide on Techdocs for the command syntax.

2.5 Returning Removed Volumes to the TS7720 Cache

Once a volume has been removed from a TS7720's cache, there are several ways to have a copy of the volume returned to the TS7720 cache.

If the volume is modified (even by as little as appending a tape mark), a new copy will be made to all copy consistency points. The volume modification can happen on any cluster in the grid.

If the production site only has TS7720 clusters, then using the Force Volumes Mounted on this Cluster to be copied to the Local Cache override will cause the volume to be brought back into cache at the mount point cluster the next time it is accessed, regardless of whether it is modified or not.

If a mount command for the volume can be directed to a device address on the TS7720 with the Force Volumes Mounted on this Cluster to be copied to the Local Cache override option set, this will cause a copy of the volume to be pulled back into the TS7720 cache at the beginning of the mount operation. It does not require that the volume be modified. Based on the device allocation assist function, if the TS7740 devices are available to the host, the host will most likely pick devices on that cluster for a specific mount. This then requires that the devices on the TS7740 be varied offline while the volumes are touched on the TS7720.

To return the volume to the cache ahead of when it will be used, the PRESTAGE tool can be used to issue the mount/read header/close/demount sequence to access the volume. This will pre-stage the copy back into cache for the scenarios described above. The volume will only be pre-staged into the cache of the mounting cluster.
In the case where there are two or more clusters with virtual drives online to the host, the logical volume is pre-staged into the cache of the cluster that performed the mount.

2.6 Keeping Volumes in the TS7720 Cache

With Release 1.6, the Automatic Removal Policy removes the oldest logical volumes from the TS7720 cluster in a multi-cluster grid when cache space is needed in the TS7720 cache for new logical volumes. With Release 1.7, a logical volume assigned to Prefer Keep will stay in cache until space is needed and there are no more eligible volumes with the Prefer Remove attribute. This section discusses methods to keep a logical volume in the TS7720 cache. Keeping logical volumes in the TS7720 cache is not an issue when the cache is large enough to store all the active logical volumes.

Periodically accessing a logical volume will keep it in the TS7720 cache. When a volume is accessed (mount/demount) on any cluster in the grid, its access time is updated on all clusters in the grid at volume close time. This places it at the end of the least recently used (LRU) list for removal.

If there are volumes associated with data sets that need to be kept in cache, the GETVOLS tool can be used to identify the volumes from the data set names. The list can then be fed into the PRESTAGE tool to issue the mount/read header/close/demount sequence to access each volume. This causes the access time to be updated across the grid.

The VEHAUDIT tool can be used to create a list of volsers that need a peer copy in the grid. This output can be used to determine whether specific volumes should be brought back into a TS7720 ahead of outages on a TS7740 in the grid. VEHAUDIT uses BVIR data, volume maps and tape catalog information to create the listing.

Another method is to issue the Host Console Request LI REQ lib_name LVOL xxxxxx PREFER command. This request sets a volume's preference group to PG1, if it isn't there already, and moves it to the end of the LRU removal list. This only works for logical volumes that are already in cache; the LVOL PREFER command cannot be used to recall a volume into cache.

With Release 1.7, proper use of the Prefer Remove attribute will allow volumes with the Prefer Keep attribute to stay in cache longer.
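For a batch of volumes that should be moved to the end of the removal list, the LVOL PREFER request above can be scripted. The sketch below is a minimal illustration and assumes a plain text file with one volser per line and a composite library name of LIBA (both hypothetical); check the exact request syntax and delimiters against the z/OS Host Command Line Request User's Guide before using it.

```python
# Minimal sketch: build one LVOL PREFER host console request per volser.
# Assumptions (hypothetical): volsers are listed one per line in a text file and
# the composite library is named LIBA. Verify the exact LI REQ syntax against the
# TS7700 z/OS Host Command Line Request User's Guide.
LIB_NAME = "LIBA"

def prefer_commands(volser_file, lib_name=LIB_NAME):
    """Return the list of host console requests for the volsers in the input file."""
    commands = []
    with open(volser_file) as f:
        for line in f:
            volser = line.strip().upper()
            if volser:  # skip blank lines
                # Only effective for volumes already in the TS7720 cache; PREFER
                # cannot recall a removed volume (see the note above).
                commands.append(f"LI REQ {lib_name} LVOL {volser} PREFER")
    return commands

if __name__ == "__main__":
    for cmd in prefer_commands("keep_in_cache.txt"):
        print(cmd)
```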

2.7 Temporary Removal for Supporting Servicing a TS7740

The removal policies discussed previously require validation that another copy of a volume exists on another cluster in the grid before the volume can be removed. Without the validation, the removal cannot occur. In a hybrid grid where the sole remaining TS7740 is going to be placed in service mode, the Temporary Removal Threshold needs to be activated to increase the free space in the TS7720 cache prior to the TS7740 entering service. This prevents the TS7720's cache from filling up during the TS7740 service period.

The temporary pre-removal process uses a temporary pre-removal threshold to free up enough of the TS7720 cache so that it will not fill up while the TS7740 cluster is in service. This temporary threshold value sets a lower bar for the removal process. The temporary pre-removal is used when the last or only TS7740 in the grid is to be taken down. Each TS7720 can independently set this pre-removal threshold using the Management Interface. The threshold can be set as low as 2 TB, but defaults to 95% of the cache minus 4 TB. The temporary threshold remains in effect until either the temporary threshold is turned off by the operator, or the cluster placed in service returns to the grid. The TS7720 will get as close to the temporary threshold as possible, but may not be able to reach the threshold if there are not enough validated candidates.

Progress of the pre-removal process can be monitored using the TS7700 Management Interface. The operations history posts periodic messages that describe the progress. Also, the Tape Volume Cache panel can be used to view the amount of available space. The threshold setting needs to be planned such that there is enough free space in the TS7720 cache to contain the new volumes written to it for the duration of the service period.

Figure 3 below illustrates the process for a two-cluster hybrid grid with the TS7720 attached to the host and the TS7740 as a DR cluster; the TS7740 is to be put into service mode. The steps involved are listed below the figure.

Figure 3 - Temporary Removal Threshold Process

1. Set the Temporary Removal Threshold for the TS7720 using the TS7720's Management Interface.
2. At the Management Interface of the TS7740 that is going to enter service, turn on the Temporary Pre-Removal process.
3. The TS7720 starts to actively remove volumes from its cache that have consistent copies in the TS7740.
4. Scratch volumes are removed first, then private volumes.
5. Monitor the TS7720 Management Interface for the temporary threshold to be reached.
6. The TS7740 enters service prep and eventually reaches service mode. While in service prep, copies to the TS7740 continue. Once in service mode, the removal stops and the temporary threshold is turned off.
7. During the service period the TS7720 cache begins to fill again.
8. The TS7740 leaves service mode with TS7720 cache to spare.

9. All is well; the TS7720 cache did not fill up.

Note: Volumes with copies only on the TS7740 will not be accessible during the service outage.

The Management Interface (MI) Service Mode panel is used to initiate the Lower Threshold temporary removal operation. The Temporary Removal Threshold value at the bottom of the Tape Volume Cache panel must be set on each TS7720 that needs cache space prior to enabling the lower threshold. This operation should be started hours ahead of initiating Service Prep on the TS7740.

Figure 4 - Setting Temporary Removal Threshold on TS7720

The temporary threshold is activated on the MI Service Mode panel of the TS7740 that is going to be put in service mode. The TS7740 MI screen is shown in Figure 5 below.

Figure 5 - Initiating Temporary Removal Threshold on TS7740

Assuming a host write rate of 200 MB/s at 2:1 compression and no copies from other clusters, the inbound flow of data into the cache is roughly 360 GB an hour ((200 MB/s / 2) * 3600 seconds). Assuming a 12-hour service outage, the total amount of free space needed would be roughly 4 TB.

The temporary removal function on the MI is only available on clusters within a hybrid grid where at least one other TS7720 cluster is currently available. If the local cluster being asked to initiate the temporary removal phase is a TS7720 cluster, it will not participate in the pre-removal phase.
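The free-space estimate given above (200 MB/s at 2:1 compression over a 12-hour outage) reduces to a couple of lines of arithmetic. The sketch below simply reproduces that calculation; the rate, compression ratio and window length are only the example values used in the text, so substitute your own measured figures.

```python
# Rough free-space estimate for a TS7740 service window, using the example values above.
host_rate_mb_s = 200.0   # host write rate in MB/s
compression    = 2.0     # compression ratio (host data : cache data)
outage_hours   = 12.0    # planned TS7740 service window in hours

gb_per_hour_into_cache = (host_rate_mb_s / compression) * 3600 / 1000   # ~360 GB/hour
free_space_needed_tb   = gb_per_hour_into_cache * outage_hours / 1000   # ~4.3 TB

print(f"Inbound cache flow: {gb_per_hour_into_cache:.0f} GB/hour")
print(f"Free space needed : {free_space_needed_tb:.1f} TB for a {outage_hours:.0f}-hour outage")
```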

2.8 Adding a TS7720 during Disaster Recovery

If a remote backup site needs to be brought up to take over production in a real disaster and is short on mount points, a TS7720 could quickly be added to provide more mount points and added performance without having to add physical tape devices and libraries. This could be a quick recovery step that could later be replaced with a cluster supporting tape. An example would be a TS7740 at the remote site having to take over production that had been running on multiple clusters at the production site. An empty TS7720 could quickly be joined with the remaining TS7740 to provide more mount points, higher performance, and a backup copy of newly created volumes.

3 General TS7700 Operations

This section briefly describes some general TS7700 capabilities that can be taken advantage of in TS7700 hybrid configurations.

3.1 Host Software Using Device Allocation Assistance

With the 8.5.x.x level of TS7700 microcode and the host software support of Device Allocation Assist (DAA) in z/OS V1R8 and above (OA24966), specific mounts will be directed to devices on clusters that have the best access to a valid copy of the volume. This means that a cluster with the volume in cache will be preferred over one with the volume only on tape, and both are preferred over clusters that have no valid copy. This directs mounts to a TS7720 that still has the volume in its larger cache and improves the read hit rate.

3.2 Using the Retain Copy Mode Function

Retain Copy Mode is an optional setting where a volume's existing Copy Consistency Points are honored instead of applying the CCPs defined at the mounting cluster. This applies to private volume mounts for reads or write appends. It is used to prevent more copies of a volume existing in the grid than desired. Retain Copy Mode applies to both homogeneous and heterogeneous grids. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance white paper for a detailed description of the Retain Copy Mode function.

3.3 Using Cluster Families

Prior to Release 1.6 and cluster families, only the Copy Consistency Points could be used to direct which clusters get a copy of data and when they get the copy. Decisions about where to source a volume from were left to each cluster in the grid, so two copies of the same data might be transmitted across the grid links to two distant clusters. With the introduction of cluster families in Release 1.6, you can make the copying of data to other clusters more efficient by influencing where a copy of data is sourced. This becomes very important with three- and four-cluster grids where the clusters may be geographically separated. For example, when two clusters are at one site and the other two are at a remote site, and the two remote clusters need a copy of the data, cluster families make it so only one copy of the data is sent across the long grid link. Also, when deciding where to source a volume, a cluster will give higher priority to a cluster in its own family over one in another family. Refer to the IBM Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the TS7700 Performance white paper for a detailed description of Cluster Families.
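Returning to the Device Allocation Assistance behavior described in Section 3.1, the preference order can be pictured with a small sketch. This is a simplified illustration only, not the actual DAA algorithm: a cluster holding the volume in cache wins over a cluster holding it only on back-end tape, which in turn wins over a cluster with no valid copy.

```python
# Simplified illustration (not the actual DAA algorithm) of the preference order
# described in Section 3.1: in-cache beats on-tape, which beats no valid copy.
PREFERENCE = {"in_cache": 0, "on_tape": 1, "no_copy": 2}

def preferred_cluster(copy_state_by_cluster):
    """Pick the cluster whose copy state gives the quickest access to the volume."""
    return min(copy_state_by_cluster, key=lambda name: PREFERENCE[copy_state_by_cluster[name]])

# Example: the TS7720 still holds the volume in its large cache, the production
# TS7740 has already migrated it to tape, and a remote cluster has no copy.
states = {"TS7720-CL0": "in_cache", "TS7740-CL1": "on_tape", "TS7740-CL2": "no_copy"}
print(preferred_cluster(states))   # -> TS7720-CL0
```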

4 Hybrid Performance Considerations

Detailed performance numbers for the hybrid configurations will be documented in the IBM System Storage TS7700 Virtualization Engine TS7720 and TS7740 Release 1.6 Performance White Paper, targeted for availability in 2Q10. Here are some general considerations about hybrid performance:

1. Since there is no pre-migration or reclaim background activity on the TS7720s, their sustained data rate can be higher than that of the TS7740. The copy activity (RUN or Deferred) within the grid, however, can affect the host data rate across all clusters in the grid.
2. When the TS7720 is paired with a TS7740 in a RUN copy mode configuration, the overall host data rate will be throttled by the pre-migration activity that eventually occurs on the TS7740. As immediate copies going from the TS7720 to the TS7740 are eventually throttled by the pre-migration activity in the TS7740, the TS7720 can end up throttling its host activity because the immediate copies are not getting done fast enough. This results in the TS7720 sustained host data rate dropping to a rate similar to two TS7740s in RR copy mode.
3. Deferred copy mode avoids this throttling and thus provides a much higher sustained data rate from the host across the grid.
4. Host software supporting Device Allocation Assist should be used with most hybrid configurations. This eliminates remote mounts that would otherwise have to use the grid network when the mount is allocated to a device on a cluster that does not have the volume in cache (or does not have a copy at all).

5 Example TS7720/TS7740 Hybrid Configurations

This section describes some common hybrid configurations. A hybrid can exist in a 2-, 3- or 4-cluster grid configuration. There is no restriction on the combinations of TS7740s and TS7720s that can be configured. There are six basic combinations (one 2-cluster, two 3-cluster and three 4-cluster); these are enumerated in the sketch that follows the list below. The following sections detail some expected hybrid configurations. Each section describes the general usage of the configuration and how each of the following applies to it:

- Retain Copy Mode
- Copy Consistency Points (CCPs)
- Cluster Families
- Service Outage Considerations
- Disaster Recovery Considerations
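The count of six basic combinations is easy to verify: for each grid size, a hybrid is any split between TS7720s and TS7740s that includes at least one cluster of each type. A tiny sketch of that enumeration:

```python
# Enumerate the basic hybrid mixes: every split of a 2-, 3- or 4-cluster grid
# between TS7720s and TS7740s that keeps at least one cluster of each type.
total_mixes = 0
for grid_size in (2, 3, 4):
    mixes = [(n, grid_size - n) for n in range(1, grid_size)]   # n = number of TS7720s
    total_mixes += len(mixes)
    print(grid_size, "clusters:", ", ".join(f"{a} x TS7720 + {b} x TS7740" for a, b in mixes))
print("total basic combinations:", total_mixes)   # -> 6
```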

5.1 Two-Cluster Grid - TS7720/TS7740

This configuration has one TS7720 and one TS7740 with host connectivity to both clusters (Figure 6).

Figure 6 - Two-Cluster Grid - TS7720/TS7740

This configuration provides high availability for recently used volumes. The two clusters are located within metro distances. This configuration improves the read cache hit rate because of the larger cache in the TS7720. The size of the TS7720 cache should be based on how many days of workload the customer wishes to have resident on both clusters. For example, with a new write host workload of 10 TB per day and a 2.5:1 compression ratio, a 40 TB cache should provide approximately 9 days of residency and a 70 TB cache should provide approximately 16 days of residency in the TS7720 (see the sketch at the end of this section). The residency time of some volumes will increase if other volumes in cache are returned to scratch during this residency time and delete expired is being used. Copy Export can be used in this configuration for offsite backup for disaster recovery.

Retain Copy Mode Considerations: Retain Copy Mode does not apply in this configuration since copies are kept in both clusters.

CCP Considerations: This is a single-site, balanced mode grid. For deferred copies use DD with the Prefer Local Cache for Fast Ready Mounts override. For immediate copies use RR.

Cluster Family Considerations: Cluster families do not apply in this configuration.

Service Outage Considerations: During a service outage of the TS7720, all replicated data is available from the TS7740. Recalls will be made for any volumes not resident in the TS7740 cache. For scheduled outages of the TS7740, the pre-removal of volumes from the TS7720 is required to ensure enough cache space is available during the outage for creating new volumes in the TS7720 cache. When the TS7740 is not available, volumes no longer in the TS7720 cache will not be available. It is recommended that service for the TS7740 be scheduled during a period when volumes that only exist in the TS7740 are less likely to be accessed. A host job will fail with data inaccessible for volumes that are only consistent in the TS7740 being serviced.

Disaster Recovery Considerations: Use Copy Export on the TS7740 to create an offsite backup of all the data that would be used in case the production site is lost. In the event of the loss of just the TS7720, all of the data is available on the TS7740, assuming all data required in a disaster that was written to the TS7720 has been copied to the TS7740. In the event of the loss of just the TS7740, the most recent data is still available on the TS7720. However, any data that was removed from the TS7720 (and only exists in the TS7740) will not be accessible.
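As a rough check of the residency figures quoted at the start of this section, the sketch below divides the daily cache inflow into the cache size. It uses only the example numbers from the text (10 TB/day of host writes at 2.5:1 compression); the quoted 9 and 16 days are slightly lower than this first-order estimate, and actual residency also depends on scratch returns and delete-expired processing.

```python
# First-order cache residency estimate for the TS7720 in this two-cluster configuration.
host_write_tb_per_day = 10.0    # new host write workload per day
compression           = 2.5     # compression ratio
cache_tb_per_day      = host_write_tb_per_day / compression   # ~4 TB/day lands in cache

for cache_size_tb in (40, 70):
    days = cache_size_tb / cache_tb_per_day
    print(f"{cache_size_tb} TB cache -> roughly {days:.0f} days of residency")
# The white paper quotes approximately 9 and 16 days for these cache sizes; treat
# the simple division above as an upper-bound, first-order estimate only.
```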

5.2 Three-Cluster Grid - Production TS7720/TS7740 and Backup TS7740

The production site contains one TS7720 (cluster 0) and one TS7740 (cluster 1), and the remote site contains one TS7740 (cluster 2). There are no channel extenders online to the remote cluster. All clusters have a copy of all data (Figure 7).

Figure 7 - Three-Cluster Grid - Production TS7720/TS7740 and Backup TS7740

This configuration provides high availability at the production site using RUN or Deferred copy mode. Remote mounts via the grid network may be required when the production TS7740 is not available and the volume is no longer in the TS7720 cache. Performance for remote mounts is best with a short distance (less than 300 miles) between the production and remote sites.

Retain Copy Mode Considerations: Retain Copy Mode does not apply in this configuration since copies are kept in all clusters.

CCP Considerations: For deferred copies to all three sites use DDD with the Prefer Local Cache for Fast-Ready Mounts override. For immediate copies to the two production clusters and a deferred copy to the remote cluster use RRD. For volumes that only need to reside on tape, and don't need to reside in the TS7720 cache for quick recall, use NDD for both production clusters. This way, even if a virtual device is allocated on the TS7720, the logical volume will only be written to the caches of the TS7740s. (The positional meaning of these CCP strings is illustrated in the sketch below.)
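The CCP strings used in this and the following sections (DDD, RRD, NDD, and so on) are positional: the Nth character is the copy policy for cluster N, where R requests an immediate (RUN) copy, D a deferred copy, and N no copy. Below is a minimal decoder (illustration only), using the cluster layout of this example: production TS7720 as cluster 0, production TS7740 as cluster 1, and remote TS7740 as cluster 2.

```python
# Minimal decoder (illustration only) for the positional copy consistency point
# strings used in this white paper: one character per cluster, where
#   R = immediate (RUN) copy, D = deferred copy, N = no copy.
MEANING = {"R": "immediate (RUN) copy", "D": "deferred copy", "N": "no copy"}

def explain_ccp(ccp, clusters):
    """Print the copy policy each cluster receives under the given CCP string."""
    for position, (code, cluster) in enumerate(zip(ccp.upper(), clusters)):
        print(f"  cluster {position} ({cluster}): {MEANING[code]}")

# Cluster layout from this section: TS7720 (CL0) and TS7740 (CL1) at the production
# site, TS7740 (CL2) at the remote site.
for ccp in ("DDD", "RRD", "NDD"):
    print(ccp)
    explain_ccp(ccp, ["TS7720 production", "TS7740 production", "TS7740 remote"])
```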

Cluster Family Considerations: Place the two production clusters in one family and the remote cluster in its own family.

Service Outage Considerations: During a service outage of the TS7720, all replicated data is available from the production TS7740. Recalls will be made for any volumes not resident in the local TS7740 cache. For scheduled outages of the production TS7740, the pre-removal of volumes from the TS7720 isn't required; volumes removed from the TS7720 cache will exist in the remote TS7740 and will be accessed from there across the grid links. Service outages at the backup TS7740 should have no impact on customer operation other than the time to complete the copies to it after the outage completes.

Disaster Recovery Considerations: In the case of a real and complete disaster at the production site, the TS7740 at the remote site is available with all data to hosts brought up at the backup site. A new TS7720 could quickly be added to the backup site for additional mount point devices and new volume creation. The lost clusters can be removed from the grid ahead of the joining of the new cluster. In the event of the loss of the TS7720, all data is available from the production TS7740. In the event of the loss of the production TS7740, all data is available from the remote TS7740.

5.3 Three-Cluster Grid - Production TS7720/TS7720 and Backup TS7740

The production site contains two TS7720s (clusters 0 and 1) and a remote site contains one TS7740 (cluster 2). Two copies of each volume are kept, one at the mounting TS7720 and the second at the remote TS7740. The remote TS7740 is shared by the TS7720s (Figure 8).

Figure 8 - Three-Cluster Grid - Production TS7720/TS7720 and Backup TS7740

This configuration provides high availability at the production site. It is best suited for systems where old data is rarely accessed. It is also good for systems where the total amount of data held by the configuration is small (less than 130 TB of compressed data), which means most of the data is always available on one of the TS7720s. Copy modes can be configured so that only one copy is kept at the production site and one at the remote site. This effectively doubles the amount of data that can be held in cache locally. A variation of this configuration is where all clusters are at the same site. In this case Copy Export can be used to create an off-site copy of data for disaster recovery.

Retain Copy Mode Considerations: Retain Copy Mode should be set for this configuration so that during periods where one of the TS7720s is not available, extra copies of existing logical volumes are not created. However, during the outage, only the TS7740 will have the latest copy of an altered logical volume. Retain Copy Mode will also help keep just two copies of a logical volume in the grid in a JES3 system where Device Allocation Assist is not available.

CCP Considerations: For one copy at the production site and a deferred copy to the TS7740 at the remote site, the CCPs should be set to DND for cluster 0 and NDD for cluster 1. The Prefer Local Cache for Fast-Ready Mounts override should also be set. For three copies, with the production site copies being immediate, use a CCP of RRD at both production site clusters. For volumes that only need to reside on tape, and don't need to reside in the TS7720 cache for quick recall, use NND for both production clusters. This way, even if a virtual device is allocated on a TS7720, the logical volume will only be written to the cache of the remote TS7740.

Cluster Family Considerations: Place the two production clusters in one family and the remote cluster in its own family.

Service Outage Considerations: During a service outage of one of the TS7720s, all data will be available to the production site through the remaining TS7720. Volumes not resident in the remaining TS7720 will be accessed via a remote mount to the TS7740. For scheduled outages of one of the TS7720s, pre-removal of volumes from the TS7720 is not required. Service outages on the TS7740 require the pre-removal of logical volumes on both of the TS7720s if their caches will fill up during the TS7740 outage. It is recommended that service for the TS7740 be scheduled during a period when volumes that only exist in the TS7740 are less likely to be accessed. A host job will fail with data inaccessible for volumes that are only consistent in the TS7740.

Disaster Recovery Considerations: In the case of a real and complete disaster at the production site, the TS7740 at the remote site is available with all data to hosts brought up at the backup site. A new TS7720 could quickly be added to the backup site for additional mount point devices and new volume creation. The lost clusters can be removed from the grid ahead of the joining of the new cluster. In the event of the loss of one of the TS7720s, all data is available via the other production TS7720.

5.4 Four-Cluster Grid - Production TS7720/TS7740 and Backup/Production TS7720/TS7740

The production site contains both a TS7720 (cluster 0) and a TS7740 (cluster 1). The remote site also contains a TS7720 (cluster 2) and a TS7740 (cluster 3). The remote site can be either a second production site or a disaster recovery site. With the appropriate Copy Consistency Points, both sites can contain all of the data required for operations (Figure 9).

Figure 9 - Four-Cluster Grid - Production TS7720/TS7740 and Backup/Production TS7720/TS7740

When the remote site is for disaster recovery purposes, a copy of each volume could be made to all clusters. This means both sites contain all of the data. When the remote site is a second production site, copies would be made to only the local TS7720 and then to both TS7740s. This provides high availability at both sites and provides local access to volumes on tape as well. Each site would be the backup site for the other.

Retain Copy Mode Considerations: Retain Copy Mode should be set for this configuration when just three copies of a logical volume are made in the grid. During periods where one of the clusters is not available, extra copies of existing logical volumes are not created. Retain Copy Mode will also help keep just three copies of a logical volume in the grid in a JES3 system where Device Allocation Assist is not available.

CCP Considerations: For four copies of a logical volume, immediate to the production clusters and deferred to the remote clusters, use RRDD.

For deferred copies use DDDD with the Prefer Local Cache for Fast-Ready Mounts override. For the dual production site, three-copy scenario, where two copies are written at the production site and one in the alternate site's TS7740, use RRND for clusters 0 and 1, and NDRR for clusters 2 and 3. Use DDND and NDDD along with the Prefer Local Cache for Fast-Ready Mounts override for increased host data rate performance if local deferred copies are sufficient. For copies only to the TS7740s use NDND.

Cluster Family Considerations: Place the two clusters at each site into their own family: clusters 0 and 1 in one family and clusters 2 and 3 in another family.

Service Outage Considerations: For the service outage of one of the TS7720s, all data would be available from the local TS7740. Recalls would be made for any volumes not resident in the TS7740 cache. For TS7740 service outages, volumes not in the local TS7720's cache would be accessed from the remote TS7720 or TS7740. Shorter distances between sites make this remote access less of a performance impact. No pre-removal operation would be required for this configuration.

Disaster Recovery Considerations: In the case of a real disaster, if one site is strictly a backup site, the other site can run the entire production workload. If the second site runs production as well, replacement clusters may need to be joined into the surviving configuration to provide adequate performance to cover both sets of production activity.

5.5 Four-Cluster Grid - Production TS7720/TS7720 and Backup TS7740/TS7740

The production site contains two TS7720s (clusters 0 and 1) and the remote site contains two TS7740s (clusters 2 and 3). This configuration provides high availability at both sites. The local site will need to recall older data that no longer exists in the TS7720 cache from the remote TS7740s (Figure 10).

Figure 10 - Four-Cluster Grid - Production TS7720/TS7720 and Backup TS7740/TS7740

This configuration is recommended when the remote site is a short distance from the local site and few remote accesses will be needed. The two TS7740s at the remote site provide access to all data when one of the TS7740s is down. A recommended Copy Consistency Point setup is to have one copy in a TS7720 and a copy in both of the TS7740s (DNDD and NDDD). This increases the overall cache size at the production site, thus increasing the cache hit ratio. In a JES2 environment, Device Allocation Assist will direct private mounts to the best TS7720. The use of Cluster Families will ensure that just one copy of a volume is sent to the remote site.

Retain Copy Mode Considerations: Retain Copy Mode should be set for this configuration when three or fewer copies of a logical volume are made in the grid. During periods where one of the clusters is not available, extra copies of existing logical volumes are not created. Retain Copy Mode will also help keep extra copies of a logical volume from being made in the grid in a JES3 system where Device Allocation Assist is not available.

CCP Considerations: For four copies of a logical volume, immediate to the production clusters and deferred to the remote clusters, use RRDD. For deferred copies use DDDD with the Prefer Local Cache for Fast-Ready Mounts override. For the dual production site, three-copy scenario, where two copies are written at the production site and one in the alternate site's TS7740, use RRND for clusters 0 and 1, and NDRR for clusters 2 and 3. Use DDND and NDDD along with the Prefer Local Cache for Fast-Ready Mounts override for increased host data rate performance if local deferred copies are sufficient. For copies only to the TS7740s use NNDD.

Cluster Family Considerations: Place the two clusters at each site into their own family: clusters 0 and 1 in one family and clusters 2 and 3 in another family.

Service Outage Considerations: For the service outage of a single TS7720, all data would be accessible via a remote mount to the TS7740s through the virtual devices on the TS7720 that is still online. For TS7740 service outages, volumes not in the TS7720 cache would be accessed from the remaining remote TS7740. Shorter distances between sites make this remote access less of a performance impact. No pre-removal operation would be required for this configuration.

Disaster Recovery Considerations: In the case of a real disaster at the production site, the backup TS7740s will contain all of the data needed for the DR hosts.

5.6 Four-Cluster Grid - Production TS7720/TS7720/TS7720 and Backup TS7740

The production site or sites have virtual devices online to three TS7720s (clusters 0, 1 and 2). A remote site has a shared TS7740 (cluster 3) providing backup of the data written to the TS7720s (Figure 11).

Figure 11 - Four-Cluster Grid - TS7720/TS7720/TS7720 and Backup TS7740

With this configuration the Copy Consistency Points (CCPs) can be set up such that two copies of a volume exist, one in the mounting TS7720 and the second in the shared TS7740. This creates a very large effective cache, thus increasing the cache hit ratio. In a JES2 environment, Device Allocation Assist will direct private mounts to the best TS7720. In a JES3 environment, Retain Copy Mode can be used to keep just one copy among the TS7720s.

While one of the TS7720s is down, the other TS7720s would be used to access the data of the missing TS7720 from the TS7740. Retain Copy Mode can be used to limit the number of copies in the grid during the outage. Copy Export could be used from the TS7740 if offsite protection of the data is required. If all the clusters are within metro distance with FICON extenders, the devices on the TS7740 could be varied on for those cases where a TS7720 will be down.

Retain Copy Mode Considerations: Retain Copy Mode should be set for this configuration when three or fewer copies of a logical volume are made in the grid. During periods where one of the clusters is not available, extra copies of existing logical volumes are not created.

Retain Copy Mode will also help keep extra copies of a logical volume from being made in the grid in a JES3 system where Device Allocation Assist is not available.

CCP Considerations: For two copies of a logical volume use CCPs of DNND/NDND/NNDD for the TS7720 clusters. Also set the Prefer Local Cache for Fast-Ready Mounts override. This provides the largest effective cache. If the total amount of stored data stays under the combined total of the TS7720s' caches, then remote mounts would only occur when a TS7720 was down. Use a CCP of NNND if a copy is not needed at the production site.

Cluster Family Considerations: Put the three TS7720s into the same family.

Service Outage Considerations: For the service outage of a TS7720, all data would be accessible via a remote mount to the TS7740 through the remaining TS7720s. Pre-removal of volumes from the TS7720 caches will be required for TS7740 service outages. Also, data no longer residing in the TS7720s will not be available during the TS7740 service outage.

Disaster Recovery Considerations: In the case of a real disaster at the production site(s), the backup TS7740 becomes immediately available to hosts brought up at the backup site. All of the data that was copied to the TS7740 would be available. A new TS7720 could quickly be added to the backup site for additional mount point devices. Also, Copy Export can be used on the TS7740 to create another backup option.

END OF DOCUMENT


More information

Mainframe Virtual Tape: Improve Operational Efficiencies and Mitigate Risk in the Data Center

Mainframe Virtual Tape: Improve Operational Efficiencies and Mitigate Risk in the Data Center Mainframe Virtual Tape: Improve Operational Efficiencies and Mitigate Risk in the Data Center Ralph Armstrong EMC Backup Recovery Systems August 11, 2011 Session # 10135 Agenda Mainframe Tape Use Cases

More information

IBM i Version 7.3. Systems management Disk management IBM

IBM i Version 7.3. Systems management Disk management IBM IBM i Version 7.3 Systems management Disk management IBM IBM i Version 7.3 Systems management Disk management IBM Note Before using this information and the product it supports, read the information in

More information

ZYNSTRA TECHNICAL BRIEFING NOTE

ZYNSTRA TECHNICAL BRIEFING NOTE ZYNSTRA TECHNICAL BRIEFING NOTE Backup What is Backup? Backup is a service that forms an integral part of each Cloud Managed Server. Its purpose is to regularly store an additional copy of your data and

More information

Dell EMC Unity Family

Dell EMC Unity Family Dell EMC Unity Family Version 4.4 Configuring and managing LUNs H16814 02 Copyright 2018 Dell Inc. or its subsidiaries. All rights reserved. Published June 2018 Dell believes the information in this publication

More information

Chapter 3 `How a Storage Policy Works

Chapter 3 `How a Storage Policy Works Chapter 3 `How a Storage Policy Works 32 - How a Storage Policy Works A Storage Policy defines the lifecycle management rules for all protected data. In its most basic form, a storage policy can be thought

More information

DISK LIBRARY FOR MAINFRAME (DLM)

DISK LIBRARY FOR MAINFRAME (DLM) DISK LIBRARY FOR MAINFRAME (DLM) Cloud Storage for Data Protection and Long-Term Retention ABSTRACT Disk Library for mainframe (DLm) is Dell EMC s industry leading virtual tape library for IBM z Systems

More information

Oracle StorageTek's VTCS DR Synchronization Feature

Oracle StorageTek's VTCS DR Synchronization Feature Oracle StorageTek's VTCS DR Synchronization Feature Irene Adler Oracle Corporation Thursday, August 9, 2012: 1:30pm-2:30pm Session Number 11984 Agenda 2 Tiered Storage Solutions with VSM s VTSS/VLE/Tape

More information

Universal Storage Consistency of DASD and Virtual Tape

Universal Storage Consistency of DASD and Virtual Tape Universal Storage Consistency of DASD and Virtual Tape Jim Erdahl U.S.Bank August, 14, 2013 Session Number 13848 AGENDA Context mainframe tape and DLm Motivation for DLm8000 DLm8000 implementation GDDR

More information

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM

IBM Tivoli Storage Manager for HP-UX Version Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM IBM Tivoli Storage Manager for HP-UX Version 7.1.4 Installation Guide IBM Note: Before you use this information and the product

More information

DASH COPY GUIDE. Published On: 11/19/2013 V10 Service Pack 4A Page 1 of 31

DASH COPY GUIDE. Published On: 11/19/2013 V10 Service Pack 4A Page 1 of 31 DASH COPY GUIDE Published On: 11/19/2013 V10 Service Pack 4A Page 1 of 31 DASH Copy Guide TABLE OF CONTENTS OVERVIEW GETTING STARTED ADVANCED BEST PRACTICES FAQ TROUBLESHOOTING DASH COPY PERFORMANCE TUNING

More information

IBM. Systems management Disk management. IBM i 7.1

IBM. Systems management Disk management. IBM i 7.1 IBM IBM i Systems management Disk management 7.1 IBM IBM i Systems management Disk management 7.1 Note Before using this information and the product it supports, read the information in Notices, on page

More information

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management IBM Spectrum Protect Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management Document version 1.4 Dominic Müller-Wicke IBM Spectrum Protect Development Nils Haustein EMEA Storage

More information

IBM TotalStorage Enterprise Storage Server Model 800

IBM TotalStorage Enterprise Storage Server Model 800 A high-performance resilient disk storage solution for systems across the enterprise IBM TotalStorage Enterprise Storage Server Model 800 e-business on demand The move to e-business on demand presents

More information

WHY DO I NEED FALCONSTOR OPTIMIZED BACKUP & DEDUPLICATION?

WHY DO I NEED FALCONSTOR OPTIMIZED BACKUP & DEDUPLICATION? WHAT IS FALCONSTOR? FAQS FalconStor Optimized Backup and Deduplication is the industry s market-leading virtual tape and LAN-based deduplication solution, unmatched in performance and scalability. With

More information

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe

Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Paradigm Shifts in How Tape is Viewed and Being Used on the Mainframe Ralph Armstrong EMC Corporation February 5, 2013 Session 13152 2 Conventional Outlook Mainframe Tape Use Cases BACKUP SPACE MGMT DATA

More information

DLm8000 Product Overview

DLm8000 Product Overview Whitepaper Abstract This white paper introduces EMC DLm8000, a member of the EMC Disk Library for mainframe family. The EMC DLm8000 is the EMC flagship mainframe VTL solution in terms of scalability and

More information

Veritas NetBackup Vault Administrator s Guide

Veritas NetBackup Vault Administrator s Guide Veritas NetBackup Vault Administrator s Guide UNIX, Windows, and Linux Release 6.5 12308354 Veritas NetBackup Vault Administrator s Guide Copyright 2001 2007 Symantec Corporation. All rights reserved.

More information

Data Deduplication Makes It Practical to Replicate Your Tape Data for Disaster Recovery

Data Deduplication Makes It Practical to Replicate Your Tape Data for Disaster Recovery Data Deduplication Makes It Practical to Replicate Your Tape Data for Disaster Recovery Scott James VP Global Alliances Luminex Software, Inc. Randy Fleenor Worldwide Data Protection Management IBM Corporation

More information

Veeam Endpoint Backup

Veeam Endpoint Backup Veeam Endpoint Backup Version 1.5 User Guide March, 2016 2016 Veeam Software. All rights reserved. All trademarks are the property of their respective owners. No part of this publication may be reproduced,

More information

Figure 1-1: Local Storage Status (cache).

Figure 1-1: Local Storage Status (cache). The cache is the local storage of the Nasuni Filer. When running the Nasuni Filer on a virtual platform, you can configure the size of the cache disk and the copy-on-write (COW) disk. On Nasuni hardware

More information

IBM IBM Storage Sales Combined V1.

IBM IBM Storage Sales Combined V1. IBM 000-200 IBM Storage Sales Combined V1 http://killexams.com/exam-detail/000-200 Which of the following is the entire list of RAID levels supported by the IBM System Storage DS5000? A. 1,3,5 and DP B.

More information

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide

Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Dell PowerVault MD3600f/MD3620f Remote Replication Functional Guide Page i THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT

More information

Session RMM Exploitation

Session RMM Exploitation Session 15549 RMM Exploitation Speakers Vickie Dault, IBM Thursday August 7, 2014 3:00 4:00 pm Insert Custom Session QR if Desired. Agenda Retentionmethods VRSEL EXPDT Assigning Retentionmethod and Limitations

More information

Chapter 7. GridStor Technology. Adding Data Paths. Data Paths for Global Deduplication. Data Path Properties

Chapter 7. GridStor Technology. Adding Data Paths. Data Paths for Global Deduplication. Data Path Properties Chapter 7 GridStor Technology GridStor technology provides the ability to configure multiple data paths to storage within a storage policy copy. Having multiple data paths enables the administrator to

More information

Exadata Implementation Strategy

Exadata Implementation Strategy Exadata Implementation Strategy BY UMAIR MANSOOB 1 Who Am I Work as Senior Principle Engineer for an Oracle Partner Oracle Certified Administrator from Oracle 7 12c Exadata Certified Implementation Specialist

More information

IBM Virtualization Engine TS7700 Series Best Practices. Usage with Linux on System z 1.0

IBM Virtualization Engine TS7700 Series Best Practices. Usage with Linux on System z 1.0 IBM Virtualization Engine TS7700 Series Best Practices Usage with Linux on System z 1.0 Erika Dawson brosch@us.ibm.com z/os Tape Software Development Page 1 of 11 1 Introduction... 3 1.1 Change History...

More information

Disaster Recovery Solutions for Oracle Database Standard Edition RAC. A Dbvisit White Paper By Anton Els

Disaster Recovery Solutions for Oracle Database Standard Edition RAC. A Dbvisit White Paper By Anton Els Disaster Recovery Solutions for Oracle Database Standard Edition RAC A Dbvisit White Paper By Anton Els Copyright 2017 Dbvisit Software Limited. All Rights Reserved V3, Oct 2017 Contents Executive Summary...

More information

DISK LIBRARY FOR MAINFRAME

DISK LIBRARY FOR MAINFRAME DISK LIBRARY FOR MAINFRAME Geographically Dispersed Disaster Restart Tape ABSTRACT Disk Library for mainframe is Dell EMC s industry leading virtual tape library for mainframes. Geographically Dispersed

More information

IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1

IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1 Jul 6, 2017 IBM TS7700 Series Operator Informational Messages White Paper Version 4.1.1 Dante Pichardo Tucson Tape Development Tucson, Arizona Introduction During normal and exception processing within

More information

Data Migration and Disaster Recovery: At Odds No More

Data Migration and Disaster Recovery: At Odds No More Data Migration and Disaster Recovery: At Odds No More Brett Quinn Don Pease EMC Corporation Session 8036 August 5, 2010 1 Mainframe Migrations Challenges Disruptive To applications To Disaster Recovery

More information

EMC VPLEX Geo with Quantum StorNext

EMC VPLEX Geo with Quantum StorNext White Paper Application Enabled Collaboration Abstract The EMC VPLEX Geo storage federation solution, together with Quantum StorNext file system, enables a global clustered File System solution where remote

More information

Chapter 11. SnapProtect Technology

Chapter 11. SnapProtect Technology Chapter 11 SnapProtect Technology Hardware based snapshot technology provides the ability to use optimized hardware and disk appliances to snap data on disk arrays providing quick recovery by reverting

More information

IBM. DFSMS Introduction. z/os. Version 2 Release 3 SC

IBM. DFSMS Introduction. z/os. Version 2 Release 3 SC z/os IBM DFSMS Introduction Version 2 Release 3 SC23-6851-30 Note Before using this information and the product it supports, read the information in Notices on page 91. This edition applies to Version

More information

IBM TS7700 Series Operator Informational Messages White Paper Version 2.0.1

IBM TS7700 Series Operator Informational Messages White Paper Version 2.0.1 Apr 13, 215 IBM TS77 Series Operator Informational Messages White Paper Version 2..1 Dante Pichardo Tucson Tape Development Tucson, Arizona Apr 13, 215 Introduction During normal and exception processing

More information

Backups and archives: What s the scoop?

Backups and archives: What s the scoop? E-Guide Backups and archives: What s the scoop? What s a backup and what s an archive? For starters, one of the differences worth noting is that a backup is always a copy while an archive should be original

More information

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group WHITE PAPER: BEST PRACTICES Sizing and Scalability Recommendations for Symantec Rev 2.2 Symantec Enterprise Security Solutions Group White Paper: Symantec Best Practices Contents Introduction... 4 The

More information

Exadata Implementation Strategy

Exadata Implementation Strategy BY UMAIR MANSOOB Who Am I Oracle Certified Administrator from Oracle 7 12c Exadata Certified Implementation Specialist since 2011 Oracle Database Performance Tuning Certified Expert Oracle Business Intelligence

More information

BUSINESS CONTINUITY: THE PROFIT SCENARIO

BUSINESS CONTINUITY: THE PROFIT SCENARIO WHITE PAPER BUSINESS CONTINUITY: THE PROFIT SCENARIO THE BENEFITS OF A COMPREHENSIVE BUSINESS CONTINUITY STRATEGY FOR INCREASED OPPORTUNITY Organizational data is the DNA of a business it makes your operation

More information

NetBackup 7.1 Best Practice

NetBackup 7.1 Best Practice NetBackup 7.1 Best Practice Using Storage Lifecycle Policies and Auto Image Replication This paper describes the best practices around using Storage Lifecycle Policies, including the Auto Image Replication

More information

Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup. April 2009

Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup. April 2009 Cybernetics Virtual Tape Libraries Media Migration Manager Streamlines Flow of D2D2T Backup April 2009 Cybernetics has been in the business of data protection for over thirty years. Our data storage and

More information

EMC DL3D Best Practices Planning

EMC DL3D Best Practices Planning Best Practices Planning Abstract This white paper is a compilation of specific configuration and best practices information for the EMC DL3D 4000 for its use in SAN environments as well as the use of its

More information

Collecting Hydra Statistics

Collecting Hydra Statistics Collecting Hydra Statistics Fabio Massimo Ottaviani EPV Technologies White paper 1 Overview The IBM Virtualization Engine TS7700, code named Hydra, is the new generation of tape virtualization solution

More information

Dell DR4000 Replication Overview

Dell DR4000 Replication Overview Dell DR4000 Replication Overview Contents Introduction... 1 Challenges with Data Disaster Recovery... 1 The Dell DR4000 Solution A Replication Overview... 2 Advantages of using DR4000 replication for disaster

More information

Introduction. How Does it Work with Autodesk Vault? What is Microsoft Data Protection Manager (DPM)? autodesk vault

Introduction. How Does it Work with Autodesk Vault? What is Microsoft Data Protection Manager (DPM)? autodesk vault Introduction What is Microsoft Data Protection Manager (DPM)? The Microsoft Data Protection Manager is a member of the Microsoft System Center family of management products. DPM provides continuous data

More information

A Thorough Introduction to 64-Bit Aggregates

A Thorough Introduction to 64-Bit Aggregates Technical Report A Thorough Introduction to 64-Bit Aggregates Shree Reddy, NetApp September 2011 TR-3786 CREATING AND MANAGING LARGER-SIZED AGGREGATES The NetApp Data ONTAP 8.0 operating system operating

More information

DELL EMC UNITY: DATA REDUCTION

DELL EMC UNITY: DATA REDUCTION DELL EMC UNITY: DATA REDUCTION Overview ABSTRACT This white paper is an introduction to the Dell EMC Unity Data Reduction feature. It provides an overview of the feature, methods for managing data reduction,

More information

A Thorough Introduction to 64-Bit Aggregates

A Thorough Introduction to 64-Bit Aggregates TECHNICAL REPORT A Thorough Introduction to 64-Bit egates Uday Boppana, NetApp March 2010 TR-3786 CREATING AND MANAGING LARGER-SIZED AGGREGATES NetApp Data ONTAP 8.0 7-Mode supports a new aggregate type

More information

Insights into TSM/HSM for UNIX and Windows

Insights into TSM/HSM for UNIX and Windows IBM Software Group Insights into TSM/HSM for UNIX and Windows Oxford University TSM Symposium 2005 Jens-Peter Akelbein (akelbein@de.ibm.com) IBM Tivoli Storage SW Development 1 IBM Software Group Tivoli

More information

Oracle Secure Backup: Achieve 75 % Cost Savings with Your Tape Backup

Oracle Secure Backup: Achieve 75 % Cost Savings with Your Tape Backup 1 Oracle Secure Backup: Achieve 75 % Cost Savings with Your Tape Backup Donna Cooksey Oracle Principal Product Manager John Swallow Waters Corporation Sr. Infrastructure Architect Enterprise Software Solutions

More information

Most SQL Servers run on-premises. This one runs in the Cloud (too).

Most SQL Servers run on-premises. This one runs in the Cloud (too). Most SQL Servers run on-premises. This one runs in the Cloud (too). About me Murilo Miranda Lead Database Consultant @ Pythian http://www.sqlshack.com/author/murilo-miranda/ http://www.pythian.com/blog/author/murilo/

More information

Achieving Continuous Availability for Mainframe Tape

Achieving Continuous Availability for Mainframe Tape Achieving Continuous Availability for Mainframe Tape Dave Tolsma Systems Engineering Manager Luminex Software, Inc. Discussion Topics Needs in mainframe tape Past to present small to big? How Have Needs

More information

The Microsoft Large Mailbox Vision

The Microsoft Large Mailbox Vision WHITE PAPER The Microsoft Large Mailbox Vision Giving users large mailboxes without breaking your budget Introduction Giving your users the ability to store more email has many advantages. Large mailboxes

More information

SAS Data Libraries. Definition CHAPTER 26

SAS Data Libraries. Definition CHAPTER 26 385 CHAPTER 26 SAS Data Libraries Definition 385 Library Engines 387 Library Names 388 Physical Names and Logical Names (Librefs) 388 Assigning Librefs 388 Associating and Clearing Logical Names (Librefs)

More information

Solution Brief: Archiving with Harmonic Media Application Server and ProXplore

Solution Brief: Archiving with Harmonic Media Application Server and ProXplore Solution Brief: Archiving with Harmonic Media Application Server and ProXplore Summary Harmonic Media Application Server (MAS) provides management of content across the Harmonic server and storage infrastructure.

More information

Database Management. Understanding Failure Resiliency CHAPTER

Database Management. Understanding Failure Resiliency CHAPTER CHAPTER 14 This chapter contains information on RDU database management and maintenance. The RDU database is the Broadband Access Center (BAC) central database. The BAC RDU requires virtually no maintenance

More information

Database Management. Understanding Failure Resiliency CHAPTER

Database Management. Understanding Failure Resiliency CHAPTER CHAPTER 15 This chapter contains information on RDU database management and maintenance. The RDU database is the Cisco Broadband Access Center (Cisco BAC) central database. The Cisco BAC RDU requires virtually

More information

IBM. DFSMS Implementing System-Managed Storage. z/os. Version 2 Release 3 SC

IBM. DFSMS Implementing System-Managed Storage. z/os. Version 2 Release 3 SC z/os IBM DFSMS Implementing System-Managed Storage Version 2 Release 3 SC23-6849-30 Note Before using this information and the product it supports, read the information in Notices on page 267. This edition

More information

TSM Node Replication Deep Dive and Best Practices

TSM Node Replication Deep Dive and Best Practices TSM Node Replication Deep Dive and Best Practices Matt Anglin TSM Server Development Abstract This session will provide a detailed look at the node replication feature of TSM. It will provide an overview

More information

Overcoming Obstacles to Petabyte Archives

Overcoming Obstacles to Petabyte Archives Overcoming Obstacles to Petabyte Archives Mike Holland Grau Data Storage, Inc. 609 S. Taylor Ave., Unit E, Louisville CO 80027-3091 Phone: +1-303-664-0060 FAX: +1-303-664-1680 E-mail: Mike@GrauData.com

More information

SVC VOLUME MIGRATION

SVC VOLUME MIGRATION The information, tools and documentation ( Materials ) are being provided to IBM customers to assist them with customer installations. Such Materials are provided by IBM on an as-is basis. IBM makes no

More information

White paper ETERNUS CS800 Data Deduplication Background

White paper ETERNUS CS800 Data Deduplication Background White paper ETERNUS CS800 - Data Deduplication Background This paper describes the process of Data Deduplication inside of ETERNUS CS800 in detail. The target group consists of presales, administrators,

More information

C Q&As. IBM Tivoli Storage Manager V7.1 Implementation. Pass IBM C Exam with 100% Guarantee

C Q&As. IBM Tivoli Storage Manager V7.1 Implementation. Pass IBM C Exam with 100% Guarantee C2010-511 Q&As IBM Tivoli Storage Manager V7.1 Implementation Pass IBM C2010-511 Exam with 100% Guarantee Free Download Real Questions & Answers PDF and VCE file from: 100% Passing Guarantee 100% Money

More information

IBM Tivoli Storage Manager HSM for Windows Version 7.1. Administration Guide

IBM Tivoli Storage Manager HSM for Windows Version 7.1. Administration Guide IBM Tivoli Storage Manager HSM for Windows Version 7.1 Administration Guide IBM Tivoli Storage Manager HSM for Windows Version 7.1 Administration Guide Note: Before using this information and the product

More information