IBM TS7700 Series Grid Failover Scenarios Version 1.4


July 2016

IBM TS7700 Series Grid Failover Scenarios, Version 1.4

TS7700 Development Team: Katsuyoshi Katori, Kohichi Masuda, Takeshi Nohta
Tokyo Lab, Japan; System and Technology Lab

Copyright 2006, IBM Corporation

Table of Contents

Introduction
Summary of Changes
Test Configuration
Test Job Mix
TS7700 Grid Failure Mode Principles
Autonomic Ownership Takeover Manager
Part I: Failover scenarios for 2-Way clusters Grid configuration
  Failure of a Host Link to a TS7700
  Failure of all Host Links to a TS7700
  Failure of One Link Between TS7700s
  Failure of Both Links Between TS7700s w/Local Mounts Only
  Failure of Both Links Between TS7700s w/Remote Mounts
  Failure of Both Links Between TS7700s and Ownership Transfer
  Failure of one Host Link to the Remote TS7700
  Failure of all Host Links to the Remote TS7700
  Failure of the Local TS7700
  Failure of the Remote TS7700
  Failure of Both Links Between TS7700s w/Autonomic Ownership Takeover
  Failure of the Local TS7700 w/Autonomic Ownership Takeover for Read
  Failure of the Local TS7700 w/Autonomic Ownership Takeover for Write
  Failure of All Links between Sites w/Autonomic Ownership Takeover
  Failure of Gb Links and One w/Autonomic Ownership Takeover
Part II: Failover scenarios for 3-Way clusters Grid configuration
  Failure of a link between cluster0 and Grid network
  Failure of both links between cluster0 and Grid network
  Failure of a link between cluster0 and Grid network w/Remote Mounts
  Failure of both links between cluster1 and Grid network
  Failure of cluster0 in three clusters Grid

  Failure of a link between cluster2 and Grid network
  Failure of both links between cluster2 and Grid network
  Failure of the remote TS7700
  Failure of one local TS7700D in Hybrid 3-Way clusters Grid
  Failure of both links between cluster2 and Grid network in Hybrid 3-Way clusters Grid
  Failure of remote TS7700 in Hybrid 3-Way clusters Grid (1)
  Failure of remote TS7700 in Hybrid 3-Way clusters Grid (2)
  Failure of all grid links between local site and the Grid network
  Failure of both local TS7700s
  Failure of whole local site
Part III: Failover scenarios for 4-Way clusters Grid configuration
  Failure of one local TS7700 in four clusters Grid
  Failure of one local TS7700 w/partitioned workload
  Remote sites failure in Hybrid 4-Way clusters Grid w/partitioned workload
  Failure of local TS7700D in Hybrid 4-Way clusters Grid
  Failure of local TS7700 in Hybrid 4-Way clusters Grid
Part IV: Failover scenarios for 3-Way clusters Grid configuration w/Synchronous Mode Copy
  Introduction of Synchronous Mode Copy
  Failure of one local cluster in 3-Way clusters Grid with sync mode copy enabled (both sync copy clusters are local)
  Failure of one local cluster in 3-Way clusters Grid with sync mode copy enabled
  Failure of one local cluster in 3-Way clusters Grid with sync mode copy enabled (synchronous deferred option enabled)
  Failure of one local cluster in 3-Way clusters Grid with sync mode copy enabled using Dual Open On Private Mount option (1)
  Failure of one local cluster in 3-Way clusters Grid with sync mode copy enabled using Dual Open On Private Mount option (2)

Introduction

The IBM TS7700 Series is the latest in the line of tape virtualization products that has revolutionized the way mainframe customers utilize their tape resources. The capability to resume business operations in the event of a product or site failure is provided by the TS7700 Grid configuration. In a Grid configuration, up to six TS7700 clusters are interconnected and can replicate data created between any of the clusters in the configuration. As part of a total systems design, business continuity procedures must be developed to instruct I/T personnel in the actions that need to be taken in the event of a failure. Testing of those procedures should be performed either during initial installation of the system or at some interval. This paper was written in an effort to assist IBM specialists and customers in developing such testing plans, as well as to better understand how the TS7700 will respond to certain failure conditions. The paper documents a series of TS7700 Grid failover scenarios for z/OS which were run in an IBM laboratory environment. Single failures of all major components and communication links, and some multiple failures, are simulated. For each of the scenarios, the z/OS console messages that are typically presented are indicated (depending on how the FICON channels are configured between the host and the TS7700, some of the messages may not be generated). Obviously, not all possible failover situations could be covered. The focus of this paper is on those which demonstrate the critical hardware and microcode failover capabilities of the TS7700 Grid configuration. It is assumed throughout this white paper that the reader is familiar with using virtual tape systems attached to z/OS environments. Throughout this document, TS7700 is a generic term that refers to the latest model and its architecture, much like VTS was used to describe the prior generation. At the 8.40.x.x code level, the new model TS7760 is introduced.
The following model notations are used throughout this white paper:
- TS7740: V06 and V07
- TS7720: VEA and VEB with no tape library
- TS7720T: VEB with tape library
- TS7760: VEC with no tape library
- TS7760T: VEC with tape library
- TS7700: All models (TS7740, TS7720, TS7720T, TS7760 and TS7760T) are included.
- TS7700D: TS7700 Disk-only models (TS7720 and TS7760) are included.
- TS7700T: TS7700 Tape attach models (TS7720T and TS7760T) are included (TS7740 is NOT included in this white paper).

Summary of Changes

Version 1.0
o Initial version
Version 1.1
o Minor updates to wording in the introduction section
o Note that these tests are not part of the normal installation of the product
Version 1.2
o Added failover scenarios for three and four cluster grids
Version 1.3 (Oct 2013)
o Added scenarios for Sync Mode Copy
o Clarifications and updates throughout

Version 1.4 (July 2016)
o Fixed typos and removed "Virtualization Engine"
o Added TS7700 model descriptions (the new notation for each TS7700 model is used throughout this white paper)

Test Configuration

The hardware configurations used for the laboratory test scenarios are illustrated below. For the Autonomic Ownership Takeover scenarios, one or more TSSCs (IBM TS3000 System Consoles) attached to the TS7700s are required, as well as an Ethernet connection between TSSCs when more than one exists. Although all the components tested were local, the results of the tests should be similar, if not the same, for remote configurations. All FICON connections were direct, but again, the results should be valid for configurations utilizing FICON directors or channel extenders. Any supported level of z/OS software, and current levels of TS7700, 3953 and 3584 microcode, should all provide similar results. The test environment was z/OS with JES2. Failover behaviors within the TS7700 are the same for all supported host platforms, although host messages will differ and host recovery capabilities may not be supported in all environments. Test results should also be valid for configurations using the 3494 tape library versus the latest TS3500.

One of the architectural differences between the TS7700 Grid configuration and the prior VTS's PTP configuration is the elimination of the Virtual Tape Controllers (VTCs). The VTCs provided three major functions: 1) management of what and how to replicate data between the VTSs, 2) determination of which VTS has a valid copy of a logical volume, and 3) selection of a VTS to handle the I/O operations for a tape mount and the routing of host I/O operations to that VTS. With the TS7700 Grid configuration, the first two functions have been integrated into each TS7700 cluster's function.
For the third function, the attached host, in combination with TS7700 Device Allocation Assist and Scratch Allocation Assist, selects which TS7700 will handle the tape mount and the I/O operations associated with it. During the laboratory tests, all virtual devices in all TS7700 clusters were online to the test host as shown in the following figures. Scratch Allocation Assist was not enabled. For the two cluster configuration shown in Figure 1, all host jobs are routed to the virtual device addresses associated with cluster0. The host connections to the virtual device addresses in cluster1 are used in testing recovery for a failure of cluster0. In the three cluster Grid configuration shown in Figure 2, the host is connected in a balanced mode to the virtual device addresses in cluster0 and cluster1, while cluster2 is used for testing recovery when both cluster0 and cluster1 fail. In the four cluster configuration shown in Figure 3, the host has logical devices for cluster0 and cluster1 online, while cluster2 and cluster3 are used for recovery.

Figure 1: Hardware configuration for test scenarios for two clusters Grid. (The z/OS host attaches to both TS7700 clusters through the customer network; the clusters are interconnected through the Grid network.)

Figure 2: Hardware configuration for test scenarios for three clusters Grid.

Figure 3: Hardware configuration for test scenarios for four clusters Grid.

Note: The test outlines in this white paper are a suggestion of how a customer might test their recovery scenarios in the event of a failure in the TS7700 Grid or its related interconnections. They are not part of the installation of the TS7700, and any IBM service representative involvement is not included in the costs associated with the install.

Test Job Mix

The test jobs running during each of the failover scenarios consisted of 10 jobs which mounted single specific logical volumes for input (read), and 5 jobs which mounted single scratch logical volumes for output (write). The mix of work used in the tests was purely arbitrary, and any mix would be suitable.

TS7700 Grid Failure Mode Principles

A TS7700 Grid configuration provides the following availability and data access characteristics:

The virtual device addresses for each cluster are independent. This is different from the prior generation's PTP VTS, where the mount request was issued on a virtual device address defined for a virtual tape controller, and the virtual tape controller then decided which VTS to use for data access. Any mount to any device within any cluster provides access to all volumes contained within any cluster within the grid. Thus, devices simply need to be varied on to at least one cluster within a grid.

All logical volumes are accessible through any of the virtual device addresses on the TS7700s in the Grid configuration. The preference will be to access a copy of the volume in the tape volume cache associated with the TS7700 cluster the mount request is received on. If a recall is required to place the logical volume in the tape volume cache on that cluster, it will be done as part of the mount operation.
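The cache selection preferences above, continued in the next paragraph (local cache first, then a peer cache over the grid, then recall), can be sketched as a small model. This is illustrative only; the function and field names are hypothetical and do not represent the TS7700 microcode.

```python
# Illustrative model of TS7700 mount-point tape volume cache (TVC)
# selection. All names are hypothetical, for explanation only.

def select_tvc(mount_cluster, clusters):
    """Pick which cluster's TVC serves the mount.

    `clusters` maps a cluster name to a dict with:
      'in_cache':  volume already resident in that cluster's TVC
      'on_tape':   a consistent copy exists on back-end tape
      'available': cluster is reachable from the mount point
    """
    local = clusters[mount_cluster]
    # Preference 1: copy already in the mount-point cluster's cache.
    if local['available'] and local['in_cache']:
        return mount_cluster, 'local cache'
    # If a peer already holds the volume in cache, the grid may use the
    # peer copy rather than waiting for a local recall to complete.
    for name, c in clusters.items():
        if name != mount_cluster and c['available'] and c['in_cache']:
            return name, 'remote cache via grid'
    # Otherwise recall into the mount-point cluster's cache from tape.
    if local['available'] and local['on_tape']:
        return mount_cluster, 'local recall'
    # Last resort: recall at a peer that holds a consistent copy.
    for name, c in clusters.items():
        if name != mount_cluster and c['available'] and c['on_tape']:
            return name, 'remote recall via grid'
    raise RuntimeError('no accessible copy of the volume')
```

The ordering mirrors the text: the mount-point cluster's cache is preferred, and the grid links act as the data path whenever a peer cluster's cache is used.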
If a copy of the logical volume is not available at the mount point TS7700 (either because it does not have a copy, or the copy it does have is inaccessible due to an error), and a copy is available at another TS7700 in the Grid, the volume is accessed through the tape volume cache at the TS7700 that has the available copy. The TCP/IP Grid network infrastructure is essentially used as a channel extender, but without the FICON protocol overhead, and it also accesses data in compressed form. If a recall is required to place the logical volume in the tape volume cache on the alternate cluster, it will be done as part of the mount operation. If a recall is required to place the volume in the cache of the cluster the mount request was received on, and a peer cluster already contains a copy in cache, the TS7700 may use the Grid to access the peer version rather than waiting for a recall to complete.

Whether a copy is available at another TS7700 cluster depends on the copy consistency point that had been assigned to the logical volume when it was written. The copy consistency point is set through the management class storage construct. It specifies if and when a copy of the data is made between the TS7700s in the Grid configuration. There are four copy consistency policies that can be assigned:

Synchronous Mode Copy Consistency Point: As data arrives off the FICON channel, it is compressed and then simultaneously duplexed to two TS7700 clusters. Memory buffering is used in order to enable this consistency policy to operate at long distances with very attractive performance. Applications naturally harden data to tape by issuing SYNCH commands at critical points throughout a job; the TS7700 uses this SYNCH operation to flush any buffered content and harden all data up to that point on tape at both locations. This provides a zero recovery point objective at sync point granularity, which is critical for applications such as DFSMShsm or OAM Object Support.
In the event no SYNCH operations occur, one copy may lag by a few megabytes and will be synchronized implicitly during tape close processing. Any two locations may be

configured as the consistency points, and the local mount point cluster is not required to be one of the two. Additional copies can be made at alternate clusters using the remaining copy policies.

Rewind Unload Copy Consistency Point: If a data consistency point of RUN is specified, the data created on any TS7700 is copied to the one or more specified TS7700s as part of successful rewind unload command processing, meaning that for completed jobs, a copy of the volume will exist on all TS7700s configured as Synchronous and Rewind Unload. Access to data written by completed jobs (successful rewind/unload) prior to the failure is maintained through the other TS7700 cluster.

Deferred Copy Consistency Point: If a data consistency point of Deferred is specified, the data created on any TS7700 is copied to one or more other TS7700s after successful rewind unload command processing. Access to the data through the other TS7700 cluster is dependent on when the copy completes and whether another cluster containing a copy is accessible. Because there will be some delay in performing the copy, access may or may not be available when a failure occurs.

No Copy Copy Consistency Point: If a data consistency point of No Copy is specified, the data created on any TS7700 is not copied to the specified TS7700s. If these No Copy TS7700s are the only TS7700 clusters available after an outage, data would be inaccessible until the peer TS7700 cluster or clusters containing copies are restored.

The volume removal policy was introduced at the release 1.6 microcode level for hybrid Grid configurations. Beginning with release 1.7, it is available in any Grid configuration which contains at least one TS7700D cluster. Since the TS7700 Disk-Only solution has a maximum storage capacity that is the size of its tape volume cache, once the cache fills, this policy allows logical volumes to be automatically removed from cache while a copy is retained within one or more peer clusters in the Grid.
When the auto removal starts, all volumes in the fast-ready (scratch) category are removed first, since these volumes are intended to hold temporary data. This mechanism can then remove old volumes in a private category from the cache to meet a pre-defined cache usage threshold, as long as a copy of the volume is retained on one of the peer clusters. A TS7700 cluster failure could affect the availability of old volumes if the cluster which removed the volume is the only one remaining.

The TS7700 Grid architecture allows equal access to any volume within a grid from any cluster within the grid. The shared access of a particular volume is achieved through a dynamic volume ownership protocol. At any point in time a logical volume is owned by a cluster. The owning cluster has control over access to the volume and over changes to the attributes associated with the volume (such as category or constructs). The cluster that has ownership of a logical volume can change dynamically based on which cluster in the Grid configuration is requesting a mount of the volume. When a mount request is received on a virtual device address, the TS7700 cluster for that virtual device must have ownership of the volume to be mounted, or must obtain the ownership from the cluster that currently owns it. If the TS7700 clusters in a Grid configuration and the communication paths between them are operational, the change of ownership and the processing of logical volume related commands are transparent with regard to the operation of the TS7700. However, if a TS7700 cluster that owns a volume is unable to respond to requests from other clusters, the operation against that volume will fail unless some additional direction is given. In other words, clusters will not automatically assume or take over ownership of a logical volume without being directed. This additional action is required to prevent invalid ownership acquisitions due to network-only failures where both clusters are still operational.
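The cache removal order described earlier (scratch volumes first, then the oldest private volumes that have a retained peer copy, until usage drops below the threshold) can be sketched as follows. This is an illustrative model only; the field names and structure are hypothetical, not the TS7700D implementation.

```python
# Illustrative sketch of the TS7700D automatic volume removal policy.
# Hypothetical data layout: each volume is a dict with 'name', 'size',
# 'category' ('scratch' or 'private'), 'age', and 'peer_copy' (bool).

def auto_remove(volumes, cache_used, threshold):
    """Return (names_removed, resulting_cache_used)."""
    removed = []
    # Pass 1: fast-ready (scratch) volumes hold temporary data, so
    # they are candidates first, in any order.
    scratch = [v for v in volumes if v['category'] == 'scratch']
    # Pass 2: private volumes, oldest first, and only those with a
    # copy retained on a peer cluster (data must survive removal).
    private = sorted((v for v in volumes
                      if v['category'] == 'private' and v['peer_copy']),
                     key=lambda v: v['age'], reverse=True)
    for v in scratch + private:
        if cache_used <= threshold:
            break  # usage is back under the pre-defined threshold
        cache_used -= v['size']
        removed.append(v['name'])
    return removed, cache_used
```

Note that a private volume without a peer copy is never removed in this model, which matches the text's requirement that a copy be retained on one of the peer clusters.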
When more than one cluster has ownership of a volume independently, the volume's data or attributes could be changed on each cluster. If a TS7700 cluster has failed or is known to be unavailable (for example,

it is being serviced), its ownership of logical volumes needs to be transferred to the other TS7700 cluster with one of the following modes, which can be set through the management interface.

Read-only Ownership Takeover: When Read-only Ownership Takeover (ROT) is enabled for a failed cluster, ownership of a volume is allowed to be taken from the failed TS7700 cluster when the volume is accessed by a host operation. Only read access to the volume is allowed through the other TS7700 clusters in the Grid. Once ownership for a volume has been taken in this mode, any operation attempting to modify data on that volume or change its attributes is failed. The mode for the failed cluster remains in place until a different mode is selected or the failed cluster has been restored. Any volumes accessed during the outage which were taken over in this mode are reconciled once the original owner returns and all clusters are made aware of the final owner. In the event a volume was accessed and modified during the outage by the original owner (network outage only), no error event occurs, given the temporary owner only had read access.

Write Ownership Takeover: When Write Ownership Takeover (WOT) is enabled for a failed cluster, ownership of a volume is allowed to be taken from the failed TS7700 cluster when the volume is accessed by a host operation. Full access is allowed through the other TS7700 clusters in the Grid. The mode for the failed cluster remains in place until a different mode is selected or the failed cluster has been restored. Any volumes accessed during the outage which were taken over in this mode are reconciled once the original owner returns and all clusters are made aware of the final owner and the latest properties and volume data. Replications are queued if data changed during the outage.
In the event a volume was accessed and modified during the outage (network outage only) by the original owner, and the temporary owner also modified the volume, the volume will be moved into an error state where manual intervention is required to choose the most valid version. Autonomic ownership takeover is designed to prevent such takeover enablement. Safety checks in manual enablement also prevent such a condition if the TS7700 and infrastructure believe only a network outage exists.

Service Ownership Takeover: When a TS7700 cluster is placed in service mode, the TS7700 Grid will automatically enable Write Ownership Takeover mode against the serviced cluster. Though the result is identical to WOT, it is given a unique name to differentiate why it was enabled. This mode is not explicitly enabled through the management interface, but is implicitly enabled by initiating the service preparation process.

Autonomic Ownership Takeover Manager

In addition to the manual setting of one of the ownership takeover modes, an optional automatic method (Autonomic Ownership Takeover Manager, or AOTM) is available when each of the TS7700s is attached to a TSSC. Whether this function is enabled and how it operates is configurable by an IBM SSR and by a customer through the management interface. If a TS7700 detects that a remote TS7700 has failed, a check is made through the TSSCs to determine whether the owning TS7700 is inoperable or just the communication paths to it are not functioning. When distance exists between the two communicating clusters, independent TSSCs which are local to the distant clusters are recommended. The TSSCs are then inter-connected through TCP/IP, which provides an alternate method of verifying whether remote clusters are inoperable or only a network outage exists. If the TSSC or TSSCs have determined that the owning TS7700 is inoperable, then it will enable either read or write ownership takeover, depending on what was set in the enablement options.
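The access rules of the takeover modes described above (ROT, WOT, and the implicit service mode) can be modeled as a simple check. This is an explanatory sketch with hypothetical names, not TS7700 code.

```python
# Illustrative access check for ownership takeover modes against a
# failed cluster. Mode names follow the text: ROT permits read-only
# takeover, WOT (and the implicit service mode) permits full access;
# with no mode enabled, operations against volumes owned by the failed
# cluster fail rather than risk dual ownership.

NONE, ROT, WOT, SERVICE = 'none', 'read-only', 'write', 'service'

def can_access(operation, owner_failed, takeover_mode):
    """operation is 'read' or 'write'; returns True if it may proceed
    through a surviving cluster."""
    if not owner_failed:
        return True   # normal ownership transfer is transparent
    if takeover_mode in (WOT, SERVICE):
        return True   # full access through surviving clusters
    if takeover_mode == ROT:
        # Modification attempts against a volume taken over in
        # read-only mode are failed.
        return operation == 'read'
    return False      # no takeover enabled: the operation fails
```

This also reflects why takeover is never automatic without direction: with `takeover_mode` left at `NONE`, nothing is taken over and a network-only outage cannot create two independent owners.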

AOTM enables an ownership takeover mode after a configurable grace period. Therefore, jobs can intermittently fail, with an option to retry, until AOTM enables the configured ROT or WOT takeover mode. The grace period is set to 20 minutes by default and can be lowered to a value of 10 minutes. The grace period is in place to allow temporary outages to heal before a takeover mode is enabled. The grace period starts when a TS7700 detects that a remote TS7700 has failed. The following OAM messages can be displayed up until the point when AOTM enables the configured ownership takeover mode:

CBR3758E Library Operations Degraded
CBR3785E Copy operations disabled in library
CBR3786E VTS operations degraded in library
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.
CBR3750I Message from library libname: G0009 Autonomic ownership takeover manager within library libname has determined that library libname is unavailable. The Read/Write ownership takeover mode has been enabled.
CBR3750I Message from library libname: G0010 Autonomic ownership takeover manager within library libname has determined that library libname is unavailable. The Read-Only ownership takeover mode has been enabled.

A failure of a TS7700 cluster will cause the jobs using its virtual device addresses to abend. In order to re-run the jobs, host connectivity to the virtual device addresses in alternate TS7700 clusters must be enabled (if not already) and an appropriate ownership takeover mode may need to be selected. Scratch allocations can generally continue, given that they favor volumes whose ownership is accessible, but private mounts for read or modification may fail when the volume was owned by the downed cluster.
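The AOTM grace-period timeline described above can be sketched as a small state function. The 20-minute default and 10-minute minimum come from the text; everything else (names, the boolean peer check standing in for the TSSC verification) is a hypothetical simplification.

```python
# Sketch of AOTM grace-period behavior. During the grace period, jobs
# needing volumes owned by the failed cluster fail with a retry
# option; after it expires, AOTM enables the configured mode only if
# the TSSC check confirms the peer is actually down (not merely a
# network outage).

DEFAULT_GRACE_MIN = 20   # default grace period, per the text
MIN_GRACE_MIN = 10       # lowest configurable value, per the text

def aotm_state(minutes_since_failure, grace_min=DEFAULT_GRACE_MIN,
               peer_confirmed_down=True, configured_mode='read-only'):
    """Return the takeover mode in effect, or None if none is enabled."""
    grace_min = max(grace_min, MIN_GRACE_MIN)   # clamp to the minimum
    if minutes_since_failure < grace_min:
        return None          # still in grace period: jobs fail/retry
    if not peer_confirmed_down:
        return None          # network-only outage: no takeover
    return configured_mode   # AOTM enables the configured ROT or WOT
```

The clamp models why the grace period cannot heal faster than 10 minutes even if a smaller value were requested.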
As long as another TS7700 has a valid copy of a logical volume, the jobs which issue private mounts can be retried once an ownership takeover mode is manually or automatically enabled. Once scratch volumes owned by the remaining clusters are exhausted, WOT must be enabled against the downed cluster in order to utilize the additional scratch volumes which were owned by the downed cluster.

The following table format is used to document each scenario:

Scenario #  Scenario title
A description of the link or component failure(s) in this scenario, with a diagram of the configuration (for example, TS7720 cluster0 and cluster1, cluster2 and cluster3, and the customer network).
Actions required to test this scenario.
A list of the effects of the failure(s) on the TS7700 Grid capabilities and operations.
A list of possible host console messages with paraphrased text that may be posted during this scenario.

Actions required to recover from the failure(s) in this scenario.
Actions required to resume normal operations after a test of this scenario.

Part I: Failover scenarios for 2-Way clusters Grid configuration

Failure of a Host Link to a TS7700

Failover Scenario # 1: Failure of a host link to a TS7700
One host link to cluster0 fails. It may be that the intermediate FICON links, FICON directors, FICON channel extenders or remote channel extenders fail.

Test actions: Run jobs that access volumes in cluster0 only. Disconnect a cable somewhere between the host and cluster0.

Effects: All Grid components continue to operate. All channel activity on the failing host link is stopped. Host channel errors are reported, or error information becomes available from the intermediate equipment. If there are alternate paths from the host to either TS7700, host I/O operations may continue. Ownership takeover modes are not needed. All data remains available.

Host console messages:
IOS450E Not operational path taken offline
IOS050I Channel detected error
IOS051I Interface timeout detected

Recovery: Normal error recovery procedures and repair apply for the host channel and the intermediate equipment. Contact your service representative for repair of the failed connection.

Restore: Reconnect the host cable.

Failure of all Host Links to a TS7700

Failover Scenario # 2: Failure of all host links to a TS7700
All host links to cluster0 fail. Although only two are shown in the diagram, there can be up to 4 FICON paths per TS7700.

Test actions: Run jobs that access devices in cluster0 only. Disconnect all cables from the host to cluster0. Retry the failed jobs using the virtual device addresses associated with cluster1.

Effects: Virtual tape device addresses for cluster0 become unavailable; all other Grid components continue to operate. All channel activity on the failing host links is stopped. Host channel errors are reported, or error information becomes available from the intermediate equipment. Jobs which were using the virtual device addresses of cluster0 will fail. All data remains accessible through the virtual device addresses associated with cluster1. Ownership takeover modes are not needed.

Host console messages:
IOS451E Boxed, no operational paths
IOS050I Channel detected error
IOS000I (and related) Data check/equipment check/I/O error/SIM
IOS002A No paths available
IEF281I Device offline - boxed
IEF524I/IEF525E Pending offline
IEF696I I/O timeout
CBR4195I/CBR4196D (and related) I/O error in library (only for mount commands)
IEC215I (and related) Abend 714-0C - I/O error on close
IEC210I (and related) Abend 214-0C - I/O error on read

Recovery: If possible, vary the devices in cluster1 online and rerun the failed jobs using the virtual device addresses in cluster1. Normal error recovery procedures and repair apply for the host channels and the intermediate equipment. Contact your service representative for repair of the failed connections.

Restore: Reconnect the host cables. Vary cluster0 and its paths and virtual devices online from the host.

Failure of One Link Between TS7700s

Failover Scenario # 3: Failure of one link between TS7700s
One of the Gb Ethernet links between cluster0 and the Grid network fails.

Test actions: Run jobs that access volumes in cluster0 only. Disconnect one of the Gb Ethernet cables between the TS7700s.

Effects: All Grid components continue to operate through the remaining link. All host jobs continue to run. The Grid enters the Grid Links Degraded state and the VTS Operations Degraded state. Copies using the link at the time of the failure are redirected to the other remaining link. Performance of copy operations may be reduced. If the TS7700 is operating with a high workload with a copy consistency point of RUN, the Immediate Mode Copy Completions Deferred state may also be entered. Jobs using Synchronous Mode Copy may be slower, given that the overall bandwidth to the alternate TS7700 is reduced. Call home support is invoked.

Host console messages:
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library (if RUN copy policy)
CBR3796E Grid links degraded in library
CBR3750I Message from library libname: G0030 Library libname, degraded_port Grid Link is degraded. (degraded_port is Pri or Pri2 disconnected)

Recovery: Contact your service representative or local network personnel for repair of the failed connections.

Restore: Reconnect the Gb Ethernet cable.

Failure of Both Links Between TS7700s w/Local Mounts Only

Failover Scenario # 4: Failure of both links between TS7700s with local mounts only and no Synchronous Mode Copy
Both of the Gb Ethernet links between the TS7700s fail.

Test actions: Run jobs that access devices in cluster0 only. Disconnect both of the Gb Ethernet cables between cluster0 and the Grid network.

Effects: Jobs on virtual device addresses on cluster0 continue to run if accessing logical volumes which are owned by cluster0. All scratch mounts to cluster0 succeed so long as it owns one or more volumes in the scratch category at the time of the mount operation. Once the scratch volumes owned by cluster0 are exhausted, scratch mounts begin to fail. Jobs which access private volumes for read or mod that are owned by cluster1 fail with a retry request. Ownership takeover is not recommended, given that cluster1 is still operational. Given this configuration where production runs only to one cluster, ownership of private volumes is most likely already present within cluster0. All copy operations are stopped. The Grid enters the Grid Links Degraded state, the VTS Operations Degraded state and the Copy Operation Disabled state. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. Call home support is invoked.

Host console messages:
CBR4195I/CBR4196D (and related) I/O error in library
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3796E Grid links degraded in library
CBR3750I Message from library libname: G0030 Library libname, Pri, Pri2 Grid Link is degraded
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.
Recovery: Contact your service representative or local network personnel for repair of the failed connections.

Restore: Reconnect the Gb Ethernet cables.
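The mount behavior in this scenario, where scratch mounts on the isolated cluster0 succeed while it still owns scratch volumes but private mounts fail for volumes owned by the unreachable cluster1, can be modeled as a short sketch. Names and structure are hypothetical, for illustration only.

```python
# Illustrative model of mount admission on cluster0 while both grid
# links are down (scenario 4). No ownership takeover is enabled,
# because cluster1 is still operational.

def try_mount(kind, volume_owner=None, owned_scratch_count=0):
    """kind is 'scratch' or 'private'; returns True if the mount can
    succeed on the isolated cluster0."""
    if kind == 'scratch':
        # Succeeds so long as cluster0 owns at least one volume in the
        # scratch category; fails once those are exhausted.
        return owned_scratch_count > 0
    # Private mount: ownership cannot be obtained from cluster1 while
    # the links are down, so only cluster0-owned volumes are mountable
    # (even data resident in cluster0's cache is blocked if cluster1
    # owns the volume).
    return volume_owner == 'cluster0'
```

This is why the scenario notes that, with production running only to cluster0, most private volumes are likely already owned there and the outage is largely transparent until scratch volumes run out.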

Failure of Both Links Between TS7700s w/Remote Mounts

Failover Scenario # 6: Failure of both links between TS7700s with remote mounts and no Synchronous Mode Copy
Both of the Gb Ethernet links between the TS7700s fail.

Test actions: Create data only on cluster1, using a management class that specifies that cluster0 is not to have a copy. Run specific mount jobs to devices on cluster0 that access the data only present on cluster1. This results in the TVC associated with cluster1 being selected for the mount. Disconnect both of the Gb Ethernet cables between cluster0 and the Grid network.

Effects: Jobs on virtual device addresses on cluster0 that are using cluster1 as the TVC cluster will fail. Subsequent specific mount jobs that attempt to access, through cluster0, the data that only exists on cluster1 will fail. All scratch mounts to cluster0 will succeed so long as it owns one or more volumes in the scratch category at the time of the mount operation. Once the scratch volumes owned by cluster0 are exhausted, scratch mounts will begin to fail. Scratch mounts which use the same previously defined management class, which only creates content in cluster1, will fail. All copy operations are stopped. The Grid enters the Grid Links Degraded state, the VTS Operations Degraded state and the Copy Operation Disabled state. Call home support is invoked.

Host console messages:
IOS000I (and related) Data check/equipment check/I/O error/SIM
CBR4195I/CBR4196D (and related) I/O error in library
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3758E Library Operations Degraded
CBR3796E Grid links degraded in library
CBR3750I Message from library libname: G0030 Library libname, Pri, Pri2 Grid Link is degraded
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname.
Library libname may be unavailable or a communication issue may be present. IEC147I (and related) Abend CBRXLCS processing error Contact your service representative or local network personnel for repair of the failed connections. Reconnect Gb Ethernet cables. cluster1 Page 18 of 76
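The scratch-mount behavior described above (mounts keep succeeding until the isolated cluster has no owned scratch volumes left) can be sketched as a toy model. The class, method and volume names below are illustrative assumptions, not TS7700 internals:

```python
# Toy model of scratch-mount ownership during a Grid link outage.
# Names and structure are illustrative only, not actual TS7700 microcode.

class Cluster:
    def __init__(self, name, owned_scratch):
        self.name = name
        self.owned_scratch = list(owned_scratch)  # scratch volumes this cluster owns
        self.grid_link_up = True

    def scratch_mount(self):
        """A scratch mount succeeds while this cluster owns a scratch volume,
        or while the Grid link allows ownership transfer from the peer."""
        if self.owned_scratch:
            return self.owned_scratch.pop()       # use a locally owned volume
        if self.grid_link_up:
            return "volume borrowed from peer"    # ownership transfer possible
        raise RuntimeError("scratch mount failed: no owned scratch volumes "
                           "and ownership transfer unavailable")

cluster0 = Cluster("cluster0", ["V00001", "V00002"])
cluster0.grid_link_up = False                     # both Grid links disconnected

print(cluster0.scratch_mount())                   # succeeds while owned volumes remain
print(cluster0.scratch_mount())                   # last owned scratch volume
try:
    cluster0.scratch_mount()                      # ownership pool exhausted
except RuntimeError as e:
    print(e)
```

This mirrors why the scenario recommends monitoring the scratch pool on the surviving cluster: the failure is silent until the locally owned candidates run out.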

Failure of Both Links Between TS7700s and Ownership Transfer - Failover Scenario # 7

Failure: Both of the Gb Ethernet links between the TS7700s fail. Autonomic ownership takeover is not enabled for this test.

Test setup: Use virtual device addresses on cluster1 to access or create several specific volumes so that ownership of those volumes shifts to cluster1, if not already there. Disconnect both of the Gb Ethernet cables between cluster0 and the Grid network. Run specific mount jobs that attempt to access one or more of the volumes whose ownership was transferred to cluster1 through the virtual device addresses associated with cluster0.

Note: Do not place the Grid into write takeover mode when only the links have failed in a real configuration. That could allow a host attached to cluster1 to modify a volume that is also being modified by a host attached to cluster0. AOTM will attempt to prevent manual enablement when this condition is true, but not all network-only conditions can be detected by the solution. Verify that the cluster is in fact down before manually enabling takeover.

Expected results: Jobs subsequent to the failure using virtual device addresses on cluster0 that need to access volumes owned by cluster1 will fail (even if the data is local to cluster0). Specific mount jobs subsequent to the failure using virtual device addresses on cluster0 that target a volume which is consistent only on cluster1 will fail. All scratch mounts to cluster0 will succeed so long as it owns one or more volumes in the scratch category at the time of the mount operation and the mount specifies a management class that has a consistency point other than No Copy at cluster0. Once the scratch volumes owned by cluster0 are exhausted, scratch mounts will begin to fail. All copy operations are stopped.

The Grid enters the Grid Links Degraded state, the VTS Operations Degraded state and the Copy Operation Disabled state. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. If Synchronous Mode Copy is used, the Grid also enters the Synchronous-Deferred state for the next scratch mount or mod which occurs to a synchronous mode copy defined volume. If the fail on sync failure option is used, these jobs will fail. Call home support is invoked.

If ownership takeover is enabled against cluster1, operations will continue, but any chance of modification of the same volumes from cluster1 devices introduces risk. If ownership takeover must be enabled, it is recommended to enable only read ownership takeover (ROT) rather than write ownership takeover (WOT). If WOT is enabled, you must be confident that no host activity to the same volume ranges is occurring within cluster1. If an AOTM setup is configured (enabled or disabled), it will prevent such a manual enablement if it can detect that cluster1 is in fact still running.

Messages:
CBR4174I Cannot obtain ownership volume volser in library libname (Note: this message indicates that an operation was attempted that requires volume ownership and volume ownership could not be obtained).
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3730E One or more synchronous mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3758E Library Operations Degraded
CBR3796E Grid links degraded in library
CBR3750I Message from library libname: G0030 Library libname, Pri, Pri2 Grid Link is degraded
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.

Recovery: Contact your service representative or local network representative for repair of the failed connections. Do not place cluster0 in an ownership takeover mode unless a unique situation requires it. Reconnect the Gb Ethernet cables.
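The caution above, that AOTM blocks manual takeover enablement while it can still confirm the peer is alive, amounts to a simple guard. This sketch is an assumption-laden illustration (the function names and the reachability flag are made up; AOTM's real health check runs over the TS3000 System Console paths, not a boolean parameter):

```python
# Sketch of the AOTM guard described above: manual takeover is refused
# while the peer cluster can still be confirmed alive over an alternate
# path. Illustrative names only; not the AOTM implementation.

def peer_confirmed_alive(peer_reachable_via_tssc):
    # AOTM probes the peer through the TSSC network, so a Grid-link-only
    # failure can still report the peer as running.
    return peer_reachable_via_tssc

def enable_takeover(mode, peer_reachable_via_tssc):
    if peer_confirmed_alive(peer_reachable_via_tssc):
        # Links-only failure: the peer may still be modifying the same
        # volumes, so takeover risks two divergent copies of a volume.
        raise PermissionError(
            f"{mode} takeover refused: peer cluster appears to be running")
    return f"{mode} takeover enabled"

# Grid links down but peer alive via the TSSC path: enablement is refused.
try:
    enable_takeover("write", peer_reachable_via_tssc=True)
except PermissionError as e:
    print(e)

# Peer genuinely down: takeover may be enabled.
print(enable_takeover("read-only", peer_reachable_via_tssc=False))
```

As the scenario notes, not every network-only condition is detectable, which is why manual verification that the cluster is actually down remains part of the procedure.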

Failure of one Host Link to the Remote TS7700 - Failover Scenario # 8

Failure: One host link to cluster1 fails. The failure may instead be in the intermediate FICON directors, FICON channel extenders or remote channel extenders.

Test setup: Although a host is attached to cluster1, all operations use only the paths to cluster0. Disconnect one of the host links to cluster1.

Expected results: No I/O operations are affected. All Grid components continue to operate. Any host LPARs exclusively connected through the failed link will not receive z/OS console messages initiated by the TS7700 Grid.

Messages:
IOS001E Inoperative Path
IOS450E Not operational path taken offline
IOS050I Channel detected error

Recovery: Contact your service representative for repair of the failed connections. Reconnect the host cable.

Failure of all Host Links to the Remote TS7700 - Failover Scenario # 9

Failure: All host links to cluster1 fail.

Test setup: Although a host is attached to cluster1, all operations use only the paths to cluster0. Disconnect all cables from the host to cluster1. There can be up to four FICON ports per TS7700.

Expected results: All Grid components continue to operate. Any host LPARs exclusively connected through the failed links will not receive z/OS console messages initiated by the TS7700 Grid.

Messages:
IOS450E Not operational path taken offline
IOS050I Channel detected error
IOS002A No paths available

Recovery: Normal error recovery procedures and repair apply for the host channels and the intermediate equipment. Contact your service representative for repair of the failed connections. Reconnect the host cables. Vary cluster1 and its paths and virtual devices online from the host.

Failure of the Local TS7700 - Failover Scenario # 10

Failure: The local TS7700 (TS7700-0, cluster0) fails. Autonomic ownership takeover is not enabled for this test.

Test setup: Power off cluster0 through the management interface, or disconnect the FICON cables from the host to cluster0 and the Grid links between cluster0 and the Grid network. Run specific mount jobs which read volumes owned by cluster0 using the virtual device addresses associated with cluster1. These will fail because ownership of the volumes cannot be transferred. Enable read-only ownership takeover mode against cluster0 through the management interface on cluster1. Run the specific mount jobs which read data in volumes owned by cluster0 again. These jobs will now run successfully because cluster1 takes over the volumes from cluster0. Run specific mount jobs that attempt to write data to volumes that cluster1 took over. These jobs will fail (an IOS000I message will indicate write protected) because logical volumes taken over under read-only ownership takeover mode are restricted to read access only. Enable write ownership takeover mode against cluster0 on cluster1. All jobs will now run successfully.

Expected results: Virtual tape device addresses for cluster0 become unavailable. All channel activity on the failing host links is stopped. Host channel errors are reported, or error information becomes available from the intermediate equipment. Jobs which were using the virtual device addresses of cluster0 will fail. Scratch mounts that target volumes owned by the failed cluster will also fail until write ownership takeover mode is enabled. This only occurs once all scratch candidates on cluster1 are exhausted, since scratch mounts that target pre-owned volumes will succeed. The Grid enters the Copy Operation Disabled and VTS Operations Degraded states. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. If Synchronous Mode Copy is used, the Grid also enters the Synchronous-Deferred state for the next scratch mount or mod which occurs to a synchronous mode copy defined volume. If the fail on sync failure option is used, these jobs will fail.

All previously copied data can be made accessible through cluster1 through one of the takeover modes. If a takeover mode for cluster0 is not enabled, data will likely not be accessible through cluster1, even if cluster1 has a valid copy, because cluster0 likely owned all of the volumes; any volumes previously owned by cluster1 remain accessible.

Messages:
IOS450E Not operational path taken offline
IOS001E/IOS4510E Boxed, No operational paths
IOS050I Channel detected error
IOS051I Interface timeout detected
IOS000I (and related) Data check/equipment check/I/O error/SIM/write protected
IOS002A No paths available
IEF281I Device offline - boxed
IOS1000I Write protected
CBR4174I Cannot obtain ownership volume volser in library libname (Note: this message indicates that an operation was attempted that requires volume ownership and volume ownership could not be obtained).
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3730E One or more synchronous mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3750I Message from library libname: G0007 A user at library libname has enabled Read/Write takeover against library libname
CBR3750I Message from library libname: G0008 A user at library libname has enabled Read-Only takeover against library libname
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.
IEC147I (and related) Abend - an ATLDS tape volume was opened for output processing and it is file protected.

Recovery: Enable write or read-only ownership takeover mode through the management interface. Write ownership takeover mode must be enabled if scratch mounts are failing or private mounts with mod are required. Rerun the failed jobs using the virtual device addresses associated with cluster1. Normal error recovery procedures and repair apply for the host channels and the intermediate equipment. Contact your service representative for repair of the failed TS7700. Power on cluster0 or reconnect the host and Gb Ethernet cables. Vary cluster0 and its paths and virtual devices online from the host.
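The progression in this scenario (mounts fail, then reads succeed under read-only takeover, then writes succeed under write takeover) follows a simple access rule. The sketch below illustrates that rule with made-up names; it is not the TS7700 microcode logic:

```python
# Toy access check for a volume owned by a failed peer cluster, following
# the scenario above: no takeover -> mount fails; read-only takeover (ROT)
# -> reads allowed, writes rejected; write takeover (WOT) -> all allowed.

NO_TAKEOVER, READ_ONLY_TAKEOVER, WRITE_TAKEOVER = "none", "rot", "wot"

def check_access(owner_available, takeover_mode, want_write):
    if owner_available:
        return "ok"                       # normal ownership transfer works
    if takeover_mode == NO_TAKEOVER:
        return "fail: cannot obtain ownership (CBR4174I)"
    if takeover_mode == READ_ONLY_TAKEOVER and want_write:
        return "fail: volume is write protected (IOS000I)"
    return "ok"                           # ROT read, or any access under WOT

# cluster0 is down; cluster1 mounts a volume that cluster0 owns:
print(check_access(False, NO_TAKEOVER, want_write=False))   # mount fails
print(check_access(False, READ_ONLY_TAKEOVER, False))       # read succeeds
print(check_access(False, READ_ONLY_TAKEOVER, True))        # write rejected
print(check_access(False, WRITE_TAKEOVER, True))            # write succeeds
```

This is also why the recovery text insists on write takeover for scratch mounts and private mounts with mod: both are write accesses against volumes the failed cluster owns.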

Failure of the Remote TS7700 - Failover Scenario # 11

Failure: The remote TS7700 (TS7700-1, cluster1) fails.

Test setup: Power off cluster1 through the management interface, or disconnect the FICON cables from the host to cluster1 and the Grid links between cluster1 and the Grid network.

Expected results: All specific mount jobs continue to run. All scratch mounts to cluster0 will succeed so long as it owns one or more volumes in the scratch category at the time of the mount operation. Once the scratch volumes owned by cluster0 are exhausted, scratch mounts will begin to fail. All copy operations are stopped. The Grid enters the Copy Operation Disabled and VTS Operations Degraded states. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. If Synchronous Mode Copy is used, the Grid also enters the Synchronous-Deferred state for the next scratch mount or mod which occurs to a synchronous mode copy defined volume. If the fail on sync failure option is used, these jobs will fail. Call home support is invoked.

Messages:
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3730E One or more synchronous mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.

Recovery: Contact your service representative for repair of the failed TS7700. Power on the TS7700 or reconnect the host and Gb Ethernet cables. Vary cluster1 and its paths and virtual devices online from the host.
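Several scenarios repeat the same rule for copies when the peer is unavailable: a RUN (immediate) consistency point puts the Grid into the Immediate Mode Copy Completions Deferred state, while Synchronous Mode Copy either goes Synchronous-Deferred or fails the job when the fail-on-sync-failure option is set. A compact sketch of that rule, with illustrative state names rather than the exact TS7700 strings:

```python
# Sketch of which degraded state a copy policy produces when the peer
# cluster is unavailable, per the scenarios above. Names are illustrative.

def copy_state_on_peer_outage(consistency_point, fail_on_sync_failure=False):
    if consistency_point == "sync":
        # Synchronous Mode Copy: the job fails if the strict option is set,
        # otherwise the copy is deferred (Synchronous-Deferred state).
        return "job fails" if fail_on_sync_failure else "synchronous-deferred"
    if consistency_point == "run":
        # RUN (immediate) copies fall back to deferred completion.
        return "immediate-copy-completions-deferred"
    if consistency_point == "deferred":
        return "copy queued until peer returns"
    return "no copy expected"             # e.g. the No Copy consistency point

print(copy_state_on_peer_outage("run"))
print(copy_state_on_peer_outage("sync"))
print(copy_state_on_peer_outage("sync", fail_on_sync_failure=True))
```

The common thread is that only the strict synchronous option surfaces the outage as a job failure; every other policy degrades to deferred copies while host I/O continues.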

Failure of Both Links Between TS7700s W/Autonomic Ownership Takeover - Failover Scenario # 12

Failure: Both of the Gb Ethernet links between the TS7700s fail. Autonomic ownership takeover is enabled for this test.

Test setup: Use virtual device addresses on cluster1 to access several specific volumes so that ownership of those volumes shifts to cluster1. Disconnect both of the Gb Ethernet cables between cluster0 and the Grid network. Run specific mount jobs which attempt to access one or more of the volumes whose ownership was transferred to cluster1 through the virtual device addresses associated with cluster0.

Note: The results will be the same as for scenario 6, because AOTM will determine that cluster0 is still operable and that takeover is not allowed.

Expected results: Specific mount jobs subsequent to the failure using virtual device addresses on cluster0 that need to access volumes owned by cluster1 will fail (even if the data is local to cluster0). Jobs using virtual device addresses on cluster1 that need to access volumes owned by cluster0 will also fail. All scratch mounts to cluster0 will succeed so long as it owns one or more volumes in the scratch category at the time of the mount operation. Once the scratch volumes owned by cluster0 are exhausted, scratch mounts will begin to fail. All copy operations are stopped. The Grid enters the Grid Links Degraded state, the VTS Operations Degraded state and the Copy Operation Disabled state. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. If Synchronous Mode Copy is used, the Grid also enters the Synchronous-Deferred state for the next scratch mount or mod which occurs to a synchronous mode copy defined volume. If the fail on sync failure option is used, these jobs will fail. Call home support is invoked.

Messages:
IOS000I (and related) Data check/equipment check/I/O error/SIM
CBR4174I Cannot obtain ownership volume volser in library libname (Note: this message indicates that an operation was attempted that requires volume ownership and volume ownership could not be obtained).
CBR4195I/CBR4196D (and related) I/O error in library
CBR3786E VTS operations degraded in library
CBR3787E Immediate mode copy operations deferred in library
CBR3730E One or more synchronous mode copy operations deferred in library
CBR3785E Copy operations disabled in library
CBR3758E Library Operations Degraded
CBR3796E Grid links degraded in library
CBR3750I Message from library libname: G0030 Library libname, Pri, Pri2 Grid Link is degraded.
CBR3750I Message from library libname: G0013 Library libname has experienced an unexpected outage with its peer library libname. Library libname may be unavailable or a communication issue may be present.
IEC147I (and related) Abend CBRXLCS processing error

Recovery: Contact your service representative for repair of the failed connections. Reconnect the Gb Ethernet cables.

Failure of the Local TS7700 W/Autonomic Ownership Takeover for Read - Failover Scenario # 13

Failure: The local TS7700 (TS7700-0, cluster0) fails. Autonomic ownership takeover for read is enabled for this test.

Test setup: Power off cluster0 through the management interface, or disconnect the FICON cables from the host to cluster0 and the Grid links between cluster0 and the Grid network. Run specific mount jobs which read data using the virtual device addresses associated with cluster1. These jobs will run successfully because ownership of the volumes is automatically taken over by cluster1. Run specific mount jobs that attempt to write data to the volumes that cluster1 took over from cluster0. These jobs will fail, with an IOS message indicating the volume is write protected, because volumes taken over under read-only takeover mode are restricted to read access only. Manually enable write ownership takeover mode for cluster0. Specific mount jobs with writes will now succeed.

Expected results: Virtual tape device addresses for cluster0 become unavailable. All channel activity on the failing host links is stopped. Host channel errors are reported, or error information becomes available from the intermediate equipment. Jobs which were using the virtual device addresses of cluster0 will fail. Scratch mounts that target volumes owned by the failed cluster will also fail until write ownership takeover mode is enabled. Scratch mounts that target pre-owned volumes will succeed. The Grid enters the Copy Operation Disabled and VTS Operations Degraded states. If the RUN copy consistency point is being used, the Grid also enters the Immediate Mode Copy Completions Deferred state. If Synchronous Mode Copy is used, the Grid also enters the Synchronous-Deferred state for the next scratch mount or mod which occurs to a synchronous mode copy defined volume. If the fail on sync failure option is used, these jobs will fail.

All copied data can be read without operator action because an automatic transition to read-only ownership takeover mode is made. An operator must place cluster0 into write ownership takeover mode to allow volumes owned by cluster0 to be written to.

Messages:
IOS450E Not operational path taken offline
IOS001E/IOS4510E Boxed, No operational paths
IOS050I Channel detected error
IOS051I Interface timeout detected
IOS000I (and related) Data check/equipment check/I/O error/SIM/write protected
IOS002A No paths available
IEF281I Device offline - boxed
IOS1000I Write protected


More information

Achieving Continuous Availability for Mainframe Tape

Achieving Continuous Availability for Mainframe Tape Achieving Continuous Availability for Mainframe Tape Dave Tolsma Systems Engineering Manager Luminex Software, Inc. Discussion Topics Needs in mainframe tape Past to present small to big? How Have Needs

More information

Introduction and Planning Guide

Introduction and Planning Guide IBM Virtualization Engine TS7700 Series Introduction and Planning Guide IBM Virtualization Engine TS7700, TS7700 Cache Controller, and TS7700 Cache Drawer Printed in U.S.A. GA32-0567-11 Note! Before using

More information

Chapter 2 CommVault Data Management Concepts

Chapter 2 CommVault Data Management Concepts Chapter 2 CommVault Data Management Concepts 10 - CommVault Data Management Concepts The Simpana product suite offers a wide range of features and options to provide great flexibility in configuring and

More information

Hitachi Content Platform Failover Processing Using Storage Adapter for Symantec Enterprise Vault

Hitachi Content Platform Failover Processing Using Storage Adapter for Symantec Enterprise Vault Hitachi Content Platform Failover Processing Using Storage Adapter for Symantec Enterprise Vault Best Practices Guide By Dave Brandman October 24, 2012 Feedback Hitachi Data Systems welcomes your feedback.

More information

IBM Virtualization Engine TS7700 supports disk-based encryption

IBM Virtualization Engine TS7700 supports disk-based encryption IBM United States Hardware Announcement 112-160, dated October 3, 2012 IBM Virtualization Engine TS7700 supports disk-based encryption Table of contents 1 Overview 5 Product number 2 Key prerequisites

More information

Distributed System Chapter 16 Issues in ch 17, ch 18

Distributed System Chapter 16 Issues in ch 17, ch 18 Distributed System Chapter 16 Issues in ch 17, ch 18 1 Chapter 16: Distributed System Structures! Motivation! Types of Network-Based Operating Systems! Network Structure! Network Topology! Communication

More information

EMC Disk Library Automated Tape Caching Feature

EMC Disk Library Automated Tape Caching Feature EMC Disk Library Automated Tape Caching Feature A Detailed Review Abstract This white paper details the EMC Disk Library configuration and best practices when using the EMC Disk Library Automated Tape

More information

IBM MQ Appliance HA and DR Performance Report Version July 2016

IBM MQ Appliance HA and DR Performance Report Version July 2016 IBM MQ Appliance HA and DR Performance Report Version 2. - July 216 Sam Massey IBM MQ Performance IBM UK Laboratories Hursley Park Winchester Hampshire 1 Notices Please take Note! Before using this report,

More information

Availability Implementing high availability

Availability Implementing high availability System i Availability Implementing high availability Version 6 Release 1 System i Availability Implementing high availability Version 6 Release 1 Note Before using this information and the product it

More information

IBM MQ Appliance HA and DR Performance Report Model: M2001 Version 3.0 September 2018

IBM MQ Appliance HA and DR Performance Report Model: M2001 Version 3.0 September 2018 IBM MQ Appliance HA and DR Performance Report Model: M2001 Version 3.0 September 2018 Sam Massey IBM MQ Performance IBM UK Laboratories Hursley Park Winchester Hampshire 1 Notices Please take Note! Before

More information

Improve Disaster Recovery and Lower Costs with Virtual Tape Replication

Improve Disaster Recovery and Lower Costs with Virtual Tape Replication Improve Disaster Recovery and Lower Costs with Virtual Tape Replication Art Tolsma CEO LUMINEX Greg Saccomanno Systems Programmer Wells Fargo Dealer Services Scott James Director, Business Development

More information

Documentation Accessibility. Access to Oracle Support

Documentation Accessibility. Access to Oracle Support Oracle NoSQL Database Availability and Failover Release 18.3 E88250-04 October 2018 Documentation Accessibility For information about Oracle's commitment to accessibility, visit the Oracle Accessibility

More information

Chapter 18 Distributed Systems and Web Services

Chapter 18 Distributed Systems and Web Services Chapter 18 Distributed Systems and Web Services Outline 18.1 Introduction 18.2 Distributed File Systems 18.2.1 Distributed File System Concepts 18.2.2 Network File System (NFS) 18.2.3 Andrew File System

More information

VCS-276.exam. Number: VCS-276 Passing Score: 800 Time Limit: 120 min File Version: VCS-276

VCS-276.exam. Number: VCS-276 Passing Score: 800 Time Limit: 120 min File Version: VCS-276 VCS-276.exam Number: VCS-276 Passing Score: 800 Time Limit: 120 min File Version: 1.0 VCS-276 Administration of Veritas NetBackup 8.0 Version 1.0 Exam A QUESTION 1 A NetBackup policy is configured to back

More information

StorageTek ACSLS Manager Software

StorageTek ACSLS Manager Software StorageTek ACSLS Manager Software Management of distributed tape libraries is both time-consuming and costly involving multiple libraries, multiple backup applications, multiple administrators, and poor

More information

Vendor: Hitachi. Exam Code: HH Exam Name: Hitachi Data Systems Storage Fondations. Version: Demo

Vendor: Hitachi. Exam Code: HH Exam Name: Hitachi Data Systems Storage Fondations. Version: Demo Vendor: Hitachi Exam Code: HH0-130 Exam Name: Hitachi Data Systems Storage Fondations Version: Demo QUESTION: 1 A drive within a HUS system reaches its read error threshold. What will happen to the data

More information

DISK LIBRARY FOR MAINFRAME (DLM)

DISK LIBRARY FOR MAINFRAME (DLM) DISK LIBRARY FOR MAINFRAME (DLM) Cloud Storage for Data Protection and Long-Term Retention ABSTRACT Disk Library for mainframe (DLm) is Dell EMC s industry leading virtual tape library for IBM z Systems

More information

REC (Remote Equivalent Copy) ETERNUS DX Advanced Copy Functions

REC (Remote Equivalent Copy) ETERNUS DX Advanced Copy Functions ETERNUS DX Advanced Copy Functions (Remote Equivalent Copy) 0 Content Overview Modes Synchronous Split and Recovery Sub-modes Asynchronous Transmission Sub-modes in Detail Differences Between Modes Skip

More information

Agenda for IBM Tape Solutions

Agenda for IBM Tape Solutions G19 - IBM System Storage Tape Update Scott Drummond spd@us.ibm.com Agenda for IBM Tape Solutions IBM Tape Milestones IBM Enterprise and LTO Drive Technology? World Class Reliability? Encryption IBM Automation

More information

TECHNICAL ADDENDUM 01

TECHNICAL ADDENDUM 01 TECHNICAL ADDENDUM 01 What Does An HA Environment Look Like? An HA environment will have a Source system that the database changes will be captured on and generate local journal entries. The journal entries

More information

Module 15: Network Structures

Module 15: Network Structures Module 15: Network Structures Background Topology Network Types Communication Communication Protocol Robustness Design Strategies 15.1 A Distributed System 15.2 Motivation Resource sharing sharing and

More information

IBM Virtualization Engine TS7700 Series Best Practices. TS7700 Logical WORM Best Practices

IBM Virtualization Engine TS7700 Series Best Practices. TS7700 Logical WORM Best Practices IBM Virtualization Engine TS7700 Series Best Practices TS7700 Logical WORM Best Practices Jim Fisher Executive IT Specialist Advanced Technical Skills (ATS) fisherja@us.ibm.com Page 1 of 10 Contents Introduction...3

More information

Introduction to shared queues

Introduction to shared queues Introduction to shared queues Matt Leming lemingma@uk.ibm.com Agenda What are shared queues? SMDS CF Flash Structures persistence and recovery Clients and GROUPUR 2 What are shared queues? 3 Shared queues

More information

Veritas Volume Replicator Option by Symantec

Veritas Volume Replicator Option by Symantec Veritas Volume Replicator Option by Symantec Data replication for disaster recovery The provides organizations with a world-class foundation for continuous data replication, enabling rapid and reliable

More information

CSE 444: Database Internals. Section 9: 2-Phase Commit and Replication

CSE 444: Database Internals. Section 9: 2-Phase Commit and Replication CSE 444: Database Internals Section 9: 2-Phase Commit and Replication 1 Today 2-Phase Commit Replication 2 Two-Phase Commit Protocol (2PC) One coordinator and many subordinates Phase 1: Prepare Phase 2:

More information

MIMIX. Version 7.0 MIMIX Global Operations 5250

MIMIX. Version 7.0 MIMIX Global Operations 5250 MIMIX Version 7.0 MIMIX Global Operations 5250 Published: September 2010 level 7.0.01.00 Copyrights, Trademarks, and tices Contents Version 7.0 MIMIX Global Operations 5250 Who this book is for... 5 What

More information

SMC Client/Server Implementation

SMC Client/Server Implementation SMC Client/Server Implementation July, 2006 Revised June, 2010 Oracle Corporation Authors: Nancy Rassbach Dale Hammers Sheri Wright Joseph Nofi Page 1 1 Introduction to SMC Client/Server Operations The

More information

EMC DATA DOMAIN OPERATING SYSTEM

EMC DATA DOMAIN OPERATING SYSTEM EMC DATA DOMAIN OPERATING SYSTEM Powering EMC Protection Storage ESSENTIALS High-Speed, Scalable Deduplication Up to 31 TB/hr performance Reduces requirements for backup storage by 10 to 30x and archive

More information

IBM 3494 Peer-to-Peer Virtual Tape Server Enhances Data Availability and Recovery

IBM 3494 Peer-to-Peer Virtual Tape Server Enhances Data Availability and Recovery Hardware Announcement February 29, 2000 IBM 3494 Peer-to-Peer Virtual Tape Server Enhances Data Availability and Recovery Overview With IBM s new Magstar 3494 Peer-to-Peer Virtual Tape Server (VTS) configuration,

More information

Synergetics-Standard-SQL Server 2012-DBA-7 day Contents

Synergetics-Standard-SQL Server 2012-DBA-7 day Contents Workshop Name Duration Objective Participants Entry Profile Training Methodology Setup Requirements Hardware and Software Requirements Training Lab Requirements Synergetics-Standard-SQL Server 2012-DBA-7

More information

Module 16: Distributed System Structures. Operating System Concepts 8 th Edition,

Module 16: Distributed System Structures. Operating System Concepts 8 th Edition, Module 16: Distributed System Structures, Silberschatz, Galvin and Gagne 2009 Chapter 16: Distributed System Structures Motivation Types of Network-Based Operating Systems Network Structure Network Topology

More information

IBM TotalStorage Enterprise Storage Server Model 800

IBM TotalStorage Enterprise Storage Server Model 800 A high-performance resilient disk storage solution for systems across the enterprise IBM TotalStorage Enterprise Storage Server Model 800 e-business on demand The move to e-business on demand presents

More information

Virtual Disaster Recovery

Virtual Disaster Recovery The Essentials Series: Managing Workloads in a Virtual Environment Virtual Disaster Recovery sponsored by by Jaime Halscott Vir tual Disaster Recovery... 1 Virtual Versus Physical Disaster Recovery...

More information

Chapter 3 `How a Storage Policy Works

Chapter 3 `How a Storage Policy Works Chapter 3 `How a Storage Policy Works 32 - How a Storage Policy Works A Storage Policy defines the lifecycle management rules for all protected data. In its most basic form, a storage policy can be thought

More information

The Collaboration Cornerstone

The Collaboration Cornerstone E-Mail: The Collaboration Cornerstone On Demand Insurance Business Problems 1. We lose customers because we process new policy applications too slowly. 2. Our claims processing is time-consuming and inefficient.

More information

CA Vtape Virtual Tape System CA RS 1309 Service List

CA Vtape Virtual Tape System CA RS 1309 Service List CA Vtape Virtual Tape System 12.6 1 CA RS 1309 Service List Description Hiper 12.6 RO52045 RECOVER=GLOBAL DOES NOT CORRECTLY RESTORE SCRATCH POOL RO53687 MESSAGE SVT1PR000I HAS MISLEADING WORDING RO54768

More information

EMC CLARiiON Backup Storage Solutions

EMC CLARiiON Backup Storage Solutions Engineering White Paper Backup-to-Disk Guide with Computer Associates BrightStor ARCserve Backup Abstract This white paper describes how to configure EMC CLARiiON CX series storage systems with Computer

More information

How Symantec Backup solution helps you to recover from disasters?

How Symantec Backup solution helps you to recover from disasters? How Symantec Backup solution helps you to recover from disasters? Finn Henningsen Presales Specialist Technology Days 2011 1 Thank you to our sponsors Technology Days 2011 2 Agenda Why do we bother? Infrastructure

More information

Availability Implementing High Availability with the solution-based approach Operator's guide

Availability Implementing High Availability with the solution-based approach Operator's guide System i Availability Implementing High Availability with the solution-based approach Operator's guide Version 6 Release 1 System i Availability Implementing High Availability with the solution-based

More information

PracticeTorrent. Latest study torrent with verified answers will facilitate your actual test

PracticeTorrent.   Latest study torrent with verified answers will facilitate your actual test PracticeTorrent http://www.practicetorrent.com Latest study torrent with verified answers will facilitate your actual test Exam : C9020-668 Title : IBM Storage Technical V1 Vendor : IBM Version : DEMO

More information

BUSINESS CONTINUITY: THE PROFIT SCENARIO

BUSINESS CONTINUITY: THE PROFIT SCENARIO WHITE PAPER BUSINESS CONTINUITY: THE PROFIT SCENARIO THE BENEFITS OF A COMPREHENSIVE BUSINESS CONTINUITY STRATEGY FOR INCREASED OPPORTUNITY Organizational data is the DNA of a business it makes your operation

More information

IBM Tivoli System Automation for z/os

IBM Tivoli System Automation for z/os Policy-based self-healing to maximize efficiency and system availability IBM Highlights Provides high availability for IBM z/os Offers an advanced suite of systems and IBM Parallel Sysplex management and

More information

EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning

EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning EMC Solutions for Backup to Disk EMC Celerra LAN Backup to Disk with IBM Tivoli Storage Manager Best Practices Planning Abstract This white paper describes how to configure the Celerra IP storage system

More information

ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective

ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective Part II: Data Center Software Architecture: Topic 3: Programming Models Piccolo: Building Fast, Distributed Programs

More information

EMC VPLEX Geo with Quantum StorNext

EMC VPLEX Geo with Quantum StorNext White Paper Application Enabled Collaboration Abstract The EMC VPLEX Geo storage federation solution, together with Quantum StorNext file system, enables a global clustered File System solution where remote

More information

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management

Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management IBM Spectrum Protect Configuring IBM Spectrum Protect for IBM Spectrum Scale Active File Management Document version 1.4 Dominic Müller-Wicke IBM Spectrum Protect Development Nils Haustein EMEA Storage

More information

Mainframe Backup Modernization Disk Library for mainframe

Mainframe Backup Modernization Disk Library for mainframe Mainframe Backup Modernization Disk Library for mainframe Mainframe is more important than ever itunes Downloads Instagram Photos Twitter Tweets Facebook Likes YouTube Views Google Searches CICS Transactions

More information

IBM Spectrum Protect HSM for Windows Version Administration Guide IBM

IBM Spectrum Protect HSM for Windows Version Administration Guide IBM IBM Spectrum Protect HSM for Windows Version 8.1.0 Administration Guide IBM IBM Spectrum Protect HSM for Windows Version 8.1.0 Administration Guide IBM Note: Before you use this information and the product

More information

IBM Active Cloud Engine centralized data protection

IBM Active Cloud Engine centralized data protection IBM Active Cloud Engine centralized data protection Best practices guide Sanjay Sudam IBM Systems and Technology Group ISV Enablement December 2013 Copyright IBM Corporation, 2013 Table of contents Abstract...

More information

IBM TS7700 v8.41 Phase 2. Introduction and Planning Guide IBM GA

IBM TS7700 v8.41 Phase 2. Introduction and Planning Guide IBM GA IBM TS7700 8.41 Phase 2 Introduction and Planning Guide IBM GA32-0567-25 Note Before using this information and the product it supports, read the information in Safety and Enironmental notices on page

More information

iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard

iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard iscsi Technology Brief Storage Area Network using Gbit Ethernet The iscsi Standard On February 11 th 2003, the Internet Engineering Task Force (IETF) ratified the iscsi standard. The IETF was made up of

More information

IBM Software. IBM z/vm Management Software. Introduction. Tracy Dean, IBM April IBM Corporation

IBM Software. IBM z/vm Management Software. Introduction. Tracy Dean, IBM April IBM Corporation IBM z/vm Management Software Introduction Tracy Dean, IBM tld1@us.ibm.com April 2009 Agenda System management Operations Manager for z/vm Storage management Backup and Restore Manager for z/vm Tape Manager

More information

Chapter 7. GridStor Technology. Adding Data Paths. Data Paths for Global Deduplication. Data Path Properties

Chapter 7. GridStor Technology. Adding Data Paths. Data Paths for Global Deduplication. Data Path Properties Chapter 7 GridStor Technology GridStor technology provides the ability to configure multiple data paths to storage within a storage policy copy. Having multiple data paths enables the administrator to

More information

Oracle StorageTek's VTCS DR Synchronization Feature

Oracle StorageTek's VTCS DR Synchronization Feature Oracle StorageTek's VTCS DR Synchronization Feature Irene Adler Oracle Corporation Thursday, August 9, 2012: 1:30pm-2:30pm Session Number 11984 Agenda 2 Tiered Storage Solutions with VSM s VTSS/VLE/Tape

More information