IBM System Storage SAN Volume Controller enhanced stretched cluster with GUI changes


IBM System Storage SAN Volume Controller enhanced stretched cluster with GUI changes
Evaluation guide v2.0
Sarvesh S. Patel, Bill Scales
IBM Systems and Technology Group
May 2014
Copyright IBM Corporation, 2014

Table of Contents
Abstract
Getting started
    About IBM SAN Volume Controller stretched cluster services
    About this guide
    Assumptions
How a stretched cluster works
    Prerequisites and configuration considerations
    Setup description
    Connectivity details
Configuring site awareness using CLI
    Disaster recovery sites
    Assigning sites to IBM SVC nodes using CLI
    Controller site assignment using CLI
    Multi-WWNN controller
    Quorum disk placement
    Enabling / Disabling the site disaster recovery capability using CLI
Configuring enhanced stretched cluster system using GUI
    Configuring site awareness using GUI
    Disaster recovery sites using GUI
    Assigning sites to IBM SVC nodes using GUI
    Controller site assignment using GUI
    Enabling / Disabling site disaster recovery capability using GUI
Configuring site awareness using GUI during initial system setup
Invoking the site disaster recovery feature
Returning to normal operation after invoking the site disaster recovery feature
Resources
About the authors
Trademarks and special notices

Abstract

This white paper describes the procedure for creating an IBM System Storage SAN Volume Controller (SVC) cluster with the enhanced stretched cluster topology. It provides information about the stretched topology and the disaster recovery feature of the enhanced stretched cluster topology, gives a brief introduction to the enhanced stretched cluster feature, and describes how to assign site awareness to the controllers and other entities. It also describes the procedure for enabling and disabling the feature and the procedure to be used in case of disaster recovery. Version 1 contained the implementation details using the command-line interface (CLI) only. Version 2 adds a section on configuring an SVC system with the enhanced stretched cluster implementation using the graphical user interface (GUI).

Getting started

This section gives a brief overview of the IBM System Storage SAN Volume Controller stretched cluster implementation and the different topologies used to configure it.

About IBM SAN Volume Controller stretched cluster services

IBM SVC, a storage virtualization system, supports a stretched implementation of a cluster. This feature has been supported since version 6.3. The main advantage of configuring a cluster as a stretched implementation is the additional redundancy against failures in power domains. The existing SVC stretched cluster solution is based on the concept of having two locations, with each location holding one node from each I/O group pair; additionally, quorum disks are usually held in a third location. With a stretched cluster implementation, the two nodes in an I/O group are separated by a distance between two locations. These two locations can be two racks in a data center, two buildings in a campus, or two labs within supported distances. A copy of the volume (VDisk) is stored at both locations.
This configuration implies that in case of a power failure or storage area network (SAN) failure at one site, at least one copy of the volume remains accessible to the user. Both copies of the storage are kept in synchronization by SAN Volume Controller. Therefore, the loss of one location causes no disruption at the alternate location. The key benefit of a stretched cluster compared to Metro Mirror is that it allows fast, non-disruptive failover in the event of small-scale outages. For example, if there is an impact to just a single storage device, SVC fails over internally with minimal delay. If there is a failure in a fabric element or an SVC node, a host can fail over to another SVC node and continue performing I/O; the host will see half of its paths as active and half as failed, because at least one node is still running. SVC always has an automatic quorum to act as a tie-break, which means that no external management software or human intervention is ever required to perform a failover. This simplicity is another key advantage.

The following two types of configurations are supported:

Stretched cluster configuration without using inter-switch links: As explained in the product documentation, this topology has direct connections from the IBM SVC nodes to the switches in different power domains.

Stretched cluster configuration using inter-switch links: This implementation strategy has inter-switch links between two sites in different power domains. The documentation on this solution is included in [1].

About this guide

The purpose of this guide is to support a self-guided, hands-on evaluation of enhanced stretched cluster feature deployments for storage administrators and IT professionals, and to walk them through the different configuration considerations and the workflow required to set up SVC clusters in an enhanced stretched implementation. Version 1.0 of the guide demonstrates how to deploy enhanced stretched cluster configurations with site awareness and, most importantly, the site disaster recovery feature. Version 2.0 of the guide adds the setup configuration using the GUI of the SVC cluster. This paper is intended to provide an overview of the steps required to successfully evaluate and deploy site awareness on SVC. It is not meant to be a substitute for the product documentation, and users are encouraged to refer to the product documentation, information center, and command-line interface (CLI) guide for the enhanced stretched cluster for more details.

Assumptions

The following assumptions were made while writing this white paper. The SVC clusters are successfully installed with the latest (at the time of this publication) IBM SVC code levels (or later) for configuration using the CLI and the GUI. The SVC clusters have the required licenses. (No separate license is required to enable the enhanced stretched cluster site awareness and site disaster recovery feature.)
The storage SAN is configured as per the product documentation, and the infrastructure to support SVC clusters in a stretched cluster using 8 Gb Fibre Channel is properly in place. The user has a basic understanding and awareness of SVC stretched and split cluster concepts, SVC storage concepts, and configurations for host attachment. The user knows the different heterogeneous SVC platforms that can be added in FC partnerships. The same applies to IP partnerships as well. For SVC documentation, refer to: Note: Refer to the configuration section in the SVC documentation for more information about the working of a stretched cluster.

SVC terminology and abbreviations:

Metro Mirror, Global Mirror, and Global Mirror with Change Volumes: The different remote copy services supported on SVC platforms.
SAN: Storage area network.
NAS: Network-attached storage.
Failover: Failure of a node within an I/O group causes virtual disk access through the surviving node. The IP addresses fail over to the surviving node in the I/O group. When the configuration node of the system fails, the management IPs also fail over to an alternate node.
Failback: When the failed node rejoins the system, all failed-over IP addresses are failed back from the surviving node to the rejoined node, and virtual disk access is restored through this node.
I/O group: Two nodes or two canisters form an I/O group. A single SVC system supports four I/O groups, that is, eight nodes.
FC: Fibre Channel.
SVC or IBM SAN Volume Controller: Unless explicitly specified, a general term used to describe all applicable SVC platforms: IBM SVC (CG8, CF8, 8G4), IBM Storwize V7000, Storwize V5000, Storwize V3700, and IBM PureFlex System storage nodes.
FCoE: Fibre Channel over Ethernet.

Table 1: Terminology and abbreviations

How a stretched cluster works

The stretched cluster feature enables one SVC cluster to span two sites that are in different power domains, and provides the disaster recovery feature, which can be used in case of a disaster to recover a cluster when fewer resources are available.

Prerequisites and configuration considerations

To enable the enhanced stretched cluster site disaster recovery feature, the following details should be considered and understood. The SVC clustered system should be configured as described in the information center. Note that no extra hardware configuration changes are required to enable the site disaster recovery feature. A cluster can be connected using an FC or FCoE SAN. Image-mode virtual disks (VDisks) should not be configured on a controller that has site assignments.
Details of the restrictions are given in a separate section.

Setup description

Hardware summary
Minimum two IBM SVC nodes

Brocade Fibre Channel switches (IBM Brocade 2498)
Backend storage controllers (IBM System Storage DS3400)

Connectivity details

The stretched cluster system feature is supported with two types of implementations:
Without inter-switch links
With inter-switch links
Neither implementation affects the behavior of site awareness or the site disaster recovery feature; it depends entirely on how the administrator wants to configure the system implementation. As mentioned in the earlier sections, the connectivity does not change and the feature is optional. The administrator can choose to use it for disaster recovery. An important point to note here is that the site disaster recovery feature can be invoked only if the stretched cluster implementation has site awareness.

Implementation details

Without inter-switch links
This hardware implementation is the same as recommended for a non-stretched cluster. No change in the connectivity is needed. Refer to the information center document for more details regarding the recommended connections.

With inter-switch links

Figure 1: Enhanced stretched cluster implementation using inter-switch links

In the implementation strategy described in Figure 1, the two production sites are connected using inter-switch links. These two sites can be in the same rack with different power domains, across racks, across data centers, and so on, as was supported earlier.

Configuring site awareness using CLI

Assuming that the connectivity is completed and a stretched cluster is implemented, this section describes how to set up site awareness for all the different entities in the cluster. Note that the features described here are available on SVC systems only. They are hidden on other platforms.

Disaster recovery sites

As the earlier stretched cluster implementation needed three sites, the same concept has been carried over with the new CLIs, lssite and chsite. A new set of 'site' objects is defined. These are created implicitly for every system. There are always exactly three sites, numbered 1, 2, and 3. (Site index 0 is never reported.) There is no means of deleting sites or creating extra sites. The only configurable attribute of each site is its name. The default names for the sites are 'site1', 'site2', and 'site3'. Site1 and site2 are where the two halves of the stretched cluster are located. Site3 is the optional third site for a quorum tie-break disk. The appropriate 'site' instance is referenced when a site value is defined for an object. Objects can also leave their 'site' value undefined, which is the default setting for an object. Enabling the site disaster recovery feature, and correct operation of the disaster recovery feature, requires assigning objects to sites. These are the three sites needed for a stretched cluster implementation:
Site1: Production site 1
Site2: Production site 2
Site3: A site at a different location to house the quorum disk
From IBM SVC version 7.2 onwards, these three sites are present by default. A new CLI, lssite, has been introduced to list the sites.

Figure 2: Site listing using the lssite CLI

The site names can be changed using the chsite CLI. In this example, the stretched cluster implementation is spread across two data centers, and the sites are therefore named accordingly.
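The listing and renaming steps above can be sketched as the following CLI session. This is a hedged example: the site names match this paper's setup, and the exact output columns of lssite should be checked against the SVC CLI guide for your code level.

```shell
# List the three predefined sites (site IDs 1-3 always exist)
lssite

# Rename the sites to match the physical locations used in this example
chsite -name DataCenter1 1
chsite -name DataCenter2 2
chsite -name Quorum_Site 3

# Verify the new names
lssite
```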

Figure 3: Site name allocation using the chsite CLI

After the names are assigned to the sites, they can be viewed using the lssite command.

Figure 4: Changed site information

Assigning sites to IBM SVC nodes using CLI

The site can optionally be defined for each node. The default for a node is to have its site undefined. This is also the initial state on upgrade to version 7.2. Now, as the sites are renamed accordingly, the next task is to assign sites to the nodes in the cluster. For example, consider a two-node SVC cluster. Site awareness for nodes can be configured using the addnode command while adding a new node to the cluster, or by using the chnode command for existing nodes. The site can be specified when a node is added to the system. It can also be specified or changed after that time, and it can be set back to undefined. Nodes can only be assigned to sites 1 or 2; nodes cannot be assigned to site 3. When the disaster recovery feature has been enabled using chsystem, extra policing is added, which requires that every node have its site defined and disallows any change of site for any configured node. Therefore, the site must be specified in addnode, or addnode will fail. Additionally, when the disaster recovery feature is enabled, then within an I/O group with two configured nodes, one node must be assigned to each of sites 1 and 2. Here, one node is assigned to site 1 and another to site 2.

Figure 5: Assigning site attributes to IBM SAN Volume Controller nodes

After assigning the appropriate site awareness to nodes, the user can verify the site assignment using the lsnode command in the concise as well as the detailed view.
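The node assignment can be sketched as follows. The node names, panel name, and site names are illustrative values for this example setup; verify the -site parameter syntax against the CLI guide for your code level.

```shell
# Assign each node of the I/O group to one production site
chnode -site DataCenter1 node1
chnode -site DataCenter2 node2

# When adding a new node after the feature is enabled, the site is mandatory
# (the panel name 112233 is an illustrative value)
# addnode -panelname 112233 -iogrp io_grp0 -site DataCenter2

# Verify the assignments in the concise view
lsnode -delim :
```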

Figure 6: Listing node site attributes using the lsnode CLI

Controller site assignment using CLI

After assigning sites to the nodes, the user needs to assign the site attributes to controllers. The site can optionally be specified for each controller. The default for a controller is for its site to be undefined. This is the default for pre-existing controllers on upgrade to version 7.2.0. Controllers can be assigned to any of the sites (site 1, site 2, or site 3), or the assignment can be set back to 'undefined' again. A managed disk (MDisk) derives its 'site' value from the controller that it is associated with at that time. Some backend storage devices are presented by the SVC system as multiple controller objects, and an MDisk might be associated with any of these from time to time. The user is responsible for ensuring that all such 'controller' objects have the same 'site' specified, so as to ensure that any MDisk associated with that controller is associated with a well-defined single site. The site for a controller can be changed when the disaster recovery feature is disabled. It can also be changed if the controller has no managed (or image-mode) MDisks. The site for a controller cannot be changed when the disaster recovery feature is enabled (that is, the topology is stretched) or if the controller has managed (or image-mode) disks. The site property for a controller adjusts the I/O routing and error reporting for connectivity between nodes and the associated MDisks. These changes are effective for any MDisk whose controller has a site defined, even if the disaster recovery feature is disabled. The use of solid-state drives (SSDs) within SVC nodes is not supported in the configurations described. The software does not enforce any requirement that all MDisks have a site defined. The site attribute can be assigned to a controller using the chcontroller command.
The site assigned to a controller can be verified using the lscontroller command.
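The controller assignment can be sketched as the following session. The controller names and site names are illustrative values for this example setup; check the chcontroller entry in the SVC CLI guide for your code level.

```shell
# Assign each backend controller to its physical site
chcontroller -site DataCenter1 controller0
chcontroller -site DataCenter2 controller1
chcontroller -site Quorum_Site controller2

# Verify the assignment; the site is shown in the detailed view
lscontroller controller0
```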

Figure 7: Listing site attributes for a controller

Connectivity is permitted between:
Any node and controllers in site 3, or controllers with no site defined
A node with no site defined and any controller
And of course, I/O is permitted in the following most important cases:
A node configured in a site and a controller MDisk configured to the same site
A node configured in a site and a controller MDisk configured to site 3
The fault reporting algorithms for raising event logs in the case of missing connectivity are also adjusted to allow for these rules. When a controller is configured to site 1, connectivity to nodes in site 2 is not expected or required, and is disregarded. Faults are only reported if any node in site 1 has inadequate connectivity (that is, if any node in site 1 has fewer than two SVC ports with visibility to the controller). Similarly, if a controller is configured to site 2, then connectivity to nodes in site 1 is disregarded.

Multi-WWNN controller

When the site is changed on a multi-worldwide node name (WWNN) controller, all of the affected controller objects are updated with the site setting at the same time.

Quorum disk placement

If the site disaster recovery feature is disabled, the quorum selection algorithm operates as in previous SVC releases. When the disaster recovery feature is enabled and automatic quorum disk selection is also enabled, the SVC system elects three quorum disks (one in each of the three sites) and makes the quorum disk at site 3 the active quorum disk. If a site has no suitable MDisks, then fewer than three quorum disks are configured. Note that before the cluster topology is changed to stretched (the activation of the disaster recovery feature), the system ignores the site parameter when selecting quorum disks. This means that the quorum selection can change when the disaster recovery feature is enabled for an existing installation. If the user is controlling quorum using chquorum, then the user's choice of quorum disks must also follow the one-disk-per-site rule.

Enabling / Disabling the site disaster recovery capability using CLI

A system setting enables and disables the disaster recovery feature. The preconditions for enabling the disaster recovery feature are:
All nodes are assigned to a site.
All I/O groups with two nodes have one node in site 1 and one node in site 2.
There is no precondition on sites being configured for controllers. The feature will not be operable on nodes that were absent until they rejoin the cluster. New clusters, and clusters upgrading to version 7.2 and later, have the disaster recovery feature disabled by default. The site disaster recovery feature can be enabled or disabled by using the chsystem command, and its state can be checked using the lssystem command.
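Manual quorum placement following the one-disk-per-site rule might look like the following sketch. The MDisk names are illustrative, and the chquorum argument order (MDisk, then quorum index) should be checked against the CLI guide for your code level before use.

```shell
# Show the current quorum disk candidates and the active quorum disk
lsquorum

# Place one quorum disk in each site, with the tie-break disk at site 3
chquorum -mdisk mdisk_dc1 0
chquorum -mdisk mdisk_dc2 1
chquorum -mdisk mdisk_q3 2

# Make the site 3 disk the active quorum disk
chquorum -active 2
```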
Figure 8: Output of the lssystem command when the feature is disabled

Figure 9: Enabling the site disaster recovery feature using the chsystem command

Figure 10: Output of the lssystem command when the feature is enabled
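The enable and disable operations shown in Figures 8 through 10 can be sketched as follows. This assumes the topology values 'stretched' and 'standard' as documented for SVC version 7.2; confirm against the chsystem entry in the CLI guide.

```shell
# Enable the site disaster recovery feature
# (fails unless every node has a site assigned as described above)
chsystem -topology stretched

# Confirm: the topology field should now read 'stretched'
lssystem | grep topology

# The feature can be disabled again by reverting to the standard topology
chsystem -topology standard
```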

Configuring enhanced stretched cluster system using GUI

This section describes the steps to configure an SVC clustered system with the enhanced stretched cluster implementation. Before proceeding with the actual configuration, the user should read the previous section on enhanced stretched cluster configuration using the CLI, as that section explains the system setup in detail.

Configuring site awareness using GUI

Assuming that the connectivity is completed and a stretched cluster is implemented, this section describes how to set up site awareness for all the different entities in the cluster. Note that the features described here are available on SVC systems only. They are hidden on other platforms.

Disaster recovery sites using GUI

As the earlier stretched cluster implementation needed three sites, the same concept has been carried over with the new options in the GUI. A new set of 'site' objects is defined. These are created implicitly for every system. There are always exactly three sites, numbered 1, 2, and 3. (Site index 0 is never reported.) There is no means of deleting sites or creating extra sites. The only configurable attribute of each site is its name. The default names for the sites are 'site1', 'site2', and 'site3'. Site1 and site2 are where the two halves of the stretched cluster are located. Site3 is the optional third site for a quorum tie-break disk. The appropriate 'site' instance is referenced when a site value is defined for an object. Objects can also leave their 'site' value undefined, which is the default setting for an object. Enabling the site disaster recovery feature, and correct operation of the disaster recovery feature, requires assigning objects to sites. These are the three sites needed for a stretched cluster implementation.
Site1: Production site 1
Site2: Production site 2
Site3: A site at a different location to house the quorum disk
From IBM SVC version 7.2 onwards, these three sites are present by default. Perform the following steps to list the current sites and change the site names.
1. Consider that you have a running two-node SVC cluster that needs to be configured as an enhanced stretched cluster implementation.

Figure 11: SVC system overview

2. Click Monitoring > System Details.

Figure 12: SVC system details

3. Click Actions > Rename Sites.

Figure 13: SVC system, renaming sites

This displays the default sites with the default names site1, site2, and site3.

Figure 14: Option to rename sites

4. Change the site names as required. For example, change the names to DataCenter1, DataCenter2, and Quorum_Site, and click Rename.

Figure 15: Changed site names

5. Verify that the action is completed successfully.

Figure 16: Successful task completion

Assigning sites to IBM SVC nodes using GUI

The site can optionally be defined for each node. The default for a node is to have its site undefined. This is also the initial state on upgrade to version 7.2 or later. Now, as the sites are renamed accordingly, the next task is to assign sites to the nodes in the cluster. For example, consider a two-node SVC cluster. The site can be specified when a node is added to the system. It can also be specified or changed after that time, and it can be set back to undefined. Nodes can only be assigned to sites 1 or 2; nodes cannot be assigned to site 3. Here, one node is assigned to site 1 and another to site 2. The following steps describe site assignments for SVC nodes.
1. Click Monitoring > System Details, and then click io_grp0 to expand the I/O group view.
2. Click node1, and then click Actions > Modify Site.

Figure 17: Modifying the node site attribute

3. From the Select a site drop-down list, select Unassigned if you want to remove the site assignment in the future. In this example setup, node1 is assigned to the site named DataCenter1 and node2 to DataCenter2.

Figure 18: Assigning an appropriate site to an SVC node

4. Select the site accordingly and click Modify. This changes the site assignment for node1. Check whether the task is completed successfully without any errors.

Figure 19: Successful task completion

5. Now repeat the same for node2 and assign it to DataCenter2.
6. Verify the node site assignments by clicking Monitoring > System Details > io_grp0 > node1 (you might need to refresh the page to display the latest results).

Figure 20: Verification of site assignments to SVC nodes

Controller site assignment using GUI

After assigning sites to the nodes, the user needs to assign the site attributes to controllers. The site can optionally be specified for each controller. The default for a controller is for its site to be undefined. This is the default for pre-existing controllers on upgrade to version 7.2.0 or later. Controllers can be assigned to any of the sites (site 1, site 2, or site 3), or the assignment can be set back to 'undefined' again. A managed disk (MDisk) derives its 'site' value from the controller that it is associated with at that time. Some backend storage devices are presented by the SVC system as multiple controller objects, and an MDisk might be associated with any of these from time to time. The user is responsible for ensuring that all such 'controller' objects have the same 'site' specified, so as to ensure that any MDisk associated with that controller is associated with a well-defined single site. The site for a controller can be changed when the disaster recovery feature is disabled. It can also be changed if the controller has no managed (or image-mode) MDisks. The site for a controller cannot be changed when the disaster recovery feature is enabled (that is, the topology is stretched) or if the controller has managed (or image-mode) disks. The site property for a controller adjusts the I/O routing and error reporting for connectivity between nodes and the associated MDisks. These changes are effective for any MDisk whose controller has a site defined, even if the disaster recovery feature is disabled. The use of SSDs within SVC nodes is not supported in the configurations described. The software does not enforce any requirement that all MDisks have a site defined.

Perform the following steps to change and verify the site assignment for external controllers.
1. Click Pools > External Storage.

Figure 21: Selecting external storage from the available storage types

A list of the available controllers with their default site assignment is displayed.

Figure 22: Checking the default site assignment of available controllers

2. Now select a controller, for example, controller0, and click Actions > Modify Site.

Figure 23: Changing the site assignment of a controller

3. From the Select a site drop-down list, select the respective site for the controller, for example, DataCenter1.

Figure 24: Selecting the required site for a controller

4. Click Modify and check whether the task is completed successfully without any errors.

Figure 25: Checking successful task completion

5. Check whether the site attribute is assigned to the respective controller by clicking Pools > External Storage.

Figure 26: Checking correct site assignment to a controller

6. Repeat the same procedure for all the controllers for which you want site awareness. Note that a controller can be assigned to site1, site2, or site3, or can be left with its site unassigned, as per the requirement. Read the information center document carefully.
7. Verify that all the required controllers are assigned site attributes.

Figure 27: Verifying site assignment to all available controllers

Enabling / Disabling site disaster recovery capability using GUI

A system setting enables and disables the disaster recovery feature. The preconditions for enabling the disaster recovery feature are:
All nodes are assigned to a site.
All I/O groups with two nodes have one node assigned to site 1 and one node to site 2.
There is no precondition on sites being configured for controllers. The feature will not be operable on nodes that were absent until they rejoin the cluster. New clusters, and clusters upgrading to version 7.2 and later, have the disaster recovery feature disabled by default. The site disaster recovery feature can be enabled or disabled using the GUI as described in the following steps.
1. Click Monitoring > System Details. Then click a cluster name, for example, Cluster_R65, and then click Actions > Enable Stretch System.

Figure 28: Enabling stretched topology

2. A warning stating that the system will now be stretched across multiple sites is displayed. Click Yes to continue.

Figure 29: Verification for changing system topology

3. The system is now configured as a stretched system and the site disaster recovery feature is enabled. Check whether the task is completed successfully without any errors.

Figure 30: Successful task completion

4. The system topology can later be verified by clicking Monitoring > System Details and then clicking the cluster name, for example, Cluster_R65.

Figure 31: Verification of changed topology

5. Site disaster recovery can be disabled in the same way as described in steps 3 and 4. Click Monitoring > System Details. Then click the cluster name, for example, Cluster_R65, and then click Actions > Disable Stretch System.

Figure 32: Disabling stretched topology

6. In the warning message The system will no longer be stretched. Do you want to continue? that is displayed, click Yes to continue.

Figure 33: Confirmation on system topology change

7. Check whether the task is completed successfully without any errors and click Close.

Figure 34: Verification of successful task completion

8. The cluster state can later be verified by clicking Monitoring > System Details and then selecting the cluster name, for example, Cluster_R65.

Figure 35: Verification of the changed system topology

Configuring site awareness using GUI during initial system setup

Unlike the two methods described earlier, you can also opt to configure an enhanced stretched cluster during the initial system setup. Consider that all the physical connectivity is done and a new system is created using two IBM SVC nodes. When the user logs in for the very first time using the SVC cluster GUI, the user is prompted to set up the system using Easy Setup. Perform the following steps to configure an enhanced stretched cluster during the initial Easy Setup.
1. Assuming that the cluster is created on one of the candidate nodes, log in to the IBM SVC clustered system web GUI as superuser.

Figure 36: Initial system login screen

2. After accepting the license agreement, notice the different options that are displayed at the left side of the window.

Figure 37: End user license agreement

3. Configure the licensed features, system name, and date and time as per the requirement.
4. On the Stretch System page, for the question Will this system be stretched across multiple sites?, select Yes, this system will be stretched across multiple sites.

Figure 38: Selecting whether the system is stretched or located on one site

5. After selecting the option for a stretched cluster, notice that the Site Names, Add Nodes, and External Storage options are enabled and displayed under Stretch System. Then, click Apply and Next.

Figure 39: Selecting the system as a stretched system across multiple sites

6. Notice that the default site names are displayed for Site 1, Site 2, and Site 3.

Figure 40: Default site names

7. Change the site names as required and click Apply and Next.

Figure 41: Changing site names to the required ones

8. Verify that the site names are changed and the task is completed successfully without any errors.

Figure 42: Checking successful task completion

9. The next option is for adding nodes to the system. By clicking the empty node position, you can view the candidate nodes and add a node to the system. Note that the node on which the system is initially created is assigned to Site 1 by default. The second node in the I/O group is assigned to Site 2 by default. After selecting the required node, click Add Node.

Figure 43: Adding a node to the clustered system

10. A warning message, It could take up to 30 minutes for the node to be added to the system. Do you want to continue? is displayed. Click Yes to continue. Figure 44: Confirmation message before changing system topology

11. Check if the task is completed successfully without any errors and then click Apply and Next. Figure 45: Checking successful task completion

12. The next option is for configuring site awareness for the available external storage controllers. By default, notice that no site is assigned to any of the storage controllers. Figure 46: Checking default site assignment for available controllers

13. Right-click a controller and then click Modify Site. Figure 47: Changing site assignment of a controller

14. Notice that a new dialog box for selecting the required site for the respective controller is displayed. Figure 48: Assigning the required site to a controller

15. After selecting the site, click Modify and check if the task is completed successfully without any errors. Figure 49: Checking successful task completion

16. Now, check if the correct site assignment is listed in the External Storage view for the respective controller. Figure 50: Verifying correct site assignment of a controller

17. Repeat steps 13 through 16 for the remaining or required controllers and check if all the respective controllers list the site attributes correctly. In this example, one controller has been assigned to each site and one controller has been left unassigned. Figure 51: Verifying controller site assignments

18. Click Apply and Next to change the system topology to stretched. Check if the task is completed successfully without any errors. Figure 52: Verifying successful task completion
19. Your system is now configured as an enhanced stretched cluster system and you can continue with the remaining Easy Setup actions.
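The Easy Setup steps above have CLI equivalents, described earlier in this guide. As a sketch only (the site, node, and controller names below are illustrative examples, not taken from this setup), the same configuration could be applied from the SVC command line:

```shell
# Rename the three default sites (names are illustrative)
svctask chsite -name DataCenterA 1
svctask chsite -name DataCenterB 2
svctask chsite -name QuorumSite 3

# Assign each node and each external controller to its site
svctask chnode -site 1 node1
svctask chnode -site 2 node2
svctask chcontroller -site 1 controller0
svctask chcontroller -site 2 controller1

# Finally, change the system topology to stretched
svctask chsystem -topology stretched
```

The GUI performs the same underlying tasks, which is why each Easy Setup page reports the commands it ran in its task-completion window.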

Invoking the site disaster recovery feature
When the inter-site link fails in a stretched cluster, the two halves of the cluster race each other to reserve the quorum disk and resolve the tie-break situation. The half that successfully reserves and updates the quorum disk keeps operating; the other half of the cluster stops, with each of its nodes suffering a lease expiry. Normally, these nodes display a node error 550 message indicating that there are not sufficient nodes to form a quorum. For a cluster that has been configured with a topology of stretched, the code determines whether sufficient nodes are present and whether invoking the disaster recovery feature is an alternative option, and modifies the node error that is displayed (the new node error is 551). If there are insufficient nodes to allow the overridequorum command to be used, the nodes continue to display the existing node error 550 message. The site that keeps running continues to display a cluster topology of stretched and a topology status of dual_site even though one site is offline.
In a simple inter-site link failure, the normal recovery option is to allow the site that won the quorum race to keep running and fix the link; the other nodes are then automatically added back and the copies synchronized. The disaster recovery feature is expected to be invoked only if the surviving site suffers a disaster just after the inter-site link failed, leaving the other site as the only option for continuing with I/O.
There are several more complex failure scenarios. If access to the site 3 quorum disks fails at the same time as the inter-site link from site 1 to site 2, then both halves of the cluster display node error 551 (the new version of node error 550). If access to the quorum disk is restored, then whichever site regains the access first is able to automatically restart.
Alternatively, the inter-site link from site 1 to site 2 can be restored to allow the cluster to re-form. A new service CLI subcommand is defined that attempts to invoke the disaster recovery feature. If accepted, the node that is running the CLI subcommand generates a new worldwide unique cluster ID, informs each of the set of visible nodes of the local site to change their cluster ID to the new cluster ID, sets a flag in each node indicating that disaster recovery has been invoked, and then warm starts. Note that the cluster alias, which by default is set to the cluster ID when a cluster is created, is not changed; this ensures that the VDisk UIDs do not change.
The CLI subcommand does not affect the nodes of the remote site. They cannot be involved in the initial recovery of the local site because their state would corrupt the consistent freeze of the local site. The remote nodes can only be introduced to the local system later using the following procedure (which requires disconnecting all FC/FCoE connectivity of the remote node). The code then disables the disaster recovery feature in the newly recovered cluster.
The topology (reported by lssystem) remains stretched but the topology_status changes from dual_site to either recovered_site_1 or recovered_site_2 to indicate that disaster recovery has been invoked. The status returns to dual_site only after the user has performed the recovery actions to reintroduce the nodes from the other site. Nodes that did not participate in the recovery process (and hence are still members of the old cluster) are not automatically re-added to the cluster. The user must explicitly delete these nodes from the cluster (returning them to candidate state) before they can be auto added. This is achieved by using the rmnode command from within the management interface of the old cluster (if it is still operating) or by using the satask leavecluster -force command.
Before running these commands, the user must disconnect all the FC/FCoE cables from all the nodes that they want to re-add to the cluster.
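The invocation sequence described above can be sketched as the following service CLI session. This is an illustrative outline only, not a complete recovery procedure:

```shell
# On a node at the surviving site that is showing node error 551:
sainfo lsservicenodes        # confirm the 551 error and the visible local-site nodes
satask overridequorum        # invoke the site disaster recovery feature

# Later, for each node that is still a member of the old cluster
# (disconnect all of its FC/FCoE cables first):
satask leavecluster -force
```

The overridequorum subcommand succeeds only from a node in the 551 state; nodes showing node error 550 do not have enough visible local-site nodes for the override to be accepted.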

Returning to normal operation after invoking the site disaster recovery feature
The user must take care with the following steps to ensure that the system maintains integrity as the two sites' connectivity is recovered. In particular, care is needed to not conflict with the activity of any still-active nodes in the failed site, for example, if power is recovered after a failure.
1. After the disaster recovery feature is invoked, an alert is raised indicating that this process must be used.
2. Access to all the recovered-site volume copies is recovered. This includes the mirror-half of stretched volumes plus any single-copy volumes with a defined local site.
3. Access to all other volume copies is lost. The user must treat all such storage as suspect and potentially corrupt. The conservative approach is to delete all such volume copies. Some users might choose to retain access to such volumes to attempt to recover some data. Mirrored volumes with one online fresh local copy can be retained.
4. Access to all other-site quorum disks is lost. All such quorum disks must be deleted. This can be achieved by using rmmdisk if the MDisk no longer holds any volume copy. If there are volume copies that are being retained, then the process must use chquorum to select new quorum disks and prevent attempts to use the other-site quorum disks.
5. All inter-system remote copy relationships, consistency groups, and partnerships must be destroyed (partnerships will be in the partially-configured state).
6. At this point, the user can address the missing nodes. This requires disconnecting the FC/FCoE connectivity of the missing nodes, then either deconfiguring the node using svctask rmnode (in the abandoned cluster) or satask leavecluster as described earlier, or decommissioning the node so that it can no longer access the shared storage. Then, issue the rmnode command in the recovered cluster to inform it that this step has been performed.
7. When the last offline node from the failed site is repaired, the alert auto fixes and any non-local-site volume copies come online. The process of reconstructing the system objects can then begin, including:
   Defining quorum disks in the correct sites
   Re-creating volumes that were not automatically recovered earlier
   Re-creating any intra-system copy services that were deleted because their volumes were deleted
   Re-creating any inter-system Metro Mirror or Global Mirror objects
Note that there is no need to explicitly re-enable the disaster recovery feature. The cluster topology remains stretched, and when the event log auto fixes the cluster topology, the status returns to dual_site; assuming that there are online nodes at both sites, the voting set is manipulated to prepare for the next disaster recovery.
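Some of the recovery actions above map to CLI commands covered earlier in this guide. A hedged sketch follows, with illustrative MDisk, MDisk group, and node names:

```shell
# Confirm the recovery state; the topology stays stretched while
# topology_status reports recovered_site_1 or recovered_site_2
svcinfo lssystem

# Remove an other-site quorum MDisk (only if it holds no volume copy),
# or move the quorum to a new MDisk instead
svctask rmmdisk -mdisk mdisk7 mdiskgrp0
svctask chquorum -mdisk mdisk5 1      # place quorum index 1 on mdisk5 (example)

# After decommissioning a failed-site node, inform the recovered cluster
svctask rmnode node2
```

Once the topology_status returns to dual_site and nodes are online at both sites, no further action is needed to re-arm the disaster recovery feature.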

Resources
The following websites provide useful references to supplement the information contained in this paper:
IBM Systems on IBM PartnerWorld: ibm.com/partnerworld/systems/
IBM Publications Center
IBM Redbooks: ibm.com/redbooks
IBM developerWorks: ibm.com/developerworks
IBM SAN and SVC Stretched Cluster and VMware Solution Implementation: ibm.com/redbooks/redbooks/pdfs/sg pdf
IBM SAN Volume Controller Stretched Cluster with PowerVM and PowerHA: ibm.com/redbooks/redbooks/pdfs/sg pdf
SVC Split Cluster How it works: ibm.com/developerworks/community/blogs/storagevirtualization/entry/split_cluster?lang=en
About the authors
Sarvesh S. Patel is a staff software engineer in the IBM Systems and Technology Group, SVC and Storwize family. He has 6 years of experience in storage test. As part of the enhanced stretched cluster, he was the functional test lead for the feature. You can reach Sarvesh at sarvpate@in.ibm.com.
Bill Scales is a software engineer in the IBM Systems and Technology Group, SVC and Storwize family. As part of the enhanced stretched cluster, he was the functional development architect and lead for the feature. You can reach Bill at bill_scales@uk.ibm.com.

Trademarks and special notices
© Copyright IBM Corporation 2014.
References in this document to IBM products or services do not imply that IBM intends to make them available in every country.
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the web under "Copyright and trademark information".
Other company, product, or service names may be trademarks or service marks of others.
Information is provided "AS IS" without warranty of any kind.
All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.
All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.
Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function, or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.
Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.
Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.


More information

Implementing Pure Storage with IBM SAN Volume Controller. Simon Dodsley, Global Solutions Architect

Implementing Pure Storage with IBM SAN Volume Controller. Simon Dodsley, Global Solutions Architect Implementing Pure Storage with IBM SAN Volume Controller Simon Dodsley, Global Solutions Architect Version: 2.2 16 April 2018 Contents Notices... 3 Summary... 4 Audience... 4 Prerequisites... 4 Preparation

More information

IBM SmartCloud Desktop Infrastructure with VMware View Reference architecture. 12 December 2012

IBM SmartCloud Desktop Infrastructure with VMware View Reference architecture. 12 December 2012 IBM SmartCloud Desktop Infrastructure with ware View 12 December 2012 Copyright IBM Corporation, 2012 Table of contents Introduction...1 Architectural overview...1 Component model...2 ware View provisioning...

More information

... IBM AIX performance and tuning tips for Oracle s JD Edwards EnterpriseOne web server

... IBM AIX performance and tuning tips for Oracle s JD Edwards EnterpriseOne web server IBM AIX performance and tuning tips for Oracle s JD Edwards EnterpriseOne web server Applies to JD Edwards EnterpriseOne 9.0 with tools release 8.98 or 9.1........ Diane Webster IBM Oracle International

More information

DS8880 High Performance Flash Enclosure Gen2

DS8880 High Performance Flash Enclosure Gen2 Front cover DS8880 High Performance Flash Enclosure Gen2 Michael Stenson Redpaper DS8880 High Performance Flash Enclosure Gen2 The DS8880 High Performance Flash Enclosure (HPFE) Gen2 is a 2U Redundant

More information

Installation, Storage, and Compute with Windows Server 2016

Installation, Storage, and Compute with Windows Server 2016 Installation, Storage, and Compute with Windows Server 2016 OD20740B; On-Demand, Video-based Course Description This course is designed primarily for IT professionals who have some experience with Windows

More information

IBM Storwize V5000 disk system

IBM Storwize V5000 disk system IBM Storwize V5000 disk system Latest addition to IBM Storwize family delivers outstanding benefits with greater flexibility Highlights Simplify management with industryleading graphical user interface

More information

IBM Storwize HyperSwap with IBM i

IBM Storwize HyperSwap with IBM i Front cover IBM Storwize HyperSwap with IBM i Jana Jamsek Falk Schneider Redpaper International Technical Support Organization IBM Storwize HyperSwap with IBM i May 2018 REDP-5490-00 Note: Before using

More information

Installation, Storage, and Compute with Windows Server 2016 Course 20740B - 5 Days - Instructor-led, Hands on

Installation, Storage, and Compute with Windows Server 2016 Course 20740B - 5 Days - Instructor-led, Hands on Installation, Storage, and Compute with Windows Server 2016 Course 20740B - 5 Days - Instructor-led, Hands on Introduction This five-day course is designed primarily for IT professionals who have some

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information

High performance and functionality

High performance and functionality IBM Storwize V7000F High-performance, highly functional, cost-effective all-flash storage Highlights Deploys all-flash performance with market-leading functionality Helps lower storage costs with data

More information

Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide

Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide Implementing IBM Easy Tier with IBM Real-time Compression IBM Redbooks Solution Guide Overview IBM Easy Tier is a performance function that automatically and non-disruptively migrates frequently accessed

More information

Infor Lawson on IBM i 7.1 and IBM POWER7+

Infor Lawson on IBM i 7.1 and IBM POWER7+ Infor Lawson on IBM i 7.1 and IBM POWER7+ IBM Systems & Technology Group Mike Breitbach mbreit@us.ibm.com This document can be found on the web, Version Date: March, 2014 Table of Contents 1. Introduction...

More information

Remove complexity in protecting your virtual infrastructure with. IBM Spectrum Protect Plus. Data availability made easy. Overview

Remove complexity in protecting your virtual infrastructure with. IBM Spectrum Protect Plus. Data availability made easy. Overview Overview Challenge In your organization, backup management is too complex and consumes too much time and too many IT resources. Solution IBM Spectrum Protect Plus dramatically simplifies data protection

More information

Vendor: IBM. Exam Code: Exam Name: IBM Midrange Storage Technical Support V3. Version: Demo

Vendor: IBM. Exam Code: Exam Name: IBM Midrange Storage Technical Support V3. Version: Demo Vendor: IBM Exam Code: 000-451 Exam Name: IBM Midrange Storage Technical Support V3 Version: Demo QUESTION NO: 1 On the Storwize V7000, which IBM utility analyzes the expected compression savings for an

More information

iseries Tech Talk Linux on iseries Technical Update 2004

iseries Tech Talk Linux on iseries Technical Update 2004 iseries Tech Talk Linux on iseries Technical Update 2004 Erwin Earley IBM Rochester Linux Center of Competency rchlinux@us.ibm.com Agenda Enhancements to the Linux experience introduced with i5 New i5/os

More information

Installation, Storage, and Compute with Windows Server 2016 (20740)

Installation, Storage, and Compute with Windows Server 2016 (20740) Installation, Storage, and Compute with Windows Server 2016 (20740) Duration: 5 Days Live Course Delivery Price: $2795 *California residents and government employees call for pricing. MOC On-Demand Price:

More information

EMC VPLEX with Quantum Stornext

EMC VPLEX with Quantum Stornext White Paper Application Enabled Collaboration Abstract The EMC VPLEX storage federation solution together with Quantum StorNext file system enables a stretched cluster solution where hosts has simultaneous

More information

DS8880 High-Performance Flash Enclosure Gen2

DS8880 High-Performance Flash Enclosure Gen2 DS8880 High-Performance Flash Enclosure Gen2 Bert Dufrasne Kerstin Blum Jeff Cook Peter Kimmel Product Guide DS8880 High-Performance Flash Enclosure Gen2 This IBM Redpaper publication describes the High-Performance

More information

Windows Server : Installation, Storage, and Compute with Windows Server Upcoming Dates. Course Description.

Windows Server : Installation, Storage, and Compute with Windows Server Upcoming Dates. Course Description. Windows Server 2016 20740: Installation, Storage, and Compute with Windows Server 2016 Dive into the latest features of Microsoft Windows Server 2016 in this 5-day training class. You'll get 24-7 access

More information

IBM Application Runtime Expert for i

IBM Application Runtime Expert for i IBM Application Runtime Expert for i Tim Rowe timmr@us.ibm.com Problem Application not working/starting How do you check everything that can affect your application? Backup File Owner & file size User

More information

Course Outline 20740B. Module 1: Installing, upgrading, and migrating servers and workloads

Course Outline 20740B. Module 1: Installing, upgrading, and migrating servers and workloads Course Outline 20740B Module 1: Installing, upgrading, and migrating servers and workloads This module describes the new features of Windows Server 2016, and explains how to prepare for and install Nano

More information

Subex Fraud Management System version 8 on the IBM PureFlex System

Subex Fraud Management System version 8 on the IBM PureFlex System Subex Fraud Management System version 8 on the IBM PureFlex System A fraud management solution for today s dynamic market place Subex IBM Systems and Technology Group ISV Enablement April 2012 Copyright

More information

Using VERITAS Volume Replicator for Disaster Recovery of a SQL Server Application Note

Using VERITAS Volume Replicator for Disaster Recovery of a SQL Server Application Note Using VERITAS Volume Replicator for Disaster Recovery of a SQL Server Application Note February 2002 30-000632-011 Disclaimer The information contained in this publication is subject to change without

More information

A Pragmatic Path to Compliance. Jaffa Law

A Pragmatic Path to Compliance. Jaffa Law A Pragmatic Path to Compliance Jaffa Law jaffalaw@hk1.ibm.com Introduction & Agenda What are the typical regulatory & corporate governance requirements? What do they imply in terms of adjusting the organization's

More information

IBM Geographically Dispersed Resiliency for Power Systems. Version Release Notes IBM

IBM Geographically Dispersed Resiliency for Power Systems. Version Release Notes IBM IBM Geographically Dispersed Resiliency for Power Systems Version 1.2.0.0 Release Notes IBM IBM Geographically Dispersed Resiliency for Power Systems Version 1.2.0.0 Release Notes IBM Note Before using

More information

Course Outline. exam, Installation, Storage and Compute with Windows Server Course 20740A: 5 days Instructor Led

Course Outline. exam, Installation, Storage and Compute with Windows Server Course 20740A: 5 days Instructor Led Installation, Storage, and Compute with Windows Server 2016 Course 20740A: 5 days Instructor Led About this course This five-day course is designed primarily for IT professionals who have some experience

More information

Unified Management for Virtual Storage

Unified Management for Virtual Storage Unified Management for Virtual Storage Storage Virtualization Automated Information Supply Chains Contribute to the Information Explosion Zettabytes Information doubling every 18-24 months Storage growing

More information

Behind the Glitz - Is Life Better on Another Database Platform?

Behind the Glitz - Is Life Better on Another Database Platform? Behind the Glitz - Is Life Better on Another Database Platform? Rob Bestgen bestgen@us.ibm.com DB2 for i CoE We know the stories My Boss thinks we should move to SQL Server Oracle is being considered for

More information

IBM XIV Storage System Gen3 and the Microsoft SQL Server I/O Reliability Partner Program

IBM XIV Storage System Gen3 and the Microsoft SQL Server I/O Reliability Partner Program IBM XIV Storage System Gen3 and the Microsoft SQL Server I/O Reliability Partner Program Eric B. Johnson IBM Systems and Technology Group ISV Enablement May 2013 Table of contents Abstract... 1 Disclaimer...

More information

VMware Site Recovery Manager 5.x guidelines for the IBM Storwize family

VMware Site Recovery Manager 5.x guidelines for the IBM Storwize family VMware Site Recovery Manager 5.x guidelines for the IBM Storwize family A step-by-step guide IBM Systems and Technology Group ISV Enablement February 2014 Copyright IBM Corporation, 2014 Table of contents

More information

Tivoli Storage Manager for Virtual Environments: Data Protection for VMware Solution Design Considerations IBM Redbooks Solution Guide

Tivoli Storage Manager for Virtual Environments: Data Protection for VMware Solution Design Considerations IBM Redbooks Solution Guide Tivoli Storage Manager for Virtual Environments: Data Protection for VMware Solution Design Considerations IBM Redbooks Solution Guide IBM Tivoli Storage Manager for Virtual Environments (referred to as

More information

Server for IBM i. Dawn May Presentation created by Tim Rowe, 2008 IBM Corporation

Server for IBM i. Dawn May Presentation created by Tim Rowe, 2008 IBM Corporation Integrated Web Application Server for IBM i Dawn May dmmay@us.ibm.com Presentation created by Tim Rowe, timmr@us.ibm.com IBM i integrated Web application server the on-ramp to the Web 2 Agenda Integrated

More information

Configuring Storage Profiles

Configuring Storage Profiles This part contains the following chapters: Storage Profiles, page 1 Disk Groups and Disk Group Configuration Policies, page 2 RAID Levels, page 3 Automatic Disk Selection, page 4 Supported LUN Modifications,

More information

High Availability for Oracle Database with IBM PowerHA SystemMirror and IBM Spectrum Virtualize HyperSwap

High Availability for Oracle Database with IBM PowerHA SystemMirror and IBM Spectrum Virtualize HyperSwap Front cover High Availability for Oracle Database with IBM PowerHA SystemMirror and IBM Spectrum Virtualize HyperSwap Ian MacQuarrie Redpaper High Availability for Oracle Database with IBM PowerHA SystemMirror

More information

Active Energy Manager. Image Management. TPMfOSD BOFM. Automation Status Virtualization Discovery

Active Energy Manager. Image Management. TPMfOSD BOFM. Automation Status Virtualization Discovery Agenda Key: Session Number: 53CG 550502 Compare and Contrast IBM ~ ~ Navigator for IBM i Tim Rowe timmr@us.ibm.com 8 Copyright IBM Corporation, 2009. All Rights Reserved. This publication may refer to

More information

SVC VOLUME MIGRATION

SVC VOLUME MIGRATION The information, tools and documentation ( Materials ) are being provided to IBM customers to assist them with customer installations. Such Materials are provided by IBM on an as-is basis. IBM makes no

More information

Configuring ApplicationHA in VMware SRM 5.1 environment

Configuring ApplicationHA in VMware SRM 5.1 environment Configuring ApplicationHA in VMware SRM 5.1 environment Windows Server 2003 and 2003 R2, Windows Server 2008 and 2008 R2 6.0 September 2013 Contents Chapter 1 About the ApplicationHA support for VMware

More information

IBM System Storage DS6800

IBM System Storage DS6800 Enterprise-class storage in a small, scalable package IBM System Storage DS6800 Highlights Designed to deliver Designed to provide over enterprise-class functionality, 1600 MBps performance for with open

More information

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc.

High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. High Availability through Warm-Standby Support in Sybase Replication Server A Whitepaper from Sybase, Inc. Table of Contents Section I: The Need for Warm Standby...2 The Business Problem...2 Section II:

More information

Veritas Storage Foundation for Windows by Symantec

Veritas Storage Foundation for Windows by Symantec Veritas Storage Foundation for Windows by Symantec Advanced online storage management Veritas Storage Foundation 5.1 for Windows brings advanced online storage management to Microsoft Windows Server environments,

More information

IBM FileNet Content Manager and IBM GPFS

IBM FileNet Content Manager and IBM GPFS IBM FileNet Content Manager support for IBM General Parallel File System (GPFS) September 2014 IBM SWG Enterprise Content Management IBM FileNet Content Manager and IBM GPFS Copyright IBM Corporation 2014

More information

HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family

HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family Data sheet HPE Data Replication Solution Service for HPE Business Copy for P9000 XP Disk Array Family HPE Lifecycle Event Services HPE Data Replication Solution Service provides implementation of the HPE

More information

IBM XIV Adapter for VMware vcenter Site Recovery Manager 4.x Version User Guide GA

IBM XIV Adapter for VMware vcenter Site Recovery Manager 4.x Version User Guide GA IBM XIV Adapter for VMware vcenter Site Recovery Manager 4.x Version 4.1.0 User Guide GA32-2224-00 Note Before using this document and the product it supports, read the information in Notices on page 35.

More information

HP StoreVirtual Storage Multi-Site Configuration Guide

HP StoreVirtual Storage Multi-Site Configuration Guide HP StoreVirtual Storage Multi-Site Configuration Guide Abstract This guide contains detailed instructions for designing and implementing the Multi-Site SAN features of the LeftHand OS. The Multi-Site SAN

More information

TPF Debugger / Toolkit update PUT 12 contributions!

TPF Debugger / Toolkit update PUT 12 contributions! TPF Debugger / Toolkit update PUT 12 contributions! Matt Gritter TPF Toolkit Technical Lead! IBM z/tpf April 12, 2016! Copyright IBM Corporation 2016. U.S. Government Users Restricted Rights - Use, duplication

More information