USING SRDF/METRO IN A VMWARE METRO STORAGE CLUSTER RUNNING ORACLE E-BUSINESS SUITE AND 12C RAC


White Paper

USING SRDF/METRO IN A VMWARE METRO STORAGE CLUSTER RUNNING ORACLE E-BUSINESS SUITE AND 12C RAC

DEMONSTRATING INTEROPERABILITY OF SRDF/METRO AND VMWARE, USING ORACLE APPLICATIONS RUNNING ON AN EXTENDED RAC CLUSTER

Abstract

This white paper discusses how to configure SRDF/Metro on a VMware Metro Storage Cluster (vMSC) configured with an Oracle Applications environment running on Oracle Extended RAC.

May 2016

VMAX Engineering

Copyright 2016 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESXi, vMotion, and vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number h

Table of Contents

Executive summary
    Audience
Document scope and limitations
Introduction
    VMAX3 with HYPERMAX OS 5977
    EMC Unisphere for VMAX
    Fully Automated Storage Tiering (FAST)
SRDF/Metro
    Bias and Witness
    SRDF/Metro limitations
    VAAI support
VMware Metro Storage Cluster (vMSC)
    VMware vMSC and SRDF/Metro
    Uniform and Nonuniform vMSC
    SRDF/Metro Witness in vMSC
    SRDF/Metro and vMSC Failure Handling
Setting up SRDF/Metro
    SRDF group creation
    SRDF/Metro pair creation
Oracle Applications
    Applications Architecture
Working with FAST and Oracle Applications
    FAST elements
    Oracle Applications Tablespace Model
    Oracle Applications implementation
    Oracle Applications deployment
    Hardware layout
    FAST configuration for Oracle Applications
Oracle Real Application Clusters (RAC) on SRDF/Metro
    Extended RAC and Oracle Clusterware
    Oracle RAC and Oracle E-Business Suite
Oracle Real Application Clusters (RAC) on VMFS
    Oracle Clusterware deployment
    Oracle Extended RAC installation
    Procedure
    Networking
VMware Cluster Configuration with SRDF/Metro
    vSphere HA
    Heartbeating
    Polling time for datastore paths
    All Paths Down (APD) and Permanent Data Loss (PDL)
    VMCP
    PDL VMCP settings
    vMSC recommendations for PDL
    APD VMCP settings
    vMSC recommendations for APD
    VMware VM/Host Groups and VM/Host Rules
    Best practices with VMware HA and SRDF/Metro
Viewing VMAX3 in vSphere
    EMC Virtual Storage Integrator
Conclusion
References
    EMC
    VMware
    Oracle
Appendix A SRDF/Metro maintenance
    Adding new pairs
    VM mobility
Appendix B SRDF/Metro with VMware Site Recovery Manager
    VMware vCenter Site Recovery Manager
    Environment Configuration
    SRDF SRA
    Stretched Storage - Tags, Categories and Storage Policies
    Stretched Storage - Protection Group
    Stretched Storage - Recovery Plan
    Stretched Storage - Known Issue
    Cross-vMotion - Known Limitation
    SRDF Adapter Utilities

Executive summary

The EMC VMAX family provides disaster recovery and mobility solutions through its remote replication technology SRDF (Symmetrix Remote Data Facility). SRDF has the capability to replicate between multiple sites, co-located or even thousands of miles apart, depending on the type of replication desired. Beginning with the HYPERMAX OS 5977 Q Service Release and Solutions Enabler and Unisphere for VMAX 8.1, EMC offers SRDF/Metro, an active/active version of SRDF.

In a traditional SRDF device pair relationship, the secondary device, or R2, is write disabled. Only the primary device, or R1, is accessible for read/write activity. With SRDF/Metro the R2 is also write enabled and accessible by the host or application. The R2 takes on the personality of the R1, including the WWN. A host, therefore, would see both the R1 and R2 as the same device. As both devices are simultaneously accessible, the hosts in a cluster, for example, can read and write to both the R1 and R2. The SRDF/Metro technology ensures that the R1 and R2 remain current and consistent, addressing any conflicts which might arise between the pairs.

When SRDF/Metro is used in conjunction with VMware vSphere across multiple hosts, a VMware Metro Storage Cluster (vMSC) is formed. At its core, a VMware vMSC infrastructure is a stretched cluster. The architecture is built on the idea of extending what is defined as local in terms of network and storage. This enables these subsystems to span geographies, presenting a single and common base infrastructure set of resources to the vSphere cluster at both sites. In essence, it stretches network and storage between sites. With vMSC, customers acquire the capability to migrate virtual machines between sites with VMware vSphere vMotion and vSphere Storage vMotion, enabling on-demand and nonintrusive mobility of workload. Combining these technologies with an enterprise ERP and CRM software solution like Oracle Applications running on Oracle Extended RAC provides a highly available enterprise solution.

Audience

This white paper is intended for VMware administrators, Oracle DBAs, server administrators, and storage administrators responsible for creating, managing, and using VMware, as well as their underlying storage devices, for their VMware vSphere environments attached to a VMAX3 storage array running HYPERMAX OS 5977 and SRDF/Metro. The white paper assumes the reader is familiar with Oracle databases and applications, VMware environments, VMAX3, and the related software.

Document scope and limitations

This document applies to EMC SRDF/Metro configured with a Witness, VMware vSphere, and Oracle Applications on Oracle Extended RAC. The details provided in this white paper are based on the following software and hardware:

- EMC SRDF/Metro with HYPERMAX OS 5977 plus the epack for ESXi support
- 2 VMAX3 arrays that are part of the SRDF/Metro cluster and are within 5 milliseconds (ms) of each other to allow for VMware HA
- SRDF/Metro Witness deployed to a third VMAX3 array, also at metro distance
- Solutions Enabler/Unisphere for VMAX 8.1
- vSphere 6.0 U1 (vCenter, ESXi) using NMP
- EMC VSI 6.7
- Oracle Applications Release 12
- Oracle RAC 12c

Introduction

The purpose of this paper is not to present performance results or scalability tests of the various products included, but rather to lay out how these different technologies can be integrated and will work together to provide a robust High Availability solution. This document, therefore, will present the best practices for the various components involved in the solution. Even were this lab environment to adhere perfectly to the physical requirements, it may not replicate a particular customer's environment. By including the best practices, however, a customer can use the information and the resources at hand to design the best possible environment for their needs. The details around setting up the physical hardware will only be mentioned as necessary to the solution, e.g. sufficient disk technologies to meet SLOs. Customers are advised to work with EMC to ensure their physical hardware for VMAX3 is properly set up and configured to take advantage of solutions such as the one presented in this white paper. The following sections provide a brief introduction to the EMC technologies most involved in this solution: VMAX3, Unisphere for VMAX, and FAST.

VMAX3 with HYPERMAX OS 5977

The VMAX3 Family 100K, 200K, and 400K arrays provide unprecedented performance and scale. Ranging from the single or dual-engine VMAX 100K up to the eight-engine VMAX 400K, these new arrays offer dramatic increases in floor tile density, with engines and high-capacity disk enclosures for both 2.5" and 3.5" drives consolidated in the same system bay.

The VMAX3 arrays support the use of native 6 Gb/s SAS 2.5" drives, 3.5" drives, or a mix of both drive types. Individual system bays can house either one or two engines and up to six high-density disk array enclosures (DAEs) per engine, available in either 3.5" (60 slot) or 2.5" (120 slot) formats. Each system bay can support a large number of 2.5" drives or 3.5" drives, or a mix of the two. The Dynamic Virtual Matrix architecture allows scaling of system resources through common and fully redundant building blocks called VMAX3 engines. VMAX3 engines provide the complete foundation for high-availability storage arrays.

The VMAX3 utilizes Virtual Provisioning (VP), which presents a type of host-accessible device called a thin device, or virtual device, that can be used in many of the same ways that regular, host-accessible VMAX devices have traditionally been used. VP offers storage efficiency/flexibility that traditional thick volumes cannot. Unlike thick devices, thin devices do not need to have physical storage completely allocated at the time the devices are created and presented to a host. The physical storage that is used to supply drive space for a thin device comes from a shared Storage Resource Pool, or SRP. An SRP is comprised of data pools which in turn are made up of internal VMAX3 devices called data devices. The VMAX3 arrays come fully pre-configured out of the factory to significantly shorten the time to first I/O during installation.

EMC Unisphere for VMAX

EMC Unisphere for VMAX (Unisphere) is EMC's VMAX array management software which offers big-button navigation and streamlined operations to simplify and reduce the time required to manage a data center. Unisphere for VMAX simplifies storage management under a common framework, incorporating Symmetrix Performance Analyzer, which previously required a separate interface. In addition, with the Performance monitoring option, Unisphere for VMAX provides tools for performing analysis and historical trending of VMAX system performance data. The default Storage Groups Dashboard is presented in Figure 1.

Figure 1. Unisphere for VMAX Default Storage Groups Dashboard

Unisphere for VMAX, shown in the preceding figure, can be run on a number of different kinds of open systems hosts, physical or virtual. Unisphere for VMAX is also available as a virtual appliance for ESXi in VMware vSphere and as an embedded guest OS on the VMAX3 array. For more details please visit support.emc.com.

Fully Automated Storage Tiering (FAST)

EMC Fully Automated Storage Tiering (FAST) provides automated management of VMAX3 array disk resources on behalf of thin devices. FAST automatically configures disk groups to form a Storage Resource Pool (SRP) by creating thin pools according to each individual disk technology, capacity and RAID type. FAST technology moves the most active parts of your workloads (hot data) to high-performance flash disks and the least-frequently accessed storage (cold data) to lower-cost drives, leveraging the best performance and cost characteristics of each different drive type. FAST delivers higher performance using fewer drives to help reduce acquisition, power, cooling, and footprint costs. FAST is able to factor in the RAID protections to ensure write-heavy workloads go to RAID 1 and read-heavy workloads go to RAID 6. This process is entirely automated and requires no user intervention.

SRDF/Metro

SRDF/Metro is a new feature available on the VMAX3 which provides active/active access to the R1 and R2 of an SRDF configuration. In traditional SRDF, R1 devices are Read/Write accessible while R2 devices are Read Only/Write Disabled. In SRDF/Metro configurations both the R1 and R2 are Read/Write accessible. The way this is accomplished is that the R2 takes on the personality of the R1 in terms of geometry and, most importantly, the WWN. By sharing a WWN, the R1 and R2 appear as a shared virtual device across the two VMAX3 arrays for host presentation. A host, or typically multiple hosts in a cluster, can read and write to both the R1 and R2. SRDF/Metro ensures that each copy remains current and consistent and addresses any write conflicts which might arise. A simple diagram of the feature is available in Figure 2.

Figure 2. SRDF/Metro

SRDF/Metro is supported on VMAX3 arrays running HYPERMAX OS 5977 Q SR or higher (see E-Lab for the currently supported hosts and multi-pathing software; the E-Lab Interoperability Navigator is available from EMC Online Support). This feature provides the following advantages:

- A high availability solution at Metro distances by leveraging and extending SRDF/S functionality.
- Active-Active replication capabilities on both the source and target sites.
- Witness support to enable full high availability, resiliency, and seamless failover. (Note that the SRDF SRA only supports the use of Bias with VMware SRM.)

Bias and Witness

SRDF/Metro maintains consistency between the R1 and R2 during normal operation. If, however, a device or devices go not ready (NR), or connectivity is lost between the arrays, SRDF/Metro selects one side of the environment as the winner and makes the other side inaccessible to the host(s). There are two ways that SRDF/Metro can determine a winner: bias or SRDF/Metro Witness (Witness). The bias or Witness prevents any data inconsistencies which might result from the two arrays being unable to communicate.

Bias is a required component of SRDF/Metro, with or without Witness. Witness builds upon the bias functionality; in essence, bias becomes the failsafe in case the Witness is unavailable or fails. The initial createpair operation of SRDF/Metro will assign bias to the R1 site, though it is possible to change it to the R2 after initial synchronization. Note that changing the bias turns the R2 into the R1. In the event of a failure, SRDF/Metro makes the non-biased side inaccessible to the host(s) while the bias side (R1) survives. Bias is denoted by the state of ActiveBias on a device pair or SRDF group, as in Figure 3. Note the Bias Location is Local, meaning this is the R1 side.

Figure 3. ActiveBias state for SRDF/Metro

If the bias side (R1) experiences the failure, then the entire SRDF/Metro cluster becomes unavailable to the hosts and will require user intervention to rectify. To avoid these types of failures, EMC offers the Witness. The Witness is an external arbiter running on a separate VMAX or VMAX3 with the proper code. A separate SRDF group is created from each SRDF/Metro array (R1, R2) to the Witness array and marked as a witness or quorum group. The use of the Witness supersedes the bias functionality. If the SRDF groups to the Witness are present before the createpair command is executed, the device pair(s) will automatically enter a Witness Protected state upon synchronization and the state will be ActiveActive. The ActiveActive state for SRDF/Metro can be seen in Unisphere in Figure 4.
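Although this paper demonstrates pair creation with Unisphere in a later section, the following is a minimal sketch of the equivalent Solutions Enabler createpair command for reference. The array ID, SRDF group number, device pair file, and its contents are illustrative only:

    # metro_pairs.txt lists one local/remote device pair per line, for example:
    #   00123 00456

    # Create the SRDF/Metro pairs and start synchronization; -rdf_metro places
    # the pairs in Active mode. If Witness groups to a third array already
    # exist, the pairs become Witness protected once synchronized.
    symrdf createpair -sid 000197800123 -rdfg 20 -type R1 -file metro_pairs.txt -rdf_metro -establish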

Figure 4. ActiveActive state for SRDF/Metro

Alternatively, if the groups are added after synchronization, the Witness will take effect at the next re-establish. It is also possible to configure multiple Witnesses if multiple arrays are available. In such cases SRDF/Metro handles the use of multiple Witnesses, so if the initial one fails, no user intervention is required to enable a secondary one. The following screenshot in Figure 5, obtained from Solutions Enabler, provides an example of a failure of the Witness. Note that for the SRDF group the configured type (C) is Witness, but the effective type (E) is Bias. This is due to the Witness status (S) being Failed.

Figure 5. An SRDF/Metro group with a failed Witness

Once the issue with the Witness is resolved, the group automatically returns to an effective Witness state and a Normal status, as in Figure 6.

Figure 6. Restoring the Witness

As this white paper will not detail all failure scenarios of an SRDF/Metro configuration, please see the References section for the Technical Note EMC VMAX3 SRDF/Metro Overview and Best Practices if more information is desired.

SRDF/Metro limitations

As SRDF/Metro (active mode) is a different type of SRDF implementation, there are some restrictions which do not exist for other SRDF modes. As such they are included here for reference. Note that this list is based on the HYPERMAX OS 5977 Q SR release of SRDF/Metro. Many restrictions will be lifted in future releases.

- Both the source (R1) and target (R2) arrays must be running HYPERMAX OS 5977 Q SR or higher.
- Existing SRDF device pairs that participate in an SRDF mode of operation cannot be part of an SRDF/Metro configuration; SRDF device pairs that participate in an SRDF/Metro configuration cannot participate in any other SRDF mode of operation.
- Concurrent or Cascaded SRDF devices are not supported.
- The R1 and R2 must be the same size.
- Devices cannot have Geometry Compatibility Mode (GCM) set.

- Devices cannot have User Geometry set.
- Controlling devices in an SRDF group that contains a mixture of source (R1) and target (R2) devices is not supported.
- The following operations must apply to all devices in the SRDF group: createpair -establish, establish, restore, and suspend.

Note that SRDF/Metro supports all of the vSphere Storage APIs for Array Integration (VAAI) commands save for Full Copy (XCOPY).

VAAI support

As noted above, SRDF/Metro does not support the VAAI primitive XCOPY. However, because support is disabled internally in the code rather than in the capability reported to the host, VMware will report that the primitive is supported. In the vSphere Web Client, VAAI support is represented only as a single capability called Hardware Acceleration. Figure 7 shows the datastore and SRDF/Metro device, DATA_1, which reports support for VAAI.

Figure 7. Checking VAAI support in the vSphere Client

The CLI, on the other hand, details the status for each of the four primitives for the same device backing datastore DATA_1. In Figure 8, note that Clone Status (XCOPY) shows as supported.
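For reference, the same per-primitive status can be queried from the ESXi shell with esxcli; the device identifier below is illustrative:

    # Show VAAI primitive status (ATS, Clone/XCOPY, Zero, Delete) for one device
    esxcli storage core device vaai status get -d naa.60000970000197800123533030303146

    # Or list the VAAI status of all devices on the host
    esxcli storage core device vaai status get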

Figure 8. Checking VAAI support in the CLI

So though VMware will still issue the commands if required, they will not be accepted for SRDF/Metro devices. As with all VAAI integration, however, the user is not required to make any changes, and therefore the fact that XCOPY is not supported will not be readily apparent to the end user.

VMware Metro Storage Cluster (vMSC)

A VMware vSphere Metro Storage Cluster configuration is a VMware vSphere 5 or 6 certified solution that combines synchronous replication with array-based clustering. These solutions typically are deployed in environments where the distance between datacenters is limited, often metropolitan or campus environments. EMC SRDF/Metro represents one of those certified solutions.

A VMware vMSC requires what is in effect a single storage subsystem that spans both sites. In this design, a given datastore must be accessible (able to be read and written to) simultaneously from both sites. Furthermore, when problems occur, the ESXi hosts must be able to continue to access datastores from either array, transparently and without impact to ongoing storage operations.

The storage subsystem for a VMware vMSC must be able to be read from and written to at the two locations simultaneously. All disk writes are committed synchronously at the two locations to ensure that data is always consistent regardless of the location from which it is being read. This storage architecture requires significant bandwidth and very low latency between the sites involved in the cluster. Increased distances or latencies cause delays writing to disk, making performance suffer dramatically, and prevent vMotion activities between the cluster nodes that reside in different locations. VMware (and therefore SRDF/Metro on VMware) supports 32 hosts in a cluster in vSphere 5 and 64 hosts in a cluster in vSphere 6.

VMware vMSC and SRDF/Metro

EMC SRDF/Metro breaks the physical barriers of data centers and allows users to access data at different geographical locations concurrently. In a VMware context, this enables capabilities that were not available previously. Specifically, the ability to concurrently access the same set of devices independent of the physical location enables geographically stretched clusters based on VMware vSphere, in other words a vMSC. This allows for transparent load sharing between multiple sites while providing the flexibility of migrating workloads between sites in anticipation of planned events such as hardware maintenance. Furthermore, in case of an unplanned event that causes disruption of services at one of the data centers, the failed services can be quickly and easily restarted at the surviving site with minimal effort. Nevertheless, the design of the VMware environment has to account for a number of potential failure scenarios and mitigate the risk of service disruption.

Uniform and Nonuniform vMSC

There are two types of configurations available for vMSC: uniform and nonuniform (Figure 9). VMware defines them as such:

- Uniform host access configuration - When ESXi hosts from both sites are all connected to a storage node in the storage cluster across all sites. Paths presented to ESXi hosts are stretched across distance.
- Nonuniform host access configuration - ESXi hosts in each site are connected only to storage node(s) in the same site. Paths presented to ESXi hosts from storage nodes are limited to the local site.

The environment detailed in this paper is a nonuniform configuration for vMSC with SRDF/Metro, so each host only sees its respective VMAX3 array. EMC also supports using a uniform (also known as cross-connect) configuration. (Note: EMC supports uniform configurations with an RPQ. If using uniform with the Round Robin PSP, it is essential to leave the IOPS setting at its default value.)

SRDF/Metro maintains the cache state on each array, so an ESXi host in either datacenter detects the virtual device as local. Even when two virtual machines reside on the same datastore but are located in different datacenters, they write locally without any performance impact on either of them. Each host accesses and is served by its own array, with SRDF/Metro ensuring consistency. If a Witness is not in use, there is site bias, meaning that one site will be the winner in the event of failure.

Figure 9. vMSC nonuniform configuration with SRDF/Metro

SRDF/Metro Witness in vMSC

As noted in the previous paragraph, SRDF/Metro by default uses site bias to define how a site or link failure should be handled in an SRDF/Metro configuration. If two clusters lose contact, the bias defines which cluster continues operation and which suspends I/O. Bias is defined at the SRDF group level. The use of bias to control which site is a winner, however, adds unnecessary complexity in case of a site failure, since it may be necessary to manually intervene to resume I/O to the surviving site. SRDF/Metro has the capability to use a Witness instead of bias. The SRDF/Metro Witness is available through a third VMAX or VMAX3 array (see the SRDF/Metro product documentation for the exact Enginuity or HYPERMAX OS versions supported). The Witness array can reside in a physically separate failure domain from either VMAX3 array in the metro configuration if desired. It provides the following features:

- Active/active use of both data centers

- High availability for applications (no single points of storage failure, autorestart)
- Fully automatic failure handling
- Better resource utilization
- Lower capital expenditures and lower operational expenditures as a result

Note: The fault domain is decided by the customer and can range from different racks in the same datacenter all the way up to 5 ms of distance away from each SRDF/Metro cluster (5 ms measured latency, or typical synchronous distance).

A configuration that uses SRDF/Metro with Witness allows both sides to provide coherent read/write access to the same virtual volume. That means that on the remote site, the paths are up and the storage is available even before any failover happens. When this is combined with host failover clustering technologies such as VMware HA, one gets a fully automatic application restart for any site-level disaster. The system rides through component failures within a site, including the failure of an entire array. In this scenario, a virtual machine can write to the same virtual device from either cluster. In other words, if the customer is using VMware Distributed Resource Scheduler (DRS), which allows the automatic load distribution of virtual machines across multiple ESXi servers, a virtual machine can be moved from an ESXi server attached to the R1 array to an ESXi server attached to the R2 array without losing access to the underlying storage. This configuration allows virtual machines to move between two geographically disparate locations with up to 5 ms of latency, the limit to which VMware vMotion is supported.

In the event of a complete site failure, shown in Figure 10, SRDF/Metro Witness automatically assigns the R2 array as the winner, rather than following the R1 site bias. VMware HA detects the failure of the virtual machines and restarts the virtual machines automatically at the surviving site with no user intervention.

Figure 10. vMSC with SRDF/Metro Witness

SRDF/Metro and vMSC Failure Handling

There are EMC and VMware documents available that cover all the various failure scenarios that can impact an SRDF/Metro and vMSC environment. As this paper is not specifically addressing those scenarios, they are included in the References section so the customer may review them before deployment.

Setting up SRDF/Metro

SRDF/Metro can be configured with Solutions Enabler (CLI) or with Unisphere for VMAX. EMC recommends using Unisphere to configure SRDF/Metro to reduce complexity. This section details the setup and includes those tasks directly related to SRDF/Metro setup. In this example, the following objects are assumed to already exist, as their creation is independent of SRDF/Metro:

- Initiator groups for each site
- Port groups for each site
- Devices on each site created and placed in a single storage group on each array
- A masking view for the R1 array, but NOT for the R2

SRDF group creation

SRDF/Metro is a feature of SRDF and therefore the same basic process is followed when setting up device pairs. Initially, an SRDF group is required on each VMAX3 array. Within Unisphere for VMAX navigate to array -> Data Protection -> Replication Groups and Pools -> SRDF Groups as in Figure 11.

Figure 11. SRDF group creation in Unisphere for VMAX

Enter the required information, and select OK to save, as in Figure 12. This will create an SRDF group on each array. Be sure not to check the box labeled SRDF/Metro Quorum Group, as that refers to the Witness, which will be covered next. Note, therefore, that there is no distinction between a Metro group and an SRDF group for another mode such as async.

Figure 12. SRDF group creation for Metro

In the environment detailed in this white paper, a Witness is being utilized. SRDF/Metro defaults to the bias option unless there is another SRDF group on a third site (VMAX or VMAX3 array) to which each Metro array is configured. In such cases, when creating device pairs in the group, the witness option will be specified, which supersedes the bias, though bias remains as part of the configuration in case the Witness fails. Create a group from each Metro array to the third Witness array, which is demonstrated in Figure 13 and Figure 14. A Witness group is designated by checking the box SRDF/Metro Quorum Group. If a Witness group already exists to the remote array, the setting for SRDF/Metro Quorum Group will be grayed-out.

Figure 13. Creating the SRDF group from R1 to Witness array in Unisphere

Figure 14. Creating the SRDF group from R2 to Witness array in Unisphere

To add an SRDF group for the Witness using Solutions Enabler, add the witness flag to a standard SRDF group creation command, as shown in Figure 15.
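As a hedged sketch of the general form of such commands (not the exact content of Figure 15), the Witness groups might be created as follows; the labels, array IDs, SRDF group numbers, and director:port values are all illustrative and must be adapted to the environment:

    # Witness group from the R1 array to the Witness array
    symrdf addgrp -label wtn_r1 -sid 000197800123 -rdfg 110 -dir 1F:8 -remote_sid 000195700789 -remote_rdfg 110 -remote_dir 1F:8 -witness

    # Witness group from the R2 array to the Witness array
    symrdf addgrp -label wtn_r2 -sid 000197800124 -rdfg 111 -dir 1F:8 -remote_sid 000195700789 -remote_rdfg 111 -remote_dir 1F:8 -witness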

Figure 15. Creating SRDF groups for the Witness through CLI

If a Witness group already exists, whether or not it is in use, Solutions Enabler will return the following error: "A Witness RDF group already exists between the two Symmetrix arrays."

SRDF/Metro pair creation

The Protection Dashboard in Unisphere provides an easy-to-use wizard to enable SRDF/Metro on a storage group. Before starting the wizard, it is important to remember that though this is an active-active device configuration, the R2 should NOT be presented to the hosts until all devices are fully synchronized. Start by navigating to the Protection Dashboard, seen in Figure 16, and selecting Total in step 1 to see all the storage groups available for protection.

Figure 16. Protection Dashboard in Unisphere

In step 2, Figure 17, highlight the desired storage group for protection and select the Protect button at the bottom of the screen.

Figure 17. Storage group selection for SRDF/Metro setup

With the storage group selected, Unisphere provides the available protection types on the array. Select the High Availability Using SRDF/Metro radio button and then Next. This step is shown in Figure 18.

Figure 18. Protection Type in protection wizard

In step 4, Figure 19, Unisphere will select Auto for the SRDF Group. As the groups were created in a previous step, in this example the SRDF Group is changed to Manual and the correct selection is made. Similarly, the remote storage group name will be automatically generated, though it can be altered, as in this case. Note, too, that if a Witness group exists, Unisphere will select Quorum, but it can be overridden to Bias if so desired.

Figure 19. SRDF/Metro Connectivity in protection wizard

Finally, review the proposed changes in step 5, shown in Figure 20, and run the task.

Figure 20. Complete protection wizard

Once completed, the SRDF/Metro dashboard in Figure 21 will show the group syncing.

Figure 21. SRDF/Metro dashboard

The user should wait until the State in Figure 21 reads ActiveActive before creating a masking view for the R2 devices. That will mean the synchronization is complete. Alternatively, if using bias, the synchronized state is ActiveBias. Solutions Enabler may also be used to verify the state of the RDF group (Figure 22).
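For example, a check of the following form can be run with Solutions Enabler; the array ID, SRDF group number, and device file are illustrative and the exact syntax may vary with the Solutions Enabler version:

    # Query the SRDF/Metro pairs; the pair state should read ActiveActive
    # (or ActiveBias if bias rather than a Witness is in use)
    symrdf -sid 000197800123 -rdfg 20 -file metro_pairs.txt query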

Figure 22. Verify SRDF/Metro state in CLI

Once the pair is in the proper active state, the R2 devices can be presented to the R2 host(s) using the masking view wizard in Unisphere, displayed in Figure 23.
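If the CLI is preferred over the wizard, the masking view can also be created with symaccess against the existing initiator, port, and storage groups on the R2 array; the object names below are illustrative:

    # Present the now-synchronized R2 devices to the R2-side ESXi hosts
    symaccess -sid 000197800124 create view -name R2_ESXi_MV -ig R2_ESXi_IG -pg R2_ESXi_PG -sg Oracle_Metro_SG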

Figure 23. Create R2 masking view

As shown above, when setting up SRDF/Metro pairs, the Unisphere for VMAX wizards work at the storage group level. If the devices are not already in a storage group, Unisphere can still be utilized in more of a manual fashion. For instance, pairs can be created in the Replication Groups and Pools screen in the Data Protection area of Unisphere.

Here, set the SRDF Mode to Active to denote SRDF/Metro pairs. The ability to set a Witness and add devices to storage groups is available, as seen in Figure 24.

Figure 24. Creating SRDF/Metro pairs in Unisphere without a storage group

In Unisphere, any storage groups in an SRDF/Metro relationship will appear in the High Availability protection group that is part of the Protection Dashboard. Figure 25 shows the R2 side of the SRDF/Metro group created in this section.

Figure 25. Unisphere Protection Dashboard - High Availability

Though EMC recommends using Unisphere, it is possible to set up SRDF/Metro using Solutions Enabler. In general, setting up SRDF/Metro is akin to any SRDF configuration, except that the rdf_metro switch must be specified on the createpair command for an SRDF/Metro pair.

Oracle Applications 12

Oracle Applications, or Oracle E-Business Suite, is a tightly integrated family of Financial, ERP, CRM, and manufacturing application products that share a common look and feel. Using the menus and windows of Oracle Applications, users have access to all the functions they need to manage their business information. Oracle Applications is highly responsive to users, supporting a multi-window GUI that provides users with full point-and-click capability. In addition, Oracle Applications offers many other features such as field-to-field validation and a list of values to help users simplify data entry and maintain the integrity of the data they enter.

Applications Architecture

The Oracle Applications Architecture is a framework for multi-tiered, distributed computing that supports Oracle Applications products. In this model, various servers or services are distributed among three levels, or tiers. A tier is a logical grouping of services, potentially spread across more than one physical or virtual machine. The three-tier architecture that comprises an Oracle E-Business Suite installation is made up of the database tier, which supports and manages the Oracle database; the application tier, which supports and manages the various Applications components, and is sometimes known as the middle tier; and the desktop tier, which provides the user interface through an add-on component to a standard web browser.

The simplest architecture for Oracle Applications is to have all tiers, except the desktop tier, installed on a single server. This configuration might be acceptable in a development environment, but for production environments scaling would quickly become an issue. In order to mimic a more realistic production environment, therefore, the architecture of the testing environment is built with two physical application tiers and two database tiers. The architectural components of Oracle Applications are seen in Figure 26.

Figure 26. Oracle Applications architecture

Working with FAST and Oracle Applications

Because of the diversity of Oracle Applications, in that there are hundreds of different modules within a single product, deploying them appropriately on the right tier of storage is a daunting task. Implementing them in a VMware environment that utilizes VMAX3 with FAST technology enables a customer to achieve proper performance and cost savings at the same time.

FAST provides the ability to deliver variable performance levels through Service Level Objectives (SLO). Thin devices can be added to a storage group and the storage group can be assigned a specific Service Level Objective to set performance expectations. FAST monitors the storage group's performance relative to the Service Level Objective and automatically and non-disruptively relocates data to maintain a consistent performance level. FAST runs entirely within HYPERMAX OS, the storage operating environment that controls components within the array.

FAST elements

There are five main elements that comprise FAST on VMAX3 arrays. These are graphically depicted in Figure 27:

- Disk groups
- Data pools
- Storage Resource Pools (SRP)
- Service Level Objectives (SLO)
- Storage groups

Figure 27. FAST elements

SLO - defines an expected average response time target for an application.

Storage group - a logical collection of VMAX3 devices that are to be managed together, and may constitute a single application. Storage group definitions are shared between FAST and auto-provisioning groups.

SRP - a collection of data pools constituting a FAST domain.

Data pool - a collection of data devices of identical emulation and protection type, all of which reside on disks of the same technology type and speed. The disks in a data pool are from the same disk group.

Disk group - a collection of physical drives within the array that share the same performance characteristics, which are determined by rotational speed (15K, 10K, and 7.2K), technology (SAS, flash SAS), and capacity.

Storage groups can be created, managed, and assigned an SLO by using either Unisphere or the Solutions Enabler Command Line Interface (SYMCLI). The other FAST elements, including the available SLOs, are factory configured and cannot be modified by the user. FAST manages the allocation of new data within the SRP by automatically selecting an SRP based on available disk technology, capacity, and RAID type. If a storage group has an SLO, FAST automatically changes the ranking of the SRPs used for initial allocation. If the preferred drive technology is not available, allocation reverts to the default behavior and uses any available SRP for allocation. FAST enforces SLO compliance within the SRP by restricting the available technology allocations. For example, the Platinum SLO cannot have allocations on 7K RPM disks within the SRP. This allows FAST to be more reactive to SLO changes and ensure critical workloads are isolated from lower performance disks.

Note: For more information on FAST and SLOs see EMC VMAX3 Service Level Provisioning with Fully Automated Storage Tiering (FAST) on support.emc.com.

Oracle Applications Tablespace Model

For this paper, the latest Oracle Applications release 12 was installed and configured. There are a few benefits to using the latest release but most importantly, unlike previous Oracle Application releases, Oracle no longer uses two tablespaces, and hence at least two datafiles, per application. Prior to release 12, each Applications module had its own set of tablespaces and datafiles, one for the data and one for the index. With over 200 schemas, managing a database of over 400 tablespaces and datafiles was, and is, a sizable undertaking. The new approach that Oracle uses in release 12 is called the Oracle Applications Tablespace Model, or OATM. OATM is similar to the traditional model in retaining the system, undo, and temporary tablespaces. The key difference is that Applications products in an OATM environment share a much smaller number of tablespaces, rather than having their own dedicated tablespaces. Applications schema objects are allocated to the shared tablespaces based on two main factors: the type of data they contain, and I/O characteristics such as size, life span, access methods, and locking granularity.

For example, tables that contain seed data are allocated to a different tablespace from the tables that contain transactional data. In addition, while most indexes are held in the same tablespace as the base table, indexes on transaction tables are held in a single tablespace dedicated to such indexes. As will be explained, OATM is a perfect match for the FAST technology because despite the data being consolidated into fewer datafiles, FAST is able to separate it on the back end based on how it is accessed.

Oracle Applications implementation

A customer implementation of Oracle Applications is not a quick process. The installation itself is only the first part of what can be an endeavor lasting many months or longer. Although there are almost 200 application modules in Oracle Applications, customers rarely, if ever, use all of them. They use a selection of them, or perhaps a bundle such as Financials or CRM. These modules are then implemented (typically) in a phased approach. The transition from an existing applications system or implementation of a new system takes time. How that system eventually will be used, and more importantly how that database will be accessed, presents a real challenge for the system administrator and database administrator. These individuals are tasked with providing the right performance for the right application at the right price. In other words, both performance optimization and cost optimization are extremely important to them; however, obtaining the balance between the two is not an easy task. This is made even more difficult under the new Oracle Applications Tablespace Model. Oracle's new model certainly does a good job at high-level database object consolidation, with fewer tablespaces and fewer datafiles, but since the application modules are no longer separated into individual tablespaces and datafiles, there really is no practical way to put different modules on different tiers of storage until now. FAST is the perfect complement to the manner in which Oracle implements the database in release 12 of the Oracle application suite. In fact, the entire database of user data can be placed on a single mount point on a VMware virtual disk and yet still be spread over the appropriate disk technologies that match the business requirements. The simplicity of this deployment model is enabled by FAST and Service Level Objectives.

Oracle Applications deployment

The Oracle Applications VMware environment deployed in this study consists of two ESXi 6.0 U1 servers with a total of five virtual machines listed in Table 1. The environment is managed by a VMware vCenter Server. Figure 28 is a high-level visual representation of the environment. Note that not all hardware components are included.

Table 1. Example environment

- RAC Database Tier Node 1: dsib2233, VMware VM, OEL 64-bit, 4 CPUs, 16 GB RAM, SAN disk
- RAC Database Tier Node 2: dsib2234, VMware VM, OEL 64-bit, 4 CPUs, 16 GB RAM, SAN disk
- Applications Tier 1: dsib0242, VMware VM, OEL 64-bit, 4 CPUs, 12 GB RAM, SAN disk
- Applications Tier 2: dsib0243, VMware VM, OEL 64-bit, 4 CPUs, 12 GB RAM, SAN disk
- Production vCenter: dsib2113, VMware VM, Windows 64-bit, 2 CPUs, 12 GB RAM, SAN disk
- EMC VMAX3 (R1): VMAX 200K, HYPERMAX OS 5977 microcode, SATA/FC/EFD drives
- EMC VMAX3 (R2): VMAX 200K, HYPERMAX OS 5977 microcode, SATA/FC/EFD drives
- EMC VMAX3 (Witness): VMAX 400K, HYPERMAX OS 5977 microcode, SATA/FC/EFD drives

Hardware layout

Figure 28. Physical/virtual environment diagram

FAST configuration for Oracle Applications

In each VMAX3 array there are 3 disk technologies: EFD, FC, and SATA. The disks are automatically configured in the factory into disk groups and then thin pools. The disks together comprise a Storage Resource Pool:

- Group 1: FC RAID 1
- Group 2: SATA RAID-6 (6+2)
- Group 3: EFD RAID

The screen capture in Figure 29 shows the content of the Storage Resource Pool.

Figure 29. Disk group configuration for the SRP

The 3 disk technologies enable the use of all the SLOs seen in Figure 30.

Figure 30. Service Level Objectives

Because of the varied workloads that an Oracle Applications environment may experience, from order entry to running reports, the assigned SLO is Optimized. The Optimized SLO offers an optimal balance of resources and performance across the whole SRP, based on I/O load, type of I/O, data pool utilization, and available capacities in the pools.

It will place the most active data on higher performing storage and least active data on the most cost-effective storage. If a data pool's capacity or utilization is stressed, FAST will attempt to alleviate it by using other pools.

FAST and SRDF/Metro

FAST shares performance statistics across the arrays in a typical active/passive SRDF configuration to ensure the workload is properly represented on the remote site (R2) in case of failover. This statistic sharing has been extended to an SRDF/Metro configuration, with both the R1 and R2 sending statistics across to the other array. This sharing is particularly important since the R1 and R2 are both in an active state and users accessing the same application may be balanced across hosts on each side of the cluster. The sharing ensures that the whole workload (R1+R2) is represented. The sharing of statistics has no bearing on the SLO selection for storage groups that are participating in an SRDF relationship. Even in an SRDF/Metro configuration, the R1 and R2 devices can be in different SLOs. While it is possible to have a different SLO for each site in an SRDF/Metro configuration, as the Oracle Applications are balanced across both application and database nodes, and thus sites, it is recommended that both storage groups have the same SLO of Optimized. This will ensure a balanced performance experience whether the user is executing I/O against the R1 or R2.

Oracle Real Application Clusters (RAC) on SRDF/Metro

Oracle deployments on virtual hardware should be essentially no different than on physical hardware. In other words, you can achieve all the scaling benefits of the hypervisor without being concerned that Oracle will run differently than when run on a physical host. Like their physical counterparts, virtual deployments should follow best practices for Oracle databases. Other recommendations mirror physical deployments and can be found at the VMware website. The References section includes some of these documents.

Extended RAC and Oracle Clusterware

Oracle Extended RAC is a deployment model for Oracle Real Application Clusters in which the server nodes reside in physically separate locations. Such a deployment has strict software and hardware requirements but can be enabled with such advanced technologies as SRDF/Metro. Extended RAC maintains the strict latency requirements and disk sharing. Such a deployment is typically in a metro or campus environment. SRDF/Metro provides the perfect solution for Extended RAC and is fully supported by Oracle. One of the benefits of SRDF/Metro is the avoidance of a third site for the Oracle voting disks, which is usually a requirement of Extended RAC. A third site prevents a split-brain situation where the Oracle nodes are unaware there has been an interconnect issue and both sides continue to receive I/O.

An SRDF/Metro configuration does not need the third site because it uses a Witness in a third-site fault domain, as discussed previously in this paper. The behavior of Extended RAC with an SRDF/Metro Witness is as follows:

- If there is an interconnect failure between Oracle RAC nodes that is unrelated to SRDF, Oracle Clusterware will reconfigure based on majority rules and access to the voting disk.
- If there is an SRDF link failure or a site failure, SRDF will initiate bias preference rules with Witness guidance and continue I/O at the designated site. The Oracle RAC node(s) accessing storage at the surviving SRDF/Metro site will still have access to the voting disks and therefore Oracle Clusterware will reconfigure the cluster in accordance with this.

Oracle RAC and Oracle E-Business Suite

The Oracle E-Business Suite implementation (default demo database VIS) used in this paper installs as a single instance on a single node. If the user desires to use RAC, he must convert the database from single to multiple instances. For standalone databases this is not a trivial task, but for E-Business Suite environments it is even more complicated. Oracle provides a number of support notes to guide the user through the implementation. There are two components involved: the applications and the database. Because Oracle Applications is integrally tied to the database, it requires an upgrade just like the database. There are numerous patches and configuration changes that must be completed before the database can be moved to RAC. The Oracle notes are extensive, detailed, and version specific for the applications and the database. They are also frequently updated with new information. It is therefore impossible to properly explain the process used in this environment beyond the high-level one already mentioned. The support notes used in this upgrade are included in the References for the user's reference. The focus here will be on one possible upgrade path to RAC in an Oracle E-Business Suite environment. Oracle provides for the conversion to RAC on a number of different paths. The one used in this paper was a separate install of the binaries (grid and database), a conversion to ASM, and an upgrade of the applications database both to the new database version and to Extended RAC. The paper will address the pertinent component, which is the installation of RAC on VMware with VMAX3.

Oracle Real Application Clusters (RAC) on VMFS

RAC on VMware can be configured with either Raw Device Mappings (RDMs) or vmdks. For this solution, vmdks were chosen to reduce complexity. As VMware is moving away from RDMs, given the advent of Virtual Volumes (VVols) in vSphere 6, VMFS was used. VMFS has the added benefit of simplified management. Using RAC on VMware requires that certain steps are taken to enable the virtual machines running as RAC nodes to access the same vmdks. Oracle has a clustering technology, Oracle Clusterware (in 12c, Oracle Clusterware and Automatic Storage Management (ASM) form the Oracle Grid Infrastructure), which contains the intelligence to ensure that two or more RAC nodes can access the same disk without causing data issues.

While VMware will allow multiple VMs to access the same vmdk when used for such technologies as VMware Fault Tolerance (FT), where only one VM is writing, by default it does not permit multiple virtual machines to write to the same vmdk. If such a restriction were not in place, VMware could not guarantee any order of writes coming from the virtual machines, which could very easily lead to data corruption and loss. VMware provides the means to override this protection through the multi-writer flag, which is set on a vmdk-by-vmdk basis. The process for this can be found in a VMware KB article; VMware specifically calls out third-party clustering software like RAC for this type of change. One of the prerequisites for setting the flag is that the Oracle vmdks designated for the database need to be created as type eagerzeroedthick. This is in line with VMware's best practices for Oracle databases on VMware.

Oracle Clusterware deployment

Oracle Clusterware and Automatic Storage Management (ASM) are deployed on SRDF/Metro devices and, as previously discussed, without the need for a third site for the voting disks. EMC recommends deploying the Clusterware files, OCR and voting, on their own ASM disk group using Normal redundancy, while using External redundancy for the other disk groups.

Oracle Extended RAC installation

The following are the basic steps undertaken when configuring Oracle RAC on VMware for this environment. These steps were taken on each of the two nodes in the RAC environment. As this is being done on an SRDF/Metro environment, it is by default an Extended RAC configuration. The steps below assume that SRDF/Metro devices have been presented to both ESXi hosts in the HA cluster, after full synchronization. Note that in the environment in this paper two RAC nodes were used and that they were installed independently and with independent software homes. There are a number of options available when installing RAC nodes, particularly on VMware. Users may choose to install some components on a single node (e.g. OS, disks, etc.) and then create a template from it. Using that template they can create multiple RAC nodes. At that point they may choose to use a shared home for the Oracle binaries. These options are strictly preferences of the user doing the install and do not impact the functionality of the cluster, though they can make patching easier. Create VMFS on each of the devices presented from the VMAX3. There will be some for the OS and binaries and others for the ASM setup for the Oracle RAC database. Though SRDF/Metro makes it perfectly possible to use a shared home for Oracle, in this installation separate homes were configured, providing redundancy for each node independently since each host sees the datastores in use.

Procedure

1. Configure an NTP client for use in the environment.

2. Create a VM with the Guest OS set to Oracle Linux 4/5/6 (64-bit). Create a single vmdk to hold the OS, and another vmdk to hold the Oracle Application binaries. In this setup, these vmdks reside on separate datastores on SRDF/Metro.
3. Add a second NIC. Be sure both NICs are set to VMXNET3. These NICs should be on VMware networks configured on their own physical adapter.
4. Install the OS and all necessary packages. Note that Oracle now offers an E-Business Suite Pre-Install RPM which contains all the necessary packages to run Oracle E-Business Suite.
5. Install Oracle Enterprise Linux, VMware Tools, and the ASMLIB components. Configure ASMLIB.
6. Configure NTP on the node.
7. Create vmdks for the Oracle database and CRS/voting disks. The configurations may vary. For this environment multiple disks (TDEVs) were created for the cluster files, data, redo, and archive (FRA). All vmdks need to be eagerzeroedthick (EZT), otherwise there will be issues such as an inability to restart the VM. The disks should be assigned to a Paravirtual SCSI controller. Be sure the controller is set to the default of None for SCSI Bus Sharing or vMotions will not be possible. (A sketch of steps 7 through 10 appears after this procedure.)
8. Use the VMware KB article to set the multi-writer flag on the vmdks, allowing more than one node to access the disk.
9. Using fdisk, configure a single partition on all the disks. It is important to align the partition on the disk since neither Oracle nor VMware will do this automatically. VMware ensures the VMFS is aligned, but not the file systems on vmdks. On the VMAX3, a 128 KB offset should be utilized, as in this environment.
10. Use oracleasm to create the ASM disks on one of the hosts. Then use oracleasm to rescan on the second host and discover the ASM disks.
11. Install and configure Oracle Grid Infrastructure. There are many prerequisites that must be completed before installation. Oracle uses the Cluster Verification Tool (CVU) to ensure the nodes are ready for installation. The Clusterware install will use a single ASM disk group during installation. Set this disk group to Normal redundancy, which will ensure multiple voting disks, as Oracle automatically determines the number of files based on redundancy. This ASM disk group need not be large as the Clusterware files are small.
12. Create the remaining ASM disk groups using the ASM GUI or ASM CLI, being sure to set redundancy to External. Oracle and EMC both recommend using this redundancy due to the superior availability of the VMAX3. The Clusterware disk group is the only one requiring Normal redundancy, for previously noted reasons.
13. Install the Oracle database binaries on both nodes at once through the installer.
14. Following the various Oracle technical notes, take the following steps:
   a. Upgrade the E-Business Suite database from 11g to 12c.
   b. Convert the E-Business Suite database from file to ASM.
   c. Upgrade the E-Business Suite database to RAC.
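The following is a hedged sketch of what steps 7 through 10 might look like from the ESXi shell and the guest OS. The datastore names, vmdk paths, SCSI IDs, device names, and ASM disk labels are all illustrative, and the exact procedure should be verified against the VMware KB article and Oracle documentation referenced above:

    # Step 7 (ESXi shell): create an eagerzeroedthick vmdk for shared ASM data
    vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/DATA_1/rac_shared/data01.vmdk

    # Step 8: multi-writer flag, added for the shared disk in each RAC node's VM
    # configuration (.vmx), for example:
    #   scsi1:0.fileName = "/vmfs/volumes/DATA_1/rac_shared/data01.vmdk"
    #   scsi1:0.sharing  = "multi-writer"

    # Step 9 (guest OS): create a single aligned partition (128 KB offset = sector 256
    # with 512-byte sectors) using fdisk in sector mode, then verify the start sector
    fdisk -lu /dev/sdc

    # Step 10 (guest OS): create the ASM disk on node 1, then discover it on node 2
    oracleasm createdisk DATA01 /dev/sdc1        # node 1
    oracleasm scandisks && oracleasm listdisks   # node 2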

Networking

The network configuration used in this environment was limited to two physical NICs on the ESXi hosts. A best practice would be to separate the management network, the vMotion network, and the Oracle public and private networks. Figure 31 shows the configuration with just two NICs.

Figure 31. vSphere networking for Oracle RAC

After the Oracle Applications database is converted to RAC, the database administrator will be able to run a successful status check that should yield results similar to Figure 32. It displays the status of an Extended RAC node after installation and conversion.
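Such a check can be run with the standard Oracle Clusterware and srvctl utilities; the database name below is the VIS demo database referenced earlier in this paper, and output will vary by environment:

    # Verify the Clusterware stack on all nodes
    crsctl check cluster -all

    # Verify that one database instance is running on each RAC node
    srvctl status database -d VIS

    # Confirm the database is registered as a RAC (multi-instance) database
    srvctl config database -d VIS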

Figure 32. Oracle RAC status on production node

VMware Cluster Configuration with SRDF/Metro

This section will specifically focus on the best practices when running a vMSC with SRDF/Metro. Figure 33 shows the recommended cluster configuration for VMware deployments that leverage devices presented through EMC SRDF/Metro with a Witness employed. It can be seen from the figure that VMware vSphere consists of a single VMware cluster. The cluster includes two VMware ESXi hosts, with one at each physical data center (referred to here as New York and New Jersey). If site HA is also desired, two or more ESXi hosts could be configured at each site. Also shown in the figure, as an inset, are the settings for each cluster. The inset shows that VMware DRS and VMware HA are active in the cluster and that VM Monitoring is activated.

Figure 33. Setting vSphere DRS and HA services

vSphere HA

As the main business driver of SRDF/Metro and vSphere Metro Storage Cluster is high availability, it is important to ensure that server resources exist to fail over to a single site. Therefore the Admission Control policy of vSphere HA should be configured for 50% CPU and 50% memory. This is demonstrated in Figure 34.

Figure 34. vSphere HA Admission Control

Both VMware and EMC recommend using a percentage-based policy for Admission Control with HA as it is flexible and does not require changes when additional hosts are added to the cluster.

Heartbeating

vSphere HA uses heartbeat mechanisms to validate the state of a host. There are two different types of heartbeating:

- network (primary)
- datastore (secondary)

If vSphere HA fails to determine the state of the host with network heartbeating, it will then use datastore heartbeating. If a host is not receiving any heartbeats, it uses a fail-safe mechanism to detect if it is merely isolated from its master node or completely isolated from the network. It does this by pinging the default gateway. It is possible to configure additional isolation addresses in case the gateway is down. VMware recommends specifying a minimum of two additional isolation addresses, with each address being site local. This is configured in the Advanced Options of vSphere HA using the option name das.isolationaddressX, where X is incremented for each address (Figure 35).

HA using the option name das.isolationaddress.x (x is incremented for each address): Figure 35. Adding network isolation addresses for vSphere HA For the heartbeat mechanism, the minimum number of heartbeat datastores is two and the maximum is five. VMware recommends increasing the number of heartbeat datastores from two to four in a stretched cluster environment. This provides full redundancy for both data center locations. Defining four specific datastores as preferred heartbeat datastores is also recommended. To increase the minimum number of heartbeat datastores, in the Advanced Options of vSphere HA add a new option named das.heartbeatDsPerHost, depicted in Figure 36.
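For reference, the cluster-level HA advanced options discussed above take the form of simple name/value pairs. The following is an illustrative set, in which the IP addresses are placeholders only and one isolation address is chosen local to each site:

    das.isolationaddress0 = 10.10.81.1     # example New York site-local address
    das.isolationaddress1 = 10.10.82.1     # example New Jersey site-local address
    das.heartbeatDsPerHost = 4             # raises the minimum heartbeat datastores, as in Figure 36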

Figure 36. Change minimum heartbeating datastores Once the minimum heartbeat datastores are configured, under Datastore for Heartbeating select the radio button Use datastores from the specified list and complement automatically if needed. Then select the four specific datastores to use in the configuration. The recommended setup is depicted in Figure 37.

Figure 37. Modify heartbeat datastore selection policy Polling time for datastore paths By default, VMware polls for new datastore paths every 300 seconds. In an SRDF/Metro environment using NMP, EMC recommends changing this to 30 seconds to avoid the need to manually rescan after presenting the R2 devices. The value that requires changing is Disk.PathEvalTime and it must be changed on each ESXi host, as demonstrated in Figure 38.
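The same change can also be made from the ESXi command line rather than the vSphere Web Client; a minimal sketch, to be repeated on each host in the cluster:

    # show the current path evaluation interval (default is 300 seconds)
    esxcli system settings advanced list -o /Disk/PathEvalTime
    # set the interval to 30 seconds per the recommendation above
    esxcli system settings advanced set -o /Disk/PathEvalTime -i 30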

Figure 38. Changing path evaluation time All Paths Down (APD) and Permanent Data Loss (PDL) An additional, important feature that should be addressed when enabling vSphere HA is Host Hardware Monitoring, or VM Component Protection. This relates to the conditions All Paths Down and Permanent Data Loss and will now be discussed before detailing affinity rules. All Paths Down, or APD, occurs on an ESXi host when a storage device is removed in an uncontrolled manner from the host (or the device fails), and the VMkernel core storage stack does not know how long the loss of device access will last. VMware, however, assumes the condition is temporary. A typical way of getting into APD would be if the zoning was removed. Permanent device loss, or PDL, is similar to APD (and hence why initially VMware could not distinguish between the two) except that it represents an unrecoverable loss of access to the storage. VMware assumes the storage is never coming back. Removing the device backing the datastore from the storage group would produce the error. VMCP vSphere 6 offers some new capabilities around APD and PDL for the HA cluster which allow automated recovery of VMs. The capabilities are enabled through a new feature in vSphere 6 called VM Component Protection, or VMCP. When VMCP is enabled, vSphere can detect datastore accessibility failures, APD or PDL, and then recover affected virtual machines. VMCP allows the user to determine the response that vSphere HA will take, ranging from the creation of event alarms to virtual machine restarts on other hosts. VMCP is enabled in the vSphere HA edit screen of the vSphere Web Client, shown previously in Figure 33. Check the box for Protect against Storage Connectivity Loss, demonstrated in Figure 39.

Figure 39. Enabling VMCP Once VMCP is enabled, storage protection levels and virtual machine remediation can be chosen for APD and PDL conditions, as shown in Figure 40.

Figure 40. Storage and VM settings for VMCP In vmsc Fibre Channel configurations, the Host isolation setting should remain as Disabled, which means the VMs remain powered on. If iSCSI is being used, VMware recommends setting Host isolation to power off the VMs. PDL VMCP settings The PDL settings are the simpler of the two failure conditions to configure. This is because there are only two choices: vSphere can issue events, or it can initiate power off of the VMs and restart them on the surviving host(s). vmsc recommendations for PDL As the purpose of HA is to keep the VMs running, the default choice should always be to power off and restart. Once the option is selected, the table at the top of the edit settings is updated to reflect that choice. This is seen in Figure 41.

54 Figure 41. PDL datastore response for VMCP PDL AutoRemove PDL AutoRemove is a feature that was first introduced in vsphere 5.5. This feature automatically removes a device from a host when it enters a PDL state. Because vsphere hosts have a limit of 255 disk devices per host, a device that is in a PDL state can no longer accept I/O but can still occupy one of the available disk device spaces. Therefore, it is better to remove the device from the host. PDL AutoRemove occurs only if there are no open handles left on the device. The autoremove takes place when the last handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it will be treated as a new device. In such cases VMware does not guarantee consistency for VMs on that datastore. In a vmsc environment, such as with SRDF/Metro, VMware recommends that AutoRemove be left in the default state, enabled. In an SRDF/Metro environment this 54

55 is particularly important because if it is disabled, and a suspend action is taken on a metro pair(s), the non-biased side will experience a loss of communication to the devices that can only be resolved by a host reboot. For more detail on AutoRemove refer to VMware KB APD VMCP settings As APD events are by nature transient, and not a permanent condition like PDL, VMware provides a more nuanced ability to control the behavior within VMCP. Essentially, however, there are still two options to choose from: vsphere can issue events, or it can initiate power off of the VMs and restart them on the surviving host(s) (aggressively or conservatively). The output in Figure 42 is similar to PDL. Figure 42. APD datastore response for VMCP 55

56 If issue events is selected, vsphere will do nothing more than notify the user through events when an APD event occurs. As such no further configuration is necessary. If, however, either aggressive or conservative restart of the VMs is chosen, additional options may be selected to further define how vsphere is to behave. The formerly grayed-out option Delay for VM failover for APD is now available and a minute value can be selected after which the restart of the VMs would proceed. Note that this delay is in addition to the default 140 second APD timeout. The difference in approaches to restarting the VMs is straightforward. If the outcome of the VM failover is unknown, say in the situation of a network partition, then the conservative approach would not terminate the VM, while the aggressive approach would. Note that if the cluster does not have sufficient resources, neither approach will terminate the VM. In addition to setting the delay for the restart of the VMs, the user can choose whether vsphere should take action if the APD condition resolves before the user-configured delay period is reached. If the setting Response for APD recovery after APD timeout is set to Reset VMs, and APD recovers before the delay is complete, the affected VMs will be reset which will recover the applications that were impacted by the I/O failures. This setting does not have any impact if vsphere is only configured to issue events in the case of APD. VMware and EMC recommend leaving this set to disabled so as to not unnecessarily disrupt the VMs. Figure 43 highlights the additional APD settings. Figure 43. Additional APD settings when enabling power off and restart VMs 56
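For reference, the 140-second APD timeout noted above and the PDL AutoRemove behavior discussed earlier are surfaced as per-host ESXi advanced settings and can be verified from the command line. A short sketch follows; no change beyond the defaults already described is being recommended:

    # APD timeout that runs before the VMCP failover delay (default 140 seconds)
    esxcli system settings advanced list -o /Misc/APDTimeout
    # PDL AutoRemove behavior (1 = enabled, the default state recommended for vmsc)
    esxcli system settings advanced list -o /Disk/AutoremoveOnPDL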

vmsc recommendations for APD As previously stated, since the purpose of HA is to maintain the availability of VMs, VMware and EMC recommend setting APD to power off and restart the VMs. Depending on the business requirements, the 3-minute default delay can be adjusted higher or lower. If either the Host Monitoring or VM Restart Priority settings are disabled, VMCP cannot perform virtual machine restarts. Storage health can still be monitored and events can be issued, however. VMware VM/Host Groups and VM/Host Rules To take full advantage of the HA cluster failover capability in an SRDF/Metro cluster that employs a Witness, it is necessary to create Groups and then Rules that will govern how the VMs will be restarted in the event of a site failure. Setting these up is a fairly simple procedure in the vSphere Web Client. This will allow restarting of VMs in the event of a failure at either New York or New Jersey. To access the wizards to create Host Groups and Rules, highlight the cluster (SRDF_Metro_Cluster) and navigate to the Manage tab and then the Settings sub-tab. This is shown in Figure 44. Note that, as required, the DRS automation level is already configured as Fully automated to permit VMware to move the virtual machines as necessary, as previously seen in Figure 33.

Figure 44. Locating DRS Groups and Rules For VM/Host Groups, one should create two VM Groups and two Host DRS Groups. In Figure 45 four groups have been created. New_York_VM_Group contains those VMs associated with the New York ESXi host, and New_York_Host_Group contains that single host. The New Jersey setup is similar.

Figure 45. Creation of groups for hosts and virtual machines Hosts and VMs are added respectively to each group and this is seen in Figure 46.

Figure 46. Adding hosts and virtual machines to appropriate groups Now that the groups are in place, rules need to be created to govern how the groups should behave when there is a site failure. There are two rules, one that applies to New York and one that applies to New Jersey. The rule New_York_VM_Rule, seen in Figure 47, dictates that the VMs associated with New York (through the group) should run on the hosts associated with New York (again through the group), and vice versa for New Jersey's VMs. It is important that the condition for both rules is should run and not must run, since this gives flexibility for the VMs to start up on the other node if needed. Each rule will permit the VMs associated with the failing cluster to be brought up on the two hosts that are part of the site that did not fail, and most importantly, to automatically migrate back to their original hosts when the site failure is resolved. Also note that by default vSphere HA ignores the rules during failover; if this setting is not left at ignore (the default), vSphere would not bring up the failed site's VMs on the surviving host.

Figure 47. Creation of VM/Host rules for groups These configurations, along with the SRDF/Metro Witness, will ensure that a site failure will not completely bring down the environment. Best practices with VMware HA and SRDF/Metro Follow these best practices when configuring SRDF/Metro in a campus environment: Configure the front end with a stretched layer-2 network so that when a virtual machine moves between sites, its IP address can stay the same.

Use host affinity rules. Host affinity rules keep virtual machines running in the preferred site as long as the virtual machines can run there, and only move the virtual machines to the non-preferred site if they cannot run in the preferred site. Use a Witness with SRDF/Metro, and not bias, to provide the best availability. Viewing VMAX3 in vSphere When the environment is fully configured, EMC provides the capability to get detailed information about the VMAX3 devices that back the VMware datastores. This is accomplished through EMC Virtual Storage Integrator. EMC Virtual Storage Integrator The mapping of the VMware canonical device and VMFS to EMC VMAX devices is a critical component when using EMC VMAX-based storage software. To aid in this, EMC provides a plug-in called EMC Virtual Storage Integrator for VMware vSphere, also known as VSI. This free tool, available for download at support.emc.com, adds capabilities to the vSphere Web Client so that users may view detailed storage-specific information. The VSI Web Client runs as a virtual appliance, and a VSI plug-in is installed in the vSphere Web Client and is thus accessible from any browser. The VSI Web Client provides simple storage mapping functionality for EMC storage-related entities including datastores, RDMs, and SCSI and iSCSI targets. The storage information is displayed within the various panels of the vSphere Web Client. VSI provides a view at the datastore level and, for a more global view, at the host level. The host level, seen in Figure 48, has important mapping information such as the runtime name of the device that is mapped to a particular datastore, along with the array SID, model, HYPERMAX OS version, and pathing ownership.
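Independent of VSI, the same datastore-to-device mapping can be confirmed from the ESXi shell. A minimal sketch follows; the naa identifier shown is a placeholder rather than a device from this environment (VMAX3 devices present naa identifiers beginning with 60000970):

    # list each VMFS datastore with the canonical (naa) device backing its extent
    esxcli storage vmfs extent list
    # display details for a specific device, such as vendor, display name, and size
    esxcli storage core device list -d naa.60000970xxxxxxxxxxxxxxxxxxxxxxxx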

Figure 48. VSI host level view By selecting any datastore, the user can follow the hyperlink to the datastore detail, where the SLO, storage group, device ID, SRP, and device type are available, among other fields. An example from the datastore DATA_1 is shown in Figure 49.

Figure 49. VSI datastore view Conclusion EMC SRDF/Metro running the HYPERMAX operating system is an enterprise-class technology that dissolves distance by providing active/active access to dispersed VMAX3 arrays, enhancing availability and mobility. Using SRDF/Metro in conjunction with VMware and Oracle availability technologies, VMware HA and RAC, provides new levels of availability suitable for the most mission-critical environments without compromise. These technologies provide the basis by which a customer can ensure high availability at both the hardware and software level through the nature of SRDF/Metro, vSphere HA and Oracle Extended RAC.
References
EMC
Deployment Best Practice for Oracle Database with VMAX3 Service Level Objective Management
Using EMC Symmetrix Storage in VMware vSphere Environments TechBook
SRDF/Metro vmsc support VMware KB article
EMC VMAX3 SRDF/Metro Overview and Best Practices
Using EMC SRDF Adapter for VMware vCenter Site Recovery Manager
The following are available on the Support.EMC.com website:
Oracle Databases on EMC Symmetrix Storage Systems TechBook
VSI Product Guide
Unisphere for VMAX Product Guide
VMware
The following are available at the VMware.com website:
vSphere Storage ESXi
vSphere Installation and Setup Guide
Virtual Machine Administration Guide
Oracle Databases on VMware Best Practices Guide
VMware vSphere Metro Storage Cluster Recommended Practices
Oracle
The following are available at the support.oracle.com website:
Oracle Applications Installation Guide: Using Rapid Install Release 12
Oracle Applications Concepts Release 12
Oracle E-Business Suite Installation and Upgrade Notes Release 12 (12.2) for Linux x86-64 (Doc ID )
Oracle Database 12c Upgrade Guide
Oracle E-Business Suite Release Notes, Release 12.2 (Doc ID )
Database Initialization Parameters for Oracle E-Business Suite Release 12 [ID ]
Using Oracle 12c Release 1 (12.1) Real Application Clusters with Oracle E-Business Suite Release R12.2 (Doc ID )
Interoperability Notes Oracle EBS 12.2 with Oracle Database 12c Release 1 (Doc ID )

67 Appendix A This appendix contains procedures which can be referenced for future use once a vmsc with SRDF/Metro is operational. SRDF/Metro maintenance Adding new pairs If maintenance is required on the existing SRDF/Metro configuration, for example adding new pairs to the group, there are restrictions in the GA release which prevent doing this online. For example, once an SRDF group has Metro pairs, one cannot simply add another pair to the group. The group must first be suspended. 6 If it is determined that changes are needed, the following steps should be followed in the vmsc environment. Use of the vsphere Web Client and Unisphere for VMAX is recommended, however it is possible to use command line interfaces such as Solutions Enabler. Note that during the maintenance procedure, one side of the HA environment is unavailable, i.e. there is no HA. If the business cannot tolerate any potential disruption to the HA environment, it is advisable to create a new SRDF/Metro group with the new pairs. This avoids any downtime for one side of the Metro. Before beginning the maintenance procedure, be sure to check on which array the bias resides. The bias will always be the R1. This can be determined through Unisphere or Solutions Enabler. 1. If necessary, change the bias to the non-maintenance site. 2. Change DRS Automation to Manual to prevent automatic movement of VMs to the R2. 3. Move all VMs from the maintenance side (R2) to the R1 hosts. 4. Unmount the datastores on the R2 ESXi hosts. Although this step is not required, it is recommended to ensure no VMs are still running on the datastores. Note that if any of the datastores are being used for the HA heartbeat, a warning will be shown which can be ignored. In order to avoid this warning, alternative datastores can be specified for the HA heartbeat before unmounting. 5. Delete the masking view for the R2. The devices should be unmapped/unmasked from the hosts before suspending the group. 6. Rescan the R2 ESXi hosts so they no longer see the devices. 7. Suspend the devices in the SRDF group. 6 If expanding a device, however, the pair must be removed after suspension. This is not unique to Metro as any SRDF mode requires the pair to first be dissolved and the R1, R2 expanded independently. 67

68 8. Add pairs to the SRDF group (NR device, suspended link state) and the R1 and R2 storage groups. 9. Re-establish pairs. 10. Once synchronized (ActiveBias or ActiveActive state) re-create the R2 view. 11. Rescan the R2 ESXi hosts which will automatically discover the datastores and mount them or wait for them to be recognized. 12. Move VMs back to R2 hosts manually and enable DRS or allow DRS to move VMs automatically based on rules. VM mobility One of the benefits of vmsc is VMs can be moved from one site to the other by simply relocating compute resources since the virtual device is the same on both sides of the cluster. Although DRS will automatically handle the placement and movement of VMs based upon resources as well as any host rules in place (see VMware VM/Host Groups and VM/Host Rules), it may be necessary to manually relocate VMs. The process of migration is no different in a Metro cluster than a typical VMware cluster. Though migration is a common task documented by VMware, in the interest of completeness it is included here in a series of steps. 1. Start by setting DRS from Fully Automated to Manual so that the VMs will not be re-migrated. 2. Select the VM for migration, and choose Migrate. 68

3. Select Change compute resource only. 4. Select the host in the other metro site.

70 5. Review the network map. 6. Select the priority. 70

71 7. Complete the migration. As VMware does not have to change the storage of the VM, the migration completes relatively quickly. Appendix B SRDF/Metro with VMware Site Recovery Manager This appendix covers using SRDF/Metro with VMware Site Recovery Manager (SRM) in a stretched storage configuration. It is included here as it is based upon the environment in this paper with minor adjustments to meet the requirements of SRM stretched storage setup. The example below assumes a detailed knowledge of using the SRDF SRA with SRM and therefore will only cover the new stretched storage capability available with VMware SRM 6.1 in conjunction with SRDF SRA 6.1. If additional information on the workings of VMware SRM with the SRDF SRA is required, please refer to the following TechBook: Using EMC SRDF Adapter for VMware vcenter Site Recovery Manager. 71

VMware vCenter Site Recovery Manager 6.1 With the release of VMware vCenter Site Recovery Manager 6.1, VMware adds a new type of functionality to its disaster recovery software, support of stretched storage. At a high level, the VMware SRM software handles the orchestration/automation of disaster recovery between two sites. In a typical disaster recovery configuration, one site is active while the other is passive, and the distance between those sites might be significant (i.e. hundreds of miles or more). By contrast, in a stretched storage configuration, both sites are active and, while the distance between the sites can approach 60 miles or more depending on the network, more commonly the sites are located within a campus (e.g. campus cluster). What SRM 6.1 offers, in conjunction with the SRDF SRA 6.1, is the same capability it offers today but with a stretched storage deployment. Each of the existing SRM functions is supported: Test, Planned Migration, and Disaster Recovery. The rationale for using SRM in stretched storage may be different than a traditional implementation where the secondary site is passive and at a distance that makes clustering impossible. With stretched storage, SRM may be particularly useful in avoiding manual intervention during site maintenance. SRM, in conjunction with the SRDF SRA, provides complete automation during the planned migration. At a high level this involves vMotioning the VMs from one site to the other, suspending the RDF link, and changing the bias to the other site. After maintenance is complete, reprotection is run, followed again by a planned migration if maintenance is required on the other site. In a disaster recovery (DR) event, SRM can again complete the tasks necessary to bring up all VMs on the surviving site. What is most beneficial in an SRM stretched storage configuration is of course the active shared storage. An active/active cluster means that in a planned migration or DR, the datastores are already mounted and available, saving costly time. This is particularly important in a DR scenario where the recovery time objective (RTO) may be very low. As the RTO approaches zero, it is ever more critical to introduce the type of automation SRM offers to avoid human error. Environment In order to use the environment detailed in this paper with SRM and stretched storage, two changes are required. Firstly, the SRDF SRA 6.1 does not support the Witness functionality of SRDF/Metro. All SRDF/Metro pairs must use Bias, and therefore the state of the pairs will be ActiveBias. The second requirement is one that all SRM environments share, and that is the need for two vCenters, not one. SRM must have a protection and recovery site. With stretched storage those vCenters must also be in Enhanced Linked Mode (ELM). The details provided in this appendix are based on the environment in this paper, along with the following additions/updates: HYPERMAX OS 5977 Q SR

73 SRDF/Metro with Bias SRDF SRA 6.1 Solutions Enabler/Unisphere for VMAX 8.2 vsphere 6.0 U2 (vcenters, ESXi) in Enhanced Linked Mode using NMP VMware SRM 6.1 As mentioned above, in order to use this environment with SRM and stretched storage, the ESXi hosts that comprised the vcenter were split apart and each added to their own vcenter in Enhanced Linked Mode. As this effectively removed VMware HA from the solution, a second ESXi host was added to each vcenter with both environments using the same HA setup previously outlined in the paper (vsphere HA). To support the second change, the use of Bias and not Witness, the third array was removed from the original configuration which resulted in the SRDF/Metro pairs defaulting to bias. The updated environment is shown in Table 2. Table 2. Example environment Virtual Machine Name Model OS & Version CPUs RAM (GB) Disk RAC Database Tier Node 1 RAC Database Tier Node 2 dsib2233 dsib2234 VMware VM VMware VM Applications Tier 1 dsib0242 VMware VM Applications Tier 2 dsib0243 VMware VM Production vcenter NY Production vcenter NJ dsib2116 [ESXi hosts dsib1131, 1139] dsib2117 [ESXi hosts dsib1132, 1140] VMware VM VMware VM EMC VMAX3 HK VMAX 200K EMC VMAX3 HK VMAX 200K OEL bit OEL bit OEL bit OEL bit Win bit Win bit HYPERMAX OS 5977 Q HYPERMAX OS 5977 Q SAN 4 16 SAN 4 12 SAN 4 12 SAN 2 12 SAN 2 12 SAN R1 R2 SATA,FC, EFD SATA,FC, EFD Figure 50 demonstrates the Enhanced Linked Mode (ELM) setup, where both vcenters are visible from a single pane. ELM allows cross-vcenter migrations of virtual machines, and hence why it is required for the stretched storage SRM setup. The way ELM is configured is as such: With vsphere 6 vcenter there are two components a Platform Service Controller (PSC) and the vcenter itself. The PSC contains the Single Sign-On (SSO) domain. Each vcenter in a SRM setup that supports stretched storage must share a PSC. While there are a number of supported topologies for deployment, 73

74 VMware recommends that the PSC is separate from either vcenter. 7 Please refer to the VMware documentation in the References section for detailed installation instructions of ELM. Figure 50. Updated vmsc environment for SRM Figure 51 is an updated visual representation of the environment originally presented in Figure 28. Again, not all hardware components are included. 7 VMware publishes a list of recommended topologies which can be found in KB

75 Figure 51. Physical/virtual environment diagram for SRDF/Metro with SRM Configuration SRDF SRA 6.1 The SRDF SRA version 6.1 is required for stretched storage with SRM. Once the sites are paired in SRM, as depicted in Figure 52, install the SRDF SRA on both SRM servers. 75

76 Figure 52. SRM paired sites Within a site in SRM, under the Monitor tab and sub-tab SRA, all installed adapters are listed. Note that in SRM 6.1 there is a new row called Stretched Storage for each SRA. In Figure 53, one can see that the SRDF SRA 6.1 supports stretched storage. Be aware that if the sites are not paired, the Status row will indicate SRM is unable to find the SRDF SRA at the remote site, rather than the OK demonstrated in the figure. 76

77 Figure 53. SRDF SRA 6.1 Stretched Storage - Tags, Categories and Storage Policies Tag and Category Configuring SRM proceeds as normal for most inventory mappings, despite the use of stretched storage. Setup the following mappings along with their reciprocals, just as in any SRM configuration: Network Mappings Folder Mappings Resource Mappings Placeholder Datastores The SRM setup to this point, as one can see, is no different than a typical configuration. Normally the next logical step, therefore, is to create the protection group and then the associated recovery plan. Here, however, is where the paradigm changes and important configuration changes are required for stretched storage. VMware uses VM Storage Policies when setting up the protection groups for stretched storage, not datastore groups which is the normal practice when using the SRDF SRA. Storage policies are most commonly associated with VASA, particular in conjunction 77

78 with VMware Virtual Volumes (VVols). When used with VASA, storage policies are created based on data services. With SRM, however, these storage policies are tagbased. The first requirement therefore is to create a tag. There are 2 steps involved with creating and applying the tag: 1. Create tag and category 2. Assign newly created tag to SRDF/Metro datastores at protection and recovery site Only a single tag is used in a stretched storage configuration. When creating a tag, a new category should be created and associated with that tag as in Figure

79 Figure 54. Tag and category for SRM stretched storage configuration Once created, the tag needs to be applied to each SRDF/Metro datastore. This can be done in bulk, by highlighting the vcenter on the left and selecting the Related Objects tab and then Datastores sub-tab, and then completing the 3 easy steps as shown in Figure

80 Figure 55. Assign tag to SRDF/Metro datastores 80

81 The tag should be applied on both the protection site and the recovery site. After the tag is successfully applied to the datastores, a storage profile can now be created. VM Storage Profile Each vcenter in SRM must have a separate VM storage policy. The process of creation is the same on each vcenter. Start by accessing the storage policies screen through the VMware menu Policies and Profiles. Steps 1 through 4 are demonstrated in Figure 56. Select the protection vcenter, a name and a description. In step 3 the storage profile is defined by a tag-based rule, in this case the tag previously created in Figure 55. In step 4 select Next to move to the next screen. 81

82 Figure 56. Creating a Storage Policy - Steps 1-4 In step 5 in Figure 57 the wizard will list all the compatible datastores. All previously tagged datastores should appear as compatible. 82

83 Figure 57. Creating a Storage Policy - Step 5 After each storage profile is created, a final mapping is required in SRM. If, as in this case, the storage profiles have different names then a manual mapping is required. The mapping appears in Figure 58. If the mapping does not exist, SRM will be unable to fail over correctly. Figure 58. Create Storage Policy Mapping This constitutes the components of the SRM setup. In order for a VM to be part of a protection group, however, it must be assigned the recently created storage profile. 83

84 Applying a Storage Policy to a VM Since a SRM stretched storage protection group relies on storage policies to decide which VMs to include in a protection group, it is necessary to apply that storage policy to the VMs. This is a relatively straightforward process which can be accomplished in any number of screens in vsphere. Figure 59 shows the 3 steps necessary to apply the storage policy to VM dsib0242. In step 1 highlight the VM, select the Manage tab and Policies sub-tab. Here choose the Edit VM Storage Policies button which will bring up another dialog box. In step 2 use the drop-down box and select the newly created storage policy and select Apply to all. In step 3 the policy is applied to all aspects of the VM. Repeat this for the other VMs at the protection site. This process should also be repeated on the recovery site, applying the storage policy created on the recovery site to the VMs running there. This will ensure those VMs will be included in the protection group when a reprotect is run after a planned migration. 84

85 Figure 59. Applying a storage policy to a VM It is a best practice to create device groups for those devices involved in the SRM stretched storage configuration. Failure to create the groups may result in configuration errors in the protection group for the VMs. Note that consistency groups are not supported with the SRDF SRA 6.1. Stretched Storage - Protection Group The protection group for stretched storage is created in the same manner as a traditional protection group. Here are the 4 steps: 85

86 Step 1 - Name and location Type in a name and description for the protection group as in Figure 60. Figure 60. Protection group creation - step 1 Step 2 - Protection group type In the second step choose the direction of protection and then select Storage policies which is the group type for stretched storage. This is demonstrated in Figure 61. Figure 61. Protection group creation - step 2 86

87 Step 3 - Storage policies When the storage policies type is selected in step 2, step 3 will allow for selection of the previously created storage policy. Check the box as in Figure 62. Figure 62. Protection group creation - step 3 Step 4 - Review Step 4 is a review screen where the selections can be double-checked. Once satisfied, select Finish and complete creation. Step 4 is seen in Figure

88 Figure 63. Protection group creation - step 4 Stretched Storage - Recovery Plan The recovery plan for stretched storage is created in the same manner as a traditional recovery plan. Here are the 5 steps: Step 1 - Name and location Type in a name and description for the recovery plan as in Figure 64. Figure 64. Recovery plan creation - step 1 88

89 Step 2 - Protection group type In the second step choose the recovery site. This is demonstrated in Figure 65. Figure 65. Recovery plan creation - step 2 Step 3 - Protection groups From the drop-down, select the Storage policy protection groups. This will result in showing the previously created protection group. Check the box, as in Figure 66, and move to the next step. Figure 66. Recovery plan creation - step 3 Step 4 - Test networks In a recovery plan there is an additional step to setup test networks. Leave the mapping to Auto* for all recovery networks. In this example in Figure 67 there are 2 networks. 89

90 Figure 67. Recovery plan creation - step 4 Step 5 - Review Step 5 is a review screen where the selections can be double-checked. Once satisfied, select Finish and complete creation. Step 5 is seen in Figure 68. Figure 68. Recovery plan creation - step 5 The environment is now ready for testing, planned migration, or disaster recovery. Stretched Storage - Known Issue There is a known VMware bug with cross-vcenter migrations that impact planned migrations with stretched storage on SRM. Although there is no impact to the test functionality, if a test is run SRM will warn the user that if a planned migration is run, it will fail. The bug concerns free space in the datastore. Before SRM vmotions a VM during a planned migration, it checks whether the shared datastore has enough space for 2x the size of the vmdks (in other words another copy). If there is insufficient space, the vmotion will not proceed and the planned migration will fail. As this is shared storage VMware should not be conducting this test. If a test is run, the following warning in Figure 69 will be issued. 90

91 Figure 69. VMware bug - insufficient disk space VMware has no plans to address this issue so it is critical that the datastores being used in the SRDF/Metro/SRM environment have sufficient space beyond the 2x size of the vmdks. Cross-vMotion - Known Limitation This limitation concerns the ability to vmotion a VM across vcenters, also known as a cross vcenter vmotion. The issue arises if the VM to be vmotioned has vmdks in more than one shared (e.g. SRDF/Metro) datastore. When the vmotion is attempted, VMware will fail to recognize the remote cluster/esxi hosts as compatible, despite the fact that all datastores are shared. VMware indicates this is as designed. vsphere will only allow you to migrate a VM (compute resource) if it is on a single datastore. In the case of multiple datastores the computation is too difficult and therefore vsphere cannot process the request. In order to do the vmotion, therefore, the user must select Change both compute resource and storage when doing a cross vcenter vmotion with multiple datastores, even if those datastores are shared (SRDF/Metro). This means that when migrating from one vcenter to the other, the user must employ the advanced mapping capability in the migrate wizard to map each vmdk to a target datastore that is the same as the source datastore. Fortunately this issue does not arise in SRM when doing a planned migration, but many customers do use cross vcenter vmotion to distribute resources across their SRM environment so it is important to be aware of this limitation. SRDF Adapter Utilities 6.1 To aid in managing the XML files associated with the SRDF SRA 6.1, EMC offers the SRDF Adapter Utilities 6.1 (SRDF-AU). The SRDF-AU 6.1 supports SRDF/Metro stretched storage configuration in SRM 6.1. This section will discuss SRDF-AU only in relation to SRDF/Metro, but detailed product documentation can be reviewed here. The SRDF-AU is installed on the Windows vcenter host. While it is possible to install it on one site (typically recovery), EMC recommends installing SRDF-AU on both the protection and recovery sites for redundancy. Once installed, the SRDF-AU is available as an icon in the Home page of the vsphere Web Client which is shown in Figure

92 Figure 70. EMC SRDF Adapter Utilities While SRM supports the use of the vcenter appliance (VCSA), the SRDF Adapter Utilities do not. Therefore if use of the SRDF-AU is desired 8, the vcenters must be installed on Windows. As mentioned, the SRDF-AU provides a GUI interface to modify the XML files that the SRDF SRA utilizes. In order to query information about the SRDF pairs, SRDF-AU requires a connection to an SMI-S Provider for the recovery site. That SMI-S Provider can be a part of the same Solutions Enabler installation that is used as an SRM array manager. In the Settings tab in Figure 71, supply this information, testing both the vcenter and SMI-S connection before saving. Note for SRDF/Metro it is unnecessary to supply an SMI-S Provider to the protection site as consistency groups are not supported with the SRDF SRA The SRDF-AU is an optional software and is not required by the SRDF SRA. 92

93 Figure 71. SRDF-AU Settings The SRDF-AU offers the following tabs: Global Options used to change values in the EmcSrdfSraGlobalOptions.xml file. FailoverTest/GoldCopy used to set pairs for the R2 device. RDF Device Masking used to configure masking that the SRDF SRA can perform. SRDF/Metro does not support this functionality. 93

94 Test Replica Masking used to configure masking and unmasking of test devices. Create CGs used to create consistency groups. SRDF/Metro does not support this functionality. In stretched storage the most useful of these tabs is the FailoverTest/GoldCopy. It eliminates the need to manually pair the R2 devices with TimeFinder devices. Any SRDF/Metro pair will show an SRDF Mode of Active as in Figure 72. Figure 72. FailoverTest/GoldCopy tab Under the TimeFinder Replications sub-tab one can set the appropriate replication type and then run Auto Set Replicas which will apply all available TimeFinder devices to the appropriate R2. An example is provided in Figure

95 Figure 73. TimeFinder Replications - Auto Set Replicas After setting the replicas, the file can be saved to the SRDF-AU default location of C:\ProgramData\EMC\SRAUtilities\<user> on the installed system. Once saved the file can then be downloaded to a local directory if desired. Note that as the SRDF-AU has no direct relationship with the SRDF SRA, the files it modifies are not those used by the SRDF SRA. It is still necessary to copy any modified files with the SRDF-AU over to the SRDF SRA location of C:\ProgramData\EMC\EmcSrdfSra\Config in order for the SRDF SRA to use it during any SRM function. 95
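As a simple illustration of that final copy step (the user folder is the placeholder from the path above, and EmcSrdfSraGlobalOptions.xml is used as the example file since it is one of the files the utility manages), a command similar to the following could be run on each SRM server:

    copy "C:\ProgramData\EMC\SRAUtilities\<user>\EmcSrdfSraGlobalOptions.xml" "C:\ProgramData\EMC\EmcSrdfSra\Config\"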


More information

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme

Disclaimer This presentation may contain product features that are currently under development. This overview of new technology represents no commitme STO1297BE Stretched Clusters or VMware Site Recovery Manager? We Say Both! Jeff Hunter, VMware, @jhuntervmware GS Khalsa, VMware, @gurusimran #VMworld Disclaimer This presentation may contain product features

More information

Surveillance Dell EMC Storage with FLIR Latitude

Surveillance Dell EMC Storage with FLIR Latitude Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information

More information

Potpuna virtualizacija od servera do desktopa. Saša Hederić Senior Systems Engineer VMware Inc.

Potpuna virtualizacija od servera do desktopa. Saša Hederić Senior Systems Engineer VMware Inc. Potpuna virtualizacija od servera do desktopa Saša Hederić Senior Systems Engineer VMware Inc. VMware ESX: Even More Reliable than a Mainframe! 2 The Problem Where the IT Budget Goes 5% Infrastructure

More information

IOmark- VM. HP MSA P2000 Test Report: VM a Test Report Date: 4, March

IOmark- VM. HP MSA P2000 Test Report: VM a Test Report Date: 4, March IOmark- VM HP MSA P2000 Test Report: VM- 140304-2a Test Report Date: 4, March 2014 Copyright 2010-2014 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and IOmark are trademarks

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes the steps required to deploy a Microsoft Exchange Server 2013 solution on

More information

Exam4Tests. Latest exam questions & answers help you to pass IT exam test easily

Exam4Tests.   Latest exam questions & answers help you to pass IT exam test easily Exam4Tests http://www.exam4tests.com Latest exam questions & answers help you to pass IT exam test easily Exam : VCP510PSE Title : VMware Certified Professional 5 - Data Center Virtualization PSE Vendor

More information

EMC VPLEX Metro with HP Serviceguard A11.20

EMC VPLEX Metro with HP Serviceguard A11.20 White Paper EMC VPLEX Metro with HP Serviceguard A11.20 Abstract This white paper describes the implementation of HP Serviceguard using EMC VPLEX Metro configuration. October 2013 Table of Contents Executive

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

vsan Stretched Cluster & 2 Node Guide January 26, 2018

vsan Stretched Cluster & 2 Node Guide January 26, 2018 vsan Stretched Cluster & 2 Node Guide January 26, 2018 1 Table of Contents 1. Overview 1.1.Introduction 2. Support Statements 2.1.vSphere Versions 2.2.vSphere & vsan 2.3.Hybrid and All-Flash Support 2.4.On-disk

More information

Assessing performance in HP LeftHand SANs

Assessing performance in HP LeftHand SANs Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of

More information

DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE

DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE WHITE PAPER - DELL EMC VxRAIL vsan STRETCHED CLUSTERS PLANNING GUIDE ABSTRACT This planning guide provides best practices and requirements for using stretched clusters with VxRail appliances. April 2018

More information

Virtual Volumes FAQs First Published On: Last Updated On:

Virtual Volumes FAQs First Published On: Last Updated On: First Published On: 03-20-2017 Last Updated On: 07-13-2018 1 Table of Contents 1. FAQs 1.1.Introduction and General Information 1.2.Technical Support 1.3.Requirements and Capabilities 2 1. FAQs Frequently

More information

Site Recovery Manager Installation and Configuration. Site Recovery Manager 5.5

Site Recovery Manager Installation and Configuration. Site Recovery Manager 5.5 Site Recovery Manager Installation and Configuration Site Recovery Manager 5.5 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/ If you have comments

More information

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December IOmark- VM IBM IBM FlashSystem V9000 Test Report: VM- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and

More information

vsan Remote Office Deployment January 09, 2018

vsan Remote Office Deployment January 09, 2018 January 09, 2018 1 1. vsan Remote Office Deployment 1.1.Solution Overview Table of Contents 2 1. vsan Remote Office Deployment 3 1.1 Solution Overview Native vsphere Storage for Remote and Branch Offices

More information

2014 VMware Inc. All rights reserved.

2014 VMware Inc. All rights reserved. 2014 VMware Inc. All rights reserved. Agenda Virtual SAN 1 Why VSAN Software Defined Storage 2 Introducing Virtual SAN 3 Hardware Requirements 4 DEMO 5 Questions 2 The Software-Defined Data Center Expand

More information

EMC VIPR SRM: VAPP BACKUP AND RESTORE USING VMWARE VSPHERE DATA PROTECTION ADVANCED

EMC VIPR SRM: VAPP BACKUP AND RESTORE USING VMWARE VSPHERE DATA PROTECTION ADVANCED White paper EMC VIPR SRM: VAPP BACKUP AND RESTORE USING VMWARE VSPHERE DATA PROTECTION ADVANCED Abstract This white paper provides a working example of how to back up and restore an EMC ViPR SRM vapp using

More information

REDUCE COSTS AND OPTIMIZE MICROSOFT SQL SERVER PERFORMANCE IN VIRTUALIZED ENVIRONMENTS WITH EMC SYMMETRIX VMAX

REDUCE COSTS AND OPTIMIZE MICROSOFT SQL SERVER PERFORMANCE IN VIRTUALIZED ENVIRONMENTS WITH EMC SYMMETRIX VMAX White Paper REDUCE COSTS AND OPTIMIZE MICROSOFT SQL SERVER PERFORMANCE IN VIRTUALIZED ENVIRONMENTS WITH EMC SYMMETRIX VMAX An Architectural Overview EMC GLOBAL SOLUTIONS Abstract This white paper demonstrates

More information

Application Integration IBM Corporation

Application Integration IBM Corporation Application Integration What is Host Software? Simultaneous development efforts NextGeneration Virtual Storage Meets Server Virtualization Benefits of VMware Virtual Infrastructure Maximum consolidation

More information

SRDF/METRO OVERVIEW AND BEST PRACTICES

SRDF/METRO OVERVIEW AND BEST PRACTICES SRDF/METRO OVERVIEW AND BEST PRACTICES Technical Notes ABSTRACT SRDF/Metro significantly changes the traditional behavior of SRDF to better support critical applications in high availability environments.

More information

Mission-Critical Databases in the Cloud. Oracle RAC in Microsoft Azure Enabled by FlashGrid Software.

Mission-Critical Databases in the Cloud. Oracle RAC in Microsoft Azure Enabled by FlashGrid Software. Mission-Critical Databases in the Cloud. Oracle RAC in Microsoft Azure Enabled by FlashGrid Software. White Paper rev. 2017-10-16 2017 FlashGrid Inc. 1 www.flashgrid.io Abstract Ensuring high availability

More information

SvSAN Data Sheet - StorMagic

SvSAN Data Sheet - StorMagic SvSAN Data Sheet - StorMagic A Virtual SAN for distributed multi-site environments StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical applications

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007 Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange Server Enabled by MirrorView/S and Replication Manager Reference Architecture EMC

More information

High performance and functionality

High performance and functionality IBM Storwize V7000F High-performance, highly functional, cost-effective all-flash storage Highlights Deploys all-flash performance with market-leading functionality Helps lower storage costs with data

More information

MIGRATING TO DELL EMC UNITY WITH SAN COPY

MIGRATING TO DELL EMC UNITY WITH SAN COPY MIGRATING TO DELL EMC UNITY WITH SAN COPY ABSTRACT This white paper explains how to migrate Block data from a CLARiiON CX or VNX Series system to Dell EMC Unity. This paper outlines how to use Dell EMC

More information

DELL EMC UNITY: VIRTUALIZATION INTEGRATION

DELL EMC UNITY: VIRTUALIZATION INTEGRATION DELL EMC UNITY: VIRTUALIZATION INTEGRATION A Detailed Review ABSTRACT This white paper introduces the virtualization features and integration points that are available on Dell EMC Unity. July, 2017 WHITE

More information

"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary

Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely

More information

Implementing Virtual Provisioning on EMC Symmetrix with VMware Infrastructure 3

Implementing Virtual Provisioning on EMC Symmetrix with VMware Infrastructure 3 Implementing Virtual Provisioning on EMC Symmetrix with VMware Infrastructure 3 Applied Technology Abstract This white paper provides a detailed description of the technical aspects and benefits of deploying

More information

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Workflow Guide for 7.2 release July 2018 215-13170_B0 doccomments@netapp.com Table of Contents 3 Contents Deciding

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

A Dell Technical White Paper Dell Virtualization Solutions Engineering

A Dell Technical White Paper Dell Virtualization Solutions Engineering Dell vstart 0v and vstart 0v Solution Overview A Dell Technical White Paper Dell Virtualization Solutions Engineering vstart 0v and vstart 0v Solution Overview THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved. VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS

More information

Native vsphere Storage for Remote and Branch Offices

Native vsphere Storage for Remote and Branch Offices SOLUTION OVERVIEW VMware vsan Remote Office Deployment Native vsphere Storage for Remote and Branch Offices VMware vsan is the industry-leading software powering Hyper-Converged Infrastructure (HCI) solutions.

More information

What's New in vsan 6.2 First Published On: Last Updated On:

What's New in vsan 6.2 First Published On: Last Updated On: First Published On: 07-07-2016 Last Updated On: 08-23-2017 1 1. Introduction 1.1.Preface 1.2.Architecture Overview 2. Space Efficiency 2.1.Deduplication and Compression 2.2.RAID - 5/6 (Erasure Coding)

More information

[VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5]

[VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5] [VMICMV6.5]: VMware vsphere: Install, Configure, Manage [V6.5] Length Delivery Method : 5 Days : Instructor-led (Classroom) Course Overview This five-day course features intensive hands-on training that

More information

Increase Scalability for Virtual Desktops with EMC Symmetrix FAST VP and VMware VAAI

Increase Scalability for Virtual Desktops with EMC Symmetrix FAST VP and VMware VAAI White Paper with EMC Symmetrix FAST VP and VMware VAAI EMC GLOBAL SOLUTIONS Abstract This white paper demonstrates how an EMC Symmetrix VMAX running Enginuity 5875 can be used to provide the storage resources

More information

DEPLOY/REMOVE A VVOL ENVIRONMENT ON AN EMC VMAX3 OR VMAX ALL FLASH

DEPLOY/REMOVE A VVOL ENVIRONMENT ON AN EMC VMAX3 OR VMAX ALL FLASH DEPLOY/REMOVE A VVOL ENVIRONMENT ON AN EMC VMAX3 OR VMAX ALL FLASH September 2016 VMAX Engineering Table of Contents Introduction... 3 Deploy VVol Environment... 3 In Unisphere for VMAX... 3 In VMware

More information

Technical Field Enablement. Symantec Messaging Gateway 10.0 HIGH AVAILABILITY WHITEPAPER. George Maculley. Date published: 5 May 2013

Technical Field Enablement. Symantec Messaging Gateway 10.0 HIGH AVAILABILITY WHITEPAPER. George Maculley. Date published: 5 May 2013 Symantec Messaging Gateway 10.0 HIGH AVAILABILITY WHITEPAPER George Maculley Date published: 5 May 2013 Document Version: 1.0 Technical Field Enablement Contents Introduction... 3 Scope... 3 Symantec Messaging

More information

vsan Disaster Recovery November 19, 2017

vsan Disaster Recovery November 19, 2017 November 19, 2017 1 Table of Contents 1. Disaster Recovery 1.1.Overview 1.2.vSAN Stretched Clusters and Site Recovery Manager 1.3.vSAN Performance 1.4.Summary 2 1. Disaster Recovery According to the United

More information