Using EMC VMAX Storage in VMware vSphere Environments


Using EMC VMAX Storage in VMware vSphere Environments
Version 11.3

- Layout, Configuration, Management, and Performance
- Replication, Cloning, and Provisioning
- Disaster Restart and Recovery

Drew Tonnesen
July 2017

Copyright 2017 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on support.emc.com. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners. EMC is now part of the Dell group of companies.

Part number H

Contents

Preface

Chapter 1    VMware vSphere and EMC VMAX
    Introduction
    Setup of VMware ESXi
    Using EMC VMAX with VMware ESXi
        Fibre Channel port settings
        VMAX FA connectivity
        Driver configuration in VMware ESXi
        Adding and removing EMC VMAX devices to VMware ESXi hosts
        Creating VMFS volumes on VMware ESXi
    Partition alignment
        Partition alignment for virtual machines using VMFS volumes
        Partition alignment for virtual machines using RDM
    Mapping VMware devices/datastores to EMC VMAX devices
        VSI Classic
        VSI for vSphere Web Client
    Optimizing VMware and EMC VMAX for interoperability
        Storage considerations for VMware ESXi
        VMware Virtual Volumes (VVols)
    Expanding VMAX metavolumes
        Concatenated metavolume expansion
        Striped metavolume expansion
        Thin meta expansion
        Converting to VMAX metavolumes
    Growing VMFS in VMware vSphere using an expanded metavolume
    Path management
        Path failover and load balancing in VMware vSphere environments
        Improving resiliency of ESXi to SAN failures
    All Paths Down (APD) and Permanent Device Loss (PDL)
        APD in vSphere
        APD in vSphere 5.0
        Permanent Device Loss (PDL) change
        Avoiding APD/PDL: Device removal
        PDL AutoRemove
    APD/PDL in vSphere
        PDL VMCP settings
        APD VMCP settings
        APD/PDL monitoring

Chapter 2    Management of EMC VMAX Arrays
    Introduction
    Solutions Enabler in VMware environments
        Thin Gatekeepers and multipathing
    Installing and configuring VMAX management applications on virtual machines
        Solutions Enabler virtual appliance
        Extension of Solutions Enabler to support VMware virtual environments
    EMC Unisphere for VMAX in VMware environments
        Unisphere for VMAX virtual appliance
        EMC Unisphere for VMAX virtual environment integration
        Unisphere
    EMC Storage Analytics
    VMAX Content Pack for VMware vRealize Log Insight
        Dell EMC VMAX Content Pack
    vSphere Storage APIs for Storage Awareness (VASA)
        Profile-Driven Storage
        VASA features
        Vendor providers and storage data representation

Chapter 3    EMC Virtual Provisioning and VMware vSphere
    Introduction
    EMC VMAX Virtual Provisioning overview
    Thin device sizing
        VMAX array
        VMAX3/VMAX All Flash array
    Thin device compression on VMAX
    Compression on the VMAX All Flash
        Enabling compression
        Compression metrics
    Performance considerations
        VMAX array
        VMAX3/VMAX All Flash array
    Virtual disk allocation schemes
        Zeroedthick allocation format
        Thin allocation format
        Eagerzeroedthick allocation format
        Eagerzeroedthick with ESXi 5 and 6 and Enginuity 5875+/HYPERMAX OS
        Virtual disk type recommendations
    Dead Space Reclamation (UNMAP)
        Datastore
        Guest OS
    Thin pool management
        Capacity monitoring
        Viewing thin pool details with the Virtual Storage Integrator
        Exhaustion of oversubscribed pools

Chapter 4    Data Placement and Performance in vSphere
    Introduction
    EMC Fully Automated Storage Tiering (FAST)
        Federated Tiered Storage (FTS)
        FAST benefits
        FAST managed objects
        FAST VP components
        FAST VP allocation by FAST Policy
        FAST VP coordination
        FAST VP storage group reassociate
        FAST VP space reclamation and UNMAP with VMware
        FAST VP Compression
        FAST management
    EMC FAST for VMAX3/VMAX All Flash
        SLO/SRP configuration
    EMC Virtual LUN migration
        VLUN VP
        VLUN migration with Unisphere for VMAX example
    Federated Live Migration
    Non-Disruptive Migration
    EMC Host I/O Limit
    vSphere Storage I/O Control
        Enable Storage I/O Control
        Storage I/O Control resource shares and limits
        Setting Storage I/O Control resource shares and limits
        Changing Congestion Threshold for Storage I/O Control
        Storage I/O Control requirements
        Spreading I/O demands
        SIOC and EMC Host I/O Limit
    VMware Storage Distributed Resource Scheduler (SDRS)
        Datastore clusters
        Affinity rules and maintenance mode
        Datastore cluster requirements
        Using SDRS with EMC FAST (DP and VP)
        SDRS and EMC Host I/O Limit

Chapter 5    Cloning of vSphere Virtual Machines
    Introduction
    EMC TimeFinder overview
        TimeFinder SnapVX
        TimeFinder VP Snap
        TimeFinder and VMFS
    Copying virtual machines after shutdown
        Using TimeFinder/Clone with cold virtual machines
        Using TimeFinder/Snap with cold virtual machines
        Using TimeFinder/VP Snap with cold virtual machines
    Copying running virtual machines using EMC Consistency technology
    Transitioning disk copies to cloned virtual machines
        Cloning virtual machines on VMFS in vSphere environments
        Cloning virtual machines using RDM in VMware vSphere environments
    Cloning virtual machines using Storage APIs for Array Integration (VAAI)
        VMware vSphere Storage API for Array Integration
        Hardware-accelerated Full Copy
        Enabling the Storage APIs for Array Integration
        Full Copy use cases
        Server resources: Impact to CPU and memory
        Caveats for using hardware-accelerated Full Copy
        eNAS and VAAI
        VAAI and EMC Enginuity/HYPERMAX OS version support
        ODX and VMware
    Choosing a virtual machine cloning methodology
    Cloning at the disaster protection site with vSphere 5 and 6
        Using TimeFinder copies
        Detaching the remote volume
        Detaching a device and resignaturing the copy
        LVM.enableResignature parameter
        LVM.enableResignature example
        RecoverPoint
        Enabling image access

Chapter 6    Disaster Protection for VMware vSphere
    Introduction
    SRDF Storage Replication Adapter for VMware Site Recovery Manager
        Site Recovery Manager workflow features
        SRDF Storage Replication Adapter
    Business continuity solutions for VMware vSphere
        Recoverable vs. restartable copies of data
        SRDF/S
        SRDF/A
        SRDF/Metro
        SRDF Automated Replication
        SRDF/Star overview
        Concurrent SRDF (3-site)
        Cascaded SRDF (3-site)
        Extended Distance Protection
    Configuring remote site virtual machines on replicated VMFS volumes
    Configuring remote site virtual machines with replicated RDMs
    Write disabled devices in a VMware environment

Preface

This TechBook describes how the VMware vSphere platform works with EMC VMAX storage systems and software technologies. As part of an effort to improve and enhance the performance and capabilities of its product lines, Dell EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

Audience
This document is intended for use by storage administrators, system administrators and VMware administrators. Readers of this document are expected to be familiar with the following topics:
- EMC VMAX/VMAX3/VMAX All Flash system operation
- EMC SRDF, EMC TimeFinder, and EMC Solutions Enabler
- VMware products

IMPORTANT
This TechBook covers all VMAX/Enginuity releases and VMAX3/VMAX All Flash/HYPERMAX OS releases through the 5977 Q SR.

Note: VMAX was also previously known as Symmetrix. Where necessary that term is still utilized in this book to refer to the family of arrays or for a particular product, e.g. SRDF. Specific platform distinction will always be made where appropriate.

Organization
The document is divided into six chapters:

Chapter 1, VMware vSphere and EMC VMAX, discusses the installation, setup and configuration of VMware vSphere environments with EMC VMAX arrays (VMAX, VMAX3, and VMAX All Flash). This chapter also presents best practices when using EMC VMAX storage with VMware vSphere platforms.

Chapter 2, Management of EMC VMAX Arrays, discusses the various ways to manage a VMware environment using EMC technologies.

Chapter 3, EMC Virtual Provisioning and VMware vSphere, discusses EMC Virtual Provisioning in detail along with an introduction to vSphere Storage APIs for Storage Awareness (VASA) and vSphere Thin Provisioning.

Chapter 4, Data Placement and Performance in vSphere, addresses EMC and VMware technologies that help with placing data and balancing performance in the VMware environment and on the VMAX.

Chapter 5, Cloning of vSphere Virtual Machines, presents how EMC TimeFinder and the Storage APIs for Array Integration (VAAI) can be used in VMware vSphere environments to clone virtual machines. It also discusses how to create TimeFinder copies of SRDF R2 and RecoverPoint images of VMware file systems hosted on VMAX volumes and then mount those volumes to the disaster recovery site.

Chapter 6, Disaster Protection for VMware vSphere, discusses the use of EMC SRDF in a VMware vSphere environment to provide disaster restart protection for VMware vSphere data.

IMPORTANT
Examples provided in this guide cover methods for performing various VMware vSphere activities using VMAX systems and EMC software. These examples were developed for laboratory testing and may need tailoring to suit other operational environments. Any procedures outlined in this guide should be thoroughly tested before implementing in a production environment.

Note: If an image in the TechBook does not appear clear, it will be necessary to increase the magnification in Adobe, occasionally to 200 percent, to view it properly.

Authors
This TechBook was authored by a partner engineering team based in Hopkinton, Massachusetts. Drew Tonnesen is a Technical Staff member of VMAX Systems Engineering focusing on VMware and other virtualization technologies. Before starting in his current position, Drew worked as a Global Solutions Consultant focusing on Oracle Technology. He has worked at EMC since 2006 in various capacities. Drew has over 20 years of experience working in the IT industry. Drew holds a Master's degree from the University of Connecticut.

Related documents
The following documents are available at support.emc.com:
- EMC VSI for VMware vSphere: Storage Viewer Product Guide
- EMC VSI for VMware vSphere Web Client Product Guide
- Using the EMC SRDF Adapter for VMware vCenter Site Recovery Manager TechBook
- EMC Solutions Enabler TimeFinder Family CLI Product Guide
- EMC TimeFinder Product Guide
- Solutions Enabler Management Product Guide
- Solutions Enabler Controls Product Guide
- Symmetrix Remote Data Facility (SRDF) Product Guide
- HYPERMAX OS 5977 Release Notes
- Using the EMC VMAX Content Pack for VMware vCenter Log Insight
- Using VMware Storage APIs for Array Integration with EMC Symmetrix VMAX
- Implementing VMware's Storage API for Storage Awareness with Symmetrix Storage Arrays White Paper
- Using VMware Virtual Volumes with EMC VMAX3 and VMAX All Flash
- Storage Tiering for VMware Environments Deployed on EMC Symmetrix VMAX with Enginuity 5876 White Paper
- EMC Support Matrix

Conventions used in this document
EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

A caution contains information essential to avoid data loss or damage to the system or equipment.

IMPORTANT
An important notice contains information essential to operation of the software or hardware.

Typographical conventions
EMC uses the following type style conventions in this document:

Normal: Used in running (nonprocedural) text for:
- Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
- Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
- URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold: Used in running (nonprocedural) text for names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages. Used in procedures for names of interface elements and for what the user specifically selects, clicks, presses, or types.

Italic: Used in all text (including procedures) for full titles of publications referenced in text, for emphasis (for example, a new term), and for variables.

Courier: Used for system output (such as an error message or script), and for URLs, complete paths, filenames, prompts, and syntax when shown outside of running text.

Courier bold: Used for specific user input (such as commands).

Courier italic: Used in procedures for variables on the command line and user input variables.

< >  Angle brackets enclose parameter or variable values supplied by the user
[ ]  Square brackets enclose optional values

|    Vertical bar indicates alternate selections; the bar means "or"
{ }  Braces indicate content that you must specify (that is, x or y or z)
...  Ellipses indicate nonessential information omitted from the example


Chapter 1    VMware vSphere and EMC VMAX

This chapter discusses the configuration and best practices when connecting a VMware virtualization platform to an EMC VMAX storage array:
- Introduction
- Setup of VMware ESXi
- Using EMC VMAX with VMware ESXi
- Partition alignment
- Mapping VMware devices/datastores to EMC VMAX devices
- Optimizing VMware and EMC VMAX for interoperability
- Expanding VMAX metavolumes
- Growing VMFS in VMware vSphere using an expanded metavolume
- Path management
- All Paths Down (APD) and Permanent Device Loss (PDL)
- APD/PDL in vSphere

Introduction

VMware ESXi virtualizes IT assets into a flexible, cost-effective pool of compute, storage, and networking resources. These resources can then be mapped to specific business needs by creating virtual machines. EMC VMAX storage arrays are loosely coupled parallel processing machines that handle various workloads from disparate hardware and operating systems simultaneously. When VMware ESXi is used with EMC VMAX storage arrays, it is critical to ensure proper configuration of both the storage array and the ESXi host to achieve optimal performance and availability.

This chapter addresses the following topics:
- Configuration of EMC VMAX arrays when used with the VMware virtualization platform
- Discovering and using EMC VMAX devices in VMware
- Configuring EMC VMAX storage arrays and VMware ESXi for optimal operation

Detailed information on configuring and using VMware ESXi in EMC FC, FCoE, NAS and iSCSI environments can also be found in the Host Connectivity Guide for VMware ESX, located at support.emc.com. This is the authoritative guide for connecting VMware ESXi to EMC VMAX storage arrays and should be consulted for the most current information.

Setup of VMware ESXi

VMware and EMC fully support booting VMware ESXi 5 and 6 hosts from EMC VMAX storage arrays when using either QLogic or Emulex HBAs, or CNAs from Emulex, QLogic, or Brocade. Booting VMware ESXi from the SAN enables the physical servers to be treated as an appliance, allowing for easier upgrades and maintenance. Furthermore, booting VMware ESXi from the SAN can simplify the processes for providing disaster restart protection of the virtual infrastructure.

Specific considerations when booting VMware ESXi hosts are beyond the scope of this document. The E-Lab Interoperability Navigator, available at support.emc.com, and appropriate VMware documentation, should be consulted for further details.

Regardless of whether VMware ESXi is booted off an EMC VMAX storage array or internal disks, the considerations for installing VMware ESXi do not change. Readers should consult the vSphere Installation and Setup Guide, available from VMware, for further details.

Note: In this TechBook, unless specified explicitly, minor releases of vSphere environments such as vSphere 5.1, 5.5, 6.0, and 6.5 are implied in the use of vSphere ESXi 5 or 6. Note that in August 2016, VMware ended general support for vSphere 5.0 and 5.1.

Using EMC VMAX with VMware ESXi

Fibre Channel port settings
VMware ESXi 5 and 6 require SPC-2 compliant SCSI devices. When connecting VMware ESXi 5 or 6 to an EMC VMAX storage array, the SPC-2 bit on the appropriate Fibre Channel ports needs to be set. The SPC-2 bit is set by default on all VMAX, VMAX3 and VMAX All Flash arrays. For all VMAX3 and VMAX All Flash arrays, the 5977 default factory settings should be used.

Note: Please consult the EMC Support Matrix for an up-to-date listing of port settings.

The bit settings described previously, and in the EMC Support Matrix, are critical for proper operation of VMware virtualization platforms and EMC software. The bit settings can be set either at the VMAX FA port or per HBA.

VMAX FA connectivity

VMAX arrays
Each ESXi server in the VMware vSphere environment should have at least two physical HBAs, and each HBA should be connected to at least two different front-end ports on different directors of the VMAX. If the VMAX in use has only one engine then each HBA should be connected to the odd and even directors within it. Connectivity to the VMAX front-end ports should consist of first connecting unique hosts to port 0 of the front-end directors before connecting additional hosts

to port 1 of the same director and processor. Figure 1 displays the connectivity for a single-engine VMAX. Connectivity for all VMAX family arrays would be similar.

Figure 1 Connecting ESXi servers to a single engine VMAX

If multiple engines are available on the VMAX, the HBAs from the VMware ESXi servers in the VMware vSphere environment should be connected to different directors on different engines. In this situation, the first VMware ESXi server would be connected to four different processors on four different directors over two engines, instead of four different processors on two directors in one engine. Figure 2 demonstrates this connectivity.

Figure 2 Connecting ESXi servers to a multi-engine VMAX

As more engines become available, the connectivity can be scaled as needed. These methodologies for connectivity ensure all front-end directors and processors are utilized, providing maximum potential performance and load balancing for VMware vSphere environments connected to VMAX storage arrays.

VMAX3/VMAX All Flash arrays
The recommended connectivity between ESXi and VMAX3/VMAX All Flash arrays is similar to the VMAX array. If the VMAX3/VMAX All Flash in use has only one engine then each HBA should be connected to the odd and even directors within it. Each ESXi host with 2 HBAs utilizing 2 ports, therefore, would have a total of 4 paths to the array as in Figure 3 on page 19. Note that the port numbers were chosen randomly.

Figure 3 Connecting ESXi servers to a single engine VMAX3/VMAX All Flash

The multi-engine connectivity is demonstrated in Figure 4 on page 20.

Figure 4 Connecting ESXi servers to a multi-engine VMAX3/VMAX All Flash

Performance
It should be noted, however, that unlike the VMAX, the ports on the VMAX3/VMAX All Flash do not share a CPU. Ports 0 and 1 on a particular FA on the VMAX share the same CPU resource, and therefore cabling both ports means competing for a single CPU. On the VMAX3/VMAX All Flash, all the front-end ports have access to all CPUs designated for that part of the director. This also means that a single port on a VMAX3/VMAX All Flash can do far more I/O than a similar port on the VMAX. From a performance perspective, for small block I/O, 2 ports per host would be sufficient, while for large block I/O, more ports would be recommended to handle the increase in GB/sec.

Note: EMC recommends using 4 ports in each port group. If creating a port group in Unisphere for VMAX, the user will be warned if fewer than 4 ports are used.
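To confirm that each VMAX device is seen over the intended number of paths (four in the single-engine example above), the path and multipathing information can be queried from the ESXi shell. The following is a minimal sketch; the NAA ID shown is a placeholder for an actual VMAX device.

    # Show every path (HBA, target FA port, LUN) for a given VMAX device
    esxcli storage core path list --device naa.60000970000197900049533030334241

    # Show the multipathing (NMP) configuration for the same device,
    # including the path selection policy in use
    esxcli storage nmp device list --device naa.60000970000197900049533030334241

The same information is also available graphically through the vSphere (Web) Client and, with EMC detail, through VSI, which is discussed later in this chapter.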

Driver configuration in VMware ESXi
The drivers provided by VMware as part of the VMware ESXi distribution should be utilized when connecting VMware ESXi to EMC VMAX storage. However, EMC E-Lab does perform extensive testing to ensure the BIOS, BootBIOS and the VMware-supplied drivers work together properly with EMC storage arrays.

Adding and removing EMC VMAX devices to VMware ESXi hosts
The addition or removal of EMC VMAX devices to and from VMware ESXi is a two-step process:

1. Appropriate changes need to be made to the EMC VMAX storage array configuration. In addition to LUN masking, this may include creation and assignment of EMC VMAX volumes and metavolumes to the Fibre Channel ports utilized by the VMware ESXi. The configuration changes can be performed using EMC Solutions Enabler or Unisphere for VMAX from an independent storage management host.

2. The second step of the process forces the VMware kernel to rescan the Fibre Channel bus to detect changes in the environment. This can be achieved by one of the following three processes:
- Utilize the graphical user interface (vSphere Client/vSphere Web Client)
- Utilize the command line utilities
- Wait for no more than 5 minutes

The process to discover changes to the storage environment using these tools is discussed in the next two subsections.

Note: In general, when presenting devices to multiple ESXi hosts in a cluster, VMware relies on the Network Address Authority (NAA) ID and not the LUN ID. Therefore the LUN number does not need to be consistent across hosts in a cluster. This means that a datastore created on one host in the cluster will be seen by the other hosts in the cluster even if the underlying LUN ID is different between the hosts. Therefore the setting Consistent LUNs is unnecessary at the host level. There is one caveat to this, however, and that is when using raw device mappings (RDMs) in vSphere versions prior to 5.5. If a VM using RDMs is migrated to another host in the cluster with vMotion, the LUN ID for those RDMs must be the same across the hosts or the vMotion will not succeed. If the LUN ID cannot be made the same, the RDM(s) would need to be removed from the VM before the vMotion and then

added back once on the new host. A VMware KB article provides more detail on this behavior. As noted, this is not a concern in vSphere 5.5 and above.

Using the vSphere Client
Changes to the storage environment can be detected using the vSphere Client by following the process listed below:

1. In the vCenter inventory, right-click the VMware ESXi host (or, preferably, the vCenter cluster or datacenter object to rescan multiple ESXi hosts at once) on which you need to detect the changes.
2. In the menu, select Rescan for Datastores...
3. Selecting Rescan for Datastores... using the vSphere Client results in a new window, shown in Figure 5 on page 22, providing users with two options: Scan for New Storage Devices and Scan for New VMFS Volumes. The two options allow users to customize the rescan to either detect changes to the storage area network, or to the changes in the VMFS volumes. The process to scan the storage area network is much slower than the process to scan for changes to VMFS volumes. The storage area network should be scanned only if there are known changes to the environment.
4. Click OK to initiate the rescan process on the VMware ESXi.

Figure 5 Rescanning options in the vSphere Client

Note: When providing examples, this TechBook uses either or both the thick vSphere Client and the vSphere Web Client. vSphere 5 and 6 support both clients, though new features are available only in the Web Client. VMware has also introduced a partial-functionality HTML 5 client in vSphere 6.5.

Using VMware ESXi command line utilities
VMware ESXi 5 and 6 use the vCLI command vicfg-rescan to detect changes to the storage environment. The vCLI utilities take the VMkernel SCSI adapter name (vmhbaX) as an argument. This command should be executed on all relevant VMkernel SCSI adapters if EMC VMAX devices are presented to the VMware ESXi on multiple paths. Figure 6 on page 23 displays an example using vicfg-rescan.

Note: The use of the remote CLI or the vSphere Client is highly recommended, but in the case of network connectivity issues, ESXi 5 and 6 still offer the option of using the CLI on the host (Tech Support Mode must be enabled). The command to scan all adapters is: esxcli storage core adapter rescan --all. Unlike vicfg-rescan, there is no output from the esxcli command.

Figure 6 Using vicfg-rescan to rescan for changes to the SAN environment

Creating VMFS volumes on VMware ESXi
VMware Virtual Machine File System (VMFS) volumes created utilizing the vSphere Client are automatically aligned on 64 KB boundaries. Therefore, EMC strongly recommends utilizing the vSphere Client to create and format VMFS volumes.

Note: Due to an interoperability issue with a VAAI primitive and Enginuity (see VMware vSphere Storage API for Array Integration in Chapter 5), customers running earlier Enginuity levels with ESXi 5.1 will be unable to create VMFS-5 datastores. This includes attempting to install ESXi in a boot-to-SAN configuration. This issue does not exist in later Enginuity releases. For more information, customers who are running this environment should review the relevant EMC Technical Advisory.
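Before creating a VMFS volume on a newly presented device, the rescan and verification steps described in the preceding subsections can be performed from the ESXi shell (or via vCLI). The following is a minimal sketch; the adapter name and NAA ID are placeholders for actual values in the environment.

    # Rescan a single adapter for new devices (repeat per vmhba), or all adapters at once
    esxcli storage core adapter rescan --adapter vmhba2
    esxcli storage core adapter rescan --all

    # Refresh the VMFS volume information after the device rescan
    vmkfstools -V

    # Verify that the newly presented VMAX device is visible to the host
    esxcli storage core device list --device naa.60000970000197900049533030334241

The same rescan can also be triggered for an entire cluster from the vSphere (Web) Client as described above.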

Note: A detailed description of track and sector alignment in x86 environments is presented in the section Partition alignment on page 32.

Creating a VMFS datastore using the vSphere Web Client
The vSphere Web Client offers a single process to create an aligned VMFS. The following section will walk through creating the VMFS datastore in the vSphere Web Client in a vSphere 6.5 environment.

Note: A datastore in a VMware environment can be either an NFS file system, VMFS, or, in vSphere 6, a VVol. Therefore the term datastore is utilized in the rest of the document. Furthermore, a group of VMware ESXi hosts sharing a set of datastores is referred to as a cluster. This is distinct from a datastore cluster, the details of which can be found in Chapter 4.

Note: vSphere 6.x supports both NFS 3 and 4.1. eNAS also supports both file systems, but 4.1 is not enabled by default and must be configured before attempting to create an NFS 4.1 datastore.

The user starts by selecting an ESXi host on the left-hand side of the vSphere Web Client. If shared storage is presented across a cluster of hosts, any host can be selected. A series of tabs are available on the right-hand side of the Client. Select the Datastores tab as in Figure 7. All available datastores on the VMware ESXi are displayed here. In addition to the current state information, the pane also provides the options to manage the datastore information and create a new datastore. The wizard to create a new datastore can be launched by selecting the New Datastore button, circled in red in Figure 7, on the top left-hand corner of the datastores pane.

Figure 7 Displaying and managing datastores in the vSphere Web Client

The New Datastore wizard on startup presents a summary of the required steps to provision a new datastore in ESXi, as seen in the highlighted box in Figure 8 on page 25. Select the type: either VMFS, NFS, or VVol.

Figure 8 Provisioning a new datastore in the vSphere Web Client: Type

Note: VVols are supported on the VMAX3/VMAX All Flash running a minimum of HYPERMAX OS 5977 Q SR and management software version 8.2.

Selecting the Next button in the wizard presents all viable FC or iSCSI attached devices. The next step in the process involves selecting the appropriate device in the list provided by the wizard, supplying a name for the datastore, and selecting the Next button as in Figure 9.

Figure 9 Provisioning a new datastore in the vSphere Client: Disk/LUN

It is important to note that devices that have existing VMFS volumes are not presented on this screen. This is independent of whether or not that device contains free space. However, devices with existing non-VMFS formatted partitions but with free space are visible in the wizard.

Note: Devices that have incorrect VMFS signatures will appear in this wizard. Incorrect signatures are usually due to devices that contain replicated VMFS volumes or changes in the storage environment that result in changes to the SCSI personality of devices. For more information on handling these types of volumes please refer to the section Transitioning disk copies to cloned virtual machines on page 328.

Note: The vSphere Web Client allows only one datastore on a device.

EMC VMAX storage arrays support non-disruptive expansion of storage LUNs. The excess capacity available after expansion can be utilized to expand the existing datastore on the LUN. Growing VMFS in VMware vSphere using an expanded metavolume on page 61 focuses on this feature of EMC VMAX storage arrays.

The next screen will prompt the user for the type of VMFS format: either the newer VMFS-6 format, or the older VMFS-5 format if both vSphere 5 and 6.0 environments will access the datastore. This screen is shown in Figure 10 on page 28.

Figure 10 Provisioning a new datastore in the vSphere Web Client: VMFS file system

The user is then presented with the partition configuration. Here, if VMFS-6 has been chosen, there is the ability to set automated UNMAP. It is noted in the blue box in Figure 11 on page 29 and is on by default.

Figure 11 Provisioning a new datastore in the vSphere Web Client: Partition

The final screen provides a summary. This is seen in Figure 12.

Figure 12 Provisioning a new datastore in the vSphere Web Client: Summary

Creating a VMFS datastore using command line utilities
VMware ESXi provides a command line utility, vmkfstools, to create VMFS. The VMFS volume can be created on either FC or iSCSI attached EMC VMAX storage devices by utilizing partedUtil for ESXi. Due to the complexity involved in utilizing command line utilities, VMware and EMC recommend use of the vSphere Client to create a VMware datastore on EMC VMAX devices. (A command-line sketch is provided at the end of the upgrade discussion below.)

Note: The vmkfstools vCLI can be used with ESXi. The vmkfstools vCLI supports most but not all of the options that the vmkfstools service console command supports. See the VMware Knowledge Base for more information.

Upgrading VMFS volumes from VMFS-3 to VMFS-5
Beginning with the release of vSphere 5, users have the ability to upgrade their existing VMFS-3 datastores to VMFS-5. VMFS-5 has a number of improvements over VMFS-3, such as:
- Support of greater than 2 TB storage devices for each VMFS extent.
- Increased resource limits such as file descriptors.
- Standard 1 MB file system block size with support of 2 TB virtual disks. With VMFS-5 created on vSphere 5.5 and 6, this size increases to 62 TB.
- Support of greater than 2 TB disk size for RDMs in physical compatibility mode, up to 64 TB with vSphere 5.5 and 6.
- Scalability improvements on storage devices that support hardware acceleration.
- Default use of hardware assisted locking, also called atomic test and set (ATS) locking, on storage devices that support hardware acceleration.
- Online, in-place upgrade process that upgrades existing datastores without disrupting hosts or virtual machines that are currently running.

There are a number of things to consider if you upgrade from VMFS-3 to VMFS-5. With ESXi 5 and 6, if you create a new VMFS-5 datastore, the device is formatted with GPT. The GPT format enables you to create datastores larger than 2 TB and up to 64 TB (the VMAX3/VMAX All Flash supports a maximum device size of 64 TB) without

resorting to the use of physical extent spanning. VMFS-3 datastores continue to use the MBR format for their storage devices.

Some other considerations about upgrades:
- For VMFS-3 datastores, the 2 TB limit still applies, even when the storage device has a capacity of more than 2 TB. To be able to use the entire storage space, upgrade a VMFS-3 datastore to VMFS-5.
- Conversion of the MBR format to GPT happens only after you expand the datastore.
- When you upgrade a VMFS-3 datastore to VMFS-5, any spanned extents will have the GPT format.
- When you upgrade a VMFS-3 datastore, remove from the storage device any partitions that ESXi does not recognize, for example, partitions that use the EXT2 or EXT3 formats. Otherwise, the host cannot format the device with GPT and the upgrade fails.
- You cannot expand a VMFS-3 datastore on devices that have the GPT partition format.

Because of the many considerations when upgrading VMFS, many customers may prefer to create new VMFS-5 datastores and migrate their virtual machines onto them from the VMFS-3 datastores. This can be performed online utilizing Storage vMotion.

Upgrading VMFS volumes from VMFS-5 to VMFS-6
VMware does not support upgrading from VMFS-5 to VMFS-6 datastores because of the changes in metadata in VMFS-6 to make it 4k aligned. VMware offers a number of detailed use cases on how to move from VMFS-5 to VMFS-6 and they can be found in the VMware Knowledge Base; however, for most environments a simple Storage vMotion of the VM from VMFS-5 to VMFS-6 will be sufficient. Note that it will be necessary to first upgrade the vCenter to vSphere 6.5, as vSphere 6.0 does not support VMFS-6.
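As a companion to the command-line discussion above, the following is a minimal sketch of creating a VMFS datastore from the ESXi shell with partedUtil and vmkfstools, followed by the in-place VMFS-3 to VMFS-5 upgrade. The device NAA ID, the datastore names, and END_SECTOR are placeholders; the GUID shown is the standard VMFS partition type GUID. Exact option support varies by ESXi release, so verify the syntax against the documentation for the release in use.

    # Check the existing partition table on the device (placeholder NAA ID)
    partedUtil getptbl /vmfs/devices/disks/naa.60000970000197900049533030334241

    # Create a GPT partition table with a single VMFS partition starting at sector 2048
    # (END_SECTOR is the last usable sector of the device; the GUID is the VMFS partition type)
    partedUtil setptbl /vmfs/devices/disks/naa.60000970000197900049533030334241 gpt \
      "1 2048 END_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"

    # Format the new partition as VMFS-5 (vmfs6 is an option on ESXi 6.5 where appropriate)
    vmkfstools -C vmfs5 -S VMAX_DS_01 /vmfs/devices/disks/naa.60000970000197900049533030334241:1

    # In-place upgrade of an existing VMFS-3 datastore to VMFS-5
    vmkfstools -T /vmfs/volumes/VMAX_DS_OLD

Because the partition created above starts at sector 2048 (1 MB), the resulting VMFS is aligned on the VMAX track boundary just as it would be if created through the vSphere Client.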

Partition alignment

Partition alignment for virtual machines using VMFS volumes
EMC VMAX storage arrays manage the data on physical disk using a logical construct known as a track. A track can be either 32 KB, 64 KB or 128 KB in size depending on the VMAX hardware and the code running on the array (128 KB is the track size on a VMAX3/VMAX All Flash array). A hypervolume can thus be viewed as a collection of 32 KB, 64 KB or 128 KB tracks. A data partition created by VMware ESXi consists of all or a subset of the tracks that represent the hypervolume.

VMFS-5 and VMFS-6 datastores employ a GUID partition table (GPT) when formatting a disk, as opposed to an MBR. The GPT occupies the first 1 MB of the disk. Because the VMFS on the data partition of the disk uses a 1 MB block size, which is a multiple of the track size, file allocations are in even multiples of the track size. Thus, virtual disks created on ESXi 5 and 6 datastores are always aligned.

While the VMFS, and thus the virtual disks created on that VMFS, are aligned, the operating systems of the virtual machines created on those virtual disks will not be aligned if they use an MBR. An unaligned partition results in a track crossing and an additional I/O for the additional track, incurring a penalty on latency and throughput. The additional I/O (especially if small) can impact system resources significantly. An aligned partition eliminates the need for the additional I/O and results in an overall performance improvement. Prior experience with misaligned Windows partitions and file systems has shown as much as 20 to 30 percent degradation in performance.

Aligning the data partitions on a 64 KB boundary results in positive improvements in overall I/O response time experienced by all hosts connected to the shared storage array. Therefore, EMC recommends aligning the virtual disk on a track boundary to ensure the optimal performance from the storage subsystem if MBR is in use by the OS. Most Windows (e.g. Windows 2008) and Linux (e.g. RHEL) distributions are aligned by default now and therefore no additional action is required. Microsoft on its support site provides alignment instructions for older operating

systems. A similar procedure utilizing fdisk or sfdisk can be used to align disks in virtual machines with Linux as the guest operating system.

Note: Due to a number of known issues, EMC recommends using a 64 KB Allocation Unit for NTFS rather than the default of 4 KB.

Partition alignment for virtual machines using RDM
EMC VMAX devices accessed by virtual machines using raw device mapping (RDM) do not contain VMFS volumes. In this configuration, the alignment problem is the same as that seen on physical servers. The process employed for aligning partitions on physical servers needs to be used in the virtual machines.
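As an illustration of the fdisk/sfdisk-style procedure referenced above, which applies equally to guest disks on VMFS virtual disks and to RDMs, the following is a hedged sketch of checking and creating an aligned partition inside a Linux guest; /dev/sdb is a placeholder for the guest disk.

    # Display the partition table in sectors; with 512-byte sectors, a starting
    # sector that is a multiple of 128 falls on a 64 KB boundary
    fdisk -lu /dev/sdb

    # Create an aligned partition starting at 1 MiB (a multiple of both the
    # 64 KB and 128 KB VMAX track sizes) using parted
    parted -s /dev/sdb mklabel msdos
    parted -s /dev/sdb mkpart primary 1MiB 100%

Modern distributions start partitions at sector 2048 (1 MiB) by default, which is why no manual alignment is typically required on current Windows and Linux releases.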

Mapping VMware devices/datastores to EMC VMAX devices

The mapping of the VMware canonical device and VMFS to EMC VMAX devices is a critical component when using EMC VMAX-based storage software. To aid in this, EMC provides a plug-in called EMC Virtual Storage Integrator for VMware vSphere, also known as VSI. This free tool, available to download at support.emc.com, adds capabilities to the vSphere Client so that users may view detailed storage-specific information.

There are two different versions of VSI available: the thick client, also referred to as VSI Classic, and the thin client, known as VSI for VMware vSphere Web Client (VSI Web Client). VSI Classic is a collection of feature executables which are installed on the system where the vSphere Client is installed. The VSI Classic framework is installed as a plug-in, providing access to any of the feature executables that are installed. The VSI Web Client, on the other hand, runs as a virtual appliance; a VSI plug-in is installed in the vSphere Web Client and is thus accessible from any browser and not tied to a particular system.

The reason for two different versions is that VMware no longer supports the vSphere Client starting in vSphere 6.5 and requires the use of the vSphere Web Client exclusively. Currently all new features of vSphere are only available in the Web Client, though the vSphere Client can still be used for vSphere 5.5. VSI Classic, therefore, is no longer being enhanced and all future improvements will be made to the VSI Web Client.

Note: Only the VSI Web Client supports VMAX3/VMAX All Flash.

Note: The VSI Classic terminology is being used in this TechBook in order to distinguish it from the VSI Web Client. VSI will still be found on support.emc.com under the various VSI features, e.g. VSI Storage Viewer.

VSI Classic
The most current (and last) VSI Classic versions support vSphere 5 but not vSphere 6. VSI Classic provides simple storage mapping functionality for various EMC storage-related entities that exist within the vSphere Client, including datastores, LUNs, and SCSI targets. It does so through the Storage Viewer feature of VSI Classic.

The storage information displayed through VSI Classic provides distinction among the types of storage used, the specific arrays and devices presented, the paths that are used for the storage, and individual characteristics of the existing storage.

Note: EMC Storage Viewer is one of the features of the product EMC Virtual Storage Integrator Classic. EMC Virtual Storage Integrator Classic includes additional features that enable tasks such as path management (Path Management feature), simplified provisioning of VMAX storage to VMware vSphere environments (Unified Storage Management feature), and management of SRM environments (SRA Utilities, RecoverPoint Management feature). Further details of the product can be obtained at support.emc.com.

EMC Storage Viewer provides two main views:
- VMware ESXi context view
- Virtual Machine context view

The LUN subview of the ESXi context view, as seen in Figure 13, provides detailed information about EMC storage associated with the LUNs visible on the selected ESXi host.

Figure 13 LUN subview of the VMware ESXi context view of the EMC Storage Viewer feature of VSI

The datastore subview of the ESXi context view, as seen in Figure 14, provides detailed information about EMC storage associated with the datastore visible on the selected ESXi host.

Figure 14 Datastore subview of the VMware ESXi context view of the EMC Storage Viewer feature of VSI

Detailed discussion of VSI for VMware vSphere: Storage Viewer is beyond the scope of this document. Interested readers should consult the EMC VSI for VMware vSphere: Storage Viewer Product Guide for this feature, available at support.emc.com.

VSI for vSphere Web Client
Like VSI Classic, the VSI Web Client provides simple storage mapping functionality for EMC storage-related entities including datastores, RDMs, SCSI and iSCSI targets. The storage information is displayed within the various panels of the vSphere Web Client. The presentation differs from VSI Classic because the plug-in is integrated into the Web Client and information is generally presented in existing panels, rather than in dedicated screens as with VSI Classic.

The datastore display for the VSI Web Client is shown in Figure 15. There are two panels with EMC information: Storage System and Storage Device. The Storage System panel provides the array ID and the model number, while the Storage Device panel includes fields such as the Device ID, Device Type, Raid Level, and Used Capacity, among others.

Figure 15 VSI Web Client Datastore view

The VSI Web Client, like VSI Classic, also has a VM view. This can be found by selecting the VM on the left-hand panel, then the Monitor tab on the right. Within the Monitor tab are a number of sub-tabs, one of which is EMC Storage Viewer. Storage information for both vmdks and RDMs is visible here. An example is demonstrated in Figure 16.

Figure 16 VSI Web Client VM view

Starting with version 7.0 of the VSI Web Client, there are two additional views available for VMAX storage at both the cluster and host level: EMC VMAX Datastore Viewer and EMC VMAX LUN Viewer. The EMC VMAX Datastore Viewer lists each VMAX datastore with VMware detail; by selecting an individual datastore, the EMC details are shown in the bottom screen. An example is shown in Figure 17 on page 40.

Figure 17 EMC VMAX Datastore Viewer

The EMC VMAX LUN Viewer displays all devices presented to the cluster or host along with both VMware and EMC detail. This is seen in Figure 18.

Figure 18 EMC VMAX LUN Viewer

Optimizing VMware and EMC VMAX for interoperability

The EMC VMAX product line includes the VMAX3 (100K, 200K, 400K), VMAX All Flash (250F/FX, 450F/FX, 850F/FX, 950F/FX), VMAX (40K, 20K, 10K) Virtual Matrix, and the DMX Direct Matrix Architecture family. EMC VMAX is a fully redundant, high-availability storage processor providing non-disruptive component replacements and code upgrades. The VMAX system features high levels of performance, data integrity, reliability, and availability.

Configuring the VMAX storage array appropriately for a VMware ESXi environment is critical to ensure a scalable, high-performance architecture. This section discusses these best practices.

Storage considerations for VMware ESXi

Physical disk size and data protection
EMC VMAX storage arrays offer customers a wide choice of physical drives to meet different workloads. Some examples include high performance Enterprise Flash Drives (EFD) in 3.5" or 2.5" form factors; 15k rpm Fibre Channel drives; SATA-II drives; and small form-factor 2.5" SAS drives, which can reduce power consumption by 40 percent and weigh 54 percent less while delivering performance similar to the 10k rpm Fibre Channel drives. For some platforms, various drive sizes can be intermixed on the same storage array to provide customers with the option of giving different applications the appropriate service level.

In addition to the different physical drives, EMC also offers various protection levels on an EMC VMAX storage array. The storage arrays support RAID 1, RAID 10 (DMX/VMAX only), RAID 5 and RAID 6 protection types. For the VMAX, the RAID protection type can be mixed not only in the same storage array but also on the same physical disks. Furthermore, depending on the VMAX array, storage utilization can be optimized by deploying Virtual Provisioning and Fully Automated Storage Tiering for Virtual Pools, or FAST VP (on the VMAX3/VMAX All Flash, FAST VP is simply referred to as FAST).

FAST, FAST VP and FTS
EMC VMAX FAST VP automates the identification of data volumes for the purpose of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify busy data that would benefit from being moved to higher performing drives. FAST VP will also identify less busy data that could be relocated to higher capacity drives, without existing performance being affected. This promotion/demotion activity is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, via thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed non-disruptively, without affecting business continuity and data availability.

The VMAX3 introduces the next-generation FAST engine, which has many benefits including:
- Set performance levels by Service Level Objective.
- Actively manages and delivers the specified performance levels.
- Provides high-availability capacity to the FAST process.
- Delivers defined storage services based on a mixed drive configuration.

The Service Level Objectives (SLO), simply SL or Service Level on the VMAX All Flash, are defined by their average response time and provide the next level of storage performance simplification for users.

Introduced in Enginuity 5876, VMAX adds to the drive options through Federated Tiered Storage, or FTS (also known as FAST.X in more recent releases of HYPERMAX OS). FTS allows supported, SAN-attached disk arrays to provide physical disk space for a VMAX array. This permits the user to manage, monitor, migrate, and replicate data residing on both VMAX and other supported arrays using familiar EMC software and Enginuity features. This applies equally to data that already exists on external arrays, as well as to new storage that is being allocated.

Note: Utilizing FTS on the VMAX 10K platform requires a minimum Enginuity 5876 service release.

The flexibility provided by EMC VMAX storage arrays enables customers to provide different service levels to the virtual machines using the same storage array. However, to configure appropriate storage for the virtual infrastructure, knowledge of the anticipated I/O workload is required. For customers who leverage FAST VP in their environments, using it in VMware vSphere environments can take the guesswork out of much of the disk placement. For example, a virtual disk could be placed on a datastore that is in a storage group that is part of a FAST VP policy, thus allowing a more nuanced placement of the data over time; the result being that the most accessed data would be moved to the fastest disks in the policy, while the least accessed would be placed on the slower, more cost-effective tier. On the VMAX3 the storage group model has changed so that a single storage group may have children, each with a different SLO assignment.

LUN configuration and size presented to the VMware ESXi hosts
The most common configuration of a VMware ESXi cluster presents the storage to the virtual machines as flat files in a VMFS. It is, therefore, tempting to present the storage requirement for the VMware ESXi hosts as one large LUN, as vSphere 5 and 6 support datastores up to 64 TB in size. Using a singular, large LUN, however, can be detrimental to the scalability and performance characteristics of the environment.

Presenting the storage as one large LUN forces the VMkernel to serially queue I/Os from all of the virtual machines utilizing the LUN. To help alleviate the queue prior to ESXi 5.5, a per-host VMware parameter, Disk.SchedNumReqOutstanding, can be adjusted to prevent one virtual machine from monopolizing the Fibre Channel queue for the LUN. From ESXi 5.5 onward, the outstanding requests can be adjusted on a per-device level. Note that the HBA queue depth may also require changing. In most environments, however, the default settings are sufficient. Adjusting the outstanding requests can have a significant impact on the environment, so caution is warranted. Refer to VMware KB article 1268 for more information.

VMware also has an adaptive queue. ESX 3.5 Update 4 introduced an adaptive queue depth algorithm that is controlled by two parameters: QFullSampleSize and QFullThreshold. In ESX 3.5 through ESXi 5.0 these settings are global to the ESX host. Starting in ESXi 5.1, however, these parameters can be set on a per-device basis. When using VMAX storage, EMC recommends leaving the

QFullSampleSize parameter at the default of 32 and the QFullThreshold parameter at the default of 4. Refer to the VMware Knowledge Base for more information.

Note: Adaptive queue depth parameters should only be altered under the explicit direction of EMC or VMware support. If throttling is required, EMC recommends leveraging VMware Storage I/O Control.

Despite these available tuning parameters, a long queue against the LUN results in unnecessary and unpredictable elongation of response time. This problem can be further exacerbated in configurations that allow multiple VMware ESXi hosts to share a single LUN, particularly across VMware clusters. In this configuration, all VMware ESXi hosts utilizing the LUN share the queue provided by the VMAX Fibre Channel port. In a large farm with multiple active virtual machines, it is easy to create a very long queue on the EMC VMAX storage array front-end port, causing unpredictable and sometimes elongated response time. When such an event occurs, the benefits of moderate queuing are lost. Note that the VMAX3/VMAX All Flash have a shared CPU model for FA ports, so it is more difficult to incur a long queue.

The potential response time elongation and performance degradation can be addressed by presenting a number of smaller LUNs to the VMware ESXi cluster. However, this imposes overhead for managing the virtual infrastructure. Furthermore, the limitation of 256 SCSI devices per VMware ESXi host, or 512 in vSphere 6.5, can impose severe restrictions on the total amount of storage that can be presented. Table 1 on page 45 compares the advantages and disadvantages of presenting the storage to a VMware ESXi farm as a single or multiple LUNs. The table shows that the benefits of presenting storage as multiple LUNs outweigh the disadvantages. EMC recommends presenting the storage for a VMware ESXi cluster from EMC VMAX storage arrays as striped metavolumes. The VMAX3/VMAX All Flash no longer require metavolumes and, as mentioned, due to their architecture which allows CPU sharing on the FA, they are particularly good at concurrency. Thus larger LUN sizes present less of a challenge to these platforms than the VMAX.

The anticipated I/O activity influences the maximum size of the LUN that can be presented to the VMware ESXi hosts. The appropriate size for an environment should be determined after a thorough analysis of the performance data collected from existing physical or virtual environments.
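For reference, the per-device settings described above can be viewed and, if directed by support, adjusted from the ESXi shell on ESXi 5.5 and later. The sketch below uses a placeholder NAA ID, and the values shown are the defaults discussed in this section; exact option support depends on the ESXi release.

    # Display the current device settings, including the outstanding request and queue-full values
    esxcli storage core device list --device naa.60000970000197900049533030334241

    # Set the per-device number of outstanding requests (ESXi 5.5 and later)
    esxcli storage core device set --device naa.60000970000197900049533030334241 \
      --sched-num-req-outstanding 32

    # Set the adaptive queue depth parameters per device (ESXi 5.1 and later);
    # leave these at the defaults of 32 and 4 for VMAX unless directed otherwise by support
    esxcli storage core device set --device naa.60000970000197900049533030334241 \
      --queue-full-sample-size 32 --queue-full-threshold 4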

Table 1 Comparing approaches for presenting storage to VMware ESXi hosts

Management
- Storage as a single LUN: Easier management. Storage can be over-provisioned. One VMFS to manage.
- Storage as multiple LUNs: Small management overhead. Storage provisioning has to be on demand.

Performance
- Storage as a single LUN: Can result in poor response time.
- Storage as multiple LUNs: Multiple queues to storage ensure minimal response times.

Scalability
- Storage as a single LUN: Limits number of virtual machines due to response time elongation. Limits number of I/O-intensive virtual machines.
- Storage as multiple LUNs: Multiple VMFS allow more virtual machines per ESXi server. Response time of limited concern (can optimize).

Functionality
- Storage as a single LUN: All virtual machines share one LUN. Cannot leverage all available storage functionality.
- Storage as multiple LUNs: Use VMFS when storage functionality not needed. Enables judicious use of RDMs as needed.

IMPORTANT
If multiple storage arrays are presented to the same ESXi 5.1+ hosts along with VMAX, be they EMC (e.g. XtremIO) or non-EMC, the VMware queues can be adjusted on a per-device basis so there is no conflict. Note, however, this does not hold true for any HBA queues, which will be host-wide, or other VMware-specific parameters. In such cases careful consideration must be taken when adjusting the default values, as some arrays can handle higher queues than others, and while performance may be improved on one, at the same time it could be reduced on another. EMC offers recommended settings when using two different EMC storage arrays with the same ESXi host, which can be found in an EMC knowledge base article.

When it comes to I/O tuning, EMC recommends leaving the VMware configuration option DiskMaxIOSize at the default setting of 32 MB. While smaller I/O sizes are usually optimal on the VMAX array, the overhead of the host splitting the I/O into smaller chunks outweighs any possible performance benefit achieved with smaller I/Os being sent to the VMAX array. On the VMAX3/VMAX All Flash, however, testing has shown minor differences between 32 MB and 4 MB, so either could be used.
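The DiskMaxIOSize advanced setting mentioned above can be checked, and changed if ever required, per host from the ESXi shell. A minimal sketch follows; the value is expressed in KB, and the default of 32767 KB corresponds to the 32 MB discussed above.

    # Display the current maximum I/O size passed to the storage device (value in KB)
    esxcli system settings advanced list --option /Disk/DiskMaxIOSize

    # Example only: set the value back to the default of approximately 32 MB
    esxcli system settings advanced set --option /Disk/DiskMaxIOSize --int-value 32767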

Spanned VMFS
VMFS supports concatenation of multiple SCSI disks to create a single file system. Allocation schemes used in VMFS spread the data across all LUNs supporting the file system, thus exploiting all available spindles. While in the past a spanned VMFS could be useful when an ESX 4 environment called for a VMFS volume larger than 2 TB, this is no longer a concern in ESXi 5 and 6, which support extent sizes well beyond 2 TB, and therefore EMC does not recommend spanning VMFS.

IMPORTANT
In a VMFS-5 or VMFS-6 file system, if any member besides the first extent (the head) of a spanned VMFS volume is unavailable, the datastore will still be available for use, except for the data from the missing extent.

Number of VMFS in a VMware environment
Virtualization enables better utilization of IT assets and, fortunately, the fundamentals for managing information in the virtualized environment are no different from a physical environment. EMC recommends the following best practices for a virtualized infrastructure:

- A VMFS to store virtual machine boot disks. In most modern operating systems, there is minimal I/O to the boot disk. Furthermore, most of the I/O to boot disks tends to be paging activity that is sensitive to response time. By separating the boot disks from application data, the risk of response time elongation due to application-related I/O activity is mitigated.

- Data managers such as Oracle, Microsoft SQL Server and IBM MQSeries use an active log and recovery data structure that track changes to the data. In case of an unplanned application or operating system disruption, the active log or the recovery data structure is critical to ensure proper recovery and data consistency. Since the recovery structures are a critical component, any virtual machine that supports data managers should be provided a separate VMFS for storing active log files and other structures critical for recovery. Furthermore, if mirrored recovery structures are employed, the copy should be stored in a separate VMFS.

49 VMware vsphere and EMC VMAX Application data, including database files, should be stored in a separate VMFS. Furthermore, this file system should not contain any structures that are critical for application or database recovery. Note that there are certain configurations, e.g. Oracle RAC, which require a special parameter, multi-writer, to ensure multiple VMs can access the same vmdks. For more information, review the VMware KB As discussed in Physical disk size and data protection on page 41, VMware ESXi serializes and queues all I/Os scheduled for a SCSI target. The average response time from the disk depends on the average queue length and residency in the queue. As the utilization rate of the disks increases, the queue length and hence, the response time increase nonlinearly. Therefore, applications requiring high performance or predictable response time should be provided their own VMFS. Both EMC and VMware ESXi provide sophisticated mechanisms to provide fair access to the disk subsystem. EMC offers Host I/O Limits to limit I/O from Storage Groups. VMware offers Storage I/O Control, or SIOC. This capability allows virtual machines with different service-level requirements to share the same VMFS. Additional information on these technologies can be found in Chapter 4. IMPORTANT EMC makes no general recommendation as to the number or size of datastores (VMFS) in a customer s environment, nor how many VMs should be in each datastore. There are many variables which will determine the best configuration. If performance, availability, and management requirements are met, the configuration is appropriate. VMware Virtual Volumes (VVols) VMAX3/VMAX All Flash introduce support for VMware Virtual Volumes, a new integration and management framework (referred to as the VVol framework) that moves management from the datastore to the virtual machine. VVols virtualize storage, enabling a more efficient operational model that is optimized for virtualized environments and centered on the application instead of the infrastructure. VVols map virtual disks, configuration files, clones, Optimizing VMware and EMC VMAX for interoperability 47

50 VMware vsphere and EMC VMAX and snapshots, directly to virtual volume objects on a storage system. This mapping allows VMware to offload more storage operations to the VMAX3/VMAX All Flash, such as VM snapshots. VVols virtualize SAN devices by abstracting physical hardware resources into logical pools of storage called Storage Containers which replace traditional storage volumes. These containers are created on the VMAX3/VMAX All Flash and are assigned storage capabilities which can be all, or a subset, of the SLOs the particular VMAX3/VMAX All Flash array supports. These storage containers are presented to the vcenter when a VASA 2 Provider is registered. The VASA Provider runs on the storage side and integrates with vsphere Storage Monitoring Service (SMS) and ESXi vvold service to manage all aspects of the storage in relation to VVols. All communication between VMware and the VASA Provider is out of band. From the presented storage containers, the VMware Admin creates VVol datastores through the wizard, just like a traditional datastore. A VVol datastores has a one-to-one relationship with a storage container and is its logical representation on the ESXi host. A VVol datastore holds VMs and can be browsed like any datastore. Each VVol datastore then will inherit the capabilities of the container, i.e. SLOs - Bronze, Silver, etc. The VMware Admin creates Storage Policies that are mapped to these capabilities so when a VM is deployed, the user selects the desired capability and is presented with compatible VVol datastores in which to deploy the VM. Unlike traditional devices, ESXi has no direct SCSI visibility to VVols on the VMAX3/VMAX All Flash. Instead, ESXi hosts use a front-end access point called a Protocol Endpoint. A Protocol Endpoint (PE) is a special device created on the VMAX3/VMAX All Flash and mapped and masked to the ESXi hosts. Each ESXi host requires a single, unique PE. The ESXi host uses the PE to communicate with the volumes and the disk files the VVols encapsulate. By utilizing PEs, ESXi establishes data paths from the virtual machines (VM) to the virtual volumes. VVols simplify the delivery of storage service levels to individual applications by providing finer control of hardware resources and allowing the use of native array-based data services such as SnapVX at the VM level. VVols present the VMware Admin a granularity of control over VMs on shared storage that cannot be achieved with traditional architecture due to the device limitations of the ESXi server. VMAX3/VMAX All Flash supports up to 64k VVols on a 48 Using EMC VMAX Storage in VMware vsphere Environments

51 VMware vsphere and EMC VMAX VMAX3/VMAX All Flash system. Note that as each VVol uses a host available address, each one counts toward the total number of devices on the array. VVols will not be covered in detail in this TechBook as an extensive whitepaper, Using VMware Virtual Volumes with EMC VMAX3 and VMAX All Flash, is available which discusses best practices and can be found at and on support.emc.com. Optimizing VMware and EMC VMAX for interoperability 49
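Once a VASA Provider has been registered and a Protocol Endpoint masked to a host, the ESXi shell can be used as a quick sanity check that the host sees the VVol infrastructure. This is a minimal sketch; the output fields vary by ESXi release:

  # List the Protocol Endpoints visible to this host
  esxcli storage vvol protocolendpoint list

  # List the storage containers known to this host
  esxcli storage vvol storagecontainer list

  # Show the registered VASA providers and their online state
  esxcli storage vvol vasaprovider list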

52 VMware vsphere and EMC VMAX Expanding VMAX metavolumes EMC VMAX storage arrays offer ways of non-disruptively increasing the size of metavolumes presented to hosts. Concatenated or striped metavolumes can be increased in size without affecting host connectivity. In addition to being comprised out of regular devices, metavolumes can be formed out of thin devices, bringing additional benefits to using metavolumes. Note: The VMAX3/VMAX All Flash array does not have a concept of metavolumes. Devices can be created up to 64TB in size. Concatenated metavolume expansion Concatenated metavolumes are the easiest to expand due to how they are constructed. Concatenated devices are volume sets that are organized with the first byte of data at the beginning of the first device. Addressing continues linearly to the end of the first metamember before any data on the next member is referenced. Thus the addressing mechanism used for a concatenated device ensures the first metamember receives all the data until it is full, and then data is directed to the next member and so on. Therefore, when a new device is added it just needs to be appended to the end of the metavolume without requiring any movement or reorganizing of data. Figure 19 on page 51 shows the VMAX device information in EMC Unisphere for VMAX (Unisphere) of a concatenated two member (one head, one member (tail)) metavolume 727. It is currently 200 GB in size For more information on Unisphere for VMAX and Solutions Enabler, refer to Chapter 2, Management of EMC VMAX Arrays. 50 Using EMC VMAX Storage in VMware vsphere Environments

53 VMware vsphere and EMC VMAX Figure 19 Concatenated metavolume before expansion with Unisphere for VMAX With Solutions Enabler, to add additional members to an existing concatenated metavolume, use the following syntax from within the symconfigure command: add dev SymDevName[:SymDevName] to meta SymDevName; In this example, one more member will be added to the metavolume to increase the size from 200 GB to 300 GB. The procedure to perform this using Unisphere is shown in Figure 20 on page 52. Expanding VMAX metavolumes 51

54 VMware vsphere and EMC VMAX Figure 20 Expanding a concatenated metavolume using Unisphere for VMAX 52 Using EMC VMAX Storage in VMware vsphere Environments
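The same expansion can be scripted directly with Solutions Enabler rather than through Unisphere. The sketch below assumes a hypothetical array ID (-sid 1234) and an unmapped, unmasked member device (0730) of the appropriate size; substitute your own values:

  # Validate the change first, then commit it
  symconfigure -sid 1234 -cmd "add dev 0730 to meta 0727;" preview
  symconfigure -sid 1234 -cmd "add dev 0730 to meta 0727;" commit

Running preview validates the command syntax and device state before any configuration change is actually made.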

55 VMware vsphere and EMC VMAX Figure 21 shows the metavolume after it has been expanded with Unisphere which is now 300 GB in size. Figure 21 Concatenated metavolume after expansion with Unisphere for VMAX Striped metavolume expansion Striped metavolumes are a little more complicated to expand due to a more complex configuration. Striped metavolume addressing divides each metamember into a series of stripes. When the stripe on the last member of the metavolume has been addressed or written to, the stripe of the first member (the meta head) is addressed. Thus, when there is write I/O to a striped volume, equal size stripes of data from each participating drive are written alternately to each member of the set. Striping data across the multiple drives in definable cylinder stripes benefits sequential I/O by avoiding stacking multiple I/Os to a single spindle and disk director. This scheme allows creation of volumes larger than the single volume limit of the Enginuity operating system, while balancing the I/O activity between the disk devices and the VMAX disk directors. Expanding VMAX metavolumes 53

Due to the more complex architecture of a striped metavolume, expanding one requires the use of a BCV 1 metavolume. Since the data is striped evenly over each volume, simply appending a member at the end of a striped volume would result in an unbalanced amount of data striped over the individual members. For this reason it is not allowed. Instead, the data is copied over to an identical BCV metavolume first, and then any new members are added to the original metavolume. From there, the data is copied off the BCV metavolume and re-striped evenly over the newly expanded metavolume. The simplest way to create the identical BCV device is to make use of the symconfigure command:

symconfigure -cmd "configure 1 devices copying dev <original meta device ID> overriding config=bcv+tdev;"

This will take the configuration (meta configuration, meta size, member count, thin pool) and make an identical device. Adding the overriding config option will allow the new device to be the correct type of BCV. This can also be done in Unisphere with the duplicate volume wizard. Note, though, that Unisphere does not support overriding the device configuration, so converting the new device to the BCV type will be a separate step.

1. EMC Business Continuance Volumes (BCVs) are special devices used with EMC TimeFinder software. Regardless, expanding a striped metavolume with a BCV does not require a TimeFinder license. More information can be found in Chapter 5, Cloning of vsphere Virtual Machines.

57 VMware vsphere and EMC VMAX The following example will use thin devices to demonstrate striped metavolume expansion. The process would be the same for thick devices. Figure 22 shows a screenshot from Unisphere for VMAX of a two member striped meta (one head, one member (tail)). It is currently 200 GB in size. Figure 22 Viewing a striped metavolume before expansion with Unisphere If using Solutions Enabler to add additional members to an existing striped metavolume, use the following syntax from within the symconfigure command: add dev SymDevName[:SymDevName] to meta SymDev [,protect_data=[true FALSE],bcv_meta_head=SymDev]; The protect_data option can be either TRUE or FALSE. For an active VMFS volume with stored data, this value should always be set to TRUE. When set to TRUE, the configuration manager automatically creates a protective copy to the BCV meta of the original device striping. Because this occurs automatically, there is no need to perform a BCV establish. When enabling protection with the protect_data option, a BCV meta identical to the existing (original) Expanding VMAX metavolumes 55

58 VMware vsphere and EMC VMAX striped meta has to be specified. The bcv_meta_head value should be set to the device number of a bcv_meta that matches the original metavolume in capacity, stripe count, and stripe size. Note: If expanding a striped thin metavolume, the BCV must be fully allocated to a thin pool and the allocation type must be persistent. 56 Using EMC VMAX Storage in VMware vsphere Environments
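Putting the pieces together, a protected striped metavolume expansion issued from Solutions Enabler might look like the sketch below. The array ID (1234), new member (0731), and BCV meta head (0800) are placeholders; the BCV meta must already exist and match the original metavolume in capacity, stripe count, and stripe size:

  # Validate, then commit the protected expansion
  symconfigure -sid 1234 -cmd "add dev 0731 to meta 0727, protect_data=TRUE, bcv_meta_head=0800;" preview
  symconfigure -sid 1234 -cmd "add dev 0731 to meta 0727, protect_data=TRUE, bcv_meta_head=0800;" commit

The protective copy to the BCV meta and the subsequent re-stripe onto the expanded metavolume are handled by the array once the command is committed.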

59 VMware vsphere and EMC VMAX In this example, one more member will be added to the striped metavolume to increase the size from 200 GB to 300 GB. This can be seen using Unisphere in Figure 23. Figure 23 Initiating a striped metavolume expansion with Unisphere for VMAX Expanding VMAX metavolumes 57

Figure 24 shows the metavolume after it has been expanded with Unisphere. The data has now been completely re-striped over the newly expanded striped metavolume, which is now 300 GB in size, and includes the newly added member.

Figure 24 Viewing a striped metavolume after expansion with Unisphere for VMAX

Thin meta expansion

The following stipulations refer to all types of thin metavolumes:

- Members must be unmasked, unmapped, and unbound before being added to a thin metavolume.
- Only thin devices can be used to create a thin metavolume. Mixing thin and traditional, non-thin devices to create a metavolume is not allowed.

The following specifically refer to creating and expanding concatenated thin metavolumes:

- If using Solutions Enabler 7.1 and Enginuity 5874 and earlier, bound thin devices may be converted to concatenated thin metavolumes, but they cannot be mapped and masked to a host. Any data on the thin device is preserved through the conversion, but since the device is unmapped and unmasked, this is an offline operation.
- Beginning with Solutions Enabler 7.2 and Enginuity 5875, a thin device can remain bound and mapped/masked to the host during the conversion, allowing an entirely non-disruptive operation.
- Concatenated thin metavolumes can be expanded non-disruptively regardless of the Solutions Enabler version.

The following specifically refer to creating and expanding striped thin metavolumes:

- For a striped metavolume, all members must have the same device size.
- Only unbound thin devices can be converted to striped thin metavolumes. The striped thin metavolume must first be formed, and then it can be bound and presented to a host.
- If using Solutions Enabler 7.1 and Enginuity 5874 and earlier, bound striped thin metavolumes cannot be expanded online; they must be unbound, unmapped, and unmasked. Since this process causes complete destruction of the data stored on the metavolume, any data on the striped thin metavolume must be migrated off before attempting to expand it in order to prevent data loss.
- Beginning with Solutions Enabler 7.2 and Enginuity 5875, a striped thin metavolume can be expanded with no impact to host access or to the data stored on the metavolume with the assistance of a BCV. 1

1. Previously, BCVs used for thin striped meta expansion had to be thick devices as thin devices with the BCV attribute were not supported. Starting with Enginuity and Solutions Enabler 7.3, thin devices with the BCV attribute are supported and can be used for meta expansion. There are a number of restrictions that should be reviewed, however, which can be found in the EMC Solutions Enabler Symmetrix Array Controls CLI Product Guide on support.emc.com.

Converting to VMAX metavolumes

VMAX storage arrays running the Enginuity operating environment 5874 and earlier do not support converting devices online to metavolumes. They first must be unmasked/unmapped before they can be converted. Beginning with Enginuity 5875 and Solutions Enabler 7.2, non-metavolumes can be converted to concatenated metavolumes 1 online without destroying existing data, in other words non-disruptively. Once converted, the metavolume can then be expanded as needed, again non-disruptively.

1. Conversion to a striped metavolume from a non-meta device still requires that the device be unmapped first; therefore it is an offline operation. Conversion from a concatenated meta to a striped meta, however, is an online operation.
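As a sketch, an online conversion of an existing standalone device to a concatenated metavolume with Solutions Enabler might look like the following. The "form meta from dev" syntax and the array/device IDs shown here are illustrative assumptions; confirm the exact syntax for your Solutions Enabler release in the Array Controls CLI guide:

  # Convert (form) device 0740 into a concatenated meta head
  symconfigure -sid 1234 -cmd "form meta from dev 0740, config=concatenated;" commit

  # The new metavolume can then be grown online by adding members
  symconfigure -sid 1234 -cmd "add dev 0741 to meta 0740;" commit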

63 VMware vsphere and EMC VMAX Growing VMFS in VMware vsphere using an expanded metavolume This section presents a method to expand a VMFS by utilizing the non-disruptive metavolume expansion capability of the VMAX storage system. Note: Although this example focuses on a metavolume, the process of expanding the VMFS is exactly the same if a VMAX3/VMAX All Flash device is resized which does not use metavolumes. vsphere 5 and 6 offer VMFS Volume Grow, which allows one to increase the size of a datastore that resides on a VMFS volume without resorting to physical extent spanning. It complements the dynamic LUN expansion capability that exists in VMAX storage arrays. If a LUN is increased in size, then VMFS Volume Grow enables the VMFS volume to dynamically increase in size as well. Often, virtual machines threaten to outgrow their current VMFS datastores and they either need to be moved to a new, larger datastore or the current datastore needs to be increased in size to satisfy the growing storage requirement. Prior to vsphere, the option for increasing the size of an existing VMFS volume was to extend through a process called spanning. Even if the newly available space was situated upon the LUN which the original VMFS volume resided, the only option was to add an extent. A separate disk partition was created on the additional space and then the new partition could be added to the VMFS volume as an extent. With VMFS Volume Grow the existing partition table is changed and extended to include the additional capacity of the underlying LUN partition. The process of increasing the size of VMFS volumes is integrated into the vsphere Client. It is important to note there are four options when growing a Virtual Machine File System in vsphere. The following options are available when expanding VMware file systems: Use free space to add new extent This options adds the free space on a disk as a new datastore. Use free space to expand existing extent This option grows an existing extent to a required capacity. Growing VMFS in VMware vsphere using an expanded metavolume 61

64 VMware vsphere and EMC VMAX Use free space This option deploys an extent in the remaining free space of a disk. This option is available only when adding an extent. 5. Use all available partitions This option dedicates the entire disk to a single datastore extent. This option is available only when adding an extent and when the disk you are formatting is not blank. The disk is reformatted, and the datastores and any data that it contains are erased. The recommended selection is the Use free space to expand existing extent option as using this choice eliminates the need for additional extents and therefore reduces management complexity of affected VMFS. The process to grow a VMFS by extending a VMAX metavolume is listed below: 1. In vcenter Server, as seen in Figure 25, the datastore vsphere_expand_ds is nearly full. The underlying device has no additional physical storage for VMFS expansion. The datastore consumes 200 GB available on the device. Using VMFS Volume Grow and non-disruptive metavolume expansion, this issue can be resolved. 62 Using EMC VMAX Storage in VMware vsphere Environments

65 VMware vsphere and EMC VMAX Figure 25 VMFS datastore before being grown using VMFS Volume Grow 2. The first step in the expansion process is the identification of the metavolume that hosts the VMFS. As shown in Figure 26, EMC VSI can be used to obtain the mapping information. Figure 26 Identifying the VMAX metavolume to be expanded 3. After using one of the methods of metavolume expansion detailed in the previous sections, a rescan needs to be performed on the ESXi host server to see the expanded metavolume's larger size. For this example, assume the underlying metavolume at this point has been expanded to provide an additional 100 GB of storage. Once a rescan has been executed, the VMFS datastore can be grown. To do this in vcenter, navigate to the datastores listing on any ESXi host that uses the specified VMFS datastore. Select the datastore and click the Properties link, which is shown in Figure 27 on page 64. In this example, the datastore, vsphere_expand_ds, will be the one grown. Growing VMFS in VMware vsphere using an expanded metavolume 63

66 VMware vsphere and EMC VMAX Figure 27 VMFS properties in vsphere Client 4. In the Datastore Properties pop-up window in the vsphere Client, click the Increase button as shown in Figure 28 on page 65. If using the vsphere Web Client, select Increase Datastore Capacity from the right-hand click menu in Figure 29 on page Using EMC VMAX Storage in VMware vsphere Environments

67 VMware vsphere and EMC VMAX Figure 28 Initiating VMFS Volume Grow in vsphere Client Figure 29 Initiating VMFS Volume Grow in vsphere Web Client 5. In the Increase Datastore Capacity window that appears select the device that it currently resides on. It will have a Yes in the Expandable column, as in Figure 30 on page 66. The next Growing VMFS in VMware vsphere using an expanded metavolume 65

68 VMware vsphere and EMC VMAX window, as seen in Figure 30, will confirm the expansion of the VMFS volume; it shows that currently this datastore uses 200 GB and there is 100 GB free. Figure 30 Confirming available free space in the VMFS Volume Grow wizard 66 Using EMC VMAX Storage in VMware vsphere Environments

69 VMware vsphere and EMC VMAX 6. The option to use only part of the newly available space is offered. In Figure 31 the default to maximize all the available space has been chosen and therefore all 100 GB of the newly added space will be merged into the VMFS volume. Figure 31 Choosing how much additional capacity for the VMFS Volume Grow operation 7. A final confirmation screen is presented in Figure 32 on page 68. Growing VMFS in VMware vsphere using an expanded metavolume 67

70 VMware vsphere and EMC VMAX Figure 32 Increase Datastore Capacity wizard confirmation screen 68 Using EMC VMAX Storage in VMware vsphere Environments

71 VMware vsphere and EMC VMAX 8. The datastore vsphere_expand_ds now occupies all 300 GB of the metavolume as shown in Figure 33. Figure 33 Newly expanded VMFS volume Growing VMFS in VMware vsphere using an expanded metavolume 69
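For reference, the same grow operation can be driven from the ESXi shell instead of the vsphere Client. This is only a sketch: the device NAA is a placeholder, and the CLI path may additionally require partedUtil to resize the underlying partition first (the vsphere Client wizard handles the partition resize automatically):

  # Rescan so the host sees the larger device, then confirm the backing device of the datastore
  esxcli storage core adapter rescan --all
  vmkfstools -P /vmfs/volumes/vsphere_expand_ds

  # Grow the VMFS onto the additional space of the expanded partition
  vmkfstools --growfs "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1" "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1"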

72 VMware vsphere and EMC VMAX Path management Path failover and load balancing in VMware vsphere environments The ability to dynamically multipath and load balance through the use of the native or third-party storage vendor multipathing software is available in VMware vsphere. The Pluggable Storage Architecture (PSA) is a modular storage construct that allows storage partners (such as EMC with PowerPath /VE (Virtual Edition)) to write a plug-in to best leverage the unique capabilities of their storage arrays. These modules can interface with the storage array to continuously resolve the best path selection, as well as make use of redundant paths to greatly increase performance and reliability of I/O from the ESXi host to storage. Native Multipathing plug-in By default, the native multipathing plug-in (NMP) supplied by VMware is used to manage I/O. NMP can be configured to support fixed, most recently used (MRU), and round-robin (RR) path selection polices (PSP). 1 NMP and other multipathing plug-ins are designed to be able to coexist on an ESXi host; nevertheless, multiple plug-ins cannot manage the same device simultaneously. To address this, VMware created the concept of claim rules. Claim rules are used to assign storage devices to the proper multipathing plug-in (MPP). When an ESXi host boots or performs a rescan, the ESXi host discovers all physical paths to the storage devices visible to the host. Using the claim rules defined in the /etc/vmware/esx.conf file, the ESXi host determines which multipathing module will be responsible for managing a specific storage device. Claim rules are numbered. For each physical path, the ESXi host processes the claim rules starting with the lowest number first. The attributes of the physical path are compared with the path specification in the claim rule. If there is a match, the ESXi host assigns the MPP specified in the claim rule to manage the physical path. 1. EMC does not support using the MRU PSP for NMP with VMAX. 70 Using EMC VMAX Storage in VMware vsphere Environments

73 VMware vsphere and EMC VMAX This assignment process continues until all physical paths are claimed by an MPP. Figure 34 has a sample claim rules list with only NMP installed. Figure 34 Default claim rules with the native multipathing module The rule set effectively claims all devices for NMP since no other options exist. If changes to the claim rules are needed after installation, devices can be manually unclaimed and the rules can be reloaded without a reboot as long as I/O is not running on the device (for instance, the device can contain a mounted VMFS volume but it cannot contain running virtual machines). It is a best practice to make claim rule changes after installation but before the immediate post-installation reboot. An administrator can choose to modify the claim rules, for instance, in order to have NMP manage EMC or non-emc devices. It is important to note that after initial installation of a MPP, claim rules do not go into effect until after the vsphere host is rebooted. For instructions on changing claim rules, consult the PowerPath/VE for VMware vsphere Installation and Administration Guide at support.emc.com or the VMware vsphere SAN Configuration Guide available from Managing NMP in vsphere Client Claim rules and claiming operations must all be done through the CLI, but the ability to choose the NMP multipathing policy can be performed in the vsphere Client itself. By default, VMAX devices being managed by NMP will be set to the policy of Fixed unless running vsphere 5.1 when they will be set to Round Robin. Using a policy of Fixed is not optimal and will not take advantage of the ability to have multiple, parallel paths to devices residing on the VMAX, leading to under-utilized resources. Therefore, EMC strongly recommends setting the NMP policy of all VMAX devices to Round Robin to maximize throughput. 1 Round Robin uses an automatic path selection rotating through all available paths and enabling the distribution of the load across those paths. The simplest way to ensure that all VMAX devices are set to Round Robin is to change the default policy setting from the Path management 71

Fixed policy to the Round Robin policy. This should be performed before presenting any VMAX devices to an ESXi 5.0 host to ensure that all of the devices use this policy from the beginning. This can be executed through the use of the service console or through vsphere CLI (recommended) by issuing the command:

esxcli nmp satp setdefaultpsp -s VMW_SATP_SYMM -P VMW_PSP_RR

From then on all VMAX devices will be, by default, set to Round Robin. Alternatively, each device can be manually changed in the vsphere Client as in Figure 35.

Figure 35 NMP policy selection in vsphere Client

If using the vsphere thick or Web Client, individual device assignment can be avoided by using the Path Management feature of EMC Virtual Storage Integrator. The Path Management feature will allow all paths for EMC devices to be changed at once.

1. If the VMAX is running any release prior to Enginuity , the EMC special devices known as Gatekeepers require a Fixed policy unless PowerPath/VE or shared storage clustering is employed. See the VMware KB article for more detail.
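Note that the esxcli nmp satp setdefaultpsp command shown earlier reflects the older esxcli namespace. On ESXi 5.x and 6.x hosts the equivalent operations live under the esxcli storage namespace; the following is a sketch using standard esxcli commands with the SATP and PSP names from the text above:

  # Set Round Robin as the default PSP for all devices claimed by the Symmetrix SATP
  esxcli storage nmp satp set --satp VMW_SATP_SYMM --default-psp VMW_PSP_RR

  # Verify the default PSP now associated with each SATP
  esxcli storage nmp satp list

  # Review the PSA claim rules that determine whether NMP or another MPP owns each path
  esxcli storage core claimrule list

Alternatively, as described above, the Path Management feature of VSI can change the policy for all EMC devices at once.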

75 VMware vsphere and EMC VMAX This is accomplished by accessing the EMC context menu either at the host or cluster level as shown in Figure 36 for the thick client and Figure 37 on page 74 for the Web Client. Figure 36 Path Management feature of EMC VSI Classic Context Menu Path management 73

76 VMware vsphere and EMC VMAX Figure 37 Path Management feature of EMC VSI Web Client Context Menu 74 Using EMC VMAX Storage in VMware vsphere Environments

77 VMware vsphere and EMC VMAX Once accessed, the Multipathing Policy dialog box is presented. In this screen, the user has the option to change the NMP or PowerPath/VE (if PowerPath/VE software and tools are installed) policy. Figure 38 and Figure 39 on page 76 are examples where the NMP policy is being set to Round Robin for all Symmetrix (VMAX) devices. Note in particular that the VSI Web Client has dropped the MRU policy for VMAX devices and only offers Fixed and Round Robin. Figure 38 NMP multipathing policy for VMAX devices in VSI Classic Round Robin Path management 75

78 VMware vsphere and EMC VMAX Figure 39 NMP multipathing policy for VMAX devices in VSI Web Client Round Robin 76 Using EMC VMAX Storage in VMware vsphere Environments

79 VMware vsphere and EMC VMAX Once selected, the user selects Next where the final screen appears as in Figure 40 and Figure 41 on page 78. It provides a summary of the action(s) to be taken before implementation. Note that in VSI Classic the changes are being made at the cluster level, while in the VSI Web Client the change is at the host level. Figure 40 NMP multipathing policy for VMAX devices set the cluster level in VSI Classic Path management 77

80 VMware vsphere and EMC VMAX Figure 41 NMP multipathing policy for VMAX devices set the host level in VSI Web Client Note that the NMP and PowerPath/VE policies can be set differently for each type of EMC array as in Figure 42 on page 79 and Figure 43 on page 80, or set at the global level. For more information on using the Path Management feature of VSI, see Managing PowerPath/VE in vcenter Server on page 84. NMP Round Robin and Gatekeepers As the use of NMP Round Robin with Gatekeepers is only supported with Enginuity or higher or HYPERMAX OS, one of the following three procedures should be used in conjunction with version 5.4 of the Path Management feature of VSI to address restrictions: 1. Customers running Enginuity 5875 or earlier with vsphere 5.1 should use the Path Management feature to change all VMAX paths to Fixed, followed by a second change setting all VMAX devices back to Round Robin. By default, vsphere 5.1 uses Round Robin for all VMAX devices, including Gatekeepers, so initially all devices will use Round Robin. The use of Round Robin with Gatekeepers is not supported at Enginuity 5875 or earlier so the Gatekeepers will need to be set to Fixed. Rather than do this manually, the Path Management feature is designed to filter out Gatekeepers when setting Round Robin. So the logic is that if all devices are first set to Fixed, on the second change to Round 78 Using EMC VMAX Storage in VMware vsphere Environments

Robin the Gatekeepers will be skipped, and thus remain under a Fixed policy, while all the other devices are returned to Round Robin.

2. If you are running Enginuity or higher and vsphere 5.0.x or earlier, use the Path Management feature to change all paths to Round Robin, since in vsphere 5.0.x and earlier the default NMP PSP is Fixed but Round Robin is supported. This change to Round Robin will not include Gatekeepers, however, as they will be skipped. If desired, each Gatekeeper will therefore need to be manually changed to Round Robin. This is not required, however, as the support of Round Robin for Gatekeepers was added for reasons of ease of use only.

3. If you are running Enginuity or higher or HYPERMAX OS and vsphere 5.1 or later, Round Robin is the default NMP PSP for VMAX devices and is supported for Gatekeepers, so no remediation is necessary.

Figure 42 Setting multiple policies for different EMC devices in VSI Classic

Figure 43 Setting multiple policies for different EMC devices in VSI Web Client

NMP Round Robin and the I/O operation limit

The NMP Round Robin path selection policy has a parameter known as the I/O operation limit, which controls the number of I/Os sent down each path before switching to the next path. The default value is 1000, so NMP defaults to switching from one path to another after sending 1000 I/Os down any given path. Tuning the Round Robin I/O operation limit parameter can significantly improve the performance of certain workloads, markedly so in sequential workloads. In environments with random and OLTP-type workloads, setting the Round Robin parameter to lower values still yields the best throughput. EMC recommends that the NMP Round Robin I/O operation limit parameter be set to 1 for all VMAX/VMAX3/VMAX All Flash devices. 1 This ensures the best possible performance regardless of the workload being generated from the vsphere environment.

It is important to note that this setting is per device. In addition, setting it on one host will not propagate the change to other hosts in the cluster. Therefore it must be set for every claimed VMAX/VMAX3/VMAX All Flash device on every host. For all new, unclaimed devices, however, a global value can be set to ensure the operation limit is applied on claiming. This will also hold true if a claimed device is unclaimed and reclaimed.

To set the I/O Operation limit as a rule (to be completed on each ESXi host):

esxcli storage nmp satp rule add -s "VMW_SATP_SYMM" -V "EMC" -M "SYMMETRIX" -P "VMW_PSP_RR" -O "iops=1"

To check the I/O Operation limit on claimed devices:

esxcli storage nmp psp roundrobin deviceconfig get --device=<device NAA>

To set the I/O Operation limit on claimed devices:

esxcli storage nmp psp roundrobin deviceconfig set --device=<device NAA> --iops=1 --type iops

Note that this parameter cannot be altered using the vsphere Client as this is a CLI-only operation.

Note: VMware has indicated that on rare occasions, setting the I/O Operation limit to 1 can generate false reservation conflicts in the vmkernel.log file. The entry would appear similar to this: Aug 30 15:51:32 vmkernel: 0:00:22: cpu0:4100)ScsiDeviceIO: vm 0: 1447: Reservation conflict retries 32 for command 0x41027f252f40 (op: 0x1a) to device "naa bbd e293b374e6df11. VMware has noted that there is no side effect to this message and it is safe to ignore. EMC, however, has not seen this message in any of the internal testing done with this parameter. For additional information see the EMC article at the following location:

Starting with version 7.0 of the VSI Web Client, the interface offers the ability to set the I/O Operation limit to 1 for all devices on a host or cluster. It is also possible to add the rule previously mentioned. The interface for doing this is shown in Figure 44.

1. If running an SRDF/Metro vMSC uniform (cross-connect) configuration with the RR PSP, it is essential to leave the I/O operation limit at the default of 1000 and not change it to 1. For more information see the whitepaper Using SRDF/Metro in a VMware Metro Storage Cluster Running Oracle E-Business Suite and 12C RAC available on support.emc.com or EMC.com.
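Because the per-device setting must be applied to every claimed VMAX device on every host, a short loop from the ESXi shell can save time. This is a sketch only; it assumes VMAX devices can be identified by the EMC Symmetrix NAA prefix (naa.60000970), so verify the prefix against your own device list before using it:

  # Apply iops=1 to every currently claimed VMAX device on this host
  for dev in $(esxcli storage nmp device list | awk '/^naa.60000970/ {print $1}') ; do
     esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --iops=1 --type iops
  done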

84 VMware vsphere and EMC VMAX Figure 44 VSI Host Recommended Settings PowerPath/VE Multipathing Plug-in and management EMC PowerPath/VE delivers PowerPath multipathing features to optimize VMware vsphere environments. PowerPath/VE uses a command set, called rpowermt, to monitor, manage, and configure PowerPath/VE for vsphere. The syntax, arguments, and options are very similar to the traditional powermt commands used on all other PowerPath multipathing supported operating system platforms. There is one significant difference in that rpowermt is a remote management tool. ESXi 5 and 6 do not have a service console. 1 In order to manage an ESXi host, customers have the option to use vcenter Server or vcli (also referred to as VMware Remote Tools) on a remote server. PowerPath/VE for vsphere uses the rpowermt command line utility for ESXi. The tools that contain the CLI can be installed on a Windows or Linux server or run directly on the PowerPath/VE virtual appliance. 1. There is a limited CLI available on ESXi if Tech Support mode is enabled. 82 Using EMC VMAX Storage in VMware vsphere Environments
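A few representative rpowermt invocations are sketched below. The host address is a placeholder and the available options vary by PowerPath/VE release, so treat these as illustrative rather than definitive:

  # Display all PowerPath-managed devices and their paths on a host
  rpowermt display dev=all host=x.x.x.x

  # Display a single pseudo device and set the Symmetrix-optimized policy on it
  rpowermt display dev=emcpower0 host=x.x.x.x
  rpowermt set policy=so dev=emcpower0 host=x.x.x.x

  # Check the PowerPath/VE license registration state on the host
  rpowermt check_registration host=x.x.x.x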

85 VMware vsphere and EMC VMAX PowerPath/VE for vsphere should not be managed on the ESXi host itself. There is also no option for choosing PowerPath within the Manage Paths dialog within the vsphere Client as seen in Figure 35 on page 72. Similarly, if a device is currently managed by PowerPath, no policy options are available for selection as seen in Figure 45. Figure 45 Manage Paths dialog viewing a device under PowerPath ownership There is neither a local nor remote GUI for PowerPath on ESXi with the exception of the Path Management tools for VSI previously discussed. For functionality not present in VSI, administrators must user rpowermt. When the ESXi host is connected to a VMAX array, the PowerPath/VE kernel module running on the vsphere host will associate all paths to each device presented from the array and associate a pseudo device name. An example of this is shown in Figure 46 on page 84, which has the output of rpowermt display Path management 83

86 VMware vsphere and EMC VMAX host=x.x.x.x dev=emcpower0. Note in the output that the device has six paths and displays the recommended optimization mode (SymmOpt = Symmetrix optimization). Figure 46 Output of the rpowermt display command on a VMAX device For more information on the rpowermt commands and output, consult the PowerPath/VE for VMware vsphere Installation and Administration Guide. As more VMAX directors become available, the connectivity can be scaled as needed. PowerPath/VE supports up to 32 paths to a device 1. These methodologies for connectivity ensure all front-end directors and processors are utilized, providing maximum potential performance and load balancing for vsphere hosts connected to the VMAX storage arrays in combination with PowerPath/VE. Managing PowerPath/VE in vcenter Server PowerPath/VE for vsphere is managed, monitored, and configured using rpowermt as discussed in the previous section. This CLI-based management is common across all PowerPath platforms, but there is also integration within vcenter through the Path Management feature of EMC Virtual Storage Integrator as mentioned in Managing NMP in vsphere Client on page 71. Through VSI, the 1. ESXi supports 1024 total combined logical paths to all devices in all vsphere versions save 6.5 where it has been increased to Using EMC VMAX Storage in VMware vsphere Environments

87 VMware vsphere and EMC VMAX PowerPath/VE policy can be set at the host or cluster level for EMC devices. The policy can vary for each EMC array type if desired as displayed in Figure 47 and previously in Figure 43 on page 80. Figure 47 Setting policy for PowerPath/VE in the Path Management feature of VSI Classic In the vsphere Client one can also see LUN ownership which will display PowerPath if it is the owner of the device. An example of this is in Figure 48 on page 86, under the Configuration tab of the host and within the Storage Devices list, the owner of the device is shown as being PowerPath. Path management 85

88 VMware vsphere and EMC VMAX Figure 48 Viewing the multipathing plug-in owning a device in vsphere Client In the vsphere Web Client this view, shown in Figure 49 on page 87, is available under the Manage tab, sub-tab Storage, and by selecting Storage Devices. 86 Using EMC VMAX Storage in VMware vsphere Environments

89 VMware vsphere and EMC VMAX Figure 49 Viewing the multipathing plug-in owning a device in vsphere Web Client The Storage Viewer feature of VSI can also be used to determine if a device is under PowerPath ownership. By highlighting a particular device, it can be seen in Figure 50 on page 88 that not only the multipath plug-in owner is listed, but also the path management policy that is in effect. So in this particular case, one sees that the device backing the datastore Convert_VMs is managed by the PowerPath policy of SymmOpt. One of the other features of Storage Viewer is shown at the top of Figure 50 on page 88. This is a status box that alerts users to important information about the environment. One of these alerts is whether or not PowerPath is installed on the host being accessed. In this screenshot one sees the message indicating it is installed. If PowerPath were installed but not licensed, this would also be indicated in this box. Path management 87

90 VMware vsphere and EMC VMAX Figure 50 PowerPath ownership displayed in the Storage Viewer feature of VSI Improving resiliency of ESXi to SAN failures VMware ESXi uses modified QLogic and Emulex drivers that support multiple targets on every initiator. This functionality can be used to provide greater resiliency in a VMware ESXi environment by reducing the impact of storage port failures. Furthermore, presenting a device on multiple paths allows for better load balancing and reduced sensitivity to storage port queuing. This is extremely important in environments that share EMC VMAX storage array ports between VMware ESXi hosts and other operating systems. 88 Using EMC VMAX Storage in VMware vsphere Environments

Figure 51 on page 90 shows how to implement multiple-target zoning in a VMware ESXi environment. It can be seen that the VMware ESXi host has two HBAs, vmhba2 and vmhba5. However, the device hosting the datastore SQL_DATA can be accessed from four different paths with the following runtime names: vmhba2:C0:T0:L23, vmhba2:C0:T1:L23, vmhba5:C0:T0:L23 and vmhba5:C0:T1:L23. This is achieved by zoning each HBA in the VMware ESXi host to two EMC VMAX storage array ports. The VMkernel assigns a unique SCSI target number to each storage array port. In Figure 51 on page 90, the VMkernel has assigned SCSI target numbers 0 and 1 to each EMC VMAX storage array port into which vmhba2 logs in, and also 0 and 1 to each EMC VMAX storage array port into which vmhba5 logs in.

Note: This section does not discuss how multiple-target zoning can be implemented in the fabric. EMC always recommends one initiator and one target in each zone. The multiple-target zoning discussed in this section can be implemented as a collection of zones in which each zone contains the same initiator but different targets.

92 VMware vsphere and EMC VMAX Figure 51 Increasing resiliency of ESXi host to SAN failures 90 Using EMC VMAX Storage in VMware vsphere Environments

93 VMware vsphere and EMC VMAX All Paths Down (APD) and Permanent Device Loss (PDL) All paths down or APD, occurs on an ESXi host when a storage device is removed in an uncontrolled manner from the host (or the device fails), and the VMkernel core storage stack does not know how long the loss of device access will last. VMware, however, assumes the condition is temporary. A typical way of getting into APD would be if the zoning was removed. Permanent device loss or PDL, is similar to APD (and hence why initially VMware could not distinguish between the two) except it represents an unrecoverable loss of access to the storage. VMware assumes the storage is never coming back. Removing the device backing the datastore from the storage group would produce the error. VMware s approach to APD/PDL has evolved over the vsphere versions. It is necessary, therefore, to explain how APD/PDL was first handled in vsphere 4, even though that version is no longer supported. Note: Redundancy of your multi-pathing solution, such as PowerPath/VE, can help avoid some situations of APD. IMPORTANT VMware relies on SCSI sense codes to detect PDL. If a device fails in a manner that does not return the proper sense codes, VMware will default to APD behavior. APD in vsphere 4 In vsphere 4 VMware could not distinguish between APD and PDL, therefore any event was APD. To alleviate some of the issues arising from APD, a number of advanced settings were added which could be changed by the VMware administrator to mitigate the problem. One of these settings changed the behavior of hostd when it was scanning the SAN, so that it would no longer be blocked, even when it came across a non-responding device (i.e. a device in APD state). This setting was automatically added in ESX 4.0 Update 3 and ESX 4.1 Update 1. Previous versions required customers to manually add the setting. 1 All Paths Down (APD) and Permanent Device Loss (PDL) 91

APD in vsphere 5.0 Permanent Device Loss (PDL) change

In vsphere 5.0, there were changes made to the way VMware handles APD. Most importantly, in 5.0 VMware differentiates between a device to which connectivity is permanently lost (PDL), and a device to which connectivity is transiently or temporarily lost (APD). As mentioned previously, the difference between APD and PDL is the following:

- APD is considered a transient condition, i.e. the device may come back at some time in the future.
- PDL is considered a permanent condition where the device is never coming back.

Unfortunately, ESXi versions prior to vsphere 5.0 Update 1 could not distinguish between an APD and a PDL condition, causing VMs to hang rather than, say, automatically invoking an HA failover in an HA cluster. Starting with vsphere 5.0 U1 the ESXi server can act on a PDL SCSI sense code, though two advanced settings must be made.

The first setting is disk.terminateVMOnPDLDefault, which must be set on the host (all hosts in the cluster) in the file /etc/vmware/settings:

disk.terminateVMOnPDLDefault=TRUE

In vsphere 5.5 the setting can be changed in the Advanced System Settings and is found under a new name: VMkernel.Boot.terminateVMOnPDL. Edit this setting and change it from No to Yes.

Note: This setting is not required in vsphere 6 when running HA with VMCP.

This change requires a reboot of the host. This setting will ensure that a VM is killed if the datastore it is on enters a PDL state, assuming all VM files are on that same datastore.

The second advanced setting is das.maskCleanShutdownEnabled and must be set to true on the vsphere HA cluster. This is shown in Figure 52 on page 93. In vsphere 5.1+ this is enabled by default.

1. For a more detailed explanation of these settings, please see the VMware KB article.

95 VMware vsphere and EMC VMAX Figure 52 PDL setting in Advanced Options of HA configuration All Paths Down (APD) and Permanent Device Loss (PDL) 93

96 VMware vsphere and EMC VMAX This advanced setting enables vsphere HA to trigger a restart response for a virtual machine that has been killed automatically due to a PDL condition. It is also able to distinguish between a PDL condition and the VM simply being powered off by an administrator. Avoiding APD/PDL Device removal Prior to vsphere 5, there was no clearly defined way to remove a storage device from ESXi. With vsphere 5 and within the vsphere Client there are two new storage device management techniques that aid in the removal of the device: mount/unmount a VMFS-3 or VMFS-5 volume and attach/detach a device. In order to remove a datastore and VMAX device completely, avoiding APD/PDL, the user needs to first unmount the volume and then detach the device. Unmounting In the following example, the datastore SDRS_DS_5 seen in Figure 53, will be unmounted and the device associated with it detached in the vsphere Client. Before proceeding, make sure that there are no VMs on the datastore, that the datastore is not part of a datastore cluster (SDRS), is not used by vsphere HA as a heartbeat datastore, and does not have Storage I/O Control enabled. All these must be rectified before proceeding. Figure 53 Datastore SDRS_DS_5 94 Using EMC VMAX Storage in VMware vsphere Environments

97 VMware vsphere and EMC VMAX SDRS_DS_5 meets all these prerequisites so it can be unmounted. Note the Runtime Name, vmhba1:c0:t0:l17, as it will be needed when the device is detached. Once the datastore is identified, right-click on the datastore and choose Unmount as shown in Figure 54. The unmount may also be accomplished through the CLI. The command to do an unmount is esxcli storage filesystem unmount if you prefer to do it from the ESXi shell. Figure 54 Unmount datastore SDRS_DS_5 A number of screens follow with the unmount wizard, which are ordered together in Figure 55 on page 96: 1. The first screen presents the hosts that have the volume mounted. Check the boxes of the hosts to unmount the volume from all listed hosts. 2. The wizard will then ensure all prerequisites are met before allowing the user to proceed, presenting the results for verification. All Paths Down (APD) and Permanent Device Loss (PDL) 95

98 VMware vsphere and EMC VMAX 3. The final screen is a summary screen where the user finishes the task. Figure 55 The unmount wizard in the vsphere Client 96 Using EMC VMAX Storage in VMware vsphere Environments

Once the wizard is complete, the result will be an inactive datastore which is grayed out. In Figure 56, the two unmount tasks are shown as completed in the bottom part of the image and the inactive datastore is boxed and highlighted.

Figure 56 Datastore unmount completion

There is one important difference between the vsphere Client shown here and the vsphere Web Client. The vsphere Web Client does NOT run a prerequisite check before attempting to unmount the datastore. The only screen presented is seen in Figure 57 on page 98, which lists the conditions that must be met to unmount the datastore; nothing, however, prevents the user from proceeding, since the unmount will simply fail if a condition has not been met.

100 VMware vsphere and EMC VMAX Figure 57 The unmount wizard in the vsphere Web Client If the user proceeds from this screen and a condition has not been met, the unmount will fail and an error similar to the one in Figure 58 will be produced. Figure 58 An unmount error Conversely, in the vsphere Client, the prerequisites are checked before an unmount is permitted. Attempting to unmount the same datastore as in Figure 57 on page 98, the vsphere Client runs the check and informs the user that one of the conditions has failed, and 98 Using EMC VMAX Storage in VMware vsphere Environments

101 VMware vsphere and EMC VMAX must be corrected before the unmount is allowed. Note in Figure 59 how the OK button is grayed out because a virtual machine resides on the datastore. Once corrected the unmount can proceed as demonstrated previously in Figure 55 on page 96. Figure 59 Unmount prevented in the vsphere Client The proactive checking provided by the vsphere Client provides a better user experience when unmounting a datastore and is therefore recommended over the vsphere Web Client for this particular function. Detaching The second part of the process to remove the devices involves detaching them in the vsphere Client. Start by recalling the runtime name gathered before the unmount (vmhba1:c0:t0:l17). All Paths Down (APD) and Permanent Device Loss (PDL) 99

102 VMware vsphere and EMC VMAX Once the device is identified in the Hardware/Storage option of the Configuration tab of one of the hosts, right-click on it and select detach as in Figure 60. Figure 60 Detaching a device after unmounting the volume As in unmounting the volume, the wizard, seen in Figure 61 on page 101, will run prerequisites before allowing the detach of the device. The vsphere Web Client, however, will not. Once detached, run a rescan to remove the inactive datastore from the list. The grayed out device, however, will remain in the device list until the LUN is unmasked from the host and a second rescan is run. If you prefer to use CLI, the detach can be done through the ESXi shell with the command esxcli storage core device set --state=off. Note: Unlike the unmounting of a volume, detaching the underlying device must be done on each host of a cluster independently. The wizard has no ability to check other hosts in a cluster. 100 Using EMC VMAX Storage in VMware vsphere Environments

103 VMware vsphere and EMC VMAX Figure 61 Completing the detach wizard All Paths Down (APD) and Permanent Device Loss (PDL) 101
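For administrators who prefer the command line, the unmount and detach sequence can also be performed per host from the ESXi shell. This is a sketch; the datastore name comes from this example and the device NAA is a placeholder to be replaced with your own value:

  # 1. Unmount the VMFS volume (the same prerequisites apply)
  esxcli storage filesystem unmount -l SDRS_DS_5

  # 2. Detach (set offline) the backing device
  esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --state=off

  # 3. After the LUN is unmasked on the array, rescan to clean up the device list
  esxcli storage core adapter rescan --all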

104 VMware vsphere and EMC VMAX PDL AutoRemove With vsphere 5.5, a new feature called PDL AutoRemove is introduced. This feature automatically removes a device from a host when it enters a PDL state. Because vsphere hosts have a limit of 255 disk devices per host, a device that is in a PDL state can no longer accept I/O but can still occupy one of the available disk device spaces. Therefore, it is better to remove the device from the host. PDL AutoRemove occurs only if there are no open handles left on the device. The auto-remove takes place when the last handle on the device closes. If the device recovers, or if it is re-added after having been inadvertently removed, it will be treated as a new device. In such cases VMware does not guarantee consistency for VMs on that datastore. In a vsphere 6.0 vmsc environment, such as with SRDF/Metro, VMware recommends that AutoRemove be left in the default state, enabled. In an SRDF/Metro environment this is particularly important because if it is disabled, and a suspend action is taken on a metro pair(s), the non-biased side will experience a loss of communication to the devices that can only be resolved by a host reboot. For more detail on AutoRemove refer to VMware KB Using EMC VMAX Storage in VMware vsphere Environments
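The state of PDL AutoRemove is governed by the Disk.AutoremoveOnPDL advanced setting, where a value of 1 means the feature is enabled. A brief sketch from the ESXi shell:

  # Check whether PDL AutoRemove is enabled (1 = enabled, the default)
  esxcli system settings advanced list -o /Disk/AutoremoveOnPDL

  # Example only: disable the feature (generally not recommended, particularly with SRDF/Metro, as noted above)
  esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 0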

105 VMware vsphere and EMC VMAX APD/PDL in vsphere 6 vsphere 6 offers some new capabilities around APD and PDL for the HA cluster which allow automated recovery of VMs. The capabilities are enabled through a new feature in vsphere 6 called VM Component Protection or VMCP. When VMCP is enabled, vsphere can detect datastore accessibility failures, APD or PDL, and then recover affected virtual machines. VMCP allows the user to determine the response that vsphere HA will make, ranging from the creation of event alarms to virtual machine restarts on other hosts. VMCP is enabled in the vsphere HA edit screen of the vsphere Web Client. Note that like all new vsphere 6 features, this functionality is not available in the thick client. Navigate to the cluster, then the Manage tab and Settings sub-tab. Highlight the vsphere HA under Services and select Edit as in Figure 62 on page 104. Check the box for Protect against Storage Connectivity Loss. APD/PDL in vsphere 6 103

106 VMware vsphere and EMC VMAX Figure 62 Enable VMCP in vsphere Using EMC VMAX Storage in VMware vsphere Environments

107 VMware vsphere and EMC VMAX Once VMCP is enabled, storage protection levels and virtual machine remediations can be chosen for APD and PDL conditions as shown in Figure 63. Figure 63 Storage and VM settings for VMCP PDL VMCP settings The PDL settings are the simpler of the two failure conditions to configure. This is because there are only two choices: vsphere can issue events, or it can initiate power off of the VMs and restart them on the surviving host(s). As the purpose of HA is to keep the VMs APD/PDL in vsphere 6 105

108 VMware vsphere and EMC VMAX running, the default choice should always be to power off and restart. Once either option is selected, the table at the top of the edit settings is updated to reflect that choice. This is seen in Figure 64. Figure 64 PDL datastore response for VMCP APD VMCP settings As APD events are by nature transient, and not a permanent condition like PDL, VMware provides a more nuanced ability to control the behavior within VMCP. Essentially, however, there are still two options to choose from: vsphere can issue events, or it can initiate power off of the VMs and restart them on the surviving host(s) (aggressively or conservatively). The output in Figure 65 on page 107 is similar to PDL. 106 Using EMC VMAX Storage in VMware vsphere Environments

109 VMware vsphere and EMC VMAX Figure 65 APD datastore response for VMCP If issue events is selected, vsphere will do nothing more than notify the user through events when an APD event occurs. As such no further configuration is necessary. If, however, either aggressive or conservative restart of the VMs is chosen, additional options may be selected to further define how vsphere is to behave. The formerly grayed-out option Delay for VM failover for APD is now available and a minute value can be selected after which the restart of the VMs would proceed. Note that this delay is in addition to the default 140 second APD timeout. The difference in approaches to restarting the VMs is straightforward. If the outcome of the VM failover is unknown, say in the situation of a network partition, then the conservative approach would not terminate the VM, while the aggressive approach would. Note that if the cluster does not have sufficient resources, neither approach will terminate the VM. APD/PDL in vsphere 6 107

110 VMware vsphere and EMC VMAX In addition to setting the delay for the restart of the VMs, the user can choose whether vsphere should take action if the APD condition resolves before the user-configured delay period is reached. If the setting Response for APD recovery after APD timeout is set to Reset VMs, and APD recovers before the delay is complete, the affected VMs will be reset which will recover the applications that were impacted by the I/O failures. This setting does not have any impact if vsphere is only configured to issue events in the case of APD. VMware and EMC recommend leaving this set to disabled so as to not unnecessarily disrupt the VMs. Figure 66 highlights the additional APD settings. Figure 66 Additional APD settings when enabling power off and restart VMs As previously stated, since the purpose of HA is to maintain the availability of VMs, VMware and EMC recommend setting APD to power off and restart the VMs conservatively. Depending on the business requirements, the 3 minute default delay can be adjusted higher or lower. 108 Using EMC VMAX Storage in VMware vsphere Environments

IMPORTANT

If either the Host Monitoring or VM Restart Priority settings are disabled, VMCP cannot perform virtual machine restarts. Storage health can still be monitored and events can be issued, however.

APD/PDL in vSphere 6.5

Generally, APD/PDL behavior in vSphere 6.5 is the same as in vSphere 6.0. There are, however, some changes and additions that should be noted. First, VMware has renamed vSphere HA to vSphere Availability to better reflect the broader scope of the feature. The new menu is highlighted in Figure 67.

Figure 67 vSphere Availability in vSphere 6.5

In vSphere 6.5, unlike vSphere 6.0, VMCP does not have to be enabled separately. Once HA is turned on, as in Figure 68 on page 110, the menus are no longer grayed out and APD/PDL can be configured, since Host Monitoring, which allows VMCP, is enabled by default.

Figure 68 Enabling HA in vSphere 6.5

When HA is enabled, APD and PDL options can be configured under Failures and Responses as shown in Figure 69 on page 111. The settings for these conditions should be the same as vSphere 6.0.

Figure 69 PDL and APD in vSphere 6.5

APD/PDL monitoring

APD and PDL activities can be monitored from within the vSphere Web Client or on the ESXi host in the vmkernel.log file. VMware provides a new screen in vSphere 6 which can be accessed by highlighting the cluster and navigating to the Monitor tab and the vSphere HA sub-tab. Any APD or PDL events can be seen here. Note that these events are transitory and no historical information is recorded. Since there is a timeout period, APD is easier to catch than PDL. Figure 70 includes an APD event in this monitoring screen.

Figure 70 APD/PDL monitoring in the vSphere Web Client

The other way monitoring can be accomplished is through the log files in /var/log on the ESXi host. Both APD and PDL events are recorded in various log files. An example of each from the vmkernel.log file is included in the single screenshot in Figure 71 on page 112.

Figure 71 APD and PDL entries in the vmkernel.log
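For administrators who prefer the command line, the same entries can be located directly from the ESXi shell. The grep pattern below is only an illustrative sketch, since the exact message strings vary by ESXi release:

# Search the active vmkernel log for APD and PDL related messages
grep -iE "all paths down|permanent device loss|permanently inaccessible" /var/log/vmkernel.log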

2 Management of EMC VMAX Arrays

This chapter discusses the various ways to manage a VMware environment using EMC technologies:

Introduction
Solutions Enabler in VMware environments
EMC Unisphere for VMAX in VMware environments
Unisphere 360
EMC Storage Analytics
VMAX Content Pack for VMware vRealize Log Insight
vSphere Storage APIs for Storage Awareness (VASA)

Introduction

Managing and monitoring the VMAX storage system that provides a VMware environment with a persistent medium to store virtual machine data is a critical task for any VMware system administrator. EMC has numerous options to offer the administrator, from virtual appliances running on vSphere to VMAX system management commands created specifically for VMware.

EMC offers two primary, complementary methods of managing and monitoring VMAX storage arrays:

The command line offering, Solutions Enabler SYMCLI
The graphical user interface of EMC Unisphere for VMAX, which contains both storage management and performance monitoring capabilities

The following sections describe the best ways to deploy and use these management applications in a VMware environment.

Solutions Enabler in VMware environments

Solutions Enabler is a specialized library consisting of commands that can be invoked on the command line, or within scripts. These commands can be used to monitor device configuration and status, and to perform control operations on devices and data objects within managed storage arrays.

SYMCLI resides on a host system to monitor and perform control operations on VMAX arrays. SYMCLI commands are invoked from the host operating system command line. The SYMCLI commands are built on top of SYMAPI library functions, which use system calls that generate low-level SCSI commands to specialized devices on the VMAX called Gatekeepers. Gatekeepers are very small devices carved from disks in the VMAX that act as SCSI targets for the SYMCLI commands. To reduce the number of inquiries from the host to the storage arrays, configuration and status information is maintained in a host database file, symapi_db.bin by default, known as the VMAX configuration database.

Thin Gatekeepers and multipathing

Beginning with Enginuity 5876 and Solutions Enabler v7.4, thin devices (TDEV) may be used as Gatekeepers. In addition, these thin devices need not be bound to thin pools. Enginuity 5876 and Solutions Enabler v7.4 also provide support for Native Multipathing (NMP) with Gatekeepers. This means that the NMP policy of Round Robin in vSphere 5 and 6 is supported for Gatekeeper devices so long as the Enginuity code on the VMAX is 5876 or higher. Customers running Enginuity 5875 or lower must still use the Fixed path policy for Gatekeepers on any vSphere version. Prior to Enginuity 5876, PowerPath/VE was the only supported multipathing software with Gatekeepers, though it still remains the preferred multipathing software for VMware on VMAX.

On the VMAX3/VMAX All Flash with HYPERMAX OS 5977, there is only one kind of Gatekeeper: a thin, bound device. The NMP policy of Round Robin in vSphere 5 and 6 is fully supported.
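Where Round Robin is used for Gatekeepers (Enginuity 5876 and higher, or HYPERMAX OS 5977), the path selection policy of an individual Gatekeeper device can be verified or changed from the ESXi shell. The NAA identifier below is a placeholder for the actual Gatekeeper device ID:

# Display the current multipathing configuration of the Gatekeeper device
esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Set the path selection policy for that device to Round Robin
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR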

Note: Gatekeepers may not be shared among multiple virtual machines, regardless of the multipathing software; however, Gatekeepers can be presented to multiple ESXi hosts in a cluster to enable VMware HA for the VM.

Installing and configuring VMAX management applications on virtual machines

Even though the use of virtual appliances for VMAX management is generally preferred, VMAX management applications can be installed on virtual machines running supported operating systems. VMAX Gatekeepers must be presented as RDMs in physical compatibility mode, with a minimum of six per VMAX being mapped. Otherwise, there are no special considerations for installing and using these applications under virtual machines as compared to physical machines.

Solutions Enabler virtual appliance

Solutions Enabler version 7.1 and higher includes a virtual appliance (SE vApp) that can be deployed as a virtual machine in a VMware vSphere environment. The appliance is preconfigured to provide the Solutions Enabler SYMAPI services that are required by EMC applications like VSI or the EMC Symmetrix Remote Data Facility (SRDF) SRA for Site Recovery Manager. It encompasses all the components you need to manage your VMAX environment using the storsrvd daemon and Solutions Enabler network client access. The components of the latest (8.4) SE vApp include:

EMC Solutions Enabler v8.4 (solely intended as a SYMAPI server for Solutions Enabler client access; in other words, the vApp is a server and may not be used as a client)
Linux OS (SUSE 11 SP3 JeOS)
SMI-S Provider V8.4

Note: Beginning with version 7.4 of the SE vApp, VASA support is included.

In addition, the Solutions Enabler virtual appliance includes a browser-based configuration tool, called the Solutions Enabler Virtual Appliance Configuration Manager. This is seen in Figure 72 on page 118. This tool enables you to perform the following configuration tasks:

Monitor the application status
Start and stop selected daemons
Download persistent data
Configure the nethost file (required for client access)
Discover storage arrays
Modify options and daemon options
Add array-based and host-based license keys
Run a limited set of Solutions Enabler CLI commands
Configure ESXi host and Gatekeeper devices
Launch Unisphere for VMAX (available only in Unisphere versions of the appliance console)
Configure iSCSI initiator and map iSCSI Gatekeeper devices
Configure additional NIC card (optional)
Download SYMAPI debug logs
Import CA signed certificate for web browser
Import custom certificate for storsrvd daemon
Check disk usage
Restart appliance
Configure symavoid entries
Load array-based elicenses
Enable SSH
Configure LDAP
Manage users
Reset hostname
Update /etc/hosts

Note: Root login is not supported on the virtual appliance.

Figure 72 Solutions Enabler virtual appliance 7.6.x management web interface

Figure 73 Solutions Enabler virtual appliance 8.x management web interface

The Solutions Enabler virtual appliance simplifies and accelerates the process of providing Solutions Enabler API services in an enterprise environment. It is simple and easy to manage, preconfigured, centralized, and can be configured to be highly available. By leveraging the features provided by the Solutions Enabler virtual appliance, customers can quickly deploy and securely manage the services provided to the growing number of EMC applications that require access to the Solutions Enabler API service.

Note: Like any other SYMAPI server, the Solutions Enabler vApp maintains the requirement for direct access to VMAX management LUNs known as Gatekeepers. A minimum of six Gatekeepers per VMAX array needs to be presented to the virtual appliance as physical pass-through raw device mappings to allow for management and control of a VMAX storage array.

Readers should consult the Solutions Enabler installation guide available on support.emc.com for a detailed description of the virtual appliance. For more information on Gatekeepers, see Primus article emc.
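When presenting Gatekeepers to the virtual appliance as physical pass-through RDMs, the mapping files can be created in the vSphere Client or, as sketched below, with vmkfstools from the ESXi shell. The device ID, datastore, and file names are placeholders used only to show the form of the command:

# Create a physical compatibility mode (pass-through) RDM pointer file for one Gatekeeper
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx /vmfs/volumes/mgmt_ds/se_vapp/gk1.vmdk

The resulting gk1.vmdk can then be added to the appliance as an existing disk; repeat for each of the six (or more) Gatekeepers per array.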

Extension of Solutions Enabler to support VMware virtual environments

In Solutions Enabler 7.2 through 7.6.x, it is possible to perform inquiries into a vSphere environment using SYMCLI commands to identify and resolve storage configuration. The symvm command can add devices to virtual machines and display storage configuration information about authorized vSphere environments. In order to access this information, each ESXi server must have credentials stored in the authorization database. The symcfg auth command is used to set credentials for each ESXi server. For an ESXi host with an FQDN of api105.emc.com, the following would be an example of the authorization command:

symcfg auth -vmware add -host api105.emc.com -username root -password Letmein!

Once the authorizations are put in place, Solutions Enabler can be used to query the authorized ESXi servers as well as add available devices as raw device mappings to virtual machines. An example of this is shown in Figure 74 on page 121. A short list of functionality available when using symvm is as follows:

Add available VMAX devices as raw device mappings to virtual machines (devices must be mapped and masked to the specified host first)
List available candidate devices for RDMs
List a virtual machine's RDMs
List operating systems of virtual machines
List datastores on a given ESXi host
Provide details of a given datastore

List details of a given virtual machine

Figure 74 Solutions Enabler symvm command

For the complete list of functionality and directions on specific syntax, refer to the Solutions Enabler Management Product Guide available on support.emc.com.

IMPORTANT

Solutions Enabler 8.0 and higher does not support the symvm command.

EMC Unisphere for VMAX in VMware environments

Beginning with Enginuity 5876, Symmetrix Management Console has been transformed into EMC Unisphere for VMAX (hereafter also known as Unisphere), which offers big-button navigation and streamlined operations to simplify and reduce the time required to manage a data center. Unisphere for VMAX simplifies storage management under a common framework, incorporating Symmetrix Performance Analyzer, which previously required a separate interface.

Depending on the VMAX platform (VMAX, VMAX3, VMAX All Flash), you can use Unisphere for VMAX to:

Manage user accounts and roles
Perform configuration operations (create volumes, mask volumes, set VMAX attributes, set volume attributes, set port flags, and create SAVE volume pools)
Manage volumes (change volume configuration, set volume status, and create/dissolve meta volumes)
Perform and monitor replication operations (TimeFinder/Snap, TimeFinder/Clone, TimeFinder/VP Snap, Symmetrix Remote Data Facility (SRDF), Open Replicator for Symmetrix (ORS))
Manage advanced VMAX features, such as:
  Fully Automated Storage Tiering (FAST)
  Fully Automated Storage Tiering for virtual pools (FAST VP)
  Enhanced Virtual LUN Technology
  Auto-provisioning Groups
  Virtual Provisioning
  Federated Live Migration (FLM)
  Federated Tiered Storage (FTS)
  Compression
  Non-Disruptive Migration (NDM)
Monitor alerts

The Unisphere for VMAX Installation Guide details the environment and system requirements; however, a few points are noteworthy. Starting with Unisphere for VMAX 8.0, the 64-bit version of Solutions Enabler is the only supported version; there is no 32-bit version.

Current Unisphere for VMAX versions support the following HYPERMAX OS and Enginuity versions:

VMAX Family systems (VMAX3/VMAX All Flash) running HYPERMAX OS 5977 or higher
VMAX 10K/20K/40K Series systems running Enginuity 5876
DMX systems running Enginuity 5773 or higher

On the VMAX3/VMAX All Flash, Unisphere for VMAX is embedded on the array, so it may be unnecessary to implement it externally. As Unisphere for VMAX 8.x has a similar but not identical user interface to previous Unisphere for VMAX versions, the screenshots below include both versions. The Unisphere for VMAX dashboard page is shown in Figure 75 on page 124, Figure 76 on page 125, and, for the embedded version, Figure 77 on page 126.

Figure 75 Unisphere for VMAX v1.6 dashboard

Figure 76 Unisphere for VMAX v8.x dashboard

Figure 77 Embedded Unisphere for VMAX v8.x dashboard

In addition, with the Performance monitoring option, Unisphere for VMAX provides tools for performing analysis and historical trending of VMAX system performance data. You can use the Performance option to:

Monitor performance and capacity over time
Drill down through data to investigate issues
View graphs detailing system performance
Set performance thresholds and alerts
View high-frequency metrics in real time
Perform root cause analysis
View VMAX system heat maps

Execute scheduled and ongoing reports (queries), and export that data to a file
Utilize predefined dashboards for many of the system components
Customize your own dashboard templates

Figure 78 Unisphere for VMAX v1.6 performance monitoring option

Figure 79 Unisphere for VMAX v8.x performance monitoring option

There are three main options on the Performance screen seen in Figure 78 on page 127: Monitor, Analyze, and Settings. In version 8.x, shown in Figure 79 on page 128, there are two additional items, Reports and Charts. Using the Monitor screen the user can review diagnostic,

historical, or real-time information about the array. This includes EMC dashboards such as the Heatmap in Figure 80 and Figure 81 on page 130.

Figure 80 Unisphere v1.6 Heatmap dashboard

Figure 81 Unisphere v8.x Heatmap dashboard

The Analyze screen, shown in Figure 82 and Figure 83 on page 132, can be used to convert the raw data from performance monitoring into useful graphs to help diagnose issues, view historical data, or even watch real-time activity.

Figure 82 Unisphere v1.6 analyze diagnostics

Figure 83 Unisphere v8.x analyze diagnostics

Finally, the Settings screen in Figure 84 on page 133 and Figure 85 on page 133 provides the user the ability to modify the metrics that are collected, run traces and reports, and set up thresholds and alerts.

Figure 84 Settings in the performance monitoring option v1.6

Figure 85 Settings in the performance monitoring option v8.x

Unisphere for VMAX can be run on a number of different kinds of open systems hosts, physical or virtual. Unisphere for VMAX is also available as a virtual appliance for vSphere. Whether a full install or a virtual appliance, Unisphere for VMAX maintains the requirement for direct access to VMAX management LUNs known as Gatekeepers. A minimum of six Gatekeepers (six Gatekeepers per

VMAX array) needs to be presented to the virtual appliance as physical pass-through raw device mappings to allow for management and control of a VMAX storage array.

Unisphere for VMAX virtual appliance

As an alternative to installing Unisphere for VMAX on a separate host configured by the user, Unisphere for VMAX is available in a virtual appliance format. The Unisphere for VMAX virtual appliance (henceforth known as the Unisphere Virtual Appliance or Unisphere vApp) is simple and easy to manage, preconfigured, centralized, and can be configured to be highly available. The Unisphere Virtual Appliance is a virtual machine that provides all the basic components required to manage a VMAX environment.

The virtual appliance for Unisphere for VMAX 1.6.x includes:

EMC Unisphere for VMAX
The Performance option
EMC Solutions Enabler v7.6.1 (solely intended as a SYMAPI server for Solutions Enabler client access; in other words, the vApp is a server and may not be used as a client)
Linux OS (SUSE 11 64-bit SP1)
SMI-S Provider v4.6.1

The virtual appliance for Unisphere for VMAX 8.x includes:

EMC Unisphere for VMAX
The Performance option
EMC Solutions Enabler v8.4.0 (solely intended as a SYMAPI server for Solutions Enabler client access; in other words, the vApp is a server and may not be used as a client)
Linux OS (SUSE 11 64-bit SP3)
SMI-S Provider v8.4.0, including ECOM

In addition, the Unisphere vApp also includes a browser-based configuration tool, called the Solutions Enabler Virtual Appliance Configuration Manager, shown in Figure 86 on page 136 and Figure 87 on page 137. This web interface offers the following configuration tasks not available in SMC or from the virtual appliance directly:

Launch Unisphere
Monitor the application status
Start and stop selected daemons
Download persistent data
Configure the nethost file (required for client access)
Discover storage systems
Modify options and daemon options
Add host-based license keys
Run a limited set of Solutions Enabler CLI commands
Configure ESX host and gatekeeper volumes
Load VMAX-based elicenses
Configure LDAP
Configure iSCSI initiator and map iSCSI gatekeeper volumes
Configure additional NIC card (optional)
Download SYMAPI debug logs
Import CA signed certificate for web browser
Import custom certificate for storsrvd daemon
Check disk usage
Clear temporary files
Restart appliance
Configure symavoid entries
Enable SSH
Manage users
Reset hostname
Update /etc/hosts file

Readers should consult the EMC Unisphere for VMAX installation guide available on support.emc.com for a detailed description of the virtual appliance.

Figure 86 Unisphere virtual appliance v1.6 management web interface

Figure 87 Unisphere virtual appliance v8.x management web interface

The Unisphere virtual appliance simplifies and accelerates the process of deploying Unisphere for VMAX quickly and securely in an enterprise environment.

EMC Unisphere for VMAX virtual environment integration

The ability to connect to and resolve storage information in a vSphere environment was introduced in the 7.3 release of Symmetrix Management Console and is available in Unisphere beginning with version 1.1. This can be accessed from the Hosts menu under Virtual Servers, as shown in Figure 88 on page 138 for v1.6.x and Figure 89 on page 139 for v8.x.

Figure 88 Unisphere for VMAX v1.6 VMware integration

Figure 89 Unisphere for VMAX v8.x VMware integration

In order to connect to various ESXi hosts, authorizations must be created to allow authentication from Unisphere. It is important to note that authorizations are required for each ESXi host. Figure 90 on page 140 and Figure 91 on page 141 show the process to register an ESXi host. By checking the box next to Retrieve Info, Unisphere will gather all information about the storage and virtual machines associated with that ESXi host.

IMPORTANT

Only ESXi hosts may be added in Unisphere. The root user is required for access. vCenters are not supported.

Figure 90 Registering an ESXi host with Unisphere for VMAX v1.6

Figure 91 Registering an ESXi host with Unisphere for VMAX v8.x

Once the ESXi host is registered, the storage information can be viewed from Unisphere. From the Virtual Servers screen, the user can select one of the registered servers and click View Details. The pop-up box displays information about the ESXi server and has two other drill-down related objects, VMs and Volumes. The navigation through Volumes is shown in Figure 92 on page 142 and Figure 93 on page 143.

Figure 92 Viewing ESXi host details with Unisphere for VMAX v1.6

Figure 93 Viewing ESXi host details with Unisphere for VMAX v8.x

The Volumes detail screen has the following columns:

Device ID
VM Name
Device Name
Vendor
Array ID
Product Name
Datastore Name
Device Capacity (GB)

The VMs detail screen has the following columns:

VM Name
VM OS Name
VM State
Num of CPUs
Total Memory (MB)

From the VMs screen, the user can add or remove VM storage. This screen will allow the user to map ESXi-presented LUNs as RDMs to the selected VM, the detail of which is provided in Figure 94 on page 145 and Figure 95.

Figure 94 Mapping RDMs to a VM in Unisphere for VMAX v1.6

Figure 95 Mapping RDMs to a VM in Unisphere for VMAX v8.x

Note: The Virtual Servers functionality in Unisphere for VMAX does not support VMware Virtual Volumes (VVols). While a VM created with VVol storage will display in the VM list, the Volumes will be empty.

Unisphere 360

Unisphere 360 for VMAX is an on-premises management solution that provides an aggregated view of the storage environment, as provided by the enrolled Unisphere for VMAX instances. Unisphere 360 consolidates and simplifies management for the VMAX, VMAX3, and VMAX All Flash storage arrays. Users have a single-window view where they can manage, monitor, and plan at the array level or for the entire data center.

Unisphere 360 relies upon other Unisphere for VMAX installations to consolidate multiple arrays. During the initial login, Unisphere 360 will request a URL of a Unisphere for VMAX implementation, as in Figure 96.

Figure 96 Unisphere 360 Unisphere for VMAX URL

Once the URL is provided, certificates verified (or an exception added), and credentials added, Unisphere 360 will bring in any arrays the Unisphere for VMAX installation monitors. Multiple URLs may be provided, for example if each Unisphere for VMAX only

monitors a single array because each one is in a separate data center. The initial dashboard consolidates all arrays into some logical categories, seen in Figure 97.

Figure 97 Unisphere 360 Data Center Dashboard

To see each system independently, there is a Systems view, shown in Figure 98.

Figure 98 Unisphere 360 Systems Dashboard

Unisphere 360 does not have the virtualization integration that exists in Unisphere for VMAX. It is designed more as a high-level view of all arrays under management.

EMC Storage Analytics

While EMC Unisphere for VMAX provides the system administrator with all the tools necessary to monitor performance of VMAX arrays, the VMware administrator typically must use other tools to monitor performance. For VMware administrators, VMware vRealize Operations Manager (vROps) is an automated operations management solution that provides integrated performance, capacity, and configuration management for highly virtualized and cloud infrastructure. vROps can help VMware administrators more effectively manage the performance of their VMware environments.

VMware vROps provides the ability to create plug-ins, known as adapters, to monitor vendor components within the virtual environment. EMC has created EMC Storage Analytics (ESA) to fulfill this need. EMC Storage Analytics links vRealize Operations Manager with an EMC Adapter. The adapter is bundled with a connector that enables vRealize Operations Manager to collect performance metrics. EMC Storage Analytics leverages the power of existing vCenter features to aggregate data from multiple sources and process the data with proprietary analytic algorithms.

The latest ESA 4.2 software is delivered in a zip file (.pak extension) and supports vROps 6.5. It can be downloaded from support.emc.com. Using ESA, adapter instances are created for each VMAX array and vSphere vCenter. Using the adapter, the VMware administrator is provided access to the performance metrics of the storage components underlying the VMware infrastructure, all within the context of vROps.

Once installed, the administrator can configure the adapter instances. The adapter for VMAX requires a connection to the Unisphere REST API, which is part of Unisphere for VMAX. Figure 99 on page 151 shows the configuration screen. Note that while ESA is a licensed product, a license is not required during configuration as ESA will function for 90 days in a full trial mode. ESA for VMAX is free with the VMAX All Flash array (that is, ESA is free when part of a VMAX All Flash purchase).
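Before configuring the adapter instance, it can be helpful to confirm that the Unisphere REST API is reachable from the vROps node. The check below is a hedged sketch: the host name, port, and credentials are placeholders, and the endpoint path shown is typical of Unisphere for VMAX 8.x but may differ between releases:

# Query the Unisphere for VMAX REST API version (placeholder host and credentials)
curl -k -u smc_user:smc_password https://unisphere.example.com:8443/univmax/restapi/system/version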

Figure 99 Configuring an EMC adapter instance in vROps

Once adapter instances are configured, statistics will be gathered for the corresponding array(s). Within the vROps custom page there are various screens available for viewing the metrics. EMC provides two universal dashboards as well as specific dashboards for each platform.

The first universal dashboard is Storage Topology. All configured VMAX arrays (and other arrays) will be displayed here. The user can navigate from the array level down to the front-end port. ESA will also show any VMware relationships, for instance if the device has a datastore built upon it, as in Figure 100 on page 152.

Figure 100 Storage Topology dashboard

The second universal dashboard is EMC Storage Metrics. Each EMC resource selected in the Resource Selector will have a set of metrics associated with it. Figure 101 on page 153 demonstrates the metrics for a storage group. Each metric can be displayed in a graph with many different options available, from combining graphs to doing historical reporting.

Figure 101 EMC Storage Metrics dashboard

The custom VMAX dashboards are:

VMAX Overview
VMAX Topology
VMAX Metrics

Samples of these dashboards are shown in Figure 102 on page 154, Figure 103 on page 154, and Figure 104 on page 155.

Figure 102 EMC VMAX Overview dashboard

Figure 103 EMC VMAX Topology dashboard

Figure 104 EMC VMAX Metrics dashboard

In addition to the dashboards, ESA offers preconfigured alerts for VMAX objects such as SRPs and includes root cause analysis capability and capacity planning. An example of a device's remaining capacity, which can be used for future planning, is seen in Figure 105 on page 156.

Figure 105 Capacity planning with ESA

See the EMC Storage Analytics Installation and User Guide for more details about ESA.

VMAX Content Pack for VMware vRealize Log Insight

VMware vRealize Log Insight delivers automated log management through log analytics, aggregation, and search. With an integrated cloud operations management approach, it provides the operational intelligence and enterprise-wide visibility needed to proactively enable service levels and operational efficiency in dynamic hybrid cloud environments. At the core of Log Insight is the Interactive Analytics page, shown in Figure 106.

Figure 106 VMware vRealize Log Insight Interactive Analytics

The Dell EMC VMAX Content Pack, when integrated into VMware vRealize Log Insight, provides dashboards and user-defined fields specifically for EMC VMAX arrays to enable administrators to conduct problem analysis and analytics on their array(s).

Dell EMC VMAX Content Pack

A content pack for VMware vRealize Log Insight (Log Insight) is a special type of dashboard group that is read-only to everyone. A content pack can be imported into any instance of Log Insight. VMware delivers a default content pack with Log Insight that is designed for VMware-related log information. Similarly, EMC has developed a custom content pack for VMAX log information. It can

be downloaded at the VMware Solution Exchange in the Cloud Management Marketplace, and is also available from within Log Insight directly in the Marketplace, highlighted in Figure 107.

Figure 107 Log Insight Content Pack Marketplace

This content pack contains both dashboards and user-defined fields. All of the widgets that make up the dashboards contain an information field that explains the purpose of the graph. Though the VMAX content pack is not required in order to use Log Insight with the VMAX, it is recommended as a good starting point for helping to categorize all the log information coming from the array. The content pack supports VMAX, VMAX3, and VMAX All Flash arrays.

Dashboards

Included below are the eight dashboards that comprise the current VMAX content pack. They are seen in Figure 108 on page 160 through Figure 115 on page 167 and are defined as follows:

Overview - Contains widgets with information about all VMAX data in your Log Insight instance.
Problems - Contains widgets with information about potential problems that are recorded in the log files.
Local & remote replication - Contains widgets specific to log messages generated by SRDF or TimeFinder software.
Virtual provisioning overview - Contains widgets with information about thin pool and device events.
Director events - Contains widgets with information about any front-end or back-end director events on the VMAX.
FAST/SLO - Contains widgets with metrics specific to FAST and SLOs on the VMAX.
Virtual Volumes (VVols) - Contains widgets with information related to VVol activity on the array.
Auditing - Contains widgets that display all audit log information.

Figure 108 VMAX Content Pack - Overview dashboard

Figure 109 VMAX Content Pack - Problems dashboard

Figure 110 VMAX Content Pack - Local & remote replication dashboard

Figure 111 VMAX Content Pack - Virtual provisioning overview dashboard

Figure 112 VMAX Content Pack - Director events dashboard

Figure 113 VMAX Content Pack - FAST/SLO dashboard

Figure 114 VMAX Content Pack - VVol dashboard

Figure 115 VMAX Content Pack - Auditing dashboard

User-defined fields

In a large environment with numerous log messages, it is difficult to instantly locate the data fields that are important to you. Log Insight provides runtime field extraction to address this problem: you can dynamically extract any field from the data by providing a regular expression. Within the VMAX Content Pack, EMC has preconfigured over 50 user-defined fields for the most commonly appearing objects in the log files. All of the fields have the prefix "emc_vmax_" so they can be easily identified. The fields are generally self-explanatory. Below is a partial list of the fields, which does not include auditing:

emc_vmax_array_srdf_group

emc_vmax_array_state
emc_vmax_be_director
emc_vmax_devices
emc_vmax_director_name
emc_vmax_director_state
emc_vmax_egress_tracks
emc_vmax_event_date
emc_vmax_event_format_type
emc_vmax_eventid
emc_vmax_fe_director
emc_vmax_ingress_tracks
emc_vmax_iorate
emc_vmax_objecttype
emc_vmax_pctbusy
emc_vmax_pcthit
emc_vmax_percent
emc_vmax_port_name
emc_vmax_port_status
emc_vmax_portgroup
emc_vmax_power
emc_vmax_response_time
emc_vmax_scontainer
emc_vmax_scontainer_percent
emc_vmax_severity
emc_vmax_sg
emc_vmax_sg_state
emc_vmax_sg_state_name
emc_vmax_srdf_group
emc_vmax_srdf_state
emc_vmax_srp_name
emc_vmax_storagegrp

emc_vmax_storagetier
emc_vmax_symmid
emc_vmax_system
emc_vmax_text
emc_vmax_thinpool_name
emc_vmax_thinpoolname
emc_vmax_threshold_value
emc_vmax_used_capacity
emc_vmax_volume

In addition to the dashboards, the VMAX Content Pack also contains pre-configured Alerts and Queries.

The VMAX Content Pack displays existing log information in the database. For the VMAX, Solutions Enabler, Unisphere for VMAX, and eNAS can be configured to send logs via syslog to Log Insight. For more about how this configuration is done, along with more detail about the VMAX content pack, please reference the Using the EMC VMAX Content Pack for VMware vRealize Log Insight White Paper, which can be found on support.emc.com.
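The white paper covers the array-side syslog configuration. On the vSphere side, ESXi hosts are commonly pointed at the same Log Insight instance so that host and array events can be correlated in one place; the host name below is a placeholder:

# Forward ESXi syslog to the Log Insight appliance and reload the syslog service
esxcli system syslog config set --loghost='udp://loginsight.example.com:514'
esxcli system syslog reload

# Allow outbound syslog traffic through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true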

vSphere Storage APIs for Storage Awareness (VASA)

vSphere Storage APIs for Storage Awareness are vCenter Server-based APIs that enable storage arrays to inform the vCenter Server about their configurations, capabilities, and storage health and events. They allow an administrator to be aware of physical storage topology, capability, and state. EMC supports vSphere Storage APIs for Storage Awareness via a plug-in, or provider, which can be downloaded from support.emc.com [1]. EMC has VASA Providers for 1.x and 2.x but does not support 3.x. This section only discusses VASA 1.x functionality that is delivered with Solutions Enabler 7.x [2] and does not address the VASA Provider 8.2/8.3/8.4 [3].

IMPORTANT

The VMAX3/VMAX All Flash does not support VASA integration with the VASA 1.x Provider; however, the 8.2, 8.3, and 8.4 VASA Providers support VASA 1.x capability for the VMAX3/VMAX All Flash. To learn more about this as well as vSphere 6.x integration, read the Implementing VMware's Storage API for Storage Awareness with EMC VMAX whitepaper available on EMC.com.

Profile-Driven Storage

Managing datastores and matching the SLA requirements of virtual machines with the appropriate datastore can be challenging and cumbersome tasks. vSphere 5 introduced Profile-Driven Storage, which enables rapid and intelligent placement of virtual machines based on SLA, availability, performance, or other requirements through provided storage capabilities.

1. For full VASA functionality, the minimum version of this provider that should be used is the SMI-S Provider for SMI 1.4. It must be installed on a host that has access to the VMAX through Gatekeepers.
2. Solutions Enabler 7.4 is the first version to support VASA 1.x.
3. The VASA Provider 8.2/8.3/8.4 is covered in the whitepaper Using VMware Virtual Volumes with EMC VMAX3 and VMAX All Flash, found on support.emc.com.

Using Profile-Driven Storage, various storage characteristics, typically defined as a tier, can be requested in a virtual machine storage profile. These profiles are used during provisioning, cloning, and Storage vMotion to ensure that only those datastores or datastore clusters that are compliant with the virtual machine storage profile are made available. The virtual machine storage profile can also help select a similar type of datastore when creating a Storage DRS datastore cluster. Profile-Driven Storage will reduce the amount of manual administration required for virtual machine placement while improving virtual machine SLA storage compliance.

Profile-Driven Storage delivers these benefits by taking advantage of the following:

Full integration with vSphere Storage APIs for Storage Awareness, enabling usage of storage characterization supplied by storage vendors.
Support for NFS, iSCSI, FCoE, and Fibre Channel (FC) storage, and all storage arrays on the HCL.
Enabling the vSphere administrator to tag storage based on customer- or business-specific descriptions.
Using storage characterizations and/or administrator-defined descriptions to create virtual machine placement rules in the form of storage profiles.
Providing a simple mechanism to check a virtual machine's compliance with placement rules. This ensures that a virtual machine is not deployed or migrated to an incorrect type of storage without an administrator being informed about the situation.

VASA features

Below are some of the general features of VASA, broken out into their respective management areas:

Operations management
  Identify trends in a VM's storage capacity usage for troubleshooting
  Correlate events on datastores and LUNs with a VM's performance characteristics
  Monitor health of storage

Policy-based management
  Choose the right storage in terms of space, performance, and SLA requirements
  Monitor and report against existing storage policies
  Create a foundation for manageability in vSphere 5 and expand in future releases

Storage capacity management
  Identify trends in a VM's storage capacity usage
  Identify impact of newly created (or migrated) VMs on storage space

Vendor providers and storage data representation

The vCenter Server communicates with the vendor provider to obtain information that the vendor provider collects from available storage devices. The vCenter Server can then display the storage data in the vSphere Client.

The storage vendor first needs to be registered in the vSphere Client. This can be achieved by selecting the Storage Providers icon in the home screen of the vSphere Client, as in Figure 116.

Figure 116 Storage providers in the vSphere Client

The EMC SMI-S Provider can then be registered using the dialog box provided in the vSphere Client. Table 2 on page 174 lists the values required for each variable in the registration box and whether or not the field is validated. A CIM user with administrator privileges needs

to be created before registering the SMI-S Provider. Details on how this is accomplished can be found in the post-installation tasks of the SMI-S Provider release notes.

Table 2 Provider registration (Add Vendor Provider values)
Name - Any descriptive name (not validated)
URL - (validated)
Login - Administrative user (validated); admin or vmadmin role
Password - Administrative user password (validated)

Figure 117 shows the registration box, with the result of the registration appearing in the background.

Figure 117 EMC SMI-S Provider registration

Once registered, the EMC SMI-S Provider will supply information to the vCenter Server. Information that the vendor provider supplies can be divided into the following categories:

Storage topology: Information about physical storage elements appears on the Storage Views tab. It includes such data as storage arrays, array IDs, etc. This type of information can be helpful when you need to track virtual machine-to-storage relationships and configuration, or to identify any changes in physical storage configuration.

Storage capabilities: The vendor provider collects and communicates information about physical capabilities and services that the underlying storage offers. This information can be useful when, for example, you need to properly aggregate storage into tiers, or select the right storage, in terms of space and performance, for a particular virtual machine. Table 3, next, and Figure 118 on page 177 detail the default storage capabilities that accompany the registration of the EMC storage provider.

Table 3 VASA-provided storage capabilities (page 1 of 2)
SAS/Fibre Storage; Thin; Remote Replication - SAS or Fibre Channel drives; thin-provisioned; remote replication intended to provide disaster recovery. [a]
SAS/Fibre Storage; Thin - SAS or Fibre Channel drives; thin-provisioned.
SAS/Fibre Storage; Remote Replication - SAS or Fibre Channel drives; remote replication intended to provide disaster recovery.
SAS/Fibre Storage - SAS or Fibre Channel drives.
Solid State Storage; Thin; Remote Replication - Solid state drives; thin-provisioned; remote replication intended to provide disaster recovery.
Solid State Storage; Thin - Solid state drives; thin-provisioned.
Solid State Storage; Remote Replication - Solid state drives; remote replication intended to provide disaster recovery.
Solid State Storage - Solid state drives.
NL-SAS/SATA Storage; Thin; Remote Replication - Near Line SAS or SATA drives; thin-provisioned; remote replication intended to provide disaster recovery.
NL-SAS/SATA Storage; Thin - Near Line SAS or SATA drives; thin-provisioned.
NL-SAS/SATA Storage; Remote Replication - Near Line SAS or SATA drives; remote replication intended to provide disaster recovery.

Table 3 VASA-provided storage capabilities (page 2 of 2)
NL-SAS/SATA Storage - Near Line SAS or SATA drives.
Auto-Tier [b] Storage; Thin; Remote Replication - Multiple drive tiers with FAST VP enabled; thin-provisioned; remote replication intended to provide disaster recovery.
Auto-Tier Storage; Thin - Multiple drive tiers with FAST VP enabled; thin-provisioned.
Auto-Tier Storage; Remote Replication - Multiple drive tiers with FAST enabled; remote replication intended to provide disaster recovery.
Auto-Tier Storage - Multiple drive tiers with FAST enabled.

a. For the system-defined storage capabilities that include remote replication, it is important to note that EMC's VASA provider will not recognize all replication technologies that are currently available on VMAX. Currently only SRDF, SRDFe, and Open Replicator for Symmetrix (ORS) are supported by the VASA Provider. Any device, for example, being remotely replicated with RecoverPoint will be treated as a non-replicated device.
b. In addition to FAST VP enabled devices (pinned or unpinned), a device that has extents in more than a single tier (technology) will also be categorized as Auto-Tier.

Figure 118 Manage storage capabilities

Note: Depending on the VMAX arrays and devices that are presented to the vCenter environment, not all of the capabilities that appear in Figure 118 may be listed. For instance, if the vCenter environment is using VMAX 10K arrays exclusively, only storage capabilities that are specific to thin devices will be shown.

Storage status: This category includes reporting about the status of various storage entities. It also includes alarms and events for notifying about configuration changes.

This type of information can help one troubleshoot storage connectivity and performance problems. It can also help one correlate array-generated events and alarms to corresponding performance and load changes on the array. The alarms that are part of the storage status category are particularly useful when using Virtual Provisioning because they provide warnings when the thin LUNs on the VMAX are approaching capacity, something that previously had to be monitored solely on the storage side.

Thin-provisioned LUN alarm

Within vSphere 5 and 6 there is an alarm definition for VASA. It is named Thin-provisioned volume capacity threshold exceeded [1] and will be triggered if the thin device on the VMAX exceeds predefined thresholds. These thresholds are as follows:

Less than 65%: green
65% or greater: yellow
80% or greater: red

The alarm definition is seen in Figure 119.

Figure 119 Thin-provisioned LUN capacity exceeded alarm definition

1. Prior to vSphere 5.1 this alarm was named Thin-provisioned LUN capacity exceeded.

As a thin LUN below 65% will produce no alarm, a triggered alarm is either a warning (yellow) or an alert (red). An alert is seen in Figure 120 for datastore VASA_DS.

Figure 120 Thin-provisioned volume alarm in vCenter

While the warning alarm is just that and triggers nothing additional, an alert alarm will prevent the user from creating other virtual machines on the datastore using that thin device. VMware takes this step to reserve the remaining space for the current virtual machines and avoid any out-of-space conditions for them. If the user attempts to create a virtual machine on the datastore which has this alert, the error seen in Figure 121 will result.

Figure 121 Error received after alert on thin-provisioned volume

The user must take corrective action at this point and may choose to increase the size of the thin device, have the system administrator perform a space reclamation on the thin device, use VMware Dead Space Reclamation (if supported and there is reclaimable space; see the command example later in this section), or use a different datastore altogether for future virtual machines. Once the thin device has enough free space to be beyond the warning and alert thresholds, the alarm will be removed.

In order for vCenter to report an exhaustion of space on the thin-provisioned device on the VMAX, the SMI-S Provider takes advantage of the events that are automatically recorded by the event daemon on the VMAX. These events are recorded regardless of whether the user chooses to be alerted to them in Unisphere. Therefore, once the SMI-S Provider is properly installed and registered in vCenter, no additional configuration is required on the VMAX or within EMC Unisphere for VMAX.

If the user wishes to see alerts for thin device allocation within Unisphere, the system administrator can configure them through the path Home > Administration > Alert Settings > Alert Thresholds. The configuration screen is shown in Figure 122 on page 181, where the default alert for thin device allocation is highlighted. The default alert cannot be altered; attempts to edit it will result in the error in the bottom-right of Figure 122 on page 181. Users can, however, create specific alerts for particular thin pools, which will supersede the default alert.

Figure 122 Thin device allocation alert within EMC Unisphere for VMAX

Such alarms and alerts allow the VMware administrator and system administrator to be aware of the state of the storage capacity presented to the VMware environment. The thin volume alarm is only one of the ways that the vSphere Storage APIs for Storage Awareness can provide insight for the VMware administrator into the VMAX storage. Managing storage tiers, provisioning, migrating, and cloning virtual machines, and correct virtual machine placement in vSphere deployments, have become more efficient and user friendly with VASA. It removes the need for maintaining complex and tedious spreadsheets and validating compliance manually during every migration or creation of a virtual machine or virtual disk.

For more information on using VASA with EMC VMAX, please reference the Implementing VMware's Storage API for Storage Awareness with EMC VMAX White Paper, which can be found on support.emc.com.
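As noted earlier in this section, one corrective action for a triggered thin-provisioning alert is VMware Dead Space Reclamation, which can be run manually against a VMFS5 datastore from the ESXi shell. This is a minimal sketch using the example datastore name from above; the reclaim unit can be tuned to the environment, and on VMFS6 (vSphere 6.5) space reclamation is automatic, so this manual step is generally not needed there:

# Manually reclaim dead space on a VMFS5 datastore backed by a VMAX thin device
esxcli storage vmfs unmap --volume-label=VASA_DS --reclaim-unit=200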

When using VASA functionality, EMC does not recommend preallocating thin devices once a datastore has been created on the device. Preallocation of the device beyond the predefined thresholds will cause the vCenter alarm to be triggered. This could result in an inability to create virtual machines on the datastore. If preallocation of space within the datastore is required, EMC recommends using eagerzeroedthick virtual machines.
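If preallocated virtual disks are required, they can be created as eagerzeroedthick at provisioning time in the vSphere Client or, as sketched here, with vmkfstools; the size, datastore, and file names are placeholders:

# Create a 40 GB eagerzeroedthick virtual disk on an existing datastore
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/vmax_ds/example_vm/example_vm.vmdk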

3 EMC Virtual Provisioning and VMware vSphere

This chapter discusses deploying and using EMC VMAX Virtual Provisioning in a VMware environment and provides recommendations when combining it with vSphere Thin Provisioning:

Introduction
EMC VMAX Virtual Provisioning overview
Thin device sizing
Thin device compression on VMAX
Compression on the VMAX All Flash
Performance considerations
Virtual disk allocation schemes
Dead Space Reclamation (UNMAP)
Thin pool management

Introduction

One of the biggest challenges facing storage administrators is provisioning storage for new applications. Administrators typically allocate space based on anticipated future growth of applications. This is done to mitigate recurring operational functions, such as incrementally increasing storage allocations or adding discrete blocks of storage as existing space is consumed. Using this approach results in more physical storage being allocated to the application than needed for a significant amount of time and at a higher cost than necessary. The overprovisioning of physical storage also leads to increased power, cooling, and floor space requirements. Even with the most careful planning, it may be necessary to provision additional storage in the future, which could potentially require an application outage.

A second layer of storage overprovisioning happens when a server and application administrator overallocate storage for their environment. The operating system sees the space as completely allocated, but internally only a fraction of the allocated space is used.

EMC VMAX Virtual Provisioning (which will be referred to as Virtual Provisioning throughout the rest of the document) can address both of these issues. Virtual Provisioning allows more storage to be presented to an application than is physically available. More importantly, Virtual Provisioning will allocate physical storage only when the storage is actually written to or when purposely pre-allocated by an administrator. This allows more flexibility in predicting future growth, reduces the initial costs of provisioning storage to an application, and can obviate the inherent waste in over-allocation of space and administrative management of subsequent storage allocations.

EMC VMAX storage arrays continue to provide increased storage utilization and optimization, enhanced capabilities, and greater interoperability and security. The implementation of Virtual Provisioning, generally known in the industry as thin provisioning, for these arrays enables organizations to improve ease of use, enhance performance, and increase capacity utilization for certain applications and workloads. It is important to note that the VMAX 10K and the VMAX3/VMAX All Flash storage arrays are designed and configured for a one-hundred percent virtually provisioned environment.

EMC VMAX Virtual Provisioning overview

VMAX thin devices are logical devices that can be used in many of the same ways that VMAX devices have traditionally been used. Unlike standard VMAX devices, thin devices do not need to have physical storage completely allocated at the time the device is created and presented to a host. A thin device on the VMAX is not usable until it has been bound to a shared storage pool known as a thin pool, but a VMAX3/VMAX All Flash thin device is ready upon creation. The thin pool is comprised of special devices known as data devices that provide the actual physical storage to support the thin device allocations.

When a write is performed to part of any thin device for which physical storage has not yet been allocated, the VMAX allocates physical storage from the thin pool for that portion of the thin device only. The VMAX operating environment, Enginuity or HYPERMAX OS, satisfies the requirement by providing a unit of physical storage from the thin pool called a thin device extent. This approach reduces the amount of storage that is actually consumed.

The thin device extent is the minimum amount of physical storage that can be reserved at a time for the dedicated use of a thin device. An entire thin device extent is physically allocated to the thin device at the time the thin storage allocation is made as a result of a host write operation. A round-robin mechanism is used to balance the allocation of data device extents across all of the data devices in the thin pool that are enabled and that have remaining unused capacity. The thin device extent size on the VMAX is twelve 64 KB tracks (768 KB), while on the VMAX3/VMAX All Flash it is a single 128 KB track.

Note that on the VMAX, but not the VMAX3/VMAX All Flash, the initial bind of a thin device to a pool causes one thin device extent to be allocated per thin device. If the thin device is a metavolume, then one thin device extent is allocated per metavolume member. So a four-member thin metavolume would cause four extents (3,072 KB) to be allocated when the device is initially bound to a thin pool.

When a read is performed on a thin device, the data being read is retrieved from the appropriate data device in the thin pool. If a read is performed against an unallocated portion of the thin device, zeros are returned.

On the VMAX, when more physical data storage is required to service existing or future thin devices, such as when a thin pool is running out of physical space, data devices can be added to existing

thin pools online. Users can rebalance extents across the thin pool when new data devices are added. This feature is discussed in detail in the next section. New thin devices can also be created and bound to existing thin pools.
IMPORTANT Since the VMAX3/VMAX All Flash ships pre-configured with all thin pools and data devices created, there is no manipulation of either; however, the striping of thin devices as depicted in Figure 123 is essentially the same.
When data devices are added to a thin pool they can be in an enabled or disabled state. In order for a data device to be used for thin extent allocation, however, it needs to be enabled. Conversely, for it to be removed from the thin pool, it needs to be in a disabled state. Active data devices can be disabled, which will cause any allocated extents to be drained and reallocated to the other enabled devices in the pool. They can then be removed from the pool once the drain operation has completed. Figure 123 depicts the relationships between thin devices and their associated thin pools. There are nine thin devices associated with thin pool A and three thin devices associated with thin pool B.
Figure 123 Thin devices and thin pools containing data devices
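For administrators working from the command line, the thin pool relationships depicted in Figure 123 can also be examined with Solutions Enabler. The following is a minimal sketch run from a management host; the array ID (1234) and pool name (Pool_A) are hypothetical and should be replaced with values from the environment:
symcfg -sid 1234 list -pool -thin
symcfg -sid 1234 show -pool Pool_A -thin -detail
The first command lists every thin pool on the array along with its enabled capacity and percent used; the second shows the data devices that make up a specific pool and the thin devices bound to it.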

189 EMC Virtual Provisioning and VMware vsphere The way thin extents are allocated across the data devices results in a form of striping in the thin pool. The more data devices that are in the thin pool, the wider the striping is, and therefore the greater the number of devices available to service I/O. 1. The diagram does not include FAST VP devices which are covered in Chapter 4. EMC VMAX Virtual Provisioning overview 187

Thin device sizing
VMAX array
The maximum size of a thin device on a VMAX is approximately 240 GB. If a larger size is needed, then a metavolume comprised of multiple thin devices can be created. For environments using Enginuity 5874 and earlier, it is recommended that thin metavolumes be concatenated rather than striped since concatenated metavolumes support fast expansion capabilities, as new metavolume members can easily be added to the existing concatenated metavolume. This functionality is required when the provisioned thin device has been completely consumed by the host, and further storage allocations are required. Beginning with Enginuity 5875, striped metavolumes can be expanded to larger sizes online. Therefore, EMC recommends that customers utilize striped thin metavolumes with the 5875 Enginuity level and later. Concatenated metavolumes using standard volumes have the drawback of being limited by the performance of a single metavolume member. In the case of a concatenated metavolume comprised of thin devices, however, each member device is typically spread across the entire underlying thin pool, thus somewhat eliminating that drawback. However, there are several conditions in which the performance of concatenated thin metavolumes can be limited. Specifically:
Enginuity allows one outstanding write per thin device per path with Synchronous SRDF. With concatenated metavolumes, this could cause a performance problem by limiting the concurrency of writes. This limit will not affect striped metavolumes in the same way because of the small size of the metavolume stripe (one cylinder or 1,920 blocks).
Enginuity has a logical volume write-pending limit to prevent one volume from monopolizing writable cache. Because each metavolume member gets a small percentage of cache, a striped metavolume is likely to offer more writable cache to the metavolume.
If a change in metavolume configuration is desired, Solutions Enabler 7.3 and later support converting a concatenated thin metavolume into a striped thin metavolume while protecting the

191 EMC Virtual Provisioning and VMware vsphere data using a thin or thick BCV device (if thin it must be persistently allocated). Prior to these software versions, protected metavolume conversion was not allowed. VMAX3/VMAX All Flash array The maximum size of a thin device on the VMAX3/VMAX All Flash is 64 TB. There is no longer a concept of metavolumes on this platform. Designed for 100% virtually provisioned storage environments, the VMAX3/VMAX All Flash array widely stripes across thin pools delivering superior I/O concurrency and performance. The benefits are equal or superior to those provided by striped metavolumes. Thin device sizing 189

192 EMC Virtual Provisioning and VMware vsphere Thin device compression on VMAX On the VMAX, and beginning with Enginuity release and Solutions Enabler 7.5, data within a thin device can be compressed/decompressed to save space. The data is compressed using a software-based solution internal to the VMAX. It is important to note that users should leverage compression only if the hosted application can sustain a significantly elevated response time when reading from, or writing to, compressed data. IMPORTANT The VMAX3/VMAX All Flash do not support thin device compression. If the application is sensitive to heightened response times it is recommended to not compress the data. Preferably, data compression should be primarily used for data that is completely inactive and is expected to remain that way for an extended period of time. If a thin device (one that hosts a VMFS volume or is in use as an RDM) is presently compressed and an imminent I/O workload is anticipated that will require significant decompression to commit new writes, it is suggested to manually decompress the volume first thereby avoiding the decompression latency penalty. To compress a thin device, the entire bound pool must be enabled for compression. This can be achieved by setting the compression capability on a given pool to enabled in Unisphere for VMAX or through the Solutions Enabler CLI. An example of this process with Unisphere can be seen in Figure 124 on page 191. When compression is no longer desired for allocations in a pool, the compression capability on a pool can be disabled. 190 Using EMC VMAX Storage in VMware vsphere Environments

193 EMC Virtual Provisioning and VMware vsphere Figure 124 Enabling thin device compression on a thin pool Note that once a pool is enabled for compression, a quick background process will run to reserve storage in the pool that will be used to temporarily decompress data (called the Decompressed Read Queue or DRQ). When compression is disabled, the reserved storage in the pool will be returned to the pool only after all compressed allocations are decompressed. Disabling the compression capability on a pool will not automatically cause compressed allocations to be decompressed. This is an action that must be executed by the user. Solutions Enabler or Unisphere will report the current state of the compression capability on a pool, the amount of compressed allocations in the pool, and which thin devices have compressed allocations. Figure 125 on page 192 shows a thin device with a compression ratio of 21% in Unisphere for VMAX. Thin device compression on VMAX 191
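The compression details shown in Figure 125 can also be retrieved with Solutions Enabler. The command below is a sketch only; the array ID is hypothetical, and the exact columns reported (including the compressed size and ratio) depend on the Solutions Enabler version in use:
symcfg -sid 1234 list -tdev -detail
The output lists each thin device with its bound pool and its allocated and written tracks, and on compression-enabled pools it also reflects how much of the device's allocation is currently compressed.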

194 EMC Virtual Provisioning and VMware vsphere Figure 125 Viewing thin device-specific compression information When a user chooses to compress a device, only allocations that have been written will be compressed. Allocations that are not written to or fully-zeroed will be reclaimed during compression if the tracks are not persistently allocated. In effect, the compression process also performs a zero-space reclamation operation while analyzing for compressible allocations. This implies that after a decompression the allocated tracks will not match the original allocated track count for the TDEV. If this behavior is undesirable, the user can make use of the persistent allocation feature, and then the allocated tracks will be maintained and no compression will take place on persistently allocated tracks. If allocations are persistent and a reclaim of space is in fact desired, then the persistent attribute for existing allocations may be cleared to allow compression and reclamation to occur. Note: If a thin device has allocations on multiple pools, only the allocations on compression-enabled pools will be compressed. Note that compression of allocations for a thin device is a low-priority background task. Once the compression request is accepted, the thin device will have a low-priority background task 192 Using EMC VMAX Storage in VMware vsphere Environments

associated with it that performs the compression. While this task is running, no other background task, like allocate or reclaim, can be run against the thin device until it completes or is terminated by the user. A read of a compressed track will temporarily decompress the track into the DRQ maintained in the pool. The space in the DRQ is controlled by a least recently used algorithm, ensuring that a track can always be decompressed and that the most recently utilized tracks will still be available in a decompressed form. Writes to compressed allocations will always cause decompression.

196 EMC Virtual Provisioning and VMware vsphere Compression on the VMAX All Flash On the VMAX All Flash beginning with HYPERMAX OS 2016 Q3 SR and Solutions Enabler 8.3/Unisphere for VMAX 8.3, inline compression is available. Inline compression compresses data as it is written to flash drives. Compression on the VMAX All Flash is hardware-based, with the ability to use software in the event of a failure. Unlike the VMAX implementation of compression, individual devices cannot be compressed; rather compression is enabled at the storage group level for all devices within the group. Data is compressed as it is written to disk when the device is in a storage group with compression enabled. If compression is enabled on a storage group that already has data, that data is compressed by a background process. If compression is disabled on a storage group, new written data is not compressed but existing data is not immediately decompressed. Only when data is written again is it decompressed. Compression is seen as complementary to thin provisioning. Not only can one oversubscribe storage initially, but now the data on the back-end can be compressed reducing the data footprint. This increases the effective capacity of the array. Take the following example, Figure 126, where 1.3 PB of storage is presented to hosts in the form of thin devices. The actual physical storage behind the thin devices is only 1.0 PB. Now by adding compression, the physical space required is reduced by half, providing a 2:1 ratio. The array, therefore, requires half as many drives to support the same front-end capacity. Figure 126 Inline compression on the VMAX All Flash 194 Using EMC VMAX Storage in VMware vsphere Environments

197 EMC Virtual Provisioning and VMware vsphere Despite the space savings, it is important to remember that if/when you decompress, additional space is required. This is taken into account when new arrays are sized. IMPORTANT The VMAX3 does not support storage group compression. Enabling compression By default, when creating a storage group through Unisphere for VMAX, the compression check box is enabled as in Figure 127. Figure 127 Unisphere for VMAX - storage group compression If using Solutions Enabler to create a storage group, compression will be enabled by default provided the SRP is supplied on the command line. For instance in Figure 128 on page 196, storage group compression_sg is created in SRP SRP_1. Therefore compression is enabled. Compression on the VMAX All Flash 195

Figure 128 Solutions Enabler - storage group compression
If the SRP is left out of the command, the storage group will be created without compression as in Figure 129.
Figure 129 Solutions Enabler - storage group no compression
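The Solutions Enabler commands behind Figure 128 and Figure 129 take roughly the following form. This is a sketch only; the array ID, storage group names, SRP name, and service level are hypothetical, and the options available depend on the Solutions Enabler 8.x release in use:
symsg -sid 1234 create compression_sg -srp SRP_1 -slo Diamond
symsg -sid 1234 create nocompression_sg
Because the first command names an SRP, the resulting group is FAST managed and compression is enabled by default; the second group is created without an SRP and therefore without compression.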

If the user attempts to create the storage group with the compression flag, -compression, while still not providing an SRP, Solutions Enabler will indicate it is not possible since the group is not FAST managed. This is seen in Figure 130.
Figure 130 Solutions Enabler - storage group non-FAST
Compression metrics
As with device compression on the VMAX, Unisphere for VMAX and Solutions Enabler provide viewable compression metrics for the VMAX All Flash. Figure 131 on page 198 includes a view where the compression ratio of the device is shown. In this case the data is being compressed 3 to 1.
Note: Compression metrics for any non-VMAX All Flash system can be disregarded as they are not valid.

200 EMC Virtual Provisioning and VMware vsphere Figure 131 Compression ratio for a VMAX All Flash device Using Solutions Enabler, it is also possible to obtain this information. Figure 128 on page 196 shows how to list the compression metrics for an individual device. 198 Using EMC VMAX Storage in VMware vsphere Environments

201 EMC Virtual Provisioning and VMware vsphere Figure 132 Compression detail in Solutions Enabler Note: All supported data services, such as SnapVX, SRDF, VVols, and encryption are supported with compression. Compression on the VMAX All Flash 199

Performance considerations
VMAX array
The architecture of Virtual Provisioning creates a naturally striped environment where the thin extents are allocated across all devices in the assigned storage pool. The larger the storage pool for the allocations is, the greater the number of devices that can be leveraged for VMware vsphere I/O. One of the possible consequences of using a large pool of data devices that is shared with other applications is variability in performance. In other words, possible contention with other applications for physical disk resources may cause inconsistent performance levels. If this variability is not desirable for a particular application, that application could be dedicated to its own thin pool. VMAX arrays support up to 512 pools. Pools in this instance include Virtual Provisioning thin pools, SRDF/A Delta Set Extension (DSE) pools, and TimeFinder/Snap pools. If the array is running Enginuity 5875, FAST VP should be enabled to automatically uncover and remediate performance issues by moving the right data to the right performance tier. When a new data extent is required from the thin pool there is an additional latency introduced to the write I/O while the thin extent location is assigned and formatted. This latency is approximately one millisecond. Thus, for a sequential write stream, a new extent allocation will occur on the first new allocation, and again when the current stripe has been fully written and the writes are moved to a new extent. If the application cannot tolerate the additional latency it is recommended to preallocate some storage to the thin device when the thin device is bound to the thin pool. If using VASA, please see vsphere Storage APIs for Storage Awareness (VASA) in Chapter 2 for a CAUTION concerning preallocation.

203 EMC Virtual Provisioning and VMware vsphere VMAX3/VMAX All Flash array As it can take advantage of all the thin pools on the array, the VMAX3/VMAX All Flash is optimized from the start to avoid the previous scenarios that might be present on the VMAX. The basis for this is FAST and the SLO paradigm which is addressed in the following section: EMC FAST for VMAX3/VMAX All Flash. Performance considerations 201

204 EMC Virtual Provisioning and VMware vsphere Virtual disk allocation schemes VMware vsphere offers multiple ways of formatting virtual disks through GUI interface or CLI. For new virtual machine creation, only the formats eagerzeroedthick, zeroedthick, and thin are offered as options. Zeroedthick allocation format When creating, cloning, or converting virtual disks in the vsphere Client, the default option is called Thick Provision Lazy Zeroed in vsphere 5 and 6. The Thick Provision Lazy Zeroed selection is commonly referred to as the zeroedthick format. In this allocation scheme, the storage required for the virtual disks is reserved in the datastore but the VMware kernel does not initialize all the blocks. The blocks are initialized by the guest operating system as write activities to previously uninitialized blocks are performed. The VMFS will return zeros to the guest operating system if it attempts to read blocks of data that it has not previously written to. This is true even in cases where information from a previous allocation (data deleted on the host, but not de-allocated on the thin pool) is available. The VMFS will not present stale data to the guest operating system when the virtual disk is created using the zeroedthick format. Since the VMFS volume will report the virtual disk as fully allocated, the risk of oversubscribing is reduced. This is due to the fact that the oversubscription does not occur on both the VMware layer and the VMAX layer. The virtual disks will not require more space on the VMFS volume as their reserved size is static with this allocation mechanism and more space will be needed only if additional virtual disks are added. Therefore, the only free capacity that must be monitored diligently is the free capacity on the thin pool itself. This is 202 Using EMC VMAX Storage in VMware vsphere Environments

205 EMC Virtual Provisioning and VMware vsphere shown in Figure 133, which displays a single 10 GB virtual disk that uses the zeroedthick allocation method. The datastore browser reports the virtual disk as consuming the full 10 GB. Figure 133 Zeroedthick virtual disk allocation size as seen in a VMFS datastore browser However, since the VMware kernel does not actually initialize unused blocks, the full 10 GB is not consumed on the thin device. In the example, the virtual disk resides on thin device 007D and, as seen in Figure 134, only consumes about 1,600 MB of space on it. Figure 134 Zeroedthick virtual disk allocation on VMAX thin devices Thin allocation format Like zeroedthick, the thin allocation mechanism is also Virtual Provisioning-friendly, but, as will be explained in this section, should be used with caution in conjunction with VMAX Virtual Provisioning. Thin virtual disks increase the efficiency of storage utilization for virtualization environments by using only the amount Virtual disk allocation schemes 203

206 EMC Virtual Provisioning and VMware vsphere of underlying storage resources needed for that virtual disk, exactly like zeroedthick. But unlike zeroedthick, thin devices do not reserve space on the VMFS volume-allowing more virtual disks per VMFS. Upon the initial provisioning of the virtual disk, the disk is provided with an allocation equal to one block size worth of storage from the datastore. As that space is filled, additional chunks of storage in multiples of the VMFS block size are allocated for the virtual disk so that the underlying storage demand will grow as its size increases. It is important to note that this allocation size depends on the block size of the target VMFS. For VMFS-3, the default block size is 1 MB but can also be 2, 4, or 8 MB. Therefore, if the VMFS block size is 8 MB, then the initial allocation of a new thin virtual disk will be 8 MB and will be subsequently increased in size by 8 MB chunks. For VMFS-5 and VMFS-6, the block size is not configurable and therefore is, with one exception, always 1 MB. The sole exception is if the VMFS-5 volume was upgraded from a VMFS-3 volume. In this case, the block size will be inherited from the original VMFS volume and therefore could be 1, 2, 4 or 8 MB. A single 10 GB virtual disk (with only 1.6 GB of actual data on it) that uses the thin allocation method is shown in Figure 135. The datastore browser reports the virtual disk as consuming only 1.6 GB on the volume. Figure 135 Thin virtual disk allocation size as seen in a VMFS datastore browser As is the case with the zeroedthick allocation format, the VMware kernel does not actually initialize unused blocks for thin virtual disks, so the full 10 GB is neither reserved on the VMFS nor 204 Using EMC VMAX Storage in VMware vsphere Environments

207 EMC Virtual Provisioning and VMware vsphere consumed on the thin device on the array. The virtual disk presented in Figure 135 on page 204 resides on thin device 007D and, as seen in Figure 136, only consumes about 1,600 MB of space on it. Figure 136 Thin virtual disk allocation on VMAX thin devices Eagerzeroedthick allocation format With the eagerzeroedthick allocation mechanism (referred to as Thick Provision Eager Zeroed in the vsphere Client), space required for the virtual disk is completely allocated and written to at creation time. This leads to a full reservation of space on the VMFS datastore and on the underlying VMAX device. Accordingly, it takes longer to create disks in this format than to create other types of disks. 1 Because of this behavior, eagerzeroedthick format is not ideal for use with virtually provisioned devices. 1. This creation, however, is considerably quicker when the VAAI primitive, Block Zero, is in use. See Eagerzeroedthick with ESXi 5 and 6 and Enginuity 5875+/HYPERMAX OS 5977 on page 206 for more detail. Virtual disk allocation schemes 205
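The allocation formats described in this section can also be selected from the ESXi command line with vmkfstools, which is convenient for observing how each format consumes space on a thin device. A minimal sketch follows; the datastore and directory names are hypothetical:
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/VMAX_DS/test/zt.vmdk
vmkfstools -c 10G -d thin /vmfs/volumes/VMAX_DS/test/thin.vmdk
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/VMAX_DS/test/ezt.vmdk
Comparing the datastore browser view with the thin device allocation in Unisphere after creating each disk reproduces the space consumption behavior described in this section.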

208 EMC Virtual Provisioning and VMware vsphere A single 10 GB virtual disk (with only 1.6 GB of actual data on it) that uses the eagerzeroedthick allocation method is shown in Figure 137. The datastore browser reports the virtual disk as consuming the entire 10 GB on the volume. Figure 137 Eagerzeroedthick virtual disk allocation size as seen in a VMFS datastore browser Unlike with zeroedthick and thin, the VMware kernel initializes all the unused blocks, so the full 10 GB is reserved on the VMFS and consumed on the thin device. The eagerzeroedthick virtual disk resides on thin device 0484 and, as highlighted in Figure 138, consumes 10 GB of space on it. Figure 138 Eagerzeroedthick virtual disk allocation on VMAX thin devices Eagerzeroedthick with ESXi 5 and 6 and Enginuity 5875+/HYPERMAX OS 5977 VMware vsphere 5 and 6 offer a variety of VMware Storage APIs for Array Integration (VAAI) that provide the capability to offload specific storage operations to EMC VMAX to increase both overall 206 Using EMC VMAX Storage in VMware vsphere Environments

209 EMC Virtual Provisioning and VMware vsphere system performance and efficiency. If vsphere 5 or 6 is used in conjunction with VMAX with a minimum of Enginuity 5875 or VMAX3/VMAX All Flash with HYPERMAX OS 5977, one of the supported primitives is Block Zero. Using software to zero out a virtual disk is a slow and inefficient process. By passing off this task to the array, the typical back and forth from host to array is avoided and the full capability of the VMAX can be utilized to accomplish the zeroing. Without the Block Zeroing primitive, the ESXi server must complete all the zero writes of the entire disk before it reports back that the disk zeroing is complete. For a very large disk this is time-consuming. When employing the Block Zeroing primitive (also referred to as write same since it uses the WRITE SAME SCSI operation (0x93) to achieve its task) the disk array returns the cursor to the requesting service as though the process of writing the zeros has been completed. It then finishes the job of zeroing out those blocks asynchronously, without the need to hold the cursor until the job is done, as is the case with software zeroing. This primitive has a profound effect on VMAX Virtual Provisioning when deploying eagerzeroedthick virtual disks or writing to unallocated sectors of a zeroedthick or thin virtual disk. When enabled, the VMAX will not actually write the zeros to disk but instead simply allocate the relevant tracks on the thin pool (if not already allocated) and then set the Never Written By Host flag on them. Block Zero operations therefore result in allocated, but not written to, tracks. Tracks such as these will always return zeros to a host read and will be skipped in the case of a TimeFinder or SRDF copy. On the VMAX3/VMAX All Flash, however, the architecture is different and no space is actually reserved. This is because the VMAX3/VMAX All Flash can be thought of as a single pool of storage, since new extents can be allocated from any thin pool on the array. Therefore the only way that a thin device can run out of space is if the entire array runs out of space. It was deemed unnecessary, therefore, to reserve space on the VMAX3/VMAX All Flash when using Block Zero. Note, however, that because of the architecture of the VMAX3/VMAX All Flash there is no performance impact even though the space is not pre-allocated. Block Zero is enabled by default on both the VMAX running 5875 Enginuity or later and the VMAX3/VMAX All Flash running HYPERMAX 5977 or later, used with 4.1, 5 or 6 ESXi servers (properly Virtual disk allocation schemes 207

210 EMC Virtual Provisioning and VMware vsphere licensed) and should not require any user intervention. Block Zero can be disabled on an ESXi host if desired through the use of the vsphere Client or command line utilities. Block Zero can be disabled or enabled by altering the setting DataMover.HardwareAcceleratedInit in the ESXi host advanced settings under DataMover. Virtual disk type recommendations In general, before vsphere 4.1, EMC recommended using zeroedthick instead of thin virtual disks when using VMAX Virtual Provisioning. The reason that thin on thin was not always recommended is that using thin provisioning on two separate layers (host and array) increases the risk of out-of-space conditions for virtual machines. vsphere 4.1 (and even more so in vsphere 5 and 6) in conjunction with the latest release of Enginuity/HYPERMAX OS integrate these layers better than ever. As a result, using thin on thin is perfectly acceptable, and may be the preferred disk type for many customers for the reasons previously covered. Nevertheless it is important to remember and consider that certain risks still remain in doing so. EMC supports all virtual disk types with VMAX Virtual Provisioning and does not recommend one type over the other in all situations. The VMware administrator must evaluate the requirements of the application being hosted and the needs of the application users/stakeholders when selecting a virtual disk type. Choosing a virtual disk allocation mechanism is typically based on the answers to these three questions: 1. Performance - is performance the most critical requirement of the application? 2. Space reservation - is the possibility (however low) of running out of space due to oversubscription a risk the application/owner is willing to tolerate? 3. Space efficiency - is efficiency and storage cost the most important factor for the application/owner or VMware Admin? While thin virtual disks are logically more efficient than zeroedthick virtual disks (less space on the VMFS), they are not physically more efficient because thin virtual disks as compared to zeroedthick virtual disks do not provide increased space savings on the physical disks/pool when using VMAX Virtual Provisioning. In other words they are equal. As previously discussed, zeroedthick 208 Using EMC VMAX Storage in VMware vsphere Environments

211 EMC Virtual Provisioning and VMware vsphere only writes data written by the guest OS, it does not consume more space on the VMAX thin pool than a similarly sized thin virtual disk. The primary difference between zeroedthick and thin virtual disks is not a question of space, rather it is a matter of performance and the quantity and/or host-observed capacity of the VMAX thin devices presented to the ESXi server. Due to the architecture of Virtual Provisioning, binding more thin devices to a thin pool or increasing the size of them does not take up more physical disk space than fewer or smaller thin devices. So whether 10 small thin devices are presented to an ESXi host, or 1 large thin device, the storage usage on the VMAX is essentially the same. Therefore, packing more virtual machines into smaller or fewer thin devices is not necessarily more space efficient in the true sense, but does allow for higher virtual machine density. This, however, may not be a desirable state. High virtual machine density can cause performance problems. Besides the typical issues such as device queuing, metadata locks can become a problem with thin virtual disks. Fortunately, this problem was solved with the hardware-assisted locking primitive offered by VAAI, introduced in ESXi 4.1, further promoting the viability of larger devices and thin on thin. Hardware-assisted locking provides a more granular means to protect the VMFS metadata during updates than the SCSI reservation mechanism used before hardware-assisted locking was available. Hardware-assisted locking leverages a storage array atomic test and set capability to enable a fine-grained block-level locking mechanism. With or without VAAI, VMFS metadata updates are required when creating, deleting or changing file metadata (among many other operations). An example of such an operation is the growth of a thin virtual disk. As previously mentioned, earlier versions of ESXi used SCSI reservations to obtain the on-disk metadata locks. For the duration of the SCSI reservation all other I/O from hosts (and subsequently their virtual machines) was queued until the reservation was released (I/O from the reservation holder and its virtual machines continued uninterrupted). The SCSI reservation only exists to get the on-disk lock, and once the on-disk lock has been obtained, the SCSI reservation is released allowing other hosts and their virtual machines access to the volume again. It should be noted that while the time to require the on-disk locks is a very short operation, performance degradation can occur if such operations occur frequently on multiple servers accessing the same VMFS. Therefore datastores, especially large ones, hosting many thin virtual disks with Virtual disk allocation schemes 209

212 EMC Virtual Provisioning and VMware vsphere high growth rates experience this degradation. With hardware-assisted locking, the ESXi host does not use SCSI reservations and instead uses small locks to just the portion of the metadata that needs to be updated. This does not require a SCSI reservation and therefore allows simultaneous access to the VMFS throughout the metadata update. This behavior essentially eliminates the performance hit caused by SCSI reservations and high virtual machine density further improving the viability of thin on thin. Furthermore, as previously mentioned, VMware and EMC have ameliorated the practicality of using thin virtual disks with VMAX Virtual Provisioning. This has been achieved through: 1. Refining vcenter and array reporting while also widening the breadth of direct VMAX integration (EMC Virtual Storage Integrator, EMC Storage Analytics VASA) 2. Removing the slight performance overhead that existed with thin VMDKs. This was caused by zeroing-on-demand and intermittent expansion of the virtual disk as new blocks were written to by the guest OS and has been respectively significantly diminished with the advent of Block Zero and Hardware-Assisted Locking (ATS) and is all but eliminated on the VMAX3/VMAX All Flash. 3. The introduction of Storage DRS in vsphere 5 further reduces the risk of running out of space on a VMFS volume as it can be configured to migrate virtual machines from a datastore when that datastore reaches a user-specified percent-full threshold. This all but eliminates the risk of running out of space on VMFS volumes, with the only assumption being that there is available capacity elsewhere to move the virtual machines to. These improvements can make thin on thin a much more attractive choice for a default VMDK disk type in customer environments, especially in vsphere 5 and 6, and most significantly with the VMAX3/VMAX All Flash. Essentially, it depends on how risk-averse an application is and the importance/priority of an application running in a virtual machine. If virtual machine density and storage efficiency is valued above the added protection provided by a thick virtual disk (or simply the possible risk of an out-of-space condition is acceptable), thin on thin may be used. If thin on thin is used it is important to configure alerts on the vcenter and array level. 210 Using EMC VMAX Storage in VMware vsphere Environments
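Whether the Block Zero and hardware-assisted locking (ATS) primitives referenced above are active on a given host can be confirmed, and if necessary toggled, from the ESXi command line. This is a minimal sketch; both options are enabled (a value of 1) by default on properly licensed hosts:
esxcli system settings advanced list --option /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list --option /VMFS3/HardwareAcceleratedLocking
esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
The first two commands display the current values for Block Zero and ATS respectively; the third disables Block Zero, should that ever be required for troubleshooting.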

213 EMC Virtual Provisioning and VMware vsphere VMAX3/VMAX All Flash array While most of the information in the previous section applies to the VMAX3/VMAX All Flash array, there are some noteworthy differences which impact which virtual disk type should be chosen. As explained in Eagerzeroedthick with ESXi 5 and 6 and Enginuity 5875+/HYPERMAX OS 5977, when an eagerzeroedthick disk is created on the VMAX3/VMAX All Flash, space is not preallocated in the thin pool for the device. The architecture of the array makes it unnecessary. As the VMAX3/VMAX All Flash does not actually reserve space, it seems logical to conclude that there is little difference between using an eagerzeroedthick and a zeroedthick virtual disk on that platform. While this is essentially true from a storage performance perspective, it is important to remember that if VMware or an application requires the use of an eagerzeroedthick virtual disk (e.g. Fault Tolerant VM), that disk type must be selected when creating the virtual machine. Virtual disk allocation schemes 211

Dead Space Reclamation (UNMAP)
Referred to as Dead Space Reclamation by VMware, ESXi 5.0 U1 introduced support for the SCSI UNMAP command (0x42) which issues requests to de-allocate extents on a thin device. The purpose behind UNMAP is the reclamation of space in the thin pool on the array. UNMAP can be utilized in two different scenarios. The first is to free space in a VMFS datastore, either manually or automatically. This capability has been available in one or both forms since vsphere 5.0 U1. The second is to free space in a thin vmdk that is part of a Guest OS in a VM. This is known as Guest OS, or in-guest, UNMAP, and was introduced in vsphere 6.0 with Windows 2012 R2 and expanded to Linux in vsphere 6.5. It supports both VMFS and VVol datastores.
Datastore
As virtual machines are created and grow, then are moved or deleted, space is stranded in the thin pool because the array does not know that the blocks where the data once resided are no longer needed. Prior to the integration of this new Storage API in vsphere and in the VMAX, reclamation of space required zeroing out the unused data in the pool and then running a zero reclaim process on the array. With UNMAP support, vsphere tells the VMAX what blocks can be unmapped or reclaimed. If those blocks span a complete extent or extents, they are deallocated and returned to the thin pool for reuse. If the range covers only some tracks in an extent, those tracks are marked Never Written By Host (NWBH) on the VMAX but the extent cannot be deallocated. This is still beneficial, however, as those tracks do not have to be retrieved from disk should a read request be performed. Instead, the VMAX array sees the NWBH flag and immediately returns all zeros. The VMAX3/VMAX All Flash performs a similar process. VMware will, of course, reuse existing space in the datastore even if the data has not been reclaimed on the array; however, it may not be able to use all of the space for a variety of reasons. In addition, by reclaiming the space in the thin pool it becomes available to any device in the pool, not just the one backing the datastore. The use of the SCSI command UNMAP, whether issued manually or automatically, is the only way to guarantee that the full amount of space is reclaimed for the datastore.

Guest OS
Guest OS or in-guest UNMAP was introduced in vsphere 6.0. In the initial release it supported Windows 2012 R2 and higher but not Linux. In vsphere 6.5 SPC-4 support was added which enables Guest OS UNMAP for Linux VMs. There are a number of prerequisites that must be met to take advantage of Guest OS UNMAP, and there are some differences between the two vsphere versions that should be noted. They are covered in the next two sections.
vsphere 6.0 and Windows 2012 R2+
To enable Guest OS UNMAP in vsphere 6.0, the prerequisites are as follows:
ESXi 6.0+
Thin virtual disk (vmdk)
VM hardware version 11+
64K allocation unit (recommended) for NTFS
The advanced parameter EnableBlockDelete must be set to 1 if using VMFS (VVols do not require the parameter). It is off by default.
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
An important note is that change block tracking (CBT) is not supported with Guest OS UNMAP in vsphere 6.0 but is supported in vsphere 6.5.
vsphere 6.5 and Linux
To enable Guest OS UNMAP in vsphere 6.5, the prerequisites are as follows:
ESXi 6.5+
Thin virtual disk (vmdk)
VM hardware version 13+
A Linux OS and file system that supports UNMAP
The advanced parameter EnableBlockDelete must be set to 1 if using VMFS (VVols do not require the parameter). It is off by default.
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
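A quick way to verify the host-side and guest-side settings is sketched below. The esxcli commands are run from the ESXi shell and assume a VMFS datastore; the fsutil query is run from an administrative command prompt inside the Windows guest:
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
fsutil behavior query DisableDeleteNotify
The first command shows the current value of EnableBlockDelete and the second enables it. The fsutil query should return DisableDeleteNotify = 0, indicating that the guest issues TRIM/UNMAP when files are deleted.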

216 EMC Virtual Provisioning and VMware vsphere While Windows automatically issues UNMAP, Linux does not do so by default. Though sg_unmap and fstrim can be issued manually to free space, a simple parameter is available on file systems to reclaim space automatically. When mounting the file system (e.g. ext4), use the discard option and all deleted files will cause UNMAP to be issued. Here is an example. mount /dev/sdb1 /u02 -o discard It is important to note that the VMAX only supports UNMAP with Enginuity and higher and all versions of HYPERMAX OS. If a customer is running Enginuity or lower, they must therefore upgrade Enginuity to leverage this functionality or utilize the workaround process described in the next section Reclaiming space on VMFS volumes without Dead Space Reclamation support. To confirm that the Enginuity/HYPERMAX OS level is correct via the ESXi host and that UNMAP is in fact supported on a LUN, the following command can be issued to query the device: esxcli storage core device vaai status get -d <naa> Example output: naa.<naa> VAAI Plugin Name: VMW_VAAI_SYMM ATS Status: supported Clone Status: supported Zero Status: supported Delete Status: supported A device displaying Delete Status as supported means that the device is capable of receiving SCSI UNMAP commands or, in other words, the VMAX is running the correct level of Enginuity/HYPERMAX OS to support UNMAP from an ESXi host. 1. Enginuity and require an E-Pack for VAAI functionality. Refer to emc for more information. 214 Using EMC VMAX Storage in VMware vsphere Environments
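For Linux guests that were not mounted with the discard option, space can still be reclaimed on demand from within the guest. A minimal example, reusing the mount point from the mount command above and assuming the vsphere 6.5 prerequisites are met:
fstrim -v /u02
The -v option reports how many bytes were trimmed; the freed blocks are then passed down to the array as UNMAP, provided the virtual disk is thin and, for VMFS, EnableBlockDelete is set.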

217 EMC Virtual Provisioning and VMware vsphere UNMAP in ESXi 5.0 and 5.1 In order to reclaim space prior to ESXi 5.5, ESXi 5.0 Update 1 includes an updated version of vmkfstools that provides an option (-y) to issue the UNMAP operation. This command should be issued against the device after navigating to the root directory of the VMFS volume residing on it: cd /vmfs/volumes/<volume-name> vmkfstools -y <percentage of deleted blocks to reclaim> This command creates temporary files at the top level of the datastore. These files can be as large as the aggregate size of blocks being reclaimed as seen in Figure 139. If the reclaim operation is somehow interrupted, these temporary files may not be deleted automatically and users may need to delete them manually. Figure 139 SSH session to ESXi server to reclaim dead space with vmkfstools There is no practical way of knowing how long a vmkfstools -y operation will take to complete. It can run for a few minutes or even for a few hours depending on the size of the datastore, the amount of dead space, and the available resources on the array to execute the operation. Reclamation on the VMAX is a low-priority task (in order for it not to negatively affect host I/O performance) so a heavy workload on the device, pool or the array itself can throttle the process and slow the reclamation. Dead Space Reclamation (UNMAP) 215
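As a concrete illustration of the vmkfstools method, the following reclaims up to 90 percent of the free blocks on a hypothetical datastore named VMAX_DS:
cd /vmfs/volumes/VMAX_DS
vmkfstools -y 90
Specifying a lower percentage leaves more free blocks available for new allocations while the temporary files exist.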

UNMAP in ESXi 5.5 and 6.x
Starting with ESXi 5.5, the vmkfstools -y command has been deprecated. VMware has introduced a new command to the esxcli command framework to unmap storage. The new command is unmap and is under the esxcli storage vmfs namespace:
esxcli storage vmfs unmap <--volume-label=<str> | --volume-uuid=<str>> [--reclaim-unit=<long>]
Command options:
-n --reclaim-unit=<long>: Number of VMFS blocks that should be unmapped per iteration.
-l --volume-label=<str>: The label of the VMFS volume to unmap the free blocks.
-u --volume-uuid=<str>: The uuid of the VMFS volume to unmap the free blocks.
The new command has no output when run, as displayed in Figure 140 on page 216.
Figure 140 SSH session to ESXi server to reclaim dead space with esxcli storage vmfs unmap command
With the former implementation of vmkfstools, temporary files were created at the start of the operation and then an UNMAP was issued to all the blocks that were allocated to these temp files. Since the temp file(s) could take up all the free blocks in the volume, any other

219 EMC Virtual Provisioning and VMware vsphere operation that required a new block allocation would fail. To prevent that failure, VMware permitted the user to provide a percentage of the free blocks to reclaim. The user could then issue the command multiple times to reclaim all of the space. In ESXi 5.5 and 6, dead space is now reclaimed in multiple iterations instead of all at once. A user can now provide the number of blocks (the default is 200) to be reclaimed during each iteration. This specified block count controls the size of each temp file that VMware creates. Since vsphere 5.5 and 6 default to 1 MB block datastores, this would mean that if no number is passed, VMware will unmap 200 MB per iteration. VMware still issues UNMAP to all free blocks, even if those blocks have never been written to. VMware will iterate serially through the creation of the temp file as many times as needed to issue UNMAP to all free blocks in the datastore. The benefit now is that the risk of temporarily filling up available space on a datastore due to a large balloon file is essentially zero. As before, this command uses the UNMAP primitive to inform the array that these blocks in this temporary file can be reclaimed. This enables a correlation between what the array reports as free space on a thin-provisioned datastore and what vsphere reports as free space. Previously, there was a mismatch between the host and the storage regarding the reporting of free space on thin-provisioned datastores 1. Unlike the previous unmap methodology with vmkfstools, VMware advises that the new procedure in ESXi 5.5 and 6 can be run at any time, and not relegated to maintenance windows; however, EMC still suggests reclaiming space in off-peak time periods. EMC recommends using VMware s default block size number (200), though fully supports any value VMware does. EMC has not found a significant difference in the time it takes to unmap by increasing the block size number. Figure 141 on page 218 contains some baseline results 2 of using the UNMAP command in ESXi 5.1 at 99% reclaim (vmkfstools -y) and ESXi 5.5+ using the default block value of 200 (esxcli storage vmfs 1. Note that with the use of zeroedthick virtual disks capacity disparity will still exist. The free space reported on the VMFS will be much lower than what the array reports if those virtual disks do no hold much data. 2. Baseline in this case means no other relationships such as TimeFinder or SRDF were on the devices. Please refer to the following white paper for more UNMAP results: Using VMware vsphere Storage APIs for Array Integration with EMC VMAX. Dead Space Reclamation (UNMAP) 217

unmap). In both cases, the results demonstrate that the time it takes to reclaim space increases fairly consistently as the amount of space increases. This is true for both the VMAX (V2) and the VMAX3/VMAX All Flash (V3) platform. For instance, reclaiming 150 GB on ESXi 5.1 takes 100 seconds while reclaiming 300 GB takes 202 seconds, almost exactly double the time for double the size. It is also clear that the procedure in ESXi 5.5+ takes longer than the process in ESXi 5.1. Remember, however, that in ESXi 5.5, all of the space to be reclaimed is not taken up by a balloon file as it is in ESXi 5.1 environments. Space is made available as it is unmapped. On the VMAX3/VMAX All Flash, however, the change in architecture from the VMAX results in far better performance than both previous methodologies on the VMAX.
Figure 141 UNMAP results on thin devices for ESXi 5.1 and ESXi 5.5+
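For reference, the esxcli invocations used for the ESXi 5.5+ results above take the following form; the datastore label is hypothetical:
esxcli storage vmfs unmap -l VMAX_DS
esxcli storage vmfs unmap -l VMAX_DS -n 1600
The first command uses the default reclaim unit of 200 blocks (200 MB on a 1 MB block datastore); the second reclaims in 1,600-block iterations.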

ESXi 5.5 P3+ and ESXi 6.x
Beginning with ESXi 5.5 Patch 3 (build ), VMware changed UNMAP behavior once again. At this code version, when issuing an UNMAP, any time a non-default block value is used that is higher than 1% of the free space in the datastore, VMware will revert to the default size (200). In addition, if the datastore is 75% full or more, it will also revert to the default. To determine the ideal value to pass to VMware, therefore, calculate the amount of free space in the datastore in MB, multiply it by 0.01, and round down. Failing to round down may result in a number that exceeds 1% and VMware will revert to the default. Using this methodology is most beneficial when there is a large amount of free space to reclaim, as the VMAX3 is very fast to UNMAP even at the default. Free space can be determined through the vsphere Client or even a PowerCLI script. Figure 142 on page 219 demonstrates the benefit of using 1% over the default.
Figure 142 UNMAP results on thin devices for ESXi 5.5 P3 and ESXi 6.x
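A worked example of the 1% calculation, using a hypothetical datastore named VMAX_DS with roughly 500 GB free: 500 GB is 512,000 MB, and 512,000 x 0.01 = 5,120, so 5120 is the largest reclaim unit that stays within 1% of free space. The free space can be read from the Free column of:
esxcli storage filesystem list
and the reclaim then issued with:
esxcli storage vmfs unmap -l VMAX_DS -n 5120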

222 EMC Virtual Provisioning and VMware vsphere Note that using 1% has the same scaling as the default, e.g. double the storage reclaimed takes about double the time. Note: These results are only meant as examples of reclaim operations and are dependent upon the lab environment under which they were generated. Although every effort is made to mimic the typical activity in a customer environment it is important to understand that results will vary. ESXi 6.5 Beginning with vsphere 6.5, VMware offers automated UNMAP capability at the VMFS 6 datastore level (VMFS 5 is not supported). The automation is enabled by a background process (UNMAP crawler) which keeps track of which LBAs that can be unmapped and then issues those UNMAPs at various intervals. It is an asynchronous process. VMware estimates that if there is dead space in a datastore, automated UNMAP should reclaim it in under 12 hours. UNMAP is enabled by default when a VMFS 6 datastore is created through a vsphere Client wizard. There are two parameters available, one of which can be modified: Space Reclamation Granularity and Space Reclamation Priority. The first parameter, Space Reclamation Granularity, designates at what size VMware will reclaim storage. It cannot be modified from the 1 MB value. The second parameter, Space Reclamation Priority, dictates how aggressive the UNMAP 220 Using EMC VMAX Storage in VMware vsphere Environments

223 EMC Virtual Provisioning and VMware vsphere background process will be when reclaiming storage. This parameter only has 2 values: None (off), and Low. It is set to Low by default. These parameters are seen in Figure 143. Figure 143 Enabling automated UNMAP on a VMFS 6 datastore If it becomes necessary to either enable or disable UNMAP on a VMFS 6 datastore after creation, the setting can be modified through the vsphere Web Client or CLI. Within the datastore view in the Client, navigate to the Configure tab and the General sub-tab. Here find a section on Space Reclamation where the Edit button can be used to make the desired change as in Figure 144 on page 222. Dead Space Reclamation (UNMAP) 221

224 EMC Virtual Provisioning and VMware vsphere Figure 144 Modifying automated UNMAP on a VMFS 6 datastore Changing the parameter is also possible through CLI, demonstrated in Figure 145. Figure 145 Modifying automated UNMAP through CLI Though vsphere 6.5 offers automated UNMAP, manual UNMAP using esxcli is still supported and can be run on a datastore whether or not it is configured with automated UNMAP. 222 Using EMC VMAX Storage in VMware vsphere Environments
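The CLI change shown in Figure 145 uses the esxcli storage vmfs reclaim config namespace available with VMFS 6. A minimal sketch, with a hypothetical datastore label:
esxcli storage vmfs reclaim config get --volume-label=VMAX_DS6
esxcli storage vmfs reclaim config set --volume-label=VMAX_DS6 --reclaim-priority=none
esxcli storage vmfs reclaim config set --volume-label=VMAX_DS6 --reclaim-priority=low
The get command reports the current reclamation granularity and priority; setting the priority to none disables automated UNMAP on the datastore and setting it back to low re-enables it.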

Thin pool management
IMPORTANT Thin pools on the VMAX3/VMAX All Flash cannot be managed by the user.
When storage is provisioned from a thin pool to support multiple thin devices there is usually more virtual storage provisioned to hosts than is supported by the underlying data devices. This condition is known as oversubscription. This is one of the main reasons for using Virtual Provisioning. However, there is a possibility that applications using a thin pool may grow rapidly and request more storage capacity from the thin pool than is actually there. This section discusses the steps necessary to avoid out-of-space conditions.
Capacity monitoring
There are several methodologies to monitor the capacity consumption of the VMAX thin pools. The Solutions Enabler symcfg monitor command can be used to monitor pool utilization, and the current allocations can be displayed with the symcfg show pool command. There are also event thresholds that can be monitored through the SYMAPI event daemon, thresholds (defaults shown) that can be set with Unisphere for VMAX as shown in Figure 146 on page 224.
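In addition to the event daemon and Unisphere thresholds, a simple scheduled check from a management host with Solutions Enabler installed can be used to watch the Full (%) value for each pool. The loop below is only a rough sketch of the polling approach; the array ID is hypothetical and the interval is five minutes:
while true; do symcfg -sid 1234 list -pool -thin; sleep 300; done
In practice the output would be parsed and fed into whatever alerting mechanism is already in place rather than simply written to the terminal.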

226 EMC Virtual Provisioning and VMware vsphere Figure 146 Setting Pool Utilization Thresholds in Unisphere for VMAX 224 Using EMC VMAX Storage in VMware vsphere Environments

227 EMC Virtual Provisioning and VMware vsphere VMFS volume capacity can be monitored through the use of a vcenter alarm which is shown in Figure 147. Figure 147 Datastore capacity monitoring in vcenter System administrators and storage administrators must put processes in place to monitor the capacity for thin pools to make sure that they do not get filled. Thin pools can be dynamically expanded to include more physical capacity without application impact. For DMX arrays running 5773 or VMAX arrays running an Enginuity level earlier than 5874 Service Release 1, these devices should be added in groups long before the thin pool approaches a full condition. If devices are added individually, hot spots on the disks can be created since much of the write activity is suddenly directed to the newly added data device(s) because other data devices in the pool are full. With the introduction of thin pool write balancing in Enginuity 5874 SR1, the data devices can be added individually without introducing hot spots on the pool, though it is best to add the disks when the activity levels on the pool are low to allow the balancing to take place without conflict. Viewing thin pool details with the Virtual Storage Integrator The Storage Viewer feature of Virtual Storage Integrator (VSI Classic) offers the ability to view the details of thin pools backing VMAX thin devices in use as a VMFS volume or RDM in a VMware vsphere environment 1. Thin pool management 225

228 EMC Virtual Provisioning and VMware vsphere When the Storage Pools button is selected, the view will show the storage pool information for each LUN for that datastore. For each LUN, the following fields are shown: Array Shows the serial number of the array on which the LUN resides. If a friendly name is configured for the array, it will appear in place of the serial number. Device Shows the device name for the LUN. Capacity pie chart Displays a graphic depicting the total capacity of the LUN, along with the free and used space. Storage Pools In most cases a LUN is configured from a single storage pool. However, with FAST VP or after a rebinding operation, portions of a LUN can be moved to other storage pools which can better accommodate its performance needs. For that reason, the number of storage pools is displayed, as well as the details for each storage pool. The details for a particular storage pool can be collapsed by selecting the toggle arrow on the heading for that storage pool. The following details are presented for each storage pool: Name The heading bar shows the name of the storage pool. Description Shows the description that the user assigned to the storage pool when it was created. Capacity pie chart Displays a graphic depicting the total physical capacity of the storage pool, along with the free and used space. State Shows the state of the storage pool as Enabled or Disabled. RAID Shows the RAID type of the physical devices backing the thin pool, such as RAID_1, RAID_5, etc. Drive Type Shows the type of drives used in the storage pool. Used By LUN Shows the total space consumed by this LUN in the storage pool. 1. A connection to a remote or local instance of Solutions Enabler with access to the correct array must be configured to allow this functionality. Consult VSI documentation on support.emc.com for more information. Note this functionality does not exist in the VSI Web Client. 226 Using EMC VMAX Storage in VMware vsphere Environments

229 EMC Virtual Provisioning and VMware vsphere Subscribed Shows the total capacity of the sum of all of the thin devices bound to the pool and the ratio (as a percentage) of that capacity to the total enabled physical capacity of the thin pool. Max subscription Shows the maximum subscription percentage allowed by the storage pool if configured. An example of this is shown in Figure 148 on page 228. In this case, a virtual machine is selected in the vcenter inventory and its single virtual disk resides on a VMFS volume on a VMAX thin device. The device shown is of type Thin and has allocations on a single pool. This screen can be arrived at in several different ways. Figure 148 on page 228 shows accessing the thin pool details by clicking on a virtual machine. It can also be reached by clicking the EMC VSI tab after selecting a datastore or an ESXi host within the vsphere Client inventory. Thin pool management 227

230 EMC Virtual Provisioning and VMware vsphere Figure 148 Viewing thin pool details with the Virtual Storage Integrator Figure 149 on page 229 shows a VMFS volume residing on a VMAX thin device with allocations on two different pools. For more information refer to the EMC VSI for VMware vsphere: Storage Viewer Version 5.x Product Guide on support.emc.com. 228 Using EMC VMAX Storage in VMware vsphere Environments

Exhaustion of oversubscribed pools
A VMFS datastore that is deployed on a VMAX thin device can detect only the logical size of the thin device. For example, if the array reports the thin device is 2 TB in size when in reality it is only backed by 1 TB of data devices, the datastore still considers 2 TB to be the LUN's size as it has no way of knowing the thin pool's configuration. As the datastore fills up, it cannot determine whether the actual amount of physical space is still sufficient for its needs. Before ESXi 5, the resulting behavior of running out of space on a thin pool could cause data loss or corruption on the affected virtual machines. With the introduction of ESXi 5 and Enginuity 5875, a new Storage API for Thin Provisioning referred to as Out of Space errors (sometimes called Stun & Resume) was added. This enables the

232 EMC Virtual Provisioning and VMware vsphere storage array to notify the ESXi kernel that the thin pool capacity is approaching physical capacity exhaustion or has actually been exhausted. Behavior of ESXi without Out of Space error support If a VMware environment is running a version of ESXi prior to version 5, or the Enginuity level on the target array is lower than 5875, the out-of-space error (Stun & Resume) is not supported. In this case, if a thin pool is oversubscribed and has no available space for new extent allocation, different behaviors can be observed depending on the activity that caused the thin pool capacity to be exceeded. If the capacity was exceeded when a new virtual machine was being deployed using the vcenter wizard, the error message shown in Figure 150 on page 231 is posted. 230 Using EMC VMAX Storage in VMware vsphere Environments

Figure 150 Error message displayed by vSphere Client on thin pool full conditions

The behavior of virtual machines residing on a VMFS volume when the thin pool is exhausted depends on a number of factors, including the guest operating system, the application running in the virtual machine, and the format that was utilized to create the virtual disks. The virtual machines behave no differently from the same configuration running in a physical environment. In general, the following comments can be made for VMware vSphere environments:

- Virtual machines configured with virtual disks using the eagerzeroedthick format continue to operate without any disruption.

- Virtual machines that do not require additional storage allocations continue to operate normally.
- If any virtual machine in the environment is impacted due to lack of additional storage, other virtual machines continue to operate normally as long as those machines do not require additional storage.
- Some of the virtual machines may need to be restarted after additional storage is added to the thin pool. In this case, if the virtual machine hosts an ACID-compliant application (such as a relational database), the application performs a recovery process to achieve a consistent point in time after a restart.
- The VMkernel continues to be responsive as long as the ESXi server is installed on a device with sufficient free storage for critical processes.

Behavior of ESXi with Out of Space error support

With the new Storage API for Thin Provisioning introduced in ESXi 5, the host integrates with the physical storage and becomes aware of underlying thin devices and their space usage. Using this new level of array thin provisioning integration, an ESXi host can monitor the use of space on thin pools to avoid data loss and/or data unavailability. This integration offers an automated warning that an out-of-space condition on a thin pool has occurred. This in-band out-of-space error is only supported with Enginuity 5875 and later.

If a thin pool has had its physical capacity exhausted, vCenter issues an out-of-space error. The out-of-space error works in an in-band fashion through the use of SCSI sense codes issued directly from the array. When a thin pool hosting VMFS volumes exhausts its assigned capacity, the array returns a tailored response to the ESXi host (a device status of CHECK CONDITION with Sense Key = 7h, ASC = 27h, ASCQ = 7h). This informs the ESXi host that the pool is full, and consequently any virtual machines requiring more space will be paused or stunned until more capacity is added. Furthermore, any Storage vMotion, Clone, or Deploy from Template operations with that device as a target (or any device sharing that pool) will fail with an out-of-space error.

IMPORTANT
The out-of-space error stun behavior will prevent data loss or corruption, but by design it will also cause virtual machines to be temporarily halted. Therefore, any applications running on suspended virtual machines will not be accessible until additional capacity has been added and the virtual machines have been manually resumed. Even though data unavailability might be preferred over experiencing data loss/corruption, this is not an acceptable situation for most organizations in their production environments and should only be thought of as a last resort. It is best that this situation is never reached; prevention of such an event can be achieved through proper and diligent monitoring of thin pool capacity (a monitoring sketch follows at the end of this section).

Once a virtual machine is paused, additional capacity must be added to the thin pool to which the thin device is bound. After this has been successfully completed, the VMware administrator must manually resume each paused virtual machine affected by the out-of-space condition. This can be seen in Figure 151. Any virtual machine that does not require additional space (or virtual machines utilizing eagerzeroedthick virtual disks) will continue to operate normally.

Figure 151 Resuming a paused virtual machine after an out-of-space condition
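Diligent monitoring can be scripted from the array side with Solutions Enabler. The following is only a sketch: the array ID and pool name are placeholders, and the option names are as recalled from Solutions Enabler 7.x documentation, so they should be verified against the CLI guide for the release in use.

    # List every thin pool on the array with capacity, used/free GB, and
    # the subscription percentage, to spot pools nearing exhaustion
    symcfg -sid 1234 list -thin -pool -detail -gb

    # Show a single pool, including the thin devices bound to it
    symcfg -sid 1234 show -pool FC_Pool -thin -detail

Unisphere for VMAX can also raise alerts when pool utilization crosses user-defined thresholds, which is generally a more reliable safeguard than ad hoc checks.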


4 Data Placement and Performance in vSphere

This chapter addresses EMC and VMware technologies that help with placing data and improving performance in the VMware environment and on the VMAX:

- Introduction
- EMC Fully Automated Storage Tiering (FAST)
- EMC FAST for VMAX3/VMAX All Flash
- EMC Virtual LUN migration
- Federated Live Migration
- Non-Disruptive Migration
- EMC Host I/O Limit
- vSphere Storage I/O Control
- VMware Storage Distributed Resource Scheduler (SDRS)

Introduction

Unlike storage arrays of the past, today's EMC VMAX arrays contain multiple drive types and protection methodologies. This gives the storage administrator, server administrator, and VMware administrator the challenge of selecting the correct storage configuration, or storage class, for each application being deployed. The trend toward virtualizing the entire environment to optimize IT infrastructures exacerbates the problem by consolidating multiple disparate applications onto a small number of large devices.

Given this challenge, it is not uncommon that a single storage type, best suited for the most demanding application, is selected for all virtual machine deployments, effectively assigning all applications, regardless of their performance requirements, to the same high-performance tier. This traditional approach is wasteful, since all applications and data are not equally performance-critical to the business. Furthermore, within applications themselves, particularly those reliant upon databases, there is also the opportunity to further diversify the storage make-up.

Both EMC and VMware offer technologies to help ensure that critical applications reside on storage that meets the needs of those applications. These technologies can work independently or, when configured correctly, can complement each other. This chapter addresses the following technologies:

- EMC Fully Automated Storage Tiering (FAST) for VMAX
- EMC FAST for VMAX3/VMAX All Flash
- EMC Virtual LUN
- Federated Live Migration
- Non-Disruptive Migration
- EMC Host I/O Limit
- vSphere Storage I/O Control
- VMware Storage Distributed Resource Scheduler (SDRS)

EMC Fully Automated Storage Tiering (FAST)

Beginning with the release of Enginuity 5874, EMC offers Fully Automated Storage Tiering (FAST) technology. The first incarnation of FAST is known as EMC VMAX Fully Automated Storage Tiering for disk provisioned environments, or FAST DP. With the release of Enginuity 5875, EMC offers the second incarnation of FAST, known as EMC Fully Automated Storage Tiering for Virtual Provisioning environments, or FAST VP.

EMC VMAX FAST and FAST VP automate the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. FAST DP operates on standard VMAX devices; data movements executed between tiers are performed at the full volume level. FAST VP operates on thin devices. As such, data movements can be performed at the sub-LUN level, and a single thin device may have extents allocated across multiple thin pools within the array, or on an external array using Federated Tiered Storage.

This promotion/demotion of the data across these performance tiers is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, by way of thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed non-disruptively, without affecting business continuity and data availability.

Figure 152 depicts how FAST VP technology moves the most active parts of the workloads (hot data) to high-performance disks (in this case flash) and the least-frequently accessed storage (cold data) to lower-cost drives, leveraging the best performance and cost characteristics of each different drive type. FAST delivers higher performance using fewer drives to help reduce acquisition, power, cooling, and footprint costs.

Figure 152 FAST data movement

Because FAST DP and FAST VP support different device types (standard and virtual, respectively), they can both operate simultaneously within a single array. Aside from some shared configuration parameters, the management and operation of each are separate.

Federated Tiered Storage (FTS)

Introduced with Enginuity , Federated Tiered Storage (FTS) allows LUNs that exist on external arrays to be used to provide physical storage for VMAX arrays. The external LUNs can be used as raw storage space for the creation of VMAX devices in the same way internal VMAX physical drives are used. These devices are referred to as eDisks. Data on the external LUNs can also be preserved and accessed through VMAX devices. This allows the use of VMAX Enginuity functionality, such as local replication, remote replication, storage tiering, data management, and data migration, with data that resides on external arrays. Beginning with Enginuity release , FAST VP supports four tiers if one of those tiers is on an external array using FTS.

Note: To utilize FTS on the VMAX 10K platform requires Enginuity release .

FAST benefits

The primary benefits of FAST include:

- Elimination of manually tiering applications when performance objectives change over time.
- Automating the process of identifying data that can benefit from Enterprise Flash Drives or that can be kept on higher-capacity, less expensive SATA drives without impacting performance.
- Improving application performance at the same cost, or providing the same application performance at lower cost. Cost is defined as acquisition (both HW and SW), space/energy, and management expense.
- Optimizing and prioritizing business applications, allowing customers to dynamically allocate resources within a single array.
- Delivering greater flexibility in meeting different price/performance ratios throughout the life-cycle of the information stored.

Due to advances in drive technology, and the need for storage consolidation, the number of drive types supported by VMAX arrays has grown significantly. These drives span a range of storage service specializations and cost characteristics that differ greatly. Several differences exist between the four drive technologies supported by the VMAX Series arrays: Enterprise Flash Drive (EFD), Fibre Channel (FC), Serial Attached SCSI (SAS), and SATA. The primary differences are:

- Response time
- Cost per unit of storage capacity
- Cost per unit of storage request processing

At one extreme are EFDs, which have a very low response time and a low cost per IOPS, but a high cost per unit of storage capacity. At the other extreme are SATA drives, which have a low cost per unit of storage capacity, but high response times and a high cost per unit of storage request processing.

Between these two extremes lie Fibre Channel and SAS drives.

Based on the nature of the differences that exist between these four drive types, the following observations can be made regarding the workload type most suited to each drive:

- Enterprise Flash Drives: EFDs are best suited for workloads that have a high back-end random read storage request density. Such workloads take advantage of both the low response time provided by the drive and the low cost per unit of storage request processing, without requiring a lot of storage capacity.
- SATA drives: SATA drives are suited toward workloads that have a low back-end storage request density.
- Fibre Channel/SAS drives: Fibre Channel and SAS drives are the best drive type for workloads with a back-end storage request density that is not consistently high or low.

This disparity in suitable workloads presents both an opportunity and a challenge for storage administrators. To the degree it can be arranged for storage workloads to be served by the best-suited drive technology, the opportunity exists to improve application performance, reduce hardware acquisition expenses, and reduce operating expenses (including energy costs and space consumption). The challenge, however, lies in how to realize these benefits without introducing additional administrative overhead and complexity.

The approach taken with FAST is to automate the process of identifying which regions of storage should reside on a given drive technology, and to automatically and non-disruptively move storage between tiers to optimize storage resource usage accordingly. This also needs to be done while taking into account optional constraints on tier capacity usage that may be imposed on specific groups of storage devices.

FAST managed objects

There are three main elements related to the use of both FAST and FAST VP on VMAX. These are shown in Figure 153:

- Storage tier: A shared resource with common technologies.

- FAST policy: Manages a set of tier usage rules that provide guidelines for data placement and movement across VMAX tiers to achieve service levels for one or more storage groups.
- Storage group: A logical grouping of devices for common management.

Figure 153 FAST VP managed objects

Each of the three managed objects can be created and managed by using either Unisphere for VMAX (Unisphere) or the Solutions Enabler Command Line Interface (SYMCLI).

FAST VP components

There are two components of FAST VP: the FAST controller and the VMAX microcode, or Enginuity. The FAST controller is a service that runs on the VMAX service processor. The VMAX microcode is a part of the Enginuity operating environment that controls components within the array. When FAST VP is active, both components participate in the execution of two algorithms, the intelligent tiering algorithm and the allocation compliance algorithm, to determine appropriate data placement.

The intelligent tiering algorithm uses performance data collected by Enginuity, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the Virtual LUN (VLUN) VP data movement engine. The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group, also by issuing data movement requests to satisfy capacity compliance.

Performance time windows can be defined to specify when the FAST VP controller should collect performance data, upon which analysis is performed to determine the appropriate tier for devices. By default, this occurs 24 hours a day. Defined data movement windows determine when to execute the data movements necessary to move data between tiers. Data movements performed by the microcode are achieved by moving allocated extents between tiers. The size of a data movement can be as small as 768 KB, representing a single allocated thin device extent, but more typically is an entire extent group, which is 7,680 KB in size.

FAST VP has two modes of operation: Automatic or Off. When operating in Automatic mode, data analysis and data movements occur continuously during the defined data movement windows. In Off mode, performance statistics continue to be collected, but no data analysis or data movements take place. Figure 154 shows the FAST controller operation.

Figure 154 FAST VP components

Note: For more information on FAST VP specifically, please see the technical note FAST VP for EMC VMAX Theory and Best Practices for Planning and Performance.

FAST VP allocation by FAST Policy

A feature for FAST VP beginning with 5876 is the ability for a device to allocate new extents from any thin pool participating in the FAST VP policy. When this feature is enabled, FAST VP will attempt to allocate new extents in the most appropriate tier, based upon performance metrics. If those performance metrics are unavailable, it will default to allocating in the pool to which the device is bound. If, however, the chosen pool is full, regardless of performance metrics, then FAST VP will allocate from one of the other thin pools in the policy. As long as there is space available in one of the thin pools, new extent allocations will be successful.

This feature is enabled at the VMAX array level and applies to all devices managed by FAST VP. The feature cannot, therefore, be applied to some FAST VP policies and not others. By default it is disabled and any new allocations will come from the pool to which the device is bound.

Note: A pinned device is one that is not considered to have performance metrics available, and therefore new allocations will be done in the pool to which the device is bound.

FAST VP coordination

The use of FAST VP with SRDF devices is fully supported; however, FAST VP operates within a single array and therefore only impacts the RDF devices on that array. There is no coordination of the data movement between RDF pairs. Each device's extents move according to the manner in which they are accessed on that array, source or target. For instance, an R1 device will typically be subject to a read/write workload, while the R2 will only experience the writes that are propagated across the link from the R1. Because the reads to the R1 are not propagated to the R2, FAST VP will make its decisions based solely on the writes, and therefore the R2 data may not be moved to the same tier as the R1.

To rectify this problem, EMC introduced FAST VP SRDF coordination. FAST VP coordination allows the R1 performance metrics to be transmitted across the link and used by the FAST VP engine on the R2 array to make promotion and demotion decisions. Both arrays involved with replication must be at the same Enginuity level to take advantage of this feature. FAST VP SRDF coordination is enabled or disabled on the storage group that is associated with the FAST VP policy. The default state is disabled. Note that only the R1 site must have coordination enabled; enabling coordination on the R2 has no impact unless an SRDF swap is performed.

FAST VP SRDF coordination is supported for single, concurrent, and cascaded SRDF pairings in any mode of operation: synchronous, asynchronous, adaptive copy, SRDF/EDP, and SRDF/Star. In the case of cascaded SRDF configurations, the R21 device acts as a relay, transmitting the R1 metrics to the R2 device. In the case of an R22 device, metrics will only be received across the SRDF link that is currently active.

FAST VP storage group reassociate

Beginning with Solutions Enabler v7.4, released with Enginuity 5876, a storage group that is under a FAST VP policy may be reassociated to a new FAST VP policy non-disruptively. With a reassociation, all current attributes set on the association propagate automatically to the new association. For example, if SRDF coordination is set on the original policy, it is automatically set on the new policy upon reassociation.

FAST VP space reclamation and UNMAP with VMware

Space reclamation may be run against a thin device under FAST VP control. However, during the space reclamation process, the sub-LUN performance metrics are not updated, and no data movements are performed. Furthermore, if FAST VP is moving extents on the device, the reclaim request will fail. It is a best practice, therefore, to pin the device before issuing a reclaim.

Space reclamation may also be achieved with the SCSI command UNMAP. Beginning with Enginuity  and VMware vSphere 5.0 U1, EMC offers Thin Provisioning Block Space Reclamation, better known as UNMAP. This feature enables the reclamation of blocks of thin-provisioned LUNs by informing the array that specific blocks are obsolete. Currently UNMAP does not occur automatically: it is performed at the VMFS level as a manual process, using vmkfstools in ESXi 5.0 and 5.1 and using esxcli storage vmfs unmap in ESXi 5.5; however, this process is fully supported on thin devices under FAST VP control.

FAST VP Compression

Enginuity release  introduces compression capabilities for Virtually Provisioned environments (VP compression) that provide customers with 2:1 or greater data compression for very infrequently accessed data. EMC defines infrequently accessed data as that which is not accessed within month-end, quarter-end, or year-end activities, nor accessed as part of full backup processing. Data within a thin pool may be compressed manually for an individual device or group of devices via Solutions Enabler. Alternatively, inactive data may be compressed automatically for thin devices that are managed by FAST VP. FAST VP Compression is enabled at the system level by setting the time to compress parameter to a value between 40 and 400 days. Data that is seen to be inactive for a period of time longer than this parameter is considered eligible for automatic compression. The default value for the time to compress is never. It is important to note that users should leverage compression only if the hosted application can sustain a significantly elevated response time when reading from, or writing to, compressed data.

Note: The VMAX3/VMAX All Flash running HYPERMAX OS 5977 does not support the compression feature.

Automated FAST VP Compression

FAST VP Compression automates VP Compression for thin devices that are managed by FAST VP. This automated compression takes place at the sub-LUN level. Compression decisions are based on the activity level of individual extent groups (120 tracks), similar to promotion and demotion decisions. However, the actual compression itself is performed per thin device extent (12 tracks).

Extent groups that have been inactive on disk for a user-defined period of time, known as the time to compress, are considered eligible for compression. The time to compress parameter can be set between 40 and 400 days. The default time to compress is never, meaning no data will be compressed.

IMPORTANT
Uncompressing data is an extremely expensive task, both in terms of time and resources. It is therefore best to set the time to compress parameter to a value after which the data is not expected to be needed. For example, if a customer runs quarterly reports on financial data, it would not be recommended to set the time to compress to something less than 90 days. The reason is as follows: if only the current month's financial data is accessed except during quarterly reporting, there is a very real chance that a portion of a quarter's data will already be compressed at the time the reports are run. In order for the reports to gain access to the compressed data, it will need to be uncompressed, requiring thin pool space and time along with processing power. The impact would be significant and should therefore be avoided. It is very likely in such environments that year-end reports are also required. In that event, it may be best to set the time to compress parameter to greater than a year.

In order for a thin device extent to be compressed automatically by FAST VP, all of the following criteria must be met:

- The extent must belong to an extent group that has been inactive for a period equal to or greater than the time to compress
- The extent must be located in a thin pool that has been enabled for compression
- The extent must compress by at least 50%

FAST VP Compression and policy compliance

For the purposes of calculating a storage group's compliance with a policy, FAST VP does not take into account any compression that has occurred on any of the devices within the group. Only the logical, allocated capacity consumed by the devices is considered. For example, if 50 GB of data belonging to a storage group was demoted to the SATA tier and subsequently compressed to consume only 20 GB, the compliance calculation for the storage group would be based on the 50 GB consumed within the SATA tier, not the 20 GB actually occupied.

FAST VP Compression and SRDF coordination

If FAST VP SRDF coordination is enabled for a storage group associated with a FAST VP policy, performance metrics collected for an R1 device are transferred to the R2, allowing the R1 workload to influence promotion and demotion decisions on the R2 data. As part of the performance metrics collected, the length of time the data has been inactive is also transferred. This period of inactivity on the R1 data can then influence compression decisions on the corresponding R2 data.

FAST management

Management and operation of FAST are provided by SMC or Unisphere, as well as the Solutions Enabler Command Line Interface (SYMCLI). Also, detailed performance trending, forecasting, alerts, and resource utilization are provided through Symmetrix Performance Analyzer (SPA) or the Performance monitoring option of Unisphere.

EMC FAST for VMAX3/VMAX All Flash

EMC FAST for the VMAX3/VMAX All Flash running HYPERMAX OS 5977 represents the latest improvement to the auto-tiering capability introduced in the VMAX and Enginuity. EMC FAST now dynamically allocates workloads across storage technologies by non-disruptively moving application data to meet stringent Service Level Objectives (SLOs). The internal FAST engine is now inherent to the functioning of the array, optimizing the placement of data automatically 24 hours a day, 7 days a week, 365 days a year. Customers have the option of enabling SLOs, which are tied to different service level tiers and response time criteria. As the VMAX3/VMAX All Flash is a new paradigm, the sections below present how it differs from the VMAX.

SLO/SRP configuration

Unlike the VMAX, the VMAX3/VMAX All Flash comes fully pre-configured from EMC with all disk groups, data (thin) pools, and data devices (TDATs) created. An overview of how these objects are constructed on the array is included below:

- Disk groups: A collection of physical drives within the array that share the same performance characteristics, which are determined by rotational speed (15K, 10K, and 7.2K), technology (e.g. EFD), and capacity.
- Data devices (TDATs): An internal device that is dedicated to providing the actual physical storage used by thin devices. The TDATs in a data pool have a single RAID protection, are fixed size, and each physical disk has at least 16 hypers.
- Data (thin) pools: A collection of data devices of identical emulation and protection type, all of which reside on disks of the same technology type and speed. The disks in a data pool are from the same disk group.
- FAST storage resource pools (SRP): One (default) FAST SRP is pre-configured on the array. The storage resource pool contains all the disk groups on the array, and thus all data pools. When a device is created, it is associated with the storage resource pool and immediately becomes ready. This process is automatic and requires no setup. (More than one SRP can be configured on-site with the assistance of EMC, but EMC strongly recommends using the single, default SRP. Neither Solutions Enabler nor Unisphere for VMAX can make these changes.)

Note: The physical makeup of the default SRP cannot be modified; however, if multiple SRPs are available (EMC involvement is required), the SRP designated as default can be changed.

- Service level objectives (SLOs): Pre-configured performance objectives that have been developed based on applications. Each SLO has a defined response time objective measured from the front end. There are six default service level objectives, shown in Table 4, that are available depending on the type of disks in the array. For each service level objective except Optimized, you can associate the service level objective with a workload type: OLTP (small block I/O), DSS (large block I/O), or none (default). An SLO can be assigned to a storage group to match specific performance requirements; otherwise the storage group automatically inherits the Optimized SLO. The default SLOs are not customizable, nor can additional SLOs be configured. (On VMAX All Flash arrays an SLO is simply known as a Service Level (SL), as there is only a single level, Diamond, and a single disk technology, EFD.)

Table 4 Service Level Objectives

- Diamond (only SL available for VMAX All Flash): Ultra high performance; use case: HPC, latency sensitive; drives required: EFD
- Platinum: Very high performance; use case: mission critical, high rate OLTP; drives required: EFD and (15K or 10K)
- Gold: High performance; use case: very heavy I/O, database logs, data sets; drives required: EFD and (15K or 10K or 7.2K)
- Silver: Price/performance; use case: database data sets, virtual applications; drives required: EFD and (15K or 10K or 7.2K)
- Bronze: Cost optimized; use case: backup, archive, file; drives required: 7.2K and (15K or 10K)
- Optimized (default): Places the most active data on the highest performing storage and the least active on the most cost-effective storage; drives required: any

Within Unisphere, the SLOs also include the average expected response time, as in Figure 155.

Note: When specifying a workload type of DSS, the average expected response time will increase to reflect the impact of larger I/O. The average expected response time for an OLTP workload type does not change. If replication is being used, response times are increased for both OLTP and DSS.

Figure 155 Unisphere for VMAX v8.x Service Level Objectives

The VMAX3/VMAX All Flash array works to maintain the requested quality of service values for all of the various applications by using disk tiering and other controls on the array. A report is available which graphically displays what percentage of a storage group is in compliance with the SLO over a selected period. (SLO compliance information requires registering the array in the Performance settings in Unisphere.) The compliance is shown in terms of what percent of the storage group meets the SLO (stable), is below the SLO (marginal), or is well below the SLO (critical). A sample report is included in Figure 156. When the cursor hovers over the colored bar, the percentage is shown.

Figure 156 SLO Compliance Report

An example of a pre-configured storage environment on the VMAX3 is shown in Figure 157.

Figure 157 Service level provisioning elements

The SRP can also be queried through Solutions Enabler, which displays information about the size, disk groups, and available SLOs, as in Figure 158.
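A rough sketch of that query is shown below (Solutions Enabler 8.x; the array ID is a placeholder and the exact option names should be confirmed in the CLI documentation):

    # Display the Storage Resource Pools on the array, including capacity,
    # the disk groups they contain, and the available service levels
    symcfg -sid 123 list -srp -v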

Figure 158 Viewing the default SRP (SRP_1) in Solutions Enabler v8.x

Unisphere for VMAX provides a Storage Groups Dashboard view, shown in Figure 159, which displays the default SRP and the SLOs along with the SLO compliance.

Figure 159 Unisphere for VMAX v8.x Storage Groups Dashboard

Storage groups

There is an important concept to understand about storage groups and FAST on the VMAX3/VMAX All Flash. The FAST internal engine manages all host-related data on the array; however, that does not necessarily mean a storage group is FAST managed. In order for a storage group to be FAST managed, the user must explicitly provide an SRP and/or an SLO. This is easiest to demonstrate in the Solutions Enabler CLI. Figure 160 shows two symsg commands (reproduced as a sketch following the list below): the first (in blue) explicitly states both an SRP and an SLO, making it a FAST-managed storage group; the second (in red) specifies neither, and therefore is not FAST managed. By default, the second storage group will be in the default SRP and have an SLO of Optimized.

Figure 160 FAST-managed storage group in Solutions Enabler

The following points apply to storage groups on the VMAX3/VMAX All Flash:

- A storage group can be associated with an SRP. This allows the storage group to allocate storage for a device from any pool in the SRP. By default, storage groups are associated with the default SRP but are not explicitly associated.
- A storage group can be associated with an SLO. By default, storage groups are managed by the Optimized SLO, but are not explicitly associated.
- A storage group is considered FAST managed when either an SLO or an SRP, or both, are explicitly associated.
- A thin device may only be in one storage group that is FAST managed.
- Changing the SRP associated with a storage group will result in all data being migrated to the new SRP.
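Because Figure 160 is a screen capture, the two commands are reproduced here only as a sketch. The array ID and group names are placeholders, and the syntax is based on Solutions Enabler 8.x as recalled, so it should be verified before use.

    # Explicitly FAST managed: both an SRP and an SLO are stated
    symsg -sid 123 create FAST_Managed_SG -srp SRP_1 -slo Diamond

    # Not FAST managed: no SRP or SLO specified; the group still resides in
    # the default SRP with the Optimized SLO, but is not explicitly associated
    symsg -sid 123 create Default_SG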

SLO example

The process by which one can provision storage to a VMware environment to take advantage of multiple SLO levels is fairly straightforward. Below is an example of how to use Unisphere to accomplish this. In this example the ESXi hosts have been zoned to the VMAX3/VMAX All Flash array, and both an initiator group and a port group have been created.

Environment

The following devices are requested for the VMware environment in this example:

- 1 TB for an OLTP application requiring ~3.0 ms response time
- 500 GB for an OLTP application requiring ~9 ms response time
- 2 TB for a DSS archive application
- 200 GB for a new application with unknown response time requirements

The storage administrator starts the process from the Storage Groups Dashboard in Unisphere and selects Provision Storage to Host, highlighted in Figure 161.

Figure 161 Unisphere for VMAX v8.x Provision Storage to Host

The next screen is the storage provisioning screen. Here the storage administrator (SA) is able to review the requirements of the VMware administrator and select the correct SLO for each application. For each application, the SA categorizes the need based on the SLO definitions seen in Figure 155. Here is how the SA planned the provisioning:

- 1 TB for an OLTP application requiring ~3.0 ms response time: the Platinum SLO with a workload type of OLTP
- 500 GB for an OLTP application requiring ~9 ms response time: the Silver SLO with a workload type of OLTP
- 2 TB for a DSS archive application: the Bronze SLO with a workload type of DSS
- 200 GB for a new application with unknown response time requirements: as this is a new application and the response time requirement is unknown, the SA follows the EMC recommendation of using the Optimized SLO with no defined workload type, which places the most active data on the highest-performing storage and the least active on the most cost-effective storage. The SA knows that as the requirements of the new application become clear, the SLO can be changed on the fly to meet those needs.

The above is easily accomplished using drop-down boxes and the Add Service Level button. Unisphere automatically assigns each child storage group a name after the parent group is provided. The completed storage screen is demonstrated in Figure 162.

Figure 162 Unisphere for VMAX v8.x Storage group and SLO assignment

The wizard follows with the selection of the initiator group, the port group, and finally the naming of the masking view. These are shown in Figure 163.

Figure 163 Unisphere for VMAX v8.x Initiator group, port group, and masking view

Once the job is finished, the detail view of the parent storage group within Unisphere shows the SLO selected for each child storage group, as seen in Figure 164.

Figure 164 Unisphere for VMAX v8.x Parent and child groups and SLOs

The SA can now pass control to the VMware administrator, who creates each datastore on the newly provisioned storage. In Figure 165 the datastores have been named according to their SLO for ease of recognition, though because the SLO can be changed dynamically, customers may choose to name their datastores without an SLO designation.

Figure 165 VMware datastores representing multiple SLOs

As mentioned, what is so powerful about this SLO model is that if the business requirements of the applications residing in those datastores change, the SA can adjust the SLO on the fly for the VMware administrator. This can be done in one of two ways: either by modifying the SLO of the storage group containing the device(s), as seen in Figure 166, or by moving the individual device to a new storage group with the desired SLO, as shown in Figure 167.

Figure 166 Unisphere for VMAX v8.x Modifying SLO

Figure 167 Unisphere for VMAX v8.x Moving device to new SLO

It is clear from this example that the integration of EMC FAST with SLOs is transparent to the VMware administrator, yet provides a powerful and easy-to-use methodology for placing applications on datastores according to their performance requirements.
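For administrators who prefer the command line, the same provisioning flow can be approximated with Solutions Enabler. This is only a sketch under assumed names: the storage group, view, initiator group, and port group names are placeholders, and the cascaded storage group and symaccess options should be checked against the documentation for the installed release.

    # Child storage groups, each explicitly associated with an SLO
    symsg -sid 123 create VMware_Platinum_SG -srp SRP_1 -slo Platinum
    symsg -sid 123 create VMware_Silver_SG -srp SRP_1 -slo Silver
    symsg -sid 123 create VMware_Bronze_SG -srp SRP_1 -slo Bronze
    symsg -sid 123 create VMware_Optimized_SG

    # Parent (cascaded) storage group containing the children
    symsg -sid 123 create VMware_Parent_SG
    symsg -sid 123 -sg VMware_Parent_SG add sg VMware_Platinum_SG,VMware_Silver_SG,VMware_Bronze_SG,VMware_Optimized_SG

    # Masking view tying the parent group to the existing initiator and port groups
    symaccess -sid 123 create view -name VMware_MV -sg VMware_Parent_SG -ig VMware_IG -pg VMware_PG

Devices are then added to the appropriate child group before the VMware administrator rescans the ESXi hosts and creates the datastores.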

EMC Virtual LUN migration

This feature offers system administrators the ability to transparently migrate host-visible LUNs between the differing tiers of storage available in the VMAX. The storage tiers can represent differing hardware capability as well as differing tiers of protection. The LUNs can be migrated either to unallocated space (also referred to as unconfigured space) or to configured space, which is defined as existing VMAX LUNs that are not currently assigned to a server (existing, not-ready volumes within the same subsystem). The data on the original source LUNs is cleared using instant VTOC once the migration has been deemed successful. The migration does not require swap or DRV space, and is non-disruptive to the attached hosts and to other internal VMAX applications such as TimeFinder and SRDF.

IMPORTANT
The VMAX3/VMAX All Flash does not have a concept of Virtual LUN migration.

Figure 168 shows the valid combinations of drive types and protection types that are available for migration.

Figure 168 Virtual LUN eligibility tables

The device migration is completely transparent to the host on which an application is running, since the operation is executed against the VMAX device. Thus, the target and LUN number are not changed and applications are uninterrupted. Furthermore, in SRDF environments, the migration does not require customers to re-establish their disaster recovery protection after the migration.

The Virtual LUN feature leverages the virtual RAID architecture introduced in Enginuity 5874, which abstracts device protection from its logical representation to a server. This powerful approach allows a device to have multiple simultaneous protection types, such as BCVs, SRDF, Concurrent SRDF, and spares. It also enables seamless transition from one protection type to another while servers and their associated applications and VMAX software are accessing the device.

The Virtual LUN feature offers customers the ability to effectively utilize SATA storage, a much cheaper, yet reliable, form of storage. It also facilitates fluid movement of data across the various storage tiers present within the subsystem: the realization of true tiered storage in the box. Thus, VMAX becomes the first enterprise storage subsystem to offer a comprehensive "tiered storage in the box" ILM capability that complements the customer's tiering initiatives. Customers can now achieve varied cost/performance profiles by moving lower-priority application data to less expensive storage or, conversely, moving higher-priority or critical application data to higher-performing storage as their needs dictate.

Specific use cases for customer applications enable the moving of data volumes transparently from tier to tier based on changing performance requirements (moving to faster or slower disks) or availability requirements (changing RAID protection on the array). This migration can be performed transparently, without interrupting the applications or host systems utilizing the array volumes, and with only a minimal impact to performance during the migration.

VLUN VP

Beginning with Enginuity 5875, the Virtual LUN feature supports thin-to-thin migrations. Known as VLUN VP (VLUN), thin-to-thin mobility enables users to meet tiered storage requirements by migrating thin FBA LUNs between virtual pools in the same array. Virtual LUN VP mobility gives administrators the option to re-tier a thin volume or set of thin volumes by moving them between thin pools in a given FAST configuration. This manual override option helps FAST users respond rapidly to changing performance requirements or unexpected events.

Virtual LUN VP migrations are session-based; each session may contain multiple devices to be migrated at the same time. There may also be multiple concurrent migration sessions. At the time of submission, a migration session name is specified. This session name is subsequently used for monitoring and managing the migration. While an entire thin device is specified for migration, only thin device extents that are allocated will be relocated. Thin device extents that have been allocated but not written to (for example, pre-allocated tracks) will be relocated but will not cause any actual data to be copied. New extent allocations that occur as a result of a host write to the thin device during the migration will be satisfied from the migration target pool.

The advances in VLUN enable customers to move VMAX thin devices from one thin pool to another thin pool on the same VMAX without disrupting user applications and with minimal impact to host I/O. Beginning with 5876, VLUN VP users have the option to move only part of a thin device from one source pool to one destination pool. Users may move all or part of a thin device between thin pools to:

- Change the disk media on which the thin devices are stored
- Change the thin device RAID protection level
- Consolidate a thin device that is/was managed by FAST VP to a single thin pool
- Move all extents from a thin device that are in one thin pool to another thin pool

Note: If a thin device has extents in more than one thin pool, the extents in each thin pool can be moved independently to the same or different thin pools. There is no way, however, to relocate part of a thin device when all its extents reside in a single thin pool.

In a VMware environment a customer may have any number of use cases for VLUN. For instance, if a customer elects not to use FAST VP, manual tiering is achievable through VLUN. A heavily used datastore residing on SATA drives may require improved performance; the thin device underlying that datastore could be manually moved to EFD. Conversely, a datastore residing on FC that houses data needing to be archived could be moved to a SATA device which, although a lower-performing disk tier, has a much smaller cost per GB.

In a FAST VP environment, customers may wish to circumvent the automatic process in cases where they know all the data on a thin device has changed its function. Take, for instance, the previous example of archiving. A datastore that contains information that has now been designated as archive could be removed from FAST VP control and then migrated over to the thin pool that is comprised of the disk technology most suited for archived data, SATA.

Note: When using VLUN VP with FAST VP, the destination pool must be part of the FAST VP policy.

Note: The Storage Tiering for VMware Environments Deployed on EMC Symmetrix VMAX with Enginuity 5876 White Paper provides detailed examples and a specific use case on using FAST VP in a VMware environment.

VLUN migration with Unisphere for VMAX example

A VLUN migration can be performed using the symmigrate command with EMC Solutions Enabler 7.0 and higher, with the Migrate Storage Locally wizard available in the Symmetrix Management Console, or with the VLUN Migration wizard in Unisphere for VMAX. The following example uses Unisphere for VMAX to migrate a thin LUN. Within Unisphere, the VLUN Migration option, as seen in Figure 169, appears in the extended menu in a number of places.

Figure 169 VLUN Migration option in Unisphere for VMAX menu

The user, for instance, may find it easier to access it from the Storage Groups section of Unisphere, as the LUNs are grouped together there. However, the menu is also available from the Volumes section. This location is used in this example, with the assumption that the LUN has not been presented to a host.

1. Start by navigating the Unisphere menu to Storage > Volumes > TDEV and highlighting the thin device to be migrated, as in Figure 170.

Figure 170 Selecting thin device in Unisphere for VMAX

2. Now select the double-arrow at the bottom of the screen and select VLUN Migration from the menu, as shown in Figure 171.

Figure 171 Selecting thin device and VLUN Migration menu in Unisphere for VMAX

3. Once the wizard is launched, enter the following information, as shown in Figure 172:
   - A session name (no spaces). The name associated with this migration can be used to interrogate the progress of the migration. It can be anything.
   - The pool to migrate to, in this case EFD_Pool. Note that the wizard allows you to select the same pool the thin device is currently bound to. This is because, if the device has extents in other pools, the user may wish to bring them all back to the original pool. There is an advanced option, boxed in blue in the figure, which permits the user to migrate only a portion of the LUN. If the device does have extents in multiple pools, using the advanced option moves only the extents from the chosen pool into the target pool.
   - Using the radio buttons, select whether to pin the volumes. This is specifically for FAST VP. Note that during a VLUN migration, FAST VP will not move extents, regardless of whether the device is pinned; however, upon termination of the migration session it can move extents, so it is important to pin the device to prevent that.
   - Select OK to start the migration.

Figure 172 Inputting information for the VLUN migration in Unisphere for VMAX

4. After the session is submitted, navigate to Data Protection > Migration to see the active session. The status shows SyncInProg while the migration is active. Once the status shows Migrated, as in Figure 173, the session can be terminated. Note that the session will not terminate on its own.

Figure 173 Terminating a VLUN Migration session in Unisphere for VMAX

5. Returning to Storage > Volumes > TDEV, the thin device now shows the new thin pool, as seen in Figure 174.

Figure 174 Migrated thin device in new pool as seen in Unisphere for VMAX
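The equivalent operation from the command line uses symmigrate. The sketch below assumes a storage group named App_SG and a target pool named EFD_Pool; the exact establish options, particularly how the target pool is specified, vary by Solutions Enabler release and should be confirmed in the CLI guide before use.

    # Create and start the migration session (the session name is arbitrary)
    symmigrate -sid 123 -name EFD_mig -sg App_SG establish -tgt_pool -pool EFD_Pool

    # Monitor progress; the session moves from SyncInProg to Migrated
    symmigrate -sid 123 -name EFD_mig query

    # The session does not terminate on its own; clean it up once migrated
    symmigrate -sid 123 -name EFD_mig verify -migrated
    symmigrate -sid 123 -name EFD_mig terminate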

Federated Live Migration

Migrating a virtual machine that is comprised solely of virtual disks is a simple task because of the VMware Storage vMotion technology, which is especially useful when moving between arrays. Migrating a VM that uses Raw Device Mappings from one array to another, however, is a little more complicated, especially if it must be done non-disruptively. VMware has the ability to convert an RDM into a virtual disk, but that might not be feasible for applications that still require RDMs. There are other options, such as host-based/in-guest mechanisms to copy the data from one device to another, but these can be complex and require special drivers or even possibly downtime to complete. To solve this issue for physical hosts using a VMAX, EMC developed a feature called Federated Live Migration.

IMPORTANT
The VMAX3 running HYPERMAX OS 5977 does not have the Federated Live Migration feature, nor is FLM supported with vSphere 6.

Federated Live Migration (FLM) allows the migration of a device from one VMAX array to another VMAX array without any downtime to the host, while at the same time not affecting the SCSI inquiry information of the original source device. Therefore, even though the device and data now reside on a completely different VMAX array, the host is unaware of this transition. FLM accomplishes this by leveraging the EMC Open Replicator functionality to migrate the data. In the FLM release, support is extended to VMware ESXi 5 VMFS and RDM devices.

In order to use FLM with VMware there are a few prerequisites that must be met:

1. The source array must be running Enginuity 5773 or higher. If running 5773 code, it must be at the required minimum version and fix level.
2. The target array must be at the required minimum Enginuity level.
3. The ESXi host must be 5.0 U1 or later and requires the appropriate VMware patch (PR) for proper functioning.
4. The source device must be configured to use VMware NMP Round Robin or PP/VE at the correct supported version (a command-line check is sketched at the end of this section).
5. In order to initiate FLM, Solutions Enabler version 7.6 is required. Unisphere also offers an easy-to-use wizard for creating and managing FLM sessions for VMware environments.

FLM permits the source device to be thick and the target to be thin. This allows for a non-disruptive way not only to migrate a device from a traditionally thick array to a fully thin array like the VMAX 10K, but also to immediately leverage the benefits of VMAX Virtual Provisioning, such as FAST VP, post-migration. In addition, FLM offers the ability to run zero detection during the migration so that any large chunks of contiguous zeros that are detected are not copied to the target device, which is especially valuable when migrating to a thin device. Furthermore, the target device can be either the same size as or larger than the source device.

For detailed instructions and additional information on using FLM, refer to the Symmetrix Procedure Generator or the FLM Technote found here: hnical-overview-technical-notes.pdf
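Prerequisite 4 above, NMP Round Robin on the source device, can be checked and set from the ESXi command line. The device identifier below is a placeholder:

    # Show the current path selection policy for the device
    esxcli storage nmp device list --device=naa.60000970000195701234533030303546

    # Set Round Robin on that device
    esxcli storage nmp device set --device=naa.60000970000195701234533030303546 --psp=VMW_PSP_RR

    # Or make Round Robin the default policy for all devices claimed by the
    # Symmetrix SATP (applies to devices claimed after the change)
    esxcli storage nmp satp set --satp=VMW_SATP_SYMM --default-psp=VMW_PSP_RR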

Non-Disruptive Migration

Non-Disruptive Migration (NDM) is a newer migration technology that, like FLM, provides a method for migrating data from a source array to a target array. NDM is specifically available for moving from a VMAX array to a VMAX3/VMAX All Flash array. NDM works across a metro distance, typically within a data center, without application host downtime. NDM requires a VMAX array running Enginuity 5876 with a required ePack (source array), and an array running HYPERMAX OS 5977 or higher (target array). NDM utilizes Symmetrix Remote Data Facility (SRDF) as its method of migration, and therefore both the source and target arrays must be configured with SRDF before a migration is possible.

The NDM operations involved in a typical migration are:

- Environment setup: Configures the source and target array infrastructure for the migration process.
- Create: Duplicates the application storage environment from the source array to the target array.
- Cut-over: Switches the application data access from the source array to the target array and duplicates the application data on the source array to the target array.
- Commit: Removes application resources from the source array and releases the resources used for migration. The application permanently runs on the target array.
- Environment remove: Removes the migration infrastructure created by the environment setup.

Some key features of NDM are:

- Simple process for migration:
  1. Select the storage group to migrate.
  2. Create the migration session.
  3. Discover paths to the host.
  4. Cut over the storage group to the VMAX3 or VMAX All Flash array.
  5. Monitor for synchronization to complete.
  6. Commit the migration.

- Allows for inline compression on the VMAX All Flash array during migration.
- Maintains snapshot and disaster recovery relationships on the source array, though these are not migrated.
- Allows for a non-disruptive revert to the source array.
- Allows up to 16 concurrent migration sessions.
- Requires no license, since it is part of HYPERMAX OS.
- Requires no additional hardware in the data path.

NDM is fully supported with VMware environments and may aid migrations of larger environments, providing a viable alternative to Storage vMotion.

279 Data Placement and Performance in vsphere EMC Host I/O Limit Beginning with Enginuity release , EMC provides support for the setting of the Host I/O Limits. This feature allows users to place limits on the front-end bandwidth and IOPS consumed by applications on VMAX systems. Limits are set on a per-storage-group basis. As users build masking views with these storage groups, limits for maximum front-end IOPS or MB/s are evenly distributed across the directors within the associated masking view. The bandwidth and IOPS are monitored by the VMAX system to ensure that they do not exceed the user-specified maximum. The benefits of implementing Host I/O Limit are: Ensures that applications cannot exceed their set limit, reducing the potential of impacting other applications Enforces performance ceilings on a per-storage-group basis Ensures applications do not exceed their allocated share, thus reducing the potential of impacting other applications Provides greater levels of control on performance allocation in multi-tenant environments Enables predictability needed to service more customers on the same array Simplifies quality-of-service management by presenting controls in industry-standard terms of IOPS and throughput Provides the flexibility of setting either IOPS or throughput or both, based on application characteristics and business needs Manages expectations of application administrators with regard to performance and allows customers to provide incentives for their users to upgrade their performance service levels Host I/O Limits requires (minimum) both Enginuity and Solutions Enabler v7.5 or HYPERMAX OS 5977 and Solutions Enabler 8.x. To configure the Host I/O Limits feature on a set of devices, a Host I/O Limit is added to a storage group. A provisioning view is then created using that storage group. This causes the Host I/O Limit defined on the storage group to be applied to the devices in the EMC Host I/O Limit 277

storage group for the ports defined in the port group, as well as any child storage group(s). The Host I/O Limit for a storage group can be either active or inactive. Beginning with Enginuity 5876 and Solutions Enabler v7.6, Host I/O Limits support cascaded quotas, such that a child storage group can have a different quota than the parent group.

Note: In most cases, the total Host I/O Limit may only be achieved with proper host load balancing between directors (by multipathing software on the hosts, such as PowerPath/VE).

The Host I/O Limit can be set using the SYMCLI command symsg or Unisphere v1.5. Figure 175 demonstrates the use of SYMCLI to set the Host I/O Limit; a hedged command-line sketch also appears at the end of this section.

Figure 175 Setting the Host I/O Limit through SYMCLI v7.5

Using Unisphere v1.6, navigate to the storage group, select it, and click the double-arrow at the bottom of the screen to display the option to set the Host I/O Limit. In this case, seen in Figure 176 on page 279, the check box is selected for MB/Sec. Run the job to complete the change.

281 Data Placement and Performance in vsphere Figure 176 Setting the Host I/O Limit through Unisphere v1.6 Using Unisphere v8.x, navigate to the Storage Groups Dashboard and double-click on one of the storage groups. This will bring up the properties of the storage group. Fill in the desired limit, by MB/sec or I/O/sec and hit Apply as seen in Figure 177 on page 280. EMC Host I/O Limit 279

282 Data Placement and Performance in vsphere Figure 177 Setting the Host I/O Limit through Unisphere v8.x For more information about Host I/O Limit please refer to the Technical Note Host I/O Limits for Symmetrix VMAX Family Arrays on support.emc.com. 280 Using EMC VMAX Storage in VMware vsphere Environments
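For readers who prefer the command line, the following is a minimal sketch of setting a Host I/O Limit on a storage group with symsg. The array ID and storage group name are placeholders, and the option spellings (-iops_max and -bw_max) are written from memory rather than taken from this document; treat them as assumptions and verify them against the symsg manual page or the Host I/O Limits Technical Note for your Solutions Enabler release.

    # Illustrative only -- confirm option names for your Solutions Enabler release.
    # Cap the storage group at 1,000 front-end IOPS:
    symsg -sid 1234 -sg App1_SG set -iops_max 1000

    # Alternatively (or additionally), cap front-end bandwidth at 100 MB/s:
    symsg -sid 1234 -sg App1_SG set -bw_max 100

    # Review the storage group to confirm the limit is in place:
    symsg -sid 1234 show App1_SG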

283 Data Placement and Performance in vsphere vsphere Storage I/O Control Storage I/O Control is a capability in vsphere 4.1 and higher that allows cluster-wide storage I/O prioritization. Storage I/O Control extends the constructs of shares and limits to handle storage I/O resources in other words I/O DRS. The I/O control is enabled on the datastore and the prioritization is set at the virtual machine (VM) level. VMware claims that this allows better workload consolidation and helps reduce extra costs associated with over-provisioning. The idea is that you can take many different applications, with disparate I/O requirements, and store them on the same datastore. If the datastore (that is the underlying LUN) begins to provide degraded performance in terms of I/O response, VMware will automatically adjust a VM s I/O share based upon user-defined levels. This ensures that those VMs running more important applications on a particular datastore get preference over VMs that run less critical applications against that datastore. When one enables Storage I/O Control on a datastore, ESXi begins to monitor the device latency that hosts observe when communicating with that datastore. When device latency exceeds a threshold (currently defaults to 30 milliseconds), the datastore is considered to be congested and each virtual machine that accesses that datastore is allocated I/O resources in proportion to their shares as assigned by the VMware administrator. Note: Storage I/O Control is fully supported to be activated on volumes also under FAST VP/FAST control. Configuring Storage I/O Control is a two-step process as it is disabled by default: 1. Enable Storage I/O Control on the datastore. 2. Set the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each VM. By default, all virtual machine shares are set to Normal (1000) with unlimited IOPS. vsphere Storage I/O Control 281

284 Data Placement and Performance in vsphere Enable Storage I/O Control The first step is to enable Storage I/O Control on the datastore. Figure 178 illustrates how the datastore entitled STORAGEIO_DS is enabled for Storage I/O Control. Simply select the Properties link from the Configuration tab of the datastore and one will be presented with the Properties box where there is a check box to enable the Storage I/O Control feature. The vsphere Web Client equivalent is seen in Figure 179 on page 283. Figure 178 Enabling Storage I/O Control on a datastore in the vsphere Client 282 Using EMC VMAX Storage in VMware vsphere Environments

285 Data Placement and Performance in vsphere Figure 179 Enabling Storage I/O Control on a datastore in the vsphere Web Client vsphere Storage I/O Control 283

286 Data Placement and Performance in vsphere The configuration change will show up as a Recent Task and once complete the Configuration page will show the feature as enabled as seen in Figure 180 in the vsphere Client and Figure 181 on page 285 in the vsphere Web Client. Figure 180 Storage I/O Control enabled on datastore in the vsphere Client 284 Using EMC VMAX Storage in VMware vsphere Environments

287 Data Placement and Performance in vsphere Figure 181 Storage I/O Control enabled on datastore in the vsphere Web Client Storage I/O Control resource shares and limits The second step in the setup of Storage I/O Control is to allocate the number of storage I/O shares and upper limit of I/O operations per second (IOPS) allowed for each virtual machine. When storage I/O congestion is detected for a datastore, the I/O workloads of the virtual machines accessing that datastore are adjusted according to the proportion of virtual machine shares each VM has been allocated. Shares Storage I/O shares are similar to those used for memory and CPU resource allocation. They represent the relative priority of a virtual machine with regard to the distribution of storage I/O resources. Under resource contention, VMs with higher share values have greater access to the storage array, which typically results in higher throughput and lower latency. There are three default values for vsphere Storage I/O Control 285

shares: Low (500), Normal (1000), or High (2000), as well as a fourth option, Custom. Choosing Custom allows a user-defined amount to be set.

IOPS
In addition to setting the shares, one can limit the IOPS that are permitted for a VM. 1 By default, the IOPS are unlimited, as they have no direct correlation with the shares setting. If one prefers to limit based upon MB per second, it is necessary to convert to IOPS using the typical I/O size for that VM. For example, to restrict a backup application that issues 64 KB I/Os to 10 MB per second, a limit of 160 IOPS should be set (10 MB/s = 10,240 KB/s, and 10,240 / 64 = 160).

One should allocate storage I/O resources to virtual machines based on priority by assigning a relative amount of shares to the VM. Unless virtual machine workloads are very similar, shares do not necessarily dictate allocation in terms of I/O operations or MBs per second. Higher shares allow a virtual machine to keep more concurrent I/O operations pending at the storage device or datastore compared to a virtual machine with lower shares. Therefore, two virtual machines might experience different throughput based on their workloads.

Setting Storage I/O Control resource shares and limits
The screen to edit shares and IOPS can be accessed from a number of places within the vSphere Client. Perhaps the easiest is to select the Virtual Machines tab of the datastore on which Storage I/O Control has been enabled. In Figure 182 on page 287 there are two VMs listed for the datastore STORAGEIO_DS: STORAGEIO_VM1 and STORAGEIO_VM2. To set the appropriate SIOC policy, the settings of the VM need to be changed. This can be achieved by right-clicking on the chosen VM and selecting Edit Settings.

1. For information on how to limit IOPS at the storage level see EMC Host I/O Limit on page 277.

289 Data Placement and Performance in vsphere Figure 182 Edit settings for VM for SIOC Within the VM Properties screen shown in Figure 183 on page 288, select the Resources tab. One will notice that this tab is also where the CPU and memory resources can be allocated in a DRS cluster. Choose the Disk Setting in the left-hand panel and notice that for each hard disk on the right the default of Normal for Shares and Unlimited for IOPS is present. vsphere Storage I/O Control 287

290 Data Placement and Performance in vsphere Figure 183 Default VM disk resource allocation Using the drop-down boxes within the Shares and IOPS columns, one can change the values. Figure 184 on page 289 shows that the shares for Hard disk 2 are changed to Low, while the IOPS value has been set to 160. Selecting the OK button at the bottom of the VM Properties screen commits the changes. The new values are reflected in the Virtual Machines tab, as shown in Figure 184 on page Using EMC VMAX Storage in VMware vsphere Environments

291 Data Placement and Performance in vsphere Figure 184 Changing VM resource allocation in thick client Similar changes may be made in the vsphere Web Client in Figure 185 on page 290. vsphere Storage I/O Control 289

292 Data Placement and Performance in vsphere Figure 185 Changing VM resource allocation in vsphere Web Client Changing Congestion Threshold for Storage I/O Control It is possible to change the Congestion Threshold, i.e. the value at which VMware begins implementing the shares and IOPS settings for a VM. The current default value is 30 ms. There may be certain cases that necessitate altering the value. If, for instance, the datastore is backed by an EFD LUN and the response time requirement for an application is lower than 30 milliseconds; or conversely if the datastore is backed by a SATA LUN and the response time requirement is more flexible. In these cases it might be beneficial to alter the threshold. In Figure 186 the location to change the Congestion Threshold is shown. Following the same process as when enabling Storage I/O Control ( Enable Storage I/O Control on page 282), enter into the 290 Using EMC VMAX Storage in VMware vsphere Environments

293 Data Placement and Performance in vsphere Properties of the datastore. Select the Advanced button next to the Enabled check box. Immediately, the user is presented with a warning message about changing the setting. Select OK. The Congestion Threshold value can then be manually changed. Dynamic Threshold Figure 186 Changing the congestion threshold Beginning with vsphere 5.1, VMware will dynamically determine the best congestion threshold for each device, rather than using the default 30 ms value. VMware does this by determining the peak latency value of a device and setting the threshold of that device to 90% of that value. For example a device that peaks at a 10 ms will have its threshold set to 9 ms. If the user wishes to change the percent amount, that can be done within the vsphere Web Client (note that the thick client does not support this feature of SIOC). Figure 187 on page 292 demonstrates how the percentage can be changed to anything between 50 and 100%. Alternatively a manual millisecond threshold can be set between 5 and 100 ms. EMC recommends vsphere Storage I/O Control 291

294 Data Placement and Performance in vsphere allowing VMware to dynamically control the congestion threshold, rather than using a manual millisecond setting, though a different percent amount is perfectly acceptable. Figure 187 Storage I/O Control dynamic threshold setting Storage I/O Control requirements The following requirements and limitations apply to Storage I/O Control: 292 Using EMC VMAX Storage in VMware vsphere Environments

295 Data Placement and Performance in vsphere Datastores that are Storage I/O Control-enabled must be managed by a single vcenter Server system. Storage I/O Control is supported on Fibre Channel-connected, iscsi-connected, and NFS-connected storage (only in vsphere 5 and 6). Raw Device Mapping (RDM) is not supported. Storage I/O Control does not support datastores with multiple extents. Congestion Threshold has a limiting range of 10 to 100 milliseconds. Note: EMC does not recommend making changes to the Congestion Threshold. Spreading I/O demands Storage I/O Control was designed to help alleviate many of the performance issues that can arise when many different types of virtual machines share the same VMFS volume on a large SCSI disk presented to the VMware ESXi hosts. This technique of using a single large disk allows optimal use of the storage capacity. However, this approach can result in performance issues if the I/O demands of the virtual machines cannot be met by the large SCSI disk hosting the VMFS, regardless if SIOC is being used. In order to prevent these performance issues from developing, no matter the size of the LUN, EMC recommends using VMAX Virtual Provisioning (VP) on VMAX (VMAX3 and VMAX All Flash use VP by default), which will spread the I/O over a large pool of disks. A detailed discussion of Virtual Provisioning can be found in EMC VMAX Virtual Provisioning overview on page 185. SIOC and EMC Host I/O Limit Because SIOC and EMC Host I/O Limits can restrict the amount of I/O either by IOPS per second or MB per second, it is natural to question their interoperability. The two technologies can be utilized together, though the limitations around SIOC reduce its benefits as compared to Host I/O Limits. There are a few reasons for this. First, SIOC can only work against a single datastore that is comprised of a single extent, in other words one LUN. Therefore only the VMs on that datastore will be subject to any Storage I/O Control and only vsphere Storage I/O Control 293

when the response threshold is breached. In other words, it is not possible to set up any VM hierarchy of importance if the VMs do not reside on the same datastore. In contrast, Host I/O Limits work against the storage group as a whole, which typically comprises multiple devices. In addition, child storage groups can be utilized so that individual LUNs can be singled out for a portion of the total limit. This means that a VM hierarchy can be established across datastores, because each device can have a different Host I/O Limit but still be in the same storage group and thus bound by the shared parental limit.

The other problem to consider with SIOC is the type and number of disks. SIOC is set at the vmdk level, and therefore if there is more than one vmdk per VM, each one requires a separate I/O limit. Testing has shown this makes it extremely difficult to achieve an aggregate IOPS number unless the access patterns of the application are known so well that the number of I/Os to each vmdk can be determined. For example, if the desire is to allow 1000 IOPS for a VM with two vmdks, it may seem logical to limit each vmdk to 500 IOPS. However, if the application generates two I/Os on one vmdk for every one I/O on the other vmdk, the most IOPS that can be achieved is 750, as one vmdk hits the 500 IOPS limit (the arithmetic is sketched at the end of this section). If Raw Device Mappings are used in addition to vmdks, the exercise becomes impossible as they are not supported with SIOC.

This is not to say, however, that SIOC cannot be useful in the environment. If the VMs that require throttling will always share the same datastore, then using SIOC would be the only way to limit I/O among them in the event there was contention. In such an environment, a Host I/O Limit could be set for the device backing the datastore so that the VM(s) that are not throttled could generate IOPS up to another limit if desired.

Note: As the VMAX limits the I/O when using Host I/O Limits, unless SIOC is set at the datastore level, individual VMs on a particular device will be free to generate their potential up to the Host I/O Limit, regardless of latency.
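A quick way to see why the aggregate tops out at 750 IOPS in the example above is to work the numbers directly. The shell arithmetic below simply illustrates that calculation; the per-vmdk limits and the 2:1 access ratio are the hypothetical values from the example.

    # Each vmdk is limited to 500 IOPS; the application issues 2 I/Os to vmdk A
    # for every 1 I/O to vmdk B. Once vmdk A hits its 500 IOPS cap, vmdk B can
    # only run at half that rate (250 IOPS), so the VM tops out at 750 IOPS.
    echo $(( 500 + 500 / 2 ))    # prints 750, not the intended 1000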

297 Data Placement and Performance in vsphere VMware Storage Distributed Resource Scheduler (SDRS) Datastore clusters Beginning with vsphere 5, a new VMware vcenter object called a datastore cluster was introduced. A datastore cluster is what Storage DRS acts upon. A datastore cluster is a collection of datastores with shared resources and a shared management interface. Datastore clusters are to datastores what clusters are to hosts. When you create a datastore cluster, you can use vsphere Storage DRS to manage storage resources. When a datastore cluster is created, Storage DRS can manage the storage resources comparably to how vsphere DRS manages compute resources in a cluster. As with a cluster of hosts, a datastore cluster is used to aggregate storage resources, enabling smart and rapid placement of new virtual machines and virtual disk drives as well as load balancing of existing workloads. It is important to note that Storage DRS does not have to be enabled on a datastore cluster, when Storage DRS is not enabled, a datastore cluster is essentially just a folder to group datastores. Storage DRS provides initial placement and ongoing balancing recommendations to datastores in a Storage DRS-enabled datastore cluster. Initial placement occurs when Storage DRS selects a datastore within a datastore cluster on which to place a virtual machine disk. This happens when the virtual machine is being created or cloned, when a virtual machine disk is being migrated to another datastore cluster, or when you add a disk to an existing virtual machine. Initial placement recommendations are made in accordance with space constraints and with respect to the goals of space and I/O load balancing. Figure 188 on page 296 demonstrates the options available for either manual or automated control of different features for a datastore cluster. VMware Storage Distributed Resource Scheduler (SDRS) 295

298 Data Placement and Performance in vsphere Figure 188 Datastore Cluster creation These goals aim to minimize the risk of over-provisioning one datastore, storage I/O bottlenecks, and performance impact on virtual machines. It is particularly useful when VMs use thin vmdks. Storage DRS is invoked at the configured frequency (by default, every eight hours) or when one or more datastores in a datastore cluster exceeds the user-configurable space utilization or I/O latency thresholds. When Storage DRS is invoked, it checks each datastore's space utilization and I/O latency values against the threshold. For I/O latency, Storage DRS uses the 90th percentile I/O latency measured over the course of a day to compare against the threshold. Storage DRS applies the datastore utilization reporting mechanism of VMware vcenter Server, to make recommendations whenever the configured utilized space threshold is exceeded. Storage DRS will calculate all possible moves, to balance the load accordingly while considering the cost and the benefit of the migration. Storage DRS considers moving virtual machines that are powered off or powered on for space balancing. Storage DRS includes powered-off virtual machines with snapshots in these considerations. 296 Using EMC VMAX Storage in VMware vsphere Environments

299 Data Placement and Performance in vsphere Affinity rules and maintenance mode Storage DRS affinity rules enable control over which virtual disks should or should not be placed on the same datastore within a datastore cluster. By default, a virtual machine's virtual disks are kept together on the same datastore. Storage DRS offers three types of affinity rules: VMDK Anti-Affinity Virtual disks of a given virtual machine with multiple virtual disks are always placed on different datastores. VMDK Affinity Virtual disks are always kept together on the same datastore. VM Anti-Affinity Two specified virtual machines, including associated disks, are always placed on different datastores. In addition, Storage DRS offers Datastore Maintenance Mode, which automatically evacuates all registered virtual machines and virtual disk drives from the selected datastore to the remaining datastores in the datastore cluster. This is shown in Figure 189 on page 298. VMware Storage Distributed Resource Scheduler (SDRS) 297

300 Data Placement and Performance in vsphere Figure 189 Putting a datastore in maintenance mode in SDRS Datastore cluster requirements Datastores and hosts that are associated with a datastore cluster must meet certain requirements to use datastore cluster features successfully. EMC strongly encourages using the vsphere Storage APIs for Storage Awareness (VASA) integration within vsphere to ensure the environment adheres to the guidelines outlined below. For detailed information on implementing VASA with VMAX storage please see vsphere Storage APIs for Storage Awareness (VASA) on page 170. Follow these guidelines when you create a datastore cluster: Datastore clusters should contain similar or interchangeable datastores. On the VMAX3/VMAX All Flash, all datastores in a datastore cluster should have the same SLO. 298 Using EMC VMAX Storage in VMware vsphere Environments

301 Data Placement and Performance in vsphere A datastore cluster can contain a mix of datastores with different sizes and I/O capacities, and can be from different arrays and vendors. Nevertheless, EMC does not recommend mixing datastores backed by devices that have different properties, i.e. different RAID types or disk technologies, unless the devices are part of the same FAST VP policy. NFS and VMFS datastores cannot be combined in the same datastore cluster. If VMFS volumes are protected by TimeFinder, do not mix and match protected and non-protected devices in a datastore cluster. If SDRS is enabled, only manual mode is supported with datastores protected by TimeFinder. All hosts attached to the datastores in a datastore cluster must be ESXi 5.0 and later. If datastores in the datastore cluster are connected to ESX/ESXi 4.x and earlier hosts, Storage DRS does not run. Datastores shared across multiple datacenters cannot be included in a datastore cluster. As a best practice, do not include datastores that have hardware acceleration enabled in the same datastore cluster as datastores that do not have hardware acceleration enabled. Datastores in a datastore cluster must be homogeneous to guarantee consistent hardware acceleration-supported behavior. Replicated datastores cannot be combined with non-replicated datastores in the same Storage DRS enabled datastore cluster. If SDRS is enabled, only manual mode is supported with replicated datastores. Do not mix and match replication modes and solutions within a datastore cluster. For example, do not have SRDF/A and SRDF/S protected devices within the same datastore cluster or mix EMC RecoverPoint and SRDF devices. To maintain proper consistency, if a device is contained in a consistency group and that device is part of a datastore cluster, do not add devices into that datastore cluster that are not protected by that consistency group. This will make sure that a VM that has consistency dependencies on other VMs does not move out of the consistency group due to a Storage DRS move. In general, a datastore cluster should have a one to one relationship with a consistency group. VMware Storage Distributed Resource Scheduler (SDRS) 299

302 Data Placement and Performance in vsphere Site Recovery Manager Interoperability with Storage DRS and Storage vmotion Support for Storage DRS and Storage vmotion on datastores managed by VMware vcenter Site Recovery Manager was introduced in Site Recovery Manager version 5.5. There are some limitations though. Storage vmotion or SDRS movements must be limited to devices within a single consistency group. Therefore, for things like datastore clusters, it is important to have only devices in the same consistency group in the cluster. Doing so will make sure Storage DRS does not make a movement that will violate SRM support. Starting in vsphere 6 with SRM 6.0, VMware will not allow SDRS to violate these rules and permits greater flexibility with SDRS. Consult VMware documentation for further information. Using SDRS with EMC FAST (DP and VP) When EMC FAST is used in conjunction with SDRS, only capacity-based SDRS is recommended and not storage I/O load balancing. To use this configuration, simply uncheck the box labeled Enable I/O metric for SDRS recommendations when creating the datastore cluster as in Figure 190 or after creation by editing it as seen in Figure 191 on page 301. Unchecking this box will ensure that only capacity-based SDRS is in use. Figure 190 Disabling performance-based SDRS during creation 300 Using EMC VMAX Storage in VMware vsphere Environments

303 Data Placement and Performance in vsphere Figure 191 Disabling performance-based SDRS after creation Unlike FAST DP, which operates on thick devices at the whole device level, FAST VP operates on thin devices at the far more granular extent level. Because FAST VP is actively managing the data on disks, knowing the performance requirements of a VM (on a datastore under FAST VP control) is important before a VM is migrated from one datastore to another. This is because the exact thin pool distribution of the VM's data may not be the same as it was before the move (though it could return to it at some point assuming the access patterns and FAST VP policy do not change). Therefore, if the VM contains performance sensitive applications, EMC advises not using SDRS with FAST VP for that VM. Preventing movement could be accomplished by setting up a rule or at the very least using Manual Mode (no automation) for SDRS. IMPORTANT As all devices are under FAST control on the VMAX3/VMAX All Flash, only capacity-based SDRS is recommended and not storage I/O load balancing with that platform. VMware Storage Distributed Resource Scheduler (SDRS) 301

304 Data Placement and Performance in vsphere SDRS and EMC Host I/O Limit It is important to take note when EMC Host I/O Limits are set on storage groups that contain devices that are part of a datastore cluster (as part of a SDRS implementation). When SDRS is utilized all devices that are in that datastore cluster should have the same Host I/O Limit. Moreover, the devices should come from the same EMC array. This recommendation is given because the Host I/O Limit throttles I/O to those devices. This may be done to limit the ability of applications on those devices from impacting other devices on the array or perhaps to simplify QoS management. Whatever the reason, if a datastore cluster contains devices - whether from the same or a different array - that do not have a Host I/O limit, there is always the possibility in the course of its balancing that SDRS will relocate virtual machines on those limited devices to non-limited devices. Such a change might alter the desired QoS or permit the applications on the virtual machines to exceed the desired throughput. It is therefore prudent to have device homogeneity when using EMC Host I/O Limits in conjunction with SDRS. 302 Using EMC VMAX Storage in VMware vsphere Environments

305 5 Cloning of vsphere Virtual Machines This chapter discusses the use of EMC technology to clone VMware virtual machines. Introduction EMC TimeFinder overview Copying virtual machines after shutdown Copying running virtual machines using EMC Consistency technology Transitioning disk copies to cloned virtual machines VMware vsphere Storage API for Array Integration Choosing a virtual machine cloning methodology Cloning at disaster protection site with vsphere 5 and Detaching the remote volume LVM.enableResignature parameter RecoverPoint Cloning of vsphere Virtual Machines 303

306 Cloning of vsphere Virtual Machines Introduction VMware ESXi virtualizes IT assets into a flexible, cost-effective pool of compute, storage, and networking resources. These resources can be then mapped to specific business needs by creating virtual machines. VMware ESXi provides several utilities to manage the environment. This includes utilities to clone, back up and restore virtual machines. All these utilities use host CPU resources to perform the functions. Furthermore, the utilities cannot operate on data not residing in the VMware environment. All businesses have line-of-business operations that interact and operate with data on a disparate set of applications and operating systems. These enterprises can benefit by leveraging technology offered by storage array vendors to provide alternative methodologies to protect and replicate the data. The same storage technology can also be used for presenting various organizations in the business with a point-in-time view of their data without any disruption or impact to the production workload. VMware ESXi can be used in conjunction with SAN-attached EMC VMAX storage arrays and the advanced storage functionality they offer. The configuration of virtualized servers when used in conjunction with EMC VMAX storage array functionality is not very different from the setup used if the applications were running on a physical server. However, it is critical to ensure proper configuration of both the storage array and of VMware ESXi so applications in the virtual environment can exploit storage array functionality. This chapter will include the use of EMC TimeFinder with VMware ESXi to clone virtual machines and their data. It will also include the benefits of offloading software-based VMware cloning operations to the array that can be achieved using Full Copy, which is one of the primitives available through VMware Storage APIs for Array Integration, or VAAI. 304 Using EMC VMAX Storage in VMware vsphere Environments

EMC TimeFinder overview

The TimeFinder 1 family of products are VMAX local replication solutions designed to non-disruptively create point-in-time copies of critical data. You can configure backup sessions, initiate copies, and terminate TimeFinder operations from mainframe and open systems controlling hosts using EMC VMAX host-based control software.

The TimeFinder local replication solutions include TimeFinder/Clone, TimeFinder/Snap, and TimeFinder VP Snap 2. TimeFinder/Clone creates full-device and extent-level point-in-time copies. TimeFinder/Snap creates pointer-based logical copies that consume less storage space on physical drives. TimeFinder VP Snap provides the efficiency of Snap technology with improved cache utilization and simplified pool management.

Each solution guarantees high data availability. The source device is always available to production applications. The target device becomes read/write enabled as soon as you initiate the point-in-time copy. Host applications can therefore immediately access the point-in-time image of critical data from the target device while TimeFinder copies data in the background. A detailed description of these products is available in the EMC TimeFinder Product Guide on support.emc.com.

VMAX3/VMAX All Flash arrays
Beginning with the Q4 2014 release of HYPERMAX OS, the VMAX3 supports TimeFinder. TimeFinder on the VMAX3/VMAX All Flash has been completely re-architected. To that end, a single underlying technology called SnapVX provides the basis of TimeFinder. SnapVX creates snapshots by storing changed tracks (deltas) directly in the Storage Resource Pool (SRP) of the source device. With SnapVX, you do not need to specify a target device and source/target pairs when you create a snapshot. If there is ever a need for the application to use the point-in-time data, you can create links from the snapshot to one or more target devices. If there are multiple snapshots and the application needs to find a particular point-in-time copy for host access, you can link and relink until the correct snapshot is located. A brief, illustrative symsnapvx sketch follows.

1. The VMAX3 at its GA release (Q3 2014) does not support EMC TimeFinder. The Q4 2014 release is required.
2. TimeFinder/Mirror will no longer be addressed in the TechBook. TimeFinder/Clone should be used.
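To illustrate the snapshot-then-link model described above, here is a minimal SnapVX sketch using symsnapvx. The array ID, storage group names, and snapshot name are placeholders, and the option spellings (-name, -snapshot_name, -lnsg) are recalled from memory rather than taken from this document; treat the exact syntax as an assumption and verify it against the EMC Solutions Enabler TimeFinder Family CLI User Guide.

    # Illustrative sketch only -- verify option names for your Solutions Enabler release.
    # Take a targetless snapshot of the production storage group:
    symsnapvx -sid 1234 -sg Prod_SG -name nightly establish

    # Later, if a host needs to mount that point in time, link the snapshot
    # to a storage group of target devices:
    symsnapvx -sid 1234 -sg Prod_SG -snapshot_name nightly link -lnsg Mount_SG

    # List snapshots to confirm state and generation:
    symsnapvx -sid 1234 -sg Prod_SG list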

Note: TimeFinder supports legacy local replication solutions, including TimeFinder/Clone, TimeFinder VP Snap, and TimeFinder/Mirror. HYPERMAX OS uses emulations to transparently convert legacy commands to SnapVX commands. One can still run existing scripts that utilize legacy commands, but the underlying mechanism is SnapVX. TimeFinder Emulation sessions and TimeFinder snapshots cannot coexist on the same device. Despite the fact that SnapVX is being used under the covers, the legacy technologies still function as they are designed, i.e. a target device is required.

TimeFinder products run on the EMC VMAX storage array. However, the management of the functionality is performed using EMC Solutions Enabler software (with appropriate add-on modules), Symmetrix Management Console (SMC), or Unisphere for VMAX (Unisphere).

TimeFinder SnapVX
As noted above, SnapVX operations are performed using the symsnapvx command to create point-in-time copies (snapshots) of critical data. In VMAX3/VMAX All Flash arrays running HYPERMAX OS, SnapVX supports up to 256 snapshots per source device (including any emulation mode snapshots). The legacy session limits still apply to the emulations. For a more detailed overview of TimeFinder SnapVX functionality, including I/O flow, refer to the EMC VMAX3 Family with HYPERMAX OS Product Guide. TimeFinder SnapVX is only available with Solutions Enabler 8.0.1 and higher. For detail on syntax, see the EMC Solutions Enabler TimeFinder Family CLI User Guide.

TimeFinder VP Snap
VP Snap leverages TimeFinder/Clone technology to create space-efficient snaps for thin devices by allowing multiple sessions to share capacity allocations within a thin pool. VP Snap provides the efficiency of Snap technology with improved cache utilization and simplified pool management. With VP Snap, tracks can be stored in the same thin pool as the source, or in another pool of your choice. VP Snap sessions copy data from the source device to the target device only if triggered by a host I/O. Read I/Os to protected tracks on the target device do not result in data being copied.

309 Cloning of vsphere Virtual Machines Figure 192 shows several VP Snap sessions sharing allocations within a thin pool. Figure 192 VP Snap sessions sharing allocations within a thin pool Note: VP Snap source and target devices are optimized by FAST VP, but the shared target allocations are not moved. There are certain restrictions when using VP Snap: The -vse attribute may only be applied during the creation of the session. Both the source device and the target device must be thin devices. All of the VP Snap target devices for a particular source device must be bound to the same thin pool. Once created, VP Snap sessions cannot be changed to any other mode. Copying from a source device to a larger target device is not supported with VP Snap. VP Snap sessions cannot be combined with TimeFinder/Clone nocopy sessions on the same source device. If a FAST VP target extent is part of a VP Snap session, the shared tracks cannot be moved between tiers. EMC TimeFinder overview 307

310 Cloning of vsphere Virtual Machines TimeFinder and VMFS The TimeFinder family of products operates on VMAX devices. Using TimeFinder on a VMAX device containing a VMFS requires that all extents of the file system be replicated. If proper planning and procedures are not followed, this requirement forces replication of all data that is present on a VMFS, including virtual disk images that are not needed. Therefore, EMC recommends separation of virtual machines that require use of storage-array-based replication on one or more VMFS volumes. This does not completely eliminate replication of unneeded data, but minimizes the storage overhead. The storage overhead can be eliminated by the use of RDMs on virtual machines that require storage-array-based replication. VMware ESXi allows creation of VMFS on partitions of the physical devices. Furthermore, VMFS supports spanning of the file systems across partitions, so it is possible, for example, to create a VMFS with part of the file system on one partition on one disk and another part on a different partition on a different disk. However, such designs complicate the management of the VMware storage environment by introducing unnecessary intricacies. If use of EMC TimeFinder technology (or EMC SRDF) with VMFS is desired, EMC strongly recommends creating only one partition per physical disk, and one device for each VMFS datastore. Each TimeFinder software product mentioned in the earlier paragraphs has different performance and availability characteristics; however, functionally all variants of TimeFinder software essentially are being used to create a copy. Therefore, each individual product will not be covered in detail herein. Where appropriate, mention will be made of any important differences in the process. Furthermore, beginning with Enginuity 5875 code, EMC offers storage integrations with VMware vsphere 4.1, 5, and 6, enabling customers to dramatically improve efficiency in their VMware environment. The integration is known as VMware Storage APIs for Array Integration (VAAI) and it permits the offloading of specific storage operations to EMC VMAX to increase both overall system performance and efficiency. One of these primitives, Full Copy is tremendously helpful in reducing the time to create VMware clones. This feature delivers hardware-accelerated copying of data by performing all duplication and migration operations on the array. Using VAAI customers can achieve considerably faster data 308 Using EMC VMAX Storage in VMware vsphere Environments

311 Cloning of vsphere Virtual Machines movement through VMware Storage vmotion, as well as virtual machine creation, deployment from templates, and virtual machine cloning. EMC TimeFinder overview 309

312 Cloning of vsphere Virtual Machines Copying virtual machines after shutdown Ideally, virtual machines should be shut down before the metadata and virtual disks associated with the virtual machines are copied. Copying virtual machines after shut down ensures a clean copy of the data that can be used for back up or quick startup of the cloned virtual machine. Using TimeFinder/Clone with cold virtual machines TimeFinder/Clone provides the flexibility of copying any standard device on the VMAX storage array to another standard device of equal or larger size in the storage array. 1 TimeFinder/Clone, by removing the requirement of having devices with a special BCV attribute, offers customers greater flexibility and functionality to clone production data. Note: For the types of supported devices with TimeFinder/Clone see the EMC TimeFinder Product Guide on support.emc.com. When the clone relationship between the source and target devices is created, the target devices are presented as not ready (NR) to any host that is accessing the volumes. It is for this reason that EMC recommends unmounting any VMFS datastores that are currently on those devices before creating the clone. The creation operation makes the VMFS on the target unavailable to the VMware ESXi hosts. Therefore, the recommendation of unmounting the datastores before performing an establish operation applies directly to the virtual machines that are impacted by the absence of those datastores. The target devices in a TimeFinder/Clone operation can be accessed by VMware ESXi as soon as the clone relationship is activated. However, accessing the data on the target volume before the copy process completes causes a small performance impact since the data is copied from the source device before being presented to the requesting host. Similarly, writes to tracks on the source device not copied to the target device experiences a small performance impact. 1. The full set of features of TimeFinder/Clone are not supported when going from a smaller source to a larger target volume. For more information on these restrictions refer to the EMC Solutions Enabler TimeFinder Family CLI Product Guide. 310 Using EMC VMAX Storage in VMware vsphere Environments
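The recommendation above to unmount any VMFS datastores residing on the clone target devices can be carried out from the ESXi command line as well as from the vSphere Client. The following esxcli sketch assumes an ESXi 5.x or later host; the datastore label TARGET_DS is a placeholder.

    # List mounted VMFS volumes and note the label of the datastore that
    # resides on the clone target device(s).
    esxcli storage filesystem list

    # Unmount that datastore (by label) before the clone create operation
    # makes the target devices not ready.
    esxcli storage filesystem unmount -l TARGET_DS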

Note: TimeFinder/Clone technology also offers the option of copying the data as soon as the TimeFinder/Clone relationship is created (by using the keyword precopy instead of copy). This option mitigates the performance impact that is experienced when the TimeFinder/Clone pairs are activated.

Organizations use a cloned virtual machine image for different purposes. For example, a cloned virtual machine may be configured for reporting activities during the day and for backups in the night. VMware ESXi does not allow virtual machines to power on if any of the virtual disk devices configured on it are not available (this includes reasons such as the storage device not being presented to the ESXi host or the device being in the Not Ready state). When clone targets are in a created state, the VMFS volume on the clone target is unavailable to the VMware ESXi host, and any virtual machine using that VMFS volume cannot be powered on. This restriction must be considered when designing virtual environments that re-provision cloned virtual machines. Virtual machines running on VMware ESXi can be powered on with RDMs (in either virtual or physical compatibility mode) that map to devices that are in a not ready state. However, the VMFS holding the configuration file, the metadata information about the virtual machine, and the virtual disk mapping files has to be available for the power-on operation.

Copying cold virtual machines on VMFS using TimeFinder/Clone
TimeFinder/Clone devices are managed as an atomic unit by creating a device or composite group. Composite groups are used if the virtual machines that are being cloned use VMFS volumes from multiple VMAX storage arrays. Solutions Enabler commands can be used to create the device or composite group and manage the copying process. The next few paragraphs present the steps required to clone a group of virtual machines utilizing EMC TimeFinder/Clone technology:

1. Identify the device number of the VMAX volumes used by the VMFS by utilizing EMC Virtual Storage Integrator for VMware vSphere (EMC VSI), which maps physical extents of a VMFS to the VMAX device number. A device group containing the member(s) of the VMFS should be created. This can be done with the following commands:
symdg create <group_name>
symld -g <group_name> -sid <VMAX SN> add dev <dev #>

where <dev #> are the devices identified by EMC VSI.

2. The devices that hold the copy of the data need to be added or associated with the group. Note that TimeFinder/Clone can be used when devices with the BCV attribute are used as a target. If the target devices have the BCV attribute turned on, the devices should be associated with the device group using the command
symbcv -g <group_name> associate dev <dev #>
When STD devices are used as the target, the devices need to be added to the device group using the symld command along with the -tgt switch: 1
symld -g <group_name> -sid <VMAX SN> -tgt add dev <dev #>

3. The next step in the process of copying virtual machines using EMC TimeFinder/Clone is the creation of TimeFinder/Clone pairs. The command to perform this activity depends on the attributes of the target devices. If the target devices have the BCV attribute, the command
symclone -g <group_name> -copy -noprompt create
automatically creates a TimeFinder/Clone relationship between the source and target devices. If STD devices are used, the -tgt switch must be added to the above command.

1. Standard devices that would be used as a target device can be added to a device group (or composite group) by using the symld command with the -tgt keyword. When standard devices are added using this option, it is not necessary to explicitly define the relationship between the source and target device when the TimeFinder/Clone pairs are created.

Figure 193 shows a device group that consists of two standard devices and two BCV devices.

Figure 193 Displaying the contents of a device group

The VMAX devices 20E and 20F contain the VMFS that needs to be copied. An explicit relationship between the source devices (20E and 20F) and the target devices (212 and 213) can be created as shown in Figure 194 on page 314.

When the TimeFinder/Clone pairs are created, no data is copied (see the note on page 311 for an exception to this rule). However, the target devices are changed to a not ready (NR) state.

Figure 194 Creating a TimeFinder/Clone relationship between standard devices

4. After the creation is run, the virtual machines can be brought down to make a cold copy of the data. The virtual machines can be shut down using the vSphere Client, the VMware remote CLI for ESXi, or scripts written using Perl or PowerShell. On ESXi 5 the CLI vim-cmd can be used by first determining the VMs on the host and their current states, and then powering them off. The commands are:

vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.getstate VMID
vim-cmd vmsvc/power.off VMID

Figure 195 presents an example of shutting down a virtual machine using the vim-cmd command.

Figure 195 Using command line utilities to shut down virtual machines

Once all of the virtual machines accessing the VMFS have been shut down, the pairs can be activated. (A small scripted variant of this shutdown is sketched after step 6 below.)

5. With the machines down, the TimeFinder/Clone pairs can be activated. The command
symclone -g <group_name> -noprompt activate
activates the TimeFinder/Clone pairs. The state of the target devices is changed to read-write. Any VMware ESXi host with access to the target devices is presented with a point-in-time view of the source device at the moment of activation. The VMAX storage array copies the data from the source device to the target device in the background.

6. The virtual machines accessing the VMFS on the source devices can be powered on and made available to the users.
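For larger environments, the vim-cmd calls shown in step 4 can be wrapped in a small loop run in the ESXi shell so that every registered virtual machine on the host is powered off before the clone pairs are activated. This is only a sketch; it assumes the commands are run directly on the ESXi host and that a hard power-off (rather than a guest shutdown) is acceptable for the cold copy.

    # Power off every powered-on VM registered on this ESXi host (run in the ESXi shell).
    for VMID in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
        if vim-cmd vmsvc/power.getstate "$VMID" | grep -q "Powered on"; then
            vim-cmd vmsvc/power.off "$VMID"
        fi
    done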

The steps discussed above are shown in Figure 196.

Figure 196 Copying virtual machine data using EMC TimeFinder/Clone

Copying cold virtual machines with RDMs using TimeFinder/Clone
The first step in copying virtual machines accessing disks as raw device mappings (RDM) is identifying the VMAX device numbers associated with the virtual machine. This can be most easily done by using EMC VSI. Once the VMAX devices used by the virtual machines are identified, the process to create a copy of a virtual machine that is utilizing raw device mapping (RDM) is identical to the one presented in Copying cold virtual machines on VMFS using TimeFinder/Clone on page 311. A consolidated command-line sketch of that sequence follows.
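Pulling the commands from the preceding steps together, the following is a minimal end-to-end sketch of a cold TimeFinder/Clone copy using a standard device as the target. The group name, array serial number, and device numbers are placeholders taken from the example above; only command forms already shown in this section are used, and option ordering may vary slightly by Solutions Enabler release.

    # 1. Build the device group with the source device backing the VMFS
    #    (identified with EMC VSI) and the standard target device.
    symdg create vmfs_clone_dg
    symld -g vmfs_clone_dg -sid 1234 add dev 20E
    symld -g vmfs_clone_dg -sid 1234 -tgt add dev 212

    # 2. Create the clone relationship (the target goes not ready at this point).
    symclone -g vmfs_clone_dg -copy -tgt -noprompt create

    # 3. Shut down the virtual machines on the source VMFS, then activate.
    symclone -g vmfs_clone_dg -tgt -noprompt activate

    # 4. Power the production virtual machines back on; the copy completes
    #    in the background and the target is now read/write enabled.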

319 Cloning of vsphere Virtual Machines Using TimeFinder/Snap with cold virtual machines TimeFinder/Snap 1 enables users to create what appears to be a complete copy of their data while consuming only a fraction of the disk space required by the original copy. This is achieved by use of virtual devices as the target of the process. Virtual devices are constructs inside the VMAX storage array with minimal physical storage associated with them. Therefore, the virtual devices are normally presented in a not-ready state to any host accessing it. When the TimeFinder/Snap relationship is created, the VMAX generates the metadata to associate the specified virtual devices with the source devices. The virtual devices continue to be presented in a not-ready (NR) to any host that is accessing the volumes. Therefore, the same considerations discussed in Using TimeFinder/Clone with cold virtual machines on page 310 when using TimeFinder/Clone technology in a VMware ESXi environment apply. The virtual devices in a TimeFinder/Snap operation can be accessed by VMware ESXi as soon as the snap relationship is activated. However, unlike TimeFinder/Clone technology, there is no copying of data after the session is activated. The virtual devices tracks are populated with the address of the source device tracks. When VMware ESXi accesses the virtual devices, the data is fetched from the source device and presented to the requester. The data changed by either the hosts accessing the source device or target device is stored in a temporary save area. The amount of data saved depends on the write activity on the source devices, target devices, and the duration for which the TimeFinder/Snap session remains activated. Copying cold virtual machines on VMFS using TimeFinder/Snap TimeFinder/Snap devices are managed together using devices or composite groups. Solutions Enabler commands are executed to create SYMCLI groups for TimeFinder/Snap operations. If the VMFS spans more than one VMAX or the virtual machines are presented with storage from multiple VMFS volumes spanning multiple VMAX devices, a composite group should be used. 1. TimeFinder/Snap is not supported on the VMAX 10K platform running Enginuity 5875 and higher. For space-saving capabilities on the VMAX 10K platform, TimeFinder VP Snap should be used. See the Simple Support Matrix on support.emc.com for additional information. Copying virtual machines after shutdown 317

Figure 197 depicts the necessary steps to make a copy of powered-off virtual machines using TimeFinder/Snap technology.

Figure 197 Copying inactive VMFS with TimeFinder/Snap

1. The device (or composite) groups containing the source devices and the target virtual devices should be created first. The commands symdg and symld, shown on page 311, should be used for this step.

2. The TimeFinder/Snap pairs can be created using the command:
symsnap -g <group_name> create -noprompt
The command creates the protection bitmaps to use when the TimeFinder/Snap pairs are activated. No data is copied or moved at this time.

3. Once the create operation is completed, the virtual machines accessing the source VMFS must be shut down to make a cold copy of the virtual machines. The virtual machines can be shut down using the vSphere Client or the remote VMware CLI.

4. With the virtual machines in a powered-off state, the TimeFinder/Snap copy can now be activated:
symsnap -g <group_name> -noprompt activate
The activate keyword presents a virtual copy of the VMFS to the target hosts for further processing. If using STD devices, the -tgt switch is needed.

5. The virtual machines using the source VMFS volumes can now be powered on. The same tools that were utilized to shut down the virtual machines can be used to power them on.

Copying cold virtual machines with RDMs using TimeFinder/Snap
The process for using TimeFinder/Snap technology to copy virtual machines that access disks as raw device mappings (RDM) is no different from that discussed for TimeFinder/Clone. The first step in copying virtual machines accessing disks as raw device mappings (RDM) is identifying the VMAX device numbers associated with the virtual machine. This can be most easily done by using EMC VSI. Once the VMAX devices used by the virtual machines are identified, the process to create a copy of a virtual machine that is utilizing raw device mapping (RDM) is identical to the one presented in Copying cold virtual machines on VMFS using TimeFinder/Snap on page 317.

Using TimeFinder/VP Snap with cold virtual machines
TimeFinder/VP Snap, although using the TimeFinder/Clone technology, enables users to create space-efficient snaps with multiple sessions sharing allocations in the thin pool. This is achieved because VP Snap sessions copy data from the source device to the target device only if triggered by a host write operation. Read I/Os to protected tracks on the target device do not result in data being copied. For a single activated VP Snap session on a source device, the target represents a single point-in-time copy of the source. Copied data resides on allocations in the thin pool. When there is a second VP Snap session from the same source device to a different target, the allocations can be shared.

When the VP Snap relationship between the source and target devices is created, the target devices are presented as not ready (NR) to any host that is accessing the volumes. It is for this reason that EMC recommends unmounting any VMFS datastores that are currently on those devices before creating the clone. The creation operation makes the VMFS on the target unavailable to the VMware ESXi hosts. Therefore, the recommendation of unmounting the datastores before performing an establish operation applies directly to the virtual machines that are impacted by the absence of those datastores. The target devices in a TimeFinder/VP Snap operation can be accessed by VMware ESXi as soon as the VP Snap relationship is activated.

Copying cold virtual machines on VMFS using TimeFinder/VP Snap
TimeFinder/VP Snap devices are managed as an atomic unit by creating a device or composite group. Composite groups are used if the virtual machines that are being cloned use VMFS volumes from multiple VMAX storage arrays. Unisphere or Solutions Enabler commands can be used to create the device or composite group 1, and manage the copying process. The next few paragraphs present the steps required to clone a group of virtual machines utilizing EMC TimeFinder/VP Snap technology:

1. Identify the device number of the VMAX volumes used by the VMFS by utilizing EMC Virtual Storage Integrator for VMware vSphere (EMC VSI), which maps physical extents of a VMFS to the VMAX device number. A device group containing the member(s) of the VMFS should be created. This can be done with the following commands:
symdg create <group_name>
symld -g <group_name> -sid <VMAX SN> add dev <dev #>
where <dev #> are the devices identified by EMC VSI.

2. The devices that hold the copy of the data need to be added or associated with the group. Note that TimeFinder/VP Snap can be used when devices with the BCV attribute are used as a target. If the target devices have the BCV attribute turned on, the devices should be associated with the device group using the command
symbcv -g <group_name> associate dev <dev #>

1. Composite groups are not yet supported in Unisphere.

When STD devices are used as the target, the devices need to be added to the device group using the symld command along with the -tgt switch:
symld -g <group_name> -sid <VMAX SN> -tgt add dev <dev #>

3. The next step in the process of copying virtual machines using EMC TimeFinder/VP Snap is the creation of TimeFinder/VP Snap pairs. The command is essentially the same as that for TimeFinder/Clone except that the VP Snap switch, -vse, is used. The command to perform the create depends on the attributes of the target devices. If the target devices have the BCV attribute, the command
symclone -g <group_name> -vse -noprompt create
automatically creates a TimeFinder/VP Snap relationship between the source and target devices. If STD devices are used, the -tgt switch must be added to the above command.

Figure 198 shows a device group that consists of one standard device and one target device.

Figure 198 Displaying the contents of a device group

The VMAX device 26B contains the VMFS that needs to be copied. In Figure 199 the create is shown along with a query of the newly created relationship, with the VP Snap identifier highlighted in the red boxes. Note that the target device is changed to a not ready (NR) state.

Figure 199 Creating a TimeFinder/VP Snap relationship between standard devices

4. The virtual machines utilizing the VMFS can be shut down as soon as the TimeFinder/VP Snap create process completes successfully. This can be performed using graphical interface or command-line tools.

5. The TimeFinder/VP Snap pairs can be activated once the virtual machines have been shut down. The command
symclone -g <group_name> -noprompt activate
activates the TimeFinder/VP Snap pair. The state of the target devices is changed to read-write. Once again, the -tgt switch is needed for STD devices. Any VMware ESXi host with access to the target devices is presented with a point-in-time view of the source device at the moment of activation. The VMAX storage array will

only use the disk space in the thin pool if writes are performed on the VP Snap source or target devices. Reads do not trigger new writes. As mentioned, if more VP Snap sessions are added to the same source device, data is copied to the targets based on whether the source data is new with respect to the point-in-time of each copy. When data is copied to more than one target, only a single shared copy resides in the thin pool.
6. The virtual machines accessing the VMFS on the source devices can be powered on and made available to the users.
Figure 200 depicts the necessary steps to make a copy of powered-off virtual machines using TimeFinder/VP Snap technology.
Figure 200 Copying virtual machine data using EMC TimeFinder/VP Snap
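The workflow above can be scripted end to end with Solutions Enabler. The following is a minimal sketch of the cold-clone sequence, assuming a hypothetical device group named vmfs_dg, array SID 1234, source device 026B, and a standard (non-BCV) target device 026C; substitute the device numbers reported by EMC VSI and your own group name.
# Create the device group and add the source device backing the VMFS
symdg create vmfs_dg
symld -g vmfs_dg -sid 1234 add dev 026B
# Add the STD device that will receive the VP Snap copy
symld -g vmfs_dg -sid 1234 -tgt add dev 026C
# Create the VP Snap session, shut down the virtual machines, then activate
symclone -g vmfs_dg -tgt -vse -noprompt create
symclone -g vmfs_dg -tgt -noprompt activate
# Confirm the state of the session
symclone -g vmfs_dg query
If BCV targets are used instead, associate them with symbcv as described above and omit the -tgt switch.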

Copying cold virtual machines with RDMs using TimeFinder/VP Snap
The first step in copying virtual machines accessing disks as raw device mappings (RDM) is identifying the VMAX device numbers associated with the virtual machine. This can be most easily done by using EMC VSI. Once the VMAX devices used by the virtual machines are identified, the process to create a copy of a virtual machine that is utilizing raw device mapping (RDM) is identical to the one presented in Copying cold virtual machines on VMFS using TimeFinder/VP Snap on page 320.

Copying running virtual machines using EMC Consistency technology
Copying virtual machines after shutdown on page 310 discussed use of the TimeFinder family of products to clone virtual machines that have been shut down. Although this is the ideal way to obtain a copy of the data, it is impractical in most production environments. For these environments, EMC Consistency technology can be leveraged to create a copy of the virtual machines while they are servicing applications and users. TimeFinder/CG enables a group of live virtual machines to be copied in an instant when used in conjunction with storage array-based copying software such as TimeFinder/Clone. The image created in this way is in a dependent-write consistent data state and can be utilized as a restartable copy of the virtual machine. Virtual machines running modern operating systems such as Microsoft Windows and database management systems enforce the principle of dependent-write I/O. That is, no dependent write is issued until the predecessor write it depends upon has completed. For example, Microsoft Windows does not update the contents of a file on an NT file system (NTFS) until an appropriate entry in the file system journal is made. This technique enables the operating system to quickly bring NTFS to a consistent state when recovering from an unplanned outage such as a power failure.
Note: The Microsoft Windows NT file system is a journaled file system and not a logged file system. When recovering from an unplanned outage, the contents of the journal may not be sufficient to recover the file system. A full check of the file system using chkdsk is needed in these situations.
Using the EMC Consistency technology option during the virtual machine copying process also creates a copy with a dependent-write consistent data state. The process of creating a consistent copy of a running virtual machine does not differ from the process of creating a copy of a cold virtual machine, save for one switch in the activate command. Simply follow the same steps as outlined in this chapter but do not shut down the virtual machines. At the point of activation, add the switch -consistent to the command as follows:

TimeFinder/SnapVX
By default, all SnapVX snapshots are consistent when created and activated using the symsnapvx establish command. Depending on the state of the devices at the time of the snapshot, SnapVX pauses I/O to ensure there are no writes to the source device while the snapshot is created. When the activation completes, writes are resumed and the target device contains a dependent-write consistent copy of the source device at the time of activation.
TimeFinder/Clone
For BCV devices:
symclone -g <group_name> -noprompt -consistent activate
or, for STD devices:
symclone -g <group_name> -noprompt -tgt -consistent activate
TimeFinder/Snap
For BCV devices:
symsnap -g <group_name> -noprompt -consistent activate
or, for STD devices:
symsnap -g <group_name> -noprompt -tgt -consistent activate
TimeFinder/VP Snap
For BCV devices:
symclone -g <group_name> -noprompt -consistent activate
or, for STD devices:
symclone -g <group_name> -noprompt -tgt -consistent activate
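As a brief sketch, a hot copy of the VP Snap device group used earlier in this chapter could therefore be taken without shutting down the virtual machines; the group name and STD targets below are the same hypothetical values assumed in the earlier example.
# Create the VP Snap session while the virtual machines keep running
symclone -g vmfs_dg -tgt -vse -noprompt create
# Activate with EMC Consistency technology for a dependent-write consistent image
symclone -g vmfs_dg -tgt -consistent -noprompt activate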

Transitioning disk copies to cloned virtual machines
This section discusses how to use the copy of the data to create cloned virtual machines. The cloned virtual machines can be deployed to support ancillary business processes such as development and testing. The methodology used in creating the copy of the data influences the types of business operations it can support.
Cloning virtual machines on VMFS in vsphere environments
VMware ESXi 5 and 6 assign a unique signature to all VMFS volumes when they are formatted with VMFS. Furthermore, the VMFS label is also stored on the device. Since storage array technologies create exact replicas of the source volumes, all information, including the unique signature and label, is replicated. If a copy of a VMFS volume is presented to any VMware ESXi host or cluster group, the VMware ESXi host, by default, automatically masks the copy. The device holding the copy is identified by comparing the signature stored on the device with the computed signature. BCVs, for example, have a different unique ID from the standard device with which they are associated. Therefore, the computed signature for a BCV device always differs from the one stored on it. This enables VMware ESXi to always identify the copy correctly.
VMware vsphere has the ability to individually resignature and/or mount VMFS volume copies through the use of the vsphere Client or with the CLI utility esxcfg-volume (or vicfg-volume with the VMware remote CLI tools). Since it is volume-specific, this allows for much greater control in the handling of snapshots. This feature is very useful when creating and managing volume copies created by local replication products such as TimeFinder/Snap or TimeFinder/Clone, or remote replication technologies such as Symmetrix Remote Data Facility (SRDF). While either the CLI or the GUI can be used to perform these operations, the GUI (vsphere Client) is always recommended. Nevertheless, there are a few features available in the CLI that cannot be performed in the vsphere Client. The use and implications of both options will be discussed in the following sections.
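In addition to the two options discussed in the following sections, ESXi 5.x and 6.x also expose the same snapshot-volume operations through esxcli. A short sketch is shown here for completeness; the datastore label is hypothetical.
# List VMFS copies (snapshot volumes) detected by this host
esxcli storage vmfs snapshot list
# Mount a copy while keeping its existing signature (add --nopersist for a non-persistent mount)
esxcli storage vmfs snapshot mount -l "VMAX_VMFS_copy"
# Or assign a new signature so the copy can coexist with the original volume
esxcli storage vmfs snapshot resignature -l "VMAX_VMFS_copy"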

Cloning vsphere virtual machines using the vsphere Client
In the vsphere Client, the Add Storage wizard is used to resignature VMFS volume copies or mount cloned ones. The Add Storage wizard displays the VMFS label next to storage devices that contain a VMFS volume copy. Therefore, if a device that is not mounted shows up in the wizard and has a label associated with it, you can assume that it is indeed a snapshot. Once a replicated VMFS datastore is presented to a host, it will appear in the Add Storage wizard with the name of the original datastore in the VMFS label column, as shown in Figure 201 for the vsphere Client and in Figure 202 on page 330 for the vsphere Web Client.
Figure 201 Mounting a VMFS volume copy using the Add Storage wizard in the vsphere Client

332 Cloning of vsphere Virtual Machines Figure 202 Mounting a VMFS volume copy using the Add Storage wizard in the vsphere Web Client Once the device with the replicated VMFS volume is selected the wizard offers three choices of how to handle presenting it to the ESXi as a datastore which is displayed in Figure 203 on page 331 in the vsphere Client and in Figure 204 on page 332 in the vsphere Web Client. Keep the existing signature This option initiates a persistent force-mount. This will cause the VMFS volume copy to be persistently mounted on the ESXi host server (in other words it will remain an accessible datastore over reboots of the ESXi host). 1 If the original VMFS volume is already presented to the same ESXi host the mounting of the volume copy will fail. The ability to non-persistently mount a VMFS volume copy is a feature only available through the CLI. In addition, CLI utility has to be used to force-mount a VMFS datastore copy while retaining the existing signature on multiple ESXi servers. 1. This is true so long as the original device is not presented to the ESXi host, in which case the mount of the VMFS volume copy will fail. 330 Using EMC VMAX Storage in VMware vsphere Environments

333 Cloning of vsphere Virtual Machines Note: If the device holding the copy of VMware datastore is going to be re-provisioned for other activities, it is important to unmount the persistently mounted VMware datastore before the device is reused. A snapshot datastore can be unmounted by using the vsphere Client. Assign a new signature This option will resignature the VMFS volume copy entirely and will therefore allow it to be presented to the same host as the original VMFS volume. Format the disk This option wipes out any data on the device and creates an entirely new VMFS volume with a new signature. Figure 203 VMFS volume copy mounting options in the vsphere Client Transitioning disk copies to cloned virtual machines 331

Figure 204 VMFS volume copy mounting options in the vsphere Web Client
If the volume has been resignatured, it will appear in the Datastores list, as in Figure 205 on page 333 and Figure 206 on page 333, with snap-xxxxxxxx- appended to the beginning of the original VMFS volume name. Any virtual machines residing on the resignatured datastore will have to be manually registered with an ESXi host to be recognized (a command-line registration sketch follows the figures below).

Figure 205 Original VMFS volume and resignatured VMFS volume copy in the vsphere Client
Figure 206 Original VMFS volume and resignatured VMFS volume copy in the vsphere Web Client
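As noted above, virtual machines residing on a resignatured datastore must be registered manually. A minimal sketch from the ESXi shell using vim-cmd follows; the datastore and virtual machine names are hypothetical.
# Register a VM found on the resignatured datastore
vim-cmd solo/registervm /vmfs/volumes/snap-1a2b3c4d-VMAX_VMFS/AppVM01/AppVM01.vmx
# Confirm the VM now appears in the host inventory
vim-cmd vmsvc/getallvms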

Using the command line to mount VMFS volume copies
In the ESXi CLI, the esxcfg-volume command (or vicfg-volume with the VMware remote CLI tools) is used to resignature VMFS volume copies or mount cloned ones. A feature that is unique to esxcfg-volume and not available in the vsphere Client is the ability to non-persistently force-mount a volume copy (meaning it will not be mounted after a reboot of the ESXi host). esxcfg-volume supports the following parameter options:
-l | --list
-m | --mount <vmfs uuid|label>
-u | --umount <vmfs uuid|label>
-r | --resignature <vmfs uuid|label>
-M | --persistent-mount <vmfs uuid|label>
The command esxcfg-volume -l lists all of the volumes that have been detected as snapshots or replicas. If a volume copy is found, it displays the UUID, whether or not it can be mounted (decided by the presence of the original VMFS volume), whether or not it can be resignatured, and its extent name. Figure 207 shows the list command returning two VMFS volume copies. The first volume copy can be mounted because the original VMFS volume is not present on that ESXi host. The second discovered volume copy cannot be mounted because the original volume is still online.
Figure 207 Listing available VMFS volume copies using the esxcfg-volume CLI command
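For reference, the command sequence behind Figure 207 and Figure 208 might look like the following sketch, using the Resignature_test label from the example that follows.
# List detected VMFS volume copies and whether they can be mounted or resignatured
esxcfg-volume -l
# Non-persistently force-mount a copy by label (the mount is lost at the next reboot)
esxcfg-volume -m "Resignature_test"
# Alternatively, persistently mount it, or resignature it
esxcfg-volume -M "Resignature_test"
esxcfg-volume -r "Resignature_test"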

In this example the first discovered volume copy, 4a2f708a-1c50210d-699c ab633/Resignature_test, will be non-persistently force-mounted using the command esxcfg-volume -m. Since it does not have its original VMFS volume present, it can be mounted without the need to resignature. In Figure 208 on page 335, this volume is non-persistently mounted through the CLI and then esxcfg-scsidevs -m is executed to list the VMFS volumes; the newly mounted volume is listed. Since it is non-persistent, it will disappear after the next reboot of the ESXi host.
Figure 208 Non-persistently force-mounting a VMFS volume copy using the esxcfg-volume CLI command
Otherwise, if the cloned volume is never going to be presented to this host again, it can be mounted persistently, which is the equivalent of using the vsphere Client Add Storage wizard, by using the esxcfg-volume -M command.
Cloning virtual machines using RDM in VMware vsphere environments
The configuration file located in the VMFS volumes can be used to clone virtual machines provided with storage using RDM. However, in a vsphere environment it is easier to use copies of the configuration files on the target VMware ESXi. When an RDM is generated, a file is created on a VMFS that points to the physical device that is mapped. The file that provides the mapping also includes the unique ID and LUN number of the device

it is mapping. The configuration file for the virtual machine using the RDM contains an entry that includes the label of the VMFS holding the RDM and its name. If the VMFS holding the information for the virtual machines is replicated and presented on the target VMware ESXi, the virtual disks that provide the mapping are also available in addition to the configuration files. However, the mapping file cannot be used on the target VMware ESXi since the cloned virtual machines need to be provided with access to the devices holding the copy of the data. Therefore, EMC recommends using a copy of the source virtual machine's configuration file instead of replicating the VMFS. The following steps clone virtual machines using RDMs in a vsphere environment:
1. On the target VMware ESXi, create a directory on a datastore (VMFS or NAS storage) that holds the files related to the cloned virtual machine. A VMFS on internal disk, un-replicated SAN-attached disk, or NAS-attached storage should be used for storing the files for the cloned virtual disk. This step has to be performed only once.
2. Copy the configuration file for the source virtual machine to the directory created in step 1. The command line utility scp can be used for this purpose. This step has to be repeated only if the configuration of the source virtual machine changes.
3. Register the cloned virtual machine using the vcenter client or the VMware remote CLI. This step does not need to be repeated.
4. Generate RDMs on the target VMware ESXi in the directory created in step 1. The RDMs should be configured to address the target devices (see the sketch following this list).
5. The cloned virtual machine can be powered on using either the VMware vsphere Client or the VMware remote CLI.
Note: The process listed in this section assumes the source virtual machine does not have storage presented as a mix of a virtual disk on a VMFS and RDM. The process to clone virtual machines with a mix of RDMs and virtual disks is complex and beyond the scope of this document.
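Step 4 above, generating RDMs that point at the target devices, can be done from the ESXi shell with vmkfstools. The following is a minimal sketch; the NAA identifier, datastore, directory, and file names are hypothetical and should be replaced with the values for the target devices identified earlier.
# Create a physical (pass-through) RDM pointer file that maps the target device
vmkfstools -z /vmfs/devices/disks/naa.60000970000195700000533030323642 \
  "/vmfs/volumes/local_ds/ClonedVM/ClonedVM_rdm.vmdk"
# Use -r instead of -z if a virtual compatibility mode RDM is required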

Cloning virtual machines using Storage APIs for Array Integration (VAAI)
With a minimum vsphere 4.1 release and a VMAX with a minimum of Enginuity 5875, or a VMAX3/VMAX All Flash with HYPERMAX OS 5977, users can take advantage of Storage APIs that offload certain functions in vsphere to the array, such as cloning. When a customer desires to clone a single virtual machine or a few virtual machines, or to move a virtual machine from one datastore to another, using TimeFinder adds unnecessary complexity. Using VAAI in these cases, in essence, does what TimeFinder does, just on a more granular scale. The following section will cover VAAI in more detail.

340 Cloning of vsphere Virtual Machines VMware vsphere Storage API for Array Integration Storage APIs for Array Integration is an API for storage partners to leverage that permits certain functions to be delegated to the storage array, thus greatly enhancing the performance of those functions. EMC currently offers four storage integrations as part of VAAI. Three of these integrations are available beginning with Enginuity software version 5875 on the VMAX and HYPERMAX OS software version 5977 on the VMAX3/VMAX All Flash running with VMware vsphere 4.1. The fourth integration, Thin Provisioning Block Space Reclamation (UNMAP), is available with Enginuity 5876 Q SR ( ) on the VMAX and HYPERMAX OS software version 5977 on the VMAX3/VMAX All Flash running with VMware vsphere 5.0 U1 and higher. 1 The primitive that will be covered herein is hardware-accelerated Full Copy 2 as it is this functionality that enables the offloading of VMware clones to the array. 3 The UNMAP primitive was addressed in Chapter 3, EMC Virtual Provisioning and VMware vsphere. Note: The VAAI primitives are supported with Federated Tiered Storage (FTS) both in external provisioning mode and encapsulation mode with a single restriction - the Thin Provisioning Block Space Reclamation (UNMAP) primitive is not supported with encapsulation mode. Hardware-accelerated Full Copy The time it takes to deploy or migrate a virtual machine will be greatly reduced by use of the Full Copy primitive, as the process for data migration is entirely executed on the storage array and not on the ESXi server. The host simply initiates the process and reports on the progress of the operation on the array. This decreases overall traffic on the ESXi server. In addition to deploying new virtual machines from a template or through cloning, Full Copy is also 1. Refer to Table 5 on page 363 for detail on the proper patching level for each Enginuity/HYPERMAX OS level. 2. Full Copy running on the VMAX3 platform requires the Q ( ) release of HYPERMAX OS For more information on other Storage APIs, including ATS and ATS heartbeat, please refer to the EMC whitepaper Using VMware vsphere Storage APIs for Array Integration with EMC VMAX found on EMC.com. 338 Using EMC VMAX Storage in VMware vsphere Environments

utilized when doing a Storage vmotion. When a virtual machine is migrated between datastores on the same array, the live copy is performed entirely on the array. Not only does Full Copy save time, but it also saves server CPU cycles, memory, IP and SAN network bandwidth, and storage front-end controller I/O. This is due to the fact that the host is relieved of the normal function it would serve of having to read the data from the array and then write the data back down to the array. That activity requires CPU cycles and memory as well as significant network bandwidth. Figure 209 provides a graphical representation of Full Copy.
Figure 209 Hardware-accelerated Full Copy
Enabling the Storage APIs for Array Integration
The VAAI primitives are enabled by default on both the VMAX running Enginuity 5875 or later and on the VMAX3/VMAX All Flash running HYPERMAX OS 5977 and later, and on ESX(i) 4.1 and ESXi 5 and 6 servers (properly licensed), and should not require any user intervention. 1 The primitive, however, can be disabled through the

ESXi server if desired. Using the vsphere Client, Full Copy can be disabled or enabled by altering the setting DataMover.HardwareAcceleratedMove in the ESXi server advanced settings under DataMover. Figure 210 on page 341 walks through the four steps to get to the setting. A similar process is available in the vsphere Web Client, shown in Figure 211.
1. Refer to VMware documentation for required licensing to enable VAAI features. The UNMAP primitive requires a minimum of vsphere 5.0 U1 and Enginuity 5876 or HYPERMAX OS 5977.

Figure 210 Enabling/disabling hardware-accelerated Full Copy in vsphere Client
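The same setting can also be queried and changed from the ESXi shell with esxcli, which may be convenient when scripting. The sketch below is run on the host itself; a value of 0 disables Full Copy and 1 re-enables it.
# Display the current hardware-accelerated move (Full Copy/XCOPY) setting
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# Disable Full Copy
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
# Re-enable Full Copy
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1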

Figure 211 Enabling/disabling hardware-accelerated Full Copy in vsphere Web Client
Full Copy Tuning
By default, when the Full Copy primitive is supported, VMware informs the storage array to copy data in 4 MB chunks. For vsphere environments that utilize many different arrays, the default value should not be altered. However, in all-VMAX environments, or a combination of VMAX and VNX environments, on vsphere 4.1 through 5.x, customers are advised to take advantage of VMware's ability to send larger chunks to the array for copy. In vsphere 6 there is a more advanced capability that can be used in all-VMAX environments to increase the extent size even higher. The following two sections outline how to make these changes in the different vsphere environments.
vsphere 4.1 through 5.x
In vsphere 4.1 through 5.x the default copy size can be incremented to a maximum value of 16 MB and should be set at that value for all VMAX/VNX environments to take advantage of the array's ability to move larger chunks of storage at once. The following commands show how to query the current copy (or transfer) size and then how to alter it.
# esxcfg-advcfg -g /DataMover/MaxHWTransferSize

Value of MaxHWTransferSize is 4096
# esxcfg-advcfg -s 16384 /DataMover/MaxHWTransferSize
Value of MaxHWTransferSize is 16384
Note that this change is per ESX(i) server and can only be made through the CLI (there is no GUI interface), as in Figure 212:
Figure 212 Changing the default copy size for Full Copy on the ESXi host
Alternatively, vsphere PowerCLI can be used at the vcenter level to change the parameter on all hosts, as in Figure 213.
Figure 213 Changing the default copy size for Full Copy using vsphere PowerCLI
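A sketch of a PowerCLI approach equivalent to Figure 213 is shown below. It assumes a session has already been opened with Connect-VIServer and that every attached host should receive the 16 MB (16384 KB) value; adjust the scope as needed.
# Set the XCOPY transfer size to 16 MB on all hosts managed by the connected vCenter
Get-VMHost | Get-AdvancedSetting -Name "DataMover.MaxHWTransferSize" |
    Set-AdvancedSetting -Value 16384 -Confirm:$false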

vsphere 6
In vsphere 6, the previously outlined mechanism will still work; however, VMware has enabled a different methodology by which the extent size can be increased: claim rules. These claim rules take precedence over MaxHWTransferSize and can be utilized with both VMAX and VMAX3/VMAX All Flash. Claim rules have always been used to implement VAAI for VMAX (see Figure 5). VAAI is a plug-in claim rule that says if a device is from a particular vendor, in this case an EMC VMAX device, it should go through the VAAI filter. In other words, if VAAI can be used it will be used (various situations exist, detailed in this paper, where primitives like XCOPY will not be used). Each block storage device managed by a VAAI plug-in needs two claim rules, one that specifies the hardware acceleration filter and another that specifies the hardware acceleration plug-in for the device. These claim rules are where the change has been made (in addition to the code). What VMware has done is to add three new columns to the storage core claim rules to control the new behavior. They are:
XCOPY Use Array Reported Values
XCOPY Use Multiple Segments
XCOPY Max Transfer Size
Figure 214 on page 345 shows the output from vsphere 5.x and Figure 215 on page 345 shows the output from vsphere 6 with the new columns and their default values.

Figure 214 VAAI claim rule class in vsphere 5.x
Figure 215 VAAI claim rule class in vsphere 6
By altering the rules and changing these three new settings, one can adjust the max transfer size up to a maximum of 240 MB. Though the columns appear in both the Filter and VAAI claim rule classes, only the VAAI class needs to be changed. The new required settings are:
XCOPY Use Array Reported Values = true
XCOPY Use Multiple Segments = true
XCOPY Max Transfer Size = 240
It is not sufficient to simply change the transfer size, because VMware requires that the array reported values setting be true if the transfer size is set. When use array reported values is set, it means that VMware will query the array to get the largest transfer size if the max transfer

size is not set or if there is a problem with the value. Unfortunately, the VMAX advertises a value higher than 240 MB, so in the case of an issue with the transfer size VMware will default to the previous value set for MaxHWTransferSize; it is therefore important to set that to 16 MB. In addition, in order for the VMAX to accept up to 240 MB, multiple segments must be enabled, since a single segment is limited to 30 MB (there are 8 segments available). Therefore, when the claim rules are modified, the size is set to 240 MB and both of the other columns are set to true. The process to change the claim rules involves removing the existing VAAI rule class, adding it back with the new columns set, and then loading the rule (to be completed on all servers involved). Following this, a reboot of the server is needed. The commands to run on the ESXi host as root are the following; they will not return a response:
esxcli storage core claimrule remove --rule <rule #> --claimrule-class=VAAI
esxcli storage core claimrule add -r <rule #> -t vendor -V EMC -M SYMMETRIX -P VMW_VAAIP_SYMM -c VAAI -a -s -m 240
esxcli storage core claimrule load --claimrule-class=VAAI
where <rule #> is the number of the existing EMC SYMMETRIX VAAI claim rule reported by the list command. To verify the changes are set before the reboot, list the claim rules as in Figure 216 on page 347:
esxcli storage core claimrule list --claimrule-class=VAAI

349 Cloning of vsphere Virtual Machines Figure 216 Updated VAAI claim rule for XCOPY Note however that the changes will not take effect until a reboot of the server. Once the reboot is complete, re-run the command to ensure the changes appear. It is important to remember that the new claim rule does not impact how VAAI functionality works. For instance, the following still hold true: It will revert to software copy when it cannot use XCOPY It can still be enabled/disabled through the GUI or CLI It will use the DataMover/MaxHWTransferSize when the claim rules are not changed or a non-vmax array is being used The maximum value is just that, a maximum. That is the largest extent VMware can send, but does not guarantee all extents will be that size. Thin vmdks One area where the increased extent size has made no difference in the past is with thin vmdks. As the section Caveats for using hardware-accelerated Full Copy explains, VMware only uses 1 MB extents when copying thin vmdks no matter the value of the extent size. This holds true for both vsphere 4.1 and 5.x. In vsphere 6, however, VMware makes an attempt to consolidate extents and therefore when cloning will use the largest extent it can consolidate for each copy I/O (software or hardware). This new behavior has resulted in two benefits on the VMAX. The first is the obvious one that any task requiring XCOPY on thin vmdks will be faster than on previous vsphere versions. The second benefit is somewhat unexpected and that is a clone of a thin vmdk now behaves like a thick vmdk in terms of further cloning. In other words if one clones a VMware vsphere Storage API for Array Integration 347

350 Cloning of vsphere Virtual Machines thin vmdk VM and then uses that clone as a template for further clones, those further clones will be created at a speed almost equal to a thick vmdk VM clone. This is the case whether that initial clone is created with or without XCOPY. The operation is essentially a defragmentation of the thin vmdk. It is critical, therefore, if a thin vmdk VM is going to be a template for other VMs, that rather than converting the VM to template it is cloned to a template. This will ensure all additional deployments from the template will benefit from the consolidation of extents. The following graph, Figure 217 demonstrates the effectiveness of the new vsphere 6 claim rule for XCOPY. Note in particular the clone of the clone for the thin vmdk (light green) as explained in the previous paragraph. Figure 217 XCOPY performance using vsphere 6 claim rules Full Copy use cases This section provides Full Copy use cases. 348 Using EMC VMAX Storage in VMware vsphere Environments

351 Cloning of vsphere Virtual Machines Hardware-accelerated Full Copy The primary use cases that show the benefit of hardware-accelerated Full Copy are related to deploying virtual machines - whether from a template or executing a hot or cold clone. With Full Copy the creation of any of these virtual machines will be offloaded to the array. In addition to deploying new virtual machines, Full Copy is also utilized by Storage vmotion operations, significantly reducing the time to move a virtual machine to a new datastore. Following are a number of use cases for Full Copy concerned with the deployment of virtual machines - some from a template and others from cloning - and Storage vmotion. Each use case test is run on thin metadevices with hardware-accelerated Full Copy disabled and enabled. For the Full Copy enabled results both the 4 MB and 16 MB maximum copy sizes are displayed, as well as the results for VMAX and VMAX3/VMAX All Flash where possible. The benefits of using a 16 MB copy size over the default of 4 MB are readily apparent in the use cases, as is the superiority of the VMAX3/VMAX All Flash platform when compared to the previous generation. Note: In order to distinguish the VMAX platform from the VMAX3/VMAX All Flash platform in the graphs, the abbreviations V2 and V3 are used. Use case configuration In general, the Full Copy use cases were conducted using the same virtual machine, or a clone of that virtual machine converted to a template. This virtual machine was created using the vsphere Client, taking all standard defaults and with the following characteristics: 40 GB virtual disk Windows Server 2008 R2 operating system (no VMware Tools) The virtual machine was then filled with data equal to 25 GB of the 40 GB. This configuration might be thought of as a worst case scenario for customers since most virtual machines that are used for cloning, even with the OS and all applications deployed on it, do not approach 25 GB of actual data. Note: All results presented in graphs of Full Copy functionality are dependent upon the lab environment under which they were generated.while the majority of the time hardware acceleration will outperform a software copy, there are many factors that can impact how fast VMware vsphere Storage API for Array Integration 349

352 Cloning of vsphere Virtual Machines both software clones and Full Copy clones are created. For software clones the amount of CPU and memory on the ESXi server as well as the network will play a role. For Full Copy the number of engines and directors and other hardware components on the VMAX will impact the cloning time. The VMAX also intelligently balances workloads to ensure the highest priority to mission critical tasks. In both cases, the existing load on the environment will most certainly make a difference. Therefore results will vary. Use Case 1: Deploying a virtual machine from a template The first use case for Full Copy is the deployment of a virtual machine from a template. This is probably the most common type of virtual machine duplication for customers. A customer begins by creating a virtual machine, then installs the operating system and following this, loads the applications. The virtual machine is then customized so that when users are presented with the cloned virtual machine, they are able to enter in their personalized information.when an administrator completes the virtual machine base configuration, it is powered down and then converted to a template. As using thin striped metavolumes is a VMAX best practice for VMware environments, these clones were created on a datastore backed by a thin striped metavolume. 1 The following graph in Figure 218 on page 351 shows the time difference between a clone created using Full Copy and one using software copy. 1. VMAX3/VMAX All Flash does not have a concept of metadevices. 350 Using EMC VMAX Storage in VMware vsphere Environments

353 Cloning of vsphere Virtual Machines Figure 218 Deploying virtual machines from template Use Case 2: Cloning hot and cold virtual machines The second use case involves creating clones from existing virtual machines, both running (hot) and not running (cold). In each instance, utilizing the Full Copy functionality resulted in a significant performance benefit as visible in Figure 219 on page 352 and Figure 220 on page 353. VMware vsphere Storage API for Array Integration 351

354 Cloning of vsphere Virtual Machines Figure 219 Performance of cold clones using hardware-accelerated Full Copy 352 Using EMC VMAX Storage in VMware vsphere Environments

355 Cloning of vsphere Virtual Machines Figure 220 Performance of hot clones using hardware-accelerated Full Copy Use Case 3: Creating simultaneous multiple clones The third use case that was tested was to deploy four clones simultaneously from the same source virtual machine. This particular test can only be conducted against a cold virtual machine. As seen in Figure 221 on page 354, the results again demonstrate the benefit of offloading as compared to software copying. VMware vsphere Storage API for Array Integration 353

356 Cloning of vsphere Virtual Machines Figure 221 Cloning multiple virtual machines simultaneously with Full Copy Use Case 4: Storage vmotion The final use case is one that demonstrates a great benefit for customers who require datastore relocation of their virtual machine, but at the same time desire to reduce the impact to their live applications running on that virtual machine. This use case then is for Storage vmotion. With Full Copy enabled, the process of moving a virtual machine from one datastore to another datastore is offloaded. As mentioned previously, software copying requires CPU, memory, and network bandwidth. The resources it takes to software copy, therefore, might negatively impact the applications running on that virtual machine that is being moved. By utilizing Full Copy this is avoided and additionally as seen in Figure 222 on page 355, Full Copy is far quicker in moving that virtual machine. 354 Using EMC VMAX Storage in VMware vsphere Environments

357 Cloning of vsphere Virtual Machines Figure 222 Storage vmotion using Full Copy IMPORTANT As can be seen from the graphs, utilizing the 16 MB copy size improves performance by factors over the default of 4 MB for vsphere x. In addition for vsphere 6 environments the benefit is even greater. Therefore if the VMware environment is only utilizing VMAX storage or VMAX and VNX, adjust the parameters accordingly for the vsphere version as detailed in Full Copy Tuning. Server resources Impact to CPU and memory While the relative time benefit of using Full Copy is apparent in the presented use cases, what of the CPU and memory utilization on the ESXi host? In this particular test environment and in these particular use cases, the CPU and memory utilization of the ESXi host did not differ significantly between performing a software clone or a Full VMware vsphere Storage API for Array Integration 355

358 Cloning of vsphere Virtual Machines Copy clone. That is not to say, however, this will hold true of all environments. The ESXi host used in this testing was not constrained by CPU or memory resources in any way as they were both plentiful; and the ESXi host was not under any other workload than the testing. In a customer environment where resources may be limited and workloads on the ESXi host many and varied, the impact of running a software clone instead of a Full Copy clone may be more significant. Caveats for using hardware-accelerated Full Copy 1 The following are some general caveats that EMC provides when using this feature: VMware publishes a KB article ( ) which details when Full Copy will not be issued. Such cases include VMs that have snapshots or cloning or moving VMs between datastores with different block sizes, among others. As can be seen from the graphs, utilizing the 16 MB copy size improves performance by factors over the default of 4 MB. Therefore if the VMware environment is only utilizing VMAX or VMAX and VNX storage, adjust the parameter accordingly as detailed in this paper. If using vsphere 6, follow the process in this paper to increase the extent size to 240 MB. One caveat to the extent size change is that the virtual thin disk format (vmdk) is not impacted by the copy size in vsphere x. Thin disks will always copy in the extent size of the datastore block size, regardless of the parameter setting. For VMFS3 and VMFS5, the default is 1 MB. Therefore it will always take longer to clone/move a thin disk than a thick disk of equal size (actual data size) with Full Copy. This is not the case with vsphere 6. Limit the number of simultaneous clones using Full Copy to three or four. This is not a strict limitation, but EMC believes this will ensure the best performance when offloading the copy process. 1. Beginning with Enginuity , Full Copy (XCOPY) can be disabled at the system level if the caveats are particularly problematic for a customer. A special E-pack is required for GA Enginuity 5875 and If this change is required, please contact EMC Customer Support as there is no capability for customers to disable XCOPY even if at the correct Enginuity version. 356 Using EMC VMAX Storage in VMware vsphere Environments

359 Cloning of vsphere Virtual Machines A VMAX metadevice that has SAN Copy, TimeFinder/Clone, TimeFinder/Snap, TimeFinder SnapVX or ChangeTracker sessions and certain RecoverPoint sessions will not support hardware-accelerated Full Copy. Any cloning or Storage vmotion operation run on datastores backed by these volumes will automatically be diverted to the default VMware software copy. Note that the vsphere Client has no knowledge of these sessions and as such the Hardware Accelerated column in the vsphere Client will still indicate Supported for these devices or datastores. Full copy is not supported for use with Open Replicator. VMware will revert to software copy in these cases. Full Copy is not supported for use with Federated Live Migration (FLM) target devices. Although SRDF is supported with Full Copy 1, certain RDF operations, such as an RDF failover, will be blocked until the VMAX has completed copying the data from a clone or Storage vmotion. SRDF/Star is not supported with Full Copy if the device is a metadevice. Using Full Copy on SRDF devices that are in a consistency group can render the group into a sync in progress mode until the copy completes on the VMAX. If this is undesirable state, customers are advised to disable Full Copy through the GUI or CLI while cloning to these devices. Alternatively at Enginuity level , Full Copy can be disabled on the VMAX for RDF devices only. By default on the VMAX3 Full Copy is disabled for RDF devices to prevent this condition. Full Copy will not be used if there is a metavolume reconfiguration taking place on the device backing the source or target datastore. VMware will revert to traditional writes in these cases. Conversely if a Full Copy was recently executed and a metavolume reconfiguration is attempted on the source or target metavolume backing the datastore, it will fail until the background copy is complete. 1. On the VMAX3/VMAX All Flash Full Copy is automatically disabled for SRDF/Metro devices. VMware vsphere Storage API for Array Integration 357

360 Cloning of vsphere Virtual Machines Full Copy cannot be used between devices that are presented to a host using different initiators. For example, take the following scenario: A host has four initiators The user creates two initiator groups, each with two initiators The user creates two storage groups and places the source device in one, the target device in the other The user creates a single port group The user creates two masking views, one with the source device storage group and one of the initiator groups, and the other masking view with the other initiator group and the storage group with the target device. The user attempts a Storage vmotion of a VM on the source device to the target device. Full Copy is rejected, and VMware reverts to software copy. SRDF/Metro specific 1 The following are caveats for SRDF/Metro when using Full Copy. In SRDF/Metro configurations the use of Full Copy does depend on whether the site is the one supplying the external WWN identity. Full Copy will not be used between a non-srdf/metro device and an SRDF/Metro device when the device WWN is not the same as the external device WWN. Typically, but not always, this means the non-biased site (recall even when using witness, there is a bias site). In such cases Full Copy is only supported when operations are between SRDF/Metro devices or within a single SRDF/Metro device; otherwise software copy is used. As Full Copy is a synchronous process on SRDF/Metro devices, which ensures consistency, performance will be closer to software copy than asynchronous copy. While an SRDF/Metro pair is in a suspended state, Full Copy reverts to asynchronous copy for the target R1. It is important to remember, however, that Full Copy is still bound by the restriction that the copy must take place on the same array. For example, assume SRDF/Metro pair AA, BB is on array 001, 002. In this configuration device AA is the R1 on 001 and its WWN is the external identity for device BB, the R2 on 002. If the pair is 1. Full Copy is enabled on SRDF/Metro devices beginning with release Q3 SR. 358 Using EMC VMAX Storage in VMware vsphere Environments

361 Cloning of vsphere Virtual Machines suspended and the bias is switched such that BB becomes the R1, the external identity still remains that of AA from array 001 (i.e. it appears as a device from 001). The device, however, is presented from array 002 and therefore Full Copy will only be used in operations within the device itself or between devices on array 002. Compression specific 1 The following are caveats when using Full Copy with devices in storage groups with compression enabled. When Full Copy is used on a device in a storage group with compression enabled, or between storage groups with compression enabled, Full Copy will uncompress the data during the copy and then re-compress on the target. When Full Copy is used on a device in a storage group with compression enabled to a storage group without compression enabled, Full Copy will uncompress the data during the copy and leave the data uncompressed on the target. When Full Copy is used on a device in a storage group without compression enabled to a storage group with compression enabled, the data will remain uncompressed on the target and be subject to the normal compression algorithms. enas and VAAI On the VMAX3 starting with the 5977 Q4 release ( ) EMC offers Embedded NAS (enas) guest OS (GOS). VMAX3/VMAX All Flash with enas extends the value of VMAX3/VMAX All Flash to file storage by enabling customers to leverage vital Tier 1 features including SLO-based provisioning, Host I/O Limits, and FAST technologies for both block and file storage. enas enables virtual instances of VNX Software Data Movers and Control Stations to leverage a single platform creating a unified VMAX3/VMAX All Flash array. The VMAX3/VMAX All Flash unified solution eliminates gateway complexity by leveraging standard VMAX3/VMAX All Flash hardware and a factory pre-configured enas solution. 1. Compression on the VMAX All Flash is available beginning with release Q3 SR. VMware vsphere Storage API for Array Integration 359

Because enas is based on the VNX software, file systems configured on the VMAX3/VMAX All Flash support VAAI. The feature in particular that NAS can take advantage of is NFS Clone Offload, though other features include extended stats, space reservations, and snap of a snap. In essence, the NFS clone offload works much the same way as Full Copy, as it offloads ESXi clone operations to the VNX Data Mover. The implementation of VAAI on VNX for NAS, however, is a manual process and is not enabled by default as it is for block. It requires the installation of a plug-in on the ESXi host.
Note: VAAI with enas is only supported on NFS version 3 and not NFS version 4.1.
Installation
To install the plug-in, download the NAS plug-in (the EMCNasPlugin zip bundle) from EMC support. The plug-in is delivered as a VMware Installation Bundle (vib). The plug-in can be installed through VMware vcenter Update Manager or through the CLI as demonstrated in Figure 223.
Figure 223 Installation of the NAS plug-in for VAAI
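When installing from the CLI rather than Update Manager, the offline bundle can be installed with esxcli as sketched below; the path and exact bundle file name are placeholders and will match the version downloaded from EMC support.
# Install the EMC NAS VAAI plug-in from the downloaded offline bundle
esxcli software vib install -d /vmfs/volumes/datastore1/EMCNasPlugin.zip
# Confirm the vib is present (reboot the host if the install output requires it)
esxcli software vib list | grep -i nas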

363 Cloning of vsphere Virtual Machines Once installed, the vsphere Client will show that VAAI is supported on the NFS datastores as in Figure 224. Figure 224 NFS datastore with VAAI support Extended stats As mentioned, one of the other VAAI integrations for NFS is extended stats. Using vmkfstools the user can display the disk utilization for virtual machine disks configured on NFS datastores. The extendedstat argument provides disk details for the virtual disks. The command reports, in bytes, virtual disk size, used space, and unshared space. An example is shown in Figure 225. Figure 225 VAAI NFS extended stats feature Nested clones Another VAAI capability on NFS is the ability to create a thin-clone virtual machine from an existing thin-clone virtual machine. This functionality is referred to as nested clones. The functionality uses the snapshot architecture to instantaneously create lightweight virtual machine clones. VMware products such as View Composer and vcloud Director use View Composer Array Integration (VCAI) to initiate virtual machine clones using the VNX Data Mover API for nested snapshots. The clone operations are off-loaded to the VNX VMware vsphere Storage API for Array Integration 361

364 Cloning of vsphere Virtual Machines Data Mover. To take advantage of this feature, enable the nested clone support property when you create the file system as highlighted in Figure 226. It cannot be changed after the file system is created. Figure 226 Setting nested clone support on a file system VAAI and EMC Enginuity/HYPERMAX OS version support EMC's implementation of VAAI is done within the Enginuity and HYPERMAX OS software. As with any software, bugs are an inevitable byproduct. While EMC makes every effort to test each use case for VAAI, it is impossible to mimic every customer environment and every type of use of the VAAI primitives. To limit any detrimental impact our customers might experience with VAAI, Table 5 on page 363 is provided. It includes all current Enginuity and HYPERMAX OS releases and any E Packs that must be applied at that level, beginning with the recommended base level of 5875 to the current 5977 release. Using this table will ensure that a customer's Enginuity or HYPERMAX OS release is at the proper level for VAAI interaction. 362 Using EMC VMAX Storage in VMware vsphere Environments

365 Cloning of vsphere Virtual Machines IMPORTANT In August 2016, VMware ended general support for vsphere 5.0 and 5.1. Therefore, starting with the Q3 SR in the table below, these releases are no longer listed as minimum versions. It is important to remember, however, that VMware allows for customers to purchase extended support for up to 2 years and EMC will support those versions with the HYPERMAX OS code levels if customers choose to do that. Table 5 Enginuity/HYPERMAX OS and VAAI support Enginuity/ HYPERMAX OS Code Level ( Q2 SR) ( Q4 SR) ( Q3 SR) E Pack required No No No VAAI primitives supported ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP Minimum version of vsphere required vsphere 5.5, vsphere 6.5 for automated UNMAP vsphere 5.5, vsphere 6.5 for automated UNMAP vsphere 5.5, vsphere 6.5 for automated UNMAP Important Notes Important VMware patch for vsphere 6.5 to fix UNMAP issues - VMware KB All primitives supported with SRDF/Metro. Support for synchronous Full Copy on SRDF/Metro devices. All primitives supported with SRDF/Metro. Support for synchronous Full Copy on SRDF/Metro devices. First VMAX All Flash release with compression available No No ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP All primitives supported with SRDF/Metro. Support for synchronous Full Copy on SRDF/Metro devices. First VMAX All Flash release with compression available. All primitives supported with SRDF/Metro except Full Copy. VMware vsphere Storage API for Array Integration 363

366 Cloning of vsphere Virtual Machines ( Q1 SR) ( Q3 SR) ( Q1 SR) ( Q4 SR) Recommend ed Yes for SRDF/Metro No No No Yes ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, UNMAP ATS, Block Zero, Full Copy, UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 5.0 (U1 for UNMAP), vsphere 6.5 for automated UNMAP vsphere 4.1, vsphere 5.0 U1 for UNMAP All primitives supported with SRDF/Metro except Full Copy. Refer to KB for recommended HYPERMAX OS version. Support for synchronous Full Copy on SRDF target devices. Only ATS supported on SRDF/Metro. Recommended minimum code level for VMAX3/VMAX All Flash. Performance improvements in all VAAI implementations. First HYPERMAX OS release with Full Copy support No Full Copy support Refer to ETA for E Pack information ( Q3 SR) Recommend ed ATS, Block Zero, Full Copy, UNMAP vsphere 4.1, vsphere 5.0 U1 for UNMAP Refer to ETA for E Pack information ( Q2 SR) Recommend ed ATS, Block Zero, Full Copy, UNMAP vsphere 4.1, vsphere 5.0 U1 for UNMAP Refer to ETA for E Pack information Yes ( Q4 SR) Yes ATS, Block Zero, Full Copy, UNMAP ATS, Block Zero, Full Copy, UNMAP vsphere 4.1, vsphere 5.0 U1 for UNMAP vsphere 4.1, vsphere 5.0 U1 for UNMAP Refer to ETAs emc and for E Pack information. First Enginuity level with support for the UNMAP primitive. Refer to ETAs emc and for E Pack information. 364 Using EMC VMAX Storage in VMware vsphere Environments

367 Cloning of vsphere Virtual Machines No (5876 GA) No No No No ATS, Block Zero, Full Copy ATS, Block Zero, Full Copy ATS, Block Zero, Full Copy ATS, Block Zero, Full Copy ATS, Block Zero, Full Copy vsphere 4.1 vsphere 4.1 vsphere 4.1 vsphere 4.1 vsphere No ATS, Block Zero, Full Copy vsphere 4.1 First Enginuity level that supports vsphere 5.1 without E Pack Yes - vsphere 5.1 and higher ATS, Block Zero, Full Copy vsphere 4.1 This E Pack described in ETA emc fixes a problem with ATS where a VMFS-5 datastore can fail to create (fix included in ). Note this Enginuity level + E Pack is the minimum version required to use vsphere No ATS, Block Zero, Full Copy vsphere 4.1 Recommended base level for using VAAI Upgrade to ATS, Block Zero, Full Copy vsphere 4.1 If upgrade is not possible refer to ETA emc for E Pack information. EMC does not recommend using VAAI prior to this Enginuity release and E Pack. Note: Be aware that this table only displays information related to using vsphere with VAAI and EMC Enginuity and HYPERMAX OS. It is not making any statements about general vsphere support with Enginuity or HYPERMAX OS. For general support information consult E-Lab Navigator. ODX and VMware The Offloaded Data Transfer (ODX) feature of Windows Server 2012 and Window 8 or ODX, is a Microsoft technology that like Full Copy allows the offloading of host activity to the array. ODX requires a minimum of Enginuity or later running on the VMAX and HYPERMAX OS or later on the VMAX3/VMAX All VMware vsphere Storage API for Array Integration 365

Flash. ODX can be a feature utilized in VMware environments when the guest OS is running one of the two aforementioned Microsoft operating systems and the virtual machine is presented with physical RDMs. In such an environment, ODX will be utilized for certain host activity, such as copying files within or between the physical RDMs. ODX will not be utilized on virtual disks that are part of the VM. ODX also requires the use of Fixed path. PP/VE or NMP will not work, and in fact it is recommended that ODX be disabled in Windows if using those multipathing software packages. 1 In order to take full advantage of ODX, the allocation unit on the drives backed by physical RDMs should be 64k.
Note: Like VAAI, there is no user interface to indicate ODX is being used.
1. See the Microsoft TechNote.
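Where ODX needs to be disabled inside the Windows guest, as recommended above, the commonly used registry toggle can be checked and set from an elevated PowerShell prompt. This is a sketch only; confirm the behavior against the Microsoft documentation for your Windows version.
# Check the ODX state in the guest (0 = ODX enabled, 1 = ODX disabled)
Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode"
# Disable ODX when PP/VE or NMP is in use
Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem -Name "FilterSupportedFeaturesMode" -Value 1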

369 Cloning of vsphere Virtual Machines Choosing a virtual machine cloning methodology The replication techniques described in the previous sections each have pros and cons with respect to their applicability to solve a given business problem. The following matrix in Table 6 provides a comparison of the different replication methods to use and the differing attributes of those methods. Note: VMAX 10K and the VMAX3/VMAX All Flash do not support TF/Snap. Table 6 Comparison of storage array-based virtual machine cloning technologies TF/SnapVX TF/Snap TF/VP Snap TF/Clone VAAI Full Copy Maximum number of copies b 32 Incremental: 8 Non-inc: 16 active Unlimited a Number of simultaneous copies b Unlimited a Production impact Minimal Minimal c Minimal Minimal c Minimal Scripting Required Required Required Required None VM clone needed a long time Recommended with copy Not Recommended Not Recommended Recommended Recommended High write usage to VM clone Recommended with copy Not Recommended Not Recommended Recommended Recommended Restore capability Yes Yes Yes Yes No Clone a single VM No No No No Yes a. If the maximum number of extent copy tasks is reached on the storage system, cloning will revert to software. b. The DMX array needs to be running Enginuity version 5772 or later for this feature. Older releases of code support a maximum of 15 simultaneous TF/Snap sessions. c. Enhancements in Enginuity version 5772 or later minimize impact of TF/Snap and TF/Clone operations. Production volumes experience Copy on First Write and Copy on Access impact with older releases of Enginuity code. Choosing a virtual machine cloning methodology 367

370 Cloning of vsphere Virtual Machines Table 7 shows examples of the choices a VMware or storage administrator might make for cloning virtual machines based on the matrix presented in Table 6 on page 367. Table 7 Virtual machine cloning requirements and solutions System Requirements The application on the source volumes is performance sensitive, and the slightest degradation causes responsiveness of the system to miss SLAs Space and economy are a concern. Multiple copies are needed and retained only for a short time, with performance not critical More than two simultaneous copies need to be made and the copies are to be used by development teams for a long time A single VM in the environment needs to be copied Replication Choice TimeFinder/Clone or TimeFinder/SnapVX TimeFinder/Snap, TimeFinder/VP Snap or TimeFinder/SnapVX TimeFinder/Clone or TimeFinder/SnapVX VAAI Full Copy 368 Using EMC VMAX Storage in VMware vsphere Environments

371 Cloning of vsphere Virtual Machines Cloning at disaster protection site with vsphere 5 and 6 Many EMC customers employ a disaster recovery site for their VMware environments. Many also wish to take advantage of this hardware if it is idle, utilizing it as perhaps a test or development environment. They do not, however, wish to compromise their acceptable data loss, or Recovery Point Objective (RPO). In order to accomplish this, TimeFinder software can be used in conjunction with SRDF or RecoverPoint Continuous Remote Replication (CRR) to create point-in-time copies of the production data on the disaster recovery site. This is accomplished with no downtime and risk to the production data. This section explains how to use TimeFinder copies of the remotely replicated production volumes and present them to a VMware environment without impacting the RPO of the production environment. Note: This section does not address the cloning of devices used as RDMs at the production site. Note: The procedures included in this section do not address solutions that leverage VMware s vcenter Site Recovery Manager. For information on using EMC VMAX with SRM, consult the Using EMC SRDF Adapter for VMware vcenter Site Recovery Manager TechBook. Using TimeFinder copies Creating copies of volumes with TimeFinder software is covered extensively in Cloning of vsphere Virtual Machines on page 303 and, therefore, it will not be repeated herein. In particular, as this section is concerned with production environments, Copying running virtual machines using EMC Consistency technology on page 326 should be consulted. That section details the use of EMC Consistency technology which ensures that the point-in-time copy of the volumes is in a dependent-write consistent data state and the virtual machine in the datastore are restartable. In addition, the assumption is made that the system administrator has mapped and masked all devices to the ESXi host(s) involved in this process. This means that both the R2 and RecoverPoint remote devices and any of their potential BCVs, clones, or snaps are seen by Cloning at disaster protection site with vsphere 5 and 6 369

This means that both the R2 and RecoverPoint remote devices and any of their potential BCVs, clones, or snaps are seen by the ESXi host. If additional information is required on this procedure, see Adding and removing EMC VMAX devices to VMware ESXi hosts.

The examples in this chapter are based upon a production and disaster recovery site that each have a VMAX 10K running SRDF between them. Because of this, only TimeFinder/Clone is used, as TimeFinder/Snap is not supported on the VMAX 10K. If the environments are running on a VMAX, there is no reason TimeFinder/Snap cannot be used instead of Clone with SRDF; with a VMAX3/VMAX All Flash, TimeFinder/SnapVX would be used.

ESXi and duplicate extents

In vSphere 5 and 6, mounting a copy of a remote production volume requires some thought and planning. The issue one encounters is that when the R2 is cloned, the VMFS volume signature is copied over. The signature is generated in part from the WWN of the device. Since the R2 has a different WWN than the R1 device, ESXi sees the signature as invalid. It will, therefore, require the R2 device to be resignatured when mounting. (The user can force mount the volume with the same signature if desired; the only exception to this is when using VMware SRM.)

The same problem also exists when cloning the R2 to another device using TimeFinder software. The clone or BCV device has a different WWN (just like the R2), and therefore putting the R1 signature on that volume will also result in ESXi requiring a resignature. What complicates the situation, however, is that ESXi now sees not just the R2 but also the copy of the R2 and recognizes that they have the same signature. When ESXi is presented with duplicates, by default it will not allow the user to mount either of the VMFS volumes. Using the CLI, one can see if duplicates exist. Figure 227 shows an example of what ESXi will report when only the R2, which is identified in the figure by the last three digits of the network address authority number (naa), or 239:1, is masked to the host.

Using the command esxcfg-volume -l, ESXi displays only the R2 that is masked to the host and reports that it can be both mounted and resignatured. ESXi has recognized that the signature is not valid for the device because the WWN is different than that of the R1.

Figure 227 An R2 device on the remote ESXi host which can be resignatured

Now if a masked BCV of the R2 from Figure 227, with the naa ending in 646, is established and the same command is run on the ESXi host, the results are much different, as seen in Figure 228. ESXi recognizes that the devices have the same VMFS volume signature. When presented with this situation, ESXi will not allow either device to be mounted or resignatured.

Figure 228 An R2 device and its copy on the remote ESXi host that cannot be resignatured
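The check for duplicate extents can also be run non-interactively from the ESXi shell. The following is a minimal sketch only; no device identifiers are assumed, and the esxcli form is simply the vSphere 5 and later equivalent of esxcfg-volume.

```
# List unresolved VMFS copies (snapshot volumes) seen by this host.
# Each entry indicates whether the copy can be mounted and/or resignatured.
esxcfg-volume -l

# The same information through the esxcli namespace:
esxcli storage vmfs snapshot list
```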

If an attempt is made in the vSphere Client to resignature and mount this datastore, it will be prevented. In the vSphere Client, using the EMC Virtual Storage Integrator (VSI) Storage Viewer feature shown in Figure 229, both the R2 (239, L28) and the BCV (646, L29) are visible.

Figure 229 Duplicate extents in VSI

Using the Add Storage wizard, the user is presented with a list of devices to use for mounting or creating a new datastore. In Figure 230 both the R2 and BCV are displayed along with the VMFS label, which is the same for both: VMAXe_283_R1_365.

Figure 230 Attempt to mount duplicate extents

If the user proceeds, the only option presented will be to format the disk, and thus the user would lose the datastore on the BCV or clone. This is shown in Figure 231.

IMPORTANT: If the R2 device is chosen by accident during this process, the format will fail as the R2 is a write-disabled device.

Figure 231 Attempt to mount duplicate extents permits format only

Fortunately, this situation can be avoided. There are a few methodologies that can be employed, but only two are recommended and presented here. These are addressed in the following sections.

Detaching the remote volume

One of the benefits that comes with vSphere 5 and 6 is that the user is able to detach a device from the ESXi host. This was discussed in Avoiding APD/PDL Device removal in relation to the All Paths Down condition. In a situation where duplicate extents exist, if one device is detached from the ESXi host, ESXi will then see a single snapshot VMFS volume and revert to the behavior of being able to resignature and mount the device. The next section walks through an example of this using the vSphere Client.

Note: Recall from the section on APD/PDL that the vSphere Web Client does not provide any prerequisite checking before attempting the detach, as the vSphere Client does in Figure 234.

Detaching a device and resignaturing the copy

For consistency, this example uses the same devices as presented earlier. Reviewing the VSI screen in Figure 232, there is an R2 and a BCV masked to the ESXi host.

Figure 232 VSI displaying two duplicate volumes presented to a single ESXi host

At this point the BCV is established and split from the R2 device. The establish and split is done first to reduce the amount of time the device is detached. Using the vSphere 5 or 6 functionality, the R2 can now be detached. Select the R2 device in the devices window in the vSphere Client, right-click on it, and select Detach as in Figure 233.

Figure 233 Detaching the R2 device

VMware runs through three prerequisite checks, seen in Figure 234, to be sure that the device can be detached before allowing the user to confirm.

Note: Detaching the device is a VMware function and has no impact on the VMAX mapping or masking of the device to the ESXi host.

Figure 234 Confirming the detaching of the R2
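The same detach can be performed from the ESXi shell instead of the vSphere Client. This is a minimal sketch; the naa identifier is a placeholder, not the device from the figures, and should be replaced with the actual identifier of the R2 reported by the host.

```
# Detach the R2 device; equivalent to right-click > Detach in the vSphere Client.
# Replace naa.xxxxxxxxxxxxxxxx with the actual device identifier.
esxcli storage core device set --device naa.xxxxxxxxxxxxxxxx --state=off

# Confirm the device is now listed in the detached state.
esxcli storage core device detached list
```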

Once confirmed, the device is detached. The device does not disappear from the devices screen, but rather grays out. This is shown in Figure 235.

Figure 235 Post-detach of device now grayed out

Now that the device has been detached, when esxcfg-volume -l is run, only the BCV is shown, and it is available for resignaturing and mounting, as seen in Figure 236.

Figure 236 esxcfg-volume -l showing only the BCV after the R2 device is detached

Subsequently, when using the vSphere Client, the R2 no longer appears, just the BCV device, as in the Add Storage wizard in Figure 237.

Figure 237 Mounting the BCV after R2 detach

Proceeding through the wizard, the user can now choose among all three options presented in Figure 238: keeping the signature, assigning a new signature, or formatting the disk.

Figure 238 Choosing how to mount the BCV

The best practice is to always assign a new signature to the volume to prevent any problems when the R2 is re-attached. If the user chooses not to resignature the volume and mounts it with the same signature, the R2 cannot be re-attached. The user will receive an error stating that the operation is not allowed in the current state. This is because ESXi recognizes that the volumes have the same signature and, since one is already mounted, a second volume cannot, in a sense, occupy the same space. The resolution is to unmount the datastore and detach the underlying BCV before attempting to re-attach the R2.
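For environments managed from the command line, the resignature can also be driven from the ESXi shell rather than the Add Storage wizard. A minimal sketch follows; the volume label used here is taken from this example and would be replaced with whatever unresolved VMFS copy the host reports.

```
# Show the unresolved VMFS copy and its label (only the BCV should appear
# now that the R2 is detached).
esxcli storage vmfs snapshot list

# Resignature the copy; ESXi mounts it with a system-generated snap-xxxxxxxx- prefix.
esxcli storage vmfs snapshot resignature -l "VMAXe_283_R1_365"

# Alternatively, esxcfg-volume -r performs the same resignature by label or UUID:
# esxcfg-volume -r "VMAXe_283_R1_365"
```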

When a new signature is created, VMware automatically assigns a prefix to the datastore of the form snap-xxxxxxxx-, where the xxxxxxxx portion of the name is system generated. In this example, once the datastore is mounted, it appears as snap-5e6cbd8c-VMAXe_283_R1_365, as in Figure 239.

Figure 239 Post-mount of BCV with resignature

With the completion of the resignature and mount, the R2 can now be re-attached. This is a best practice to ensure that, in the event of a production site failure, it would be available immediately. To do this, return to the devices screen and locate the grayed-out R2 device. Right-click on the device and select Attach from the menu, as in Figure 240. If there are multiple detached devices, use the VSI Storage Viewer to correlate the runtime names with the correct devices.

Figure 240 Re-attach R2 after mounting BCV

Unlike detaching, re-attaching does all the validation in the background. A task will appear indicating that the device is attaching, as in Figure 241. Once the task completes, the device will return to black from gray.

Figure 241 Successful re-attachment of the R2

Note: It is possible to unmask the R2 from the remote ESXi hosts instead of detaching the device, and then remask it once the resignature and mounting are complete, rescanning the HBAs each time. This is not a recommended best practice, however, and can lead to complications.
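Re-attaching can likewise be scripted from the ESXi shell. A minimal sketch, again using a placeholder naa identifier for the detached R2:

```
# List devices currently in the detached state to identify the R2.
esxcli storage core device detached list

# Re-attach the device; equivalent to right-click > Attach in the vSphere Client.
esxcli storage core device set --device naa.xxxxxxxxxxxxxxxx --state=on
```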

LVM.enableResignature parameter

The second methodology for mounting TimeFinder copies of remote devices involves changing an advanced parameter on the ESXi host. This parameter is /LVM/EnableResignature, and it controls whether ESXi will automatically resignature snapshot LUNs (e.g., TimeFinder copies). By default this parameter is set to 0, so resignaturing must proceed as outlined in the previous section. If this parameter is set to 1, all snapshot LUNs will be resignatured when the ESXi host rescans the HBAs. Since in vSphere 5 and 6 a rescan is triggered automatically whenever a VMFS management operation occurs, once the parameter is set it is inevitable that any of the VMFS volumes that can be resignatured will be resignatured. It is therefore essential when setting this parameter to be aware of all volumes that the ESXi host in question can see.

Note: If there are multiple ESXi hosts in a cluster that all have access to the same devices, the resignature and mounting will be done even if only one host has the parameter set.

Setting the LVM.enableResignature flag on ESXi hosts is a host-wide operation and, if set, all snapshot LUNs that can be resignatured are resignatured during the subsequent host rescan. If snapshot volumes are forcefully mounted to ESXi hosts on the recovery site, these LUNs are resignatured as part of a host rescan during a test failover operation. Accordingly, all of the virtual machines on these volumes become inaccessible. To prevent outages, ensure that no forcefully-mounted snapshot LUNs are visible to ESXi hosts on the recovery site.

LVM.enableResignature example

To demonstrate how this parameter works, the LUNs presented previously will be used. As a reminder, the two devices used in the example are the R2 device, 229, and the BCV device, 2FF, both seen in Figure 242.

Figure 242 LVM.enableResignature example devices

First, the parameter needs to be set. As this parameter is not available through the GUI client, the CLI must be used. Through the local or remote CLI, issue the command to check the current value of the parameter, which should be 0, as in Figure 243.

Figure 243 Default LVM.EnableResignature setting

Now, change the value to 1, which activates the setting. The command is esxcfg-advcfg -s 1 /LVM/EnableResignature, as seen in Figure 244.

Figure 244 Active LVM.EnableResignature setting
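A minimal sketch of checking and setting the parameter from the ESXi shell follows; the esxcli form is shown only as an equivalent alternative to esxcfg-advcfg.

```
# Check the current value (0 = disabled, the default).
esxcfg-advcfg -g /LVM/EnableResignature

# Enable automatic resignaturing of snapshot LUNs on the next rescan.
esxcfg-advcfg -s 1 /LVM/EnableResignature

# Equivalent esxcli syntax:
esxcli system settings advanced set -o /LVM/EnableResignature -i 1
esxcli system settings advanced list -o /LVM/EnableResignature
```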

With the setting active, the BCV can be split, and once the ESXi host does a rescan, the VMFS volume on the BCV (copied from the R2) will automatically be resignatured and mounted with the prefix snap-xxxxxxxx-. Figure 245 is a capture of the split of the BCV from the R2 device. Recall that the BCV is already masked to the ESXi host.

Figure 245 Splitting the BCV from the R2
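For reference, a TimeFinder/Mirror establish and consistent split might look like the following from Solutions Enabler. This is a sketch only; remote_dg is a hypothetical device group, created beforehand at the DR site, that contains the R2 standard device and its associated BCV.

```
# Fully establish the BCV with its standard device (the R2) in device group remote_dg.
symmir -g remote_dg establish -full

# Monitor until the BCV pair reports Synchronized...
symmir -g remote_dg query

# ...then perform a consistent split so the BCV holds a dependent-write
# consistent, point-in-time image of the R2.
symmir -g remote_dg split -consistent
```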

Once the BCV is split, if the esxcfg-volume -l command is run on the ESXi host, the host will report that there are duplicate extents, as in Figure 246.

Figure 246 Duplicate extents post-BCV split before resignaturing

If no VMFS volume management commands have been run (such commands force an auto-rescan by default, although this behavior can be disabled with an advanced setting if desired), the user should execute a rescan either from the CLI or the vSphere Client. Once the rescan completes, the datastore will be mounted with the system-generated name, and ESXi will no longer report duplicate extents. This is demonstrated in Figure 247.
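A minimal sketch of forcing the rescan and confirming the automatic mount from the ESXi shell; the grep pattern simply filters for the system-generated snap- prefix.

```
# Rescan all HBAs; with /LVM/EnableResignature set to 1, any snapshot LUNs
# are resignatured and mounted during this rescan.
esxcli storage core adapter rescan --all

# Confirm the copy is now mounted under its snap-xxxxxxxx- name.
esxcli storage filesystem list | grep snap-
```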

Figure 247 Automated VMFS volume mounting after setting the LVM.EnableResignature parameter

IMPORTANT: As there can be unintended consequences with this methodology, extreme care should be taken.

RecoverPoint

Creating TimeFinder copies with RecoverPoint CRR and presenting them to a VMware environment has a special requirement that differs from creating TimeFinder copies with SRDF: a RecoverPoint image must be made accessible on the remote site before activating or splitting a TimeFinder copy. The following section details that requirement.

The RecoverPoint software discussed in this section is only supported with the VMAX platform, and not the VMAX3/VMAX All Flash arrays; however, the functionality presented is equivalent to that provided by RecoverPoint version 5.1, which is required for the VMAX3/VMAX All Flash platform.

Note: There are significant differences between the VMAX and VMAX3/VMAX All Flash implementations of RecoverPoint which should be reviewed before attempting this procedure.

Enabling image access

RecoverPoint CRR uses the concept of consistency groups to maintain write-order consistency. A consistency group is an object that is comprised of paired volumes called replication sets. (There are additional components of consistency groups which are not mentioned here, as they are not essential to this discussion.) A replication set is a relationship between two devices or volumes, one local and one remote. Single or multiple replication sets can exist in a consistency group. The consistency group guarantees write-order consistency over multiple volumes. The consistency group, whether made up of a single volume or multiple volumes, represents the image that is enabled or disabled. Figure 248 shows the setup used in this particular test from the Unisphere for RecoverPoint interface. There is a single replication set being replicated, which will subsequently be enabled as the image.

Figure 248 RecoverPoint consistency group and replication set

When creating a TimeFinder copy of the remote RecoverPoint volume, there are no special requirements: the TimeFinder session can be created or established. While TimeFinder copies can be created off the RecoverPoint image while it is in a disabled state, they should not be activated or split until the RecoverPoint image is set to enabled for image access. To enable image access using Unisphere for RecoverPoint, select the Add Copy button demonstrated in Figure 249. Unisphere then activates the wizard to walk the user through enabling access to a particular image.

Figure 249 Enable Image Access through Test Copy in RecoverPoint

First, select the copy upon which to enable an image, shown in Figure 250.

Figure 250 Selecting a copy for image access

In the next screen there are three different options in relation to the image, available from the drop-down box as seen in Figure 251:

a. The latest image
b. An image from the image list
c. A specific point in time or bookmark

Though any of the options will produce a viable image for TimeFinder, typically it will be sufficient to use the latest image. In this case, an image is selected from the list. Be sure to leave the image access mode at the default of Logged access.

Figure 251 Image access options

Once the image is selected, it can be tested by selecting the button at the bottom. RecoverPoint will begin the process of enabling the image, which can be actively viewed as in Figure 252.

Figure 252 RecoverPoint actively enabling access to an image

When it is complete, the storage will show as Logged Access and include the image chosen. Note the red boxes in Figure 253.

Figure 253 Image access complete

Once Finish is selected, the Status in Unisphere for RecoverPoint in Figure 254 will show that the image is now enabled.

Figure 254 RecoverPoint status of the consistency group

Once the image is enabled, the user can activate or split the TimeFinder copy. As image access is enabled at the consistency group level, it ensures that the TimeFinder copy is write-order consistent when it is activated or split. As soon as the TimeFinder copy is activated, the RecoverPoint image can be disabled in much the same way as it was enabled. Figure 255 provides the GUI example of disabling the image.
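Before image access is disabled, the activation of the TimeFinder copy described above might be performed with Solutions Enabler as in the following hedged sketch; rp_dg is a hypothetical device group containing the RecoverPoint remote devices and their clone targets.

```
# Create the clone sessions (full background copy) and then activate them
# consistently while RecoverPoint image access is enabled.
symclone -g rp_dg create -copy
symclone -g rp_dg activate -consistent
```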

Figure 255 Disabling image access in RecoverPoint

Before disabling the image, RecoverPoint will provide a warning dialog, seen in Figure 256, to ensure that the image is not being actively used. RecoverPoint will not fail the operation if there is active I/O, so it is important to check before disabling.

Figure 256 Warning dialog upon image disabling in RecoverPoint

The TimeFinder copy can now be mounted and resignatured. Note that because the RecoverPoint image is enabled read/write at one point in the process, unlike an SRDF R2, EMC does not recommend the second methodology presented in LVM.enableResignature parameter of changing the LVM.EnableResignature parameter. Though the image may only be set read/write for a short period of time, there is a chance that a rescan of an ESXi host could be run during that time and the enabled image would be resignatured and mounted. If the image were then disabled, it would cause problems with the ESXi host. For this reason EMC recommends the best practice of the first methodology, detaching the RecoverPoint device, found in Detaching the remote volume.

Note: The use of Enginuity Consistency Assist (ECA) does not change the requirement for RecoverPoint image access before activating the TimeFinder copy.


Chapter 6: Disaster Protection for VMware vSphere

This chapter discusses the use of EMC SRDF technology to provide disaster restart protection for a VMware environment using VMAX or VMAX3/VMAX All Flash:

- Introduction
- SRDF Storage Replication Adapter for VMware Site Recovery Manager
- Business continuity solutions for VMware vSphere

Introduction

The VMware virtualization platform virtualizes the x86-based physical infrastructure into a pool of resources. Virtual machines are presented with a virtual hardware environment independent of the underlying physical hardware. This enables organizations to leverage disparate physical hardware in the environment and provides a lower total cost of ownership.

The virtualization of the physical hardware can also be used to create disaster recovery and business continuity solutions that would have been impractical otherwise. These solutions normally involve a combination of virtual environments at one or more geographically separated data centers and EMC remote replication technology. One example of such an architecture has physical servers running various business applications in the primary data center, while the secondary data center has a limited number of virtualized physical servers. During normal operations, the physical servers in the secondary data center are used for supporting workloads such as QA and testing. In case of a disruption in services at the primary data center, the physical servers in the secondary data center run the business applications in a virtualized environment. The focus of this chapter is a discussion of these types of solutions, specifically:

- VMware vCenter Site Recovery Manager
- EMC SRDF solutions
- Mounting and resignaturing of replicated VMFS volumes

Note that the sections on VMware SRM in particular are presented at a high level. There is a separate TechBook specifically for the SRDF SRA, Using the EMC SRDF Adapter for VMware Site Recovery Manager, which can be found on support.emc.com.

SRDF Storage Replication Adapter for VMware Site Recovery Manager

VMware vCenter Site Recovery Manager (SRM) delivers advanced capabilities for disaster recovery management, non-disruptive disaster recovery testing, and automated failover. VMware vCenter SRM can manage the failover from production data centers to disaster recovery sites, as well as the failover between two sites with active workloads. SRM solves two main issues: disaster recovery automation and simple, repeatable testing of disaster recovery processes.

VMware vCenter Site Recovery Manager helps to significantly reduce the operational complexity of a traditional DR solution by streamlining the entire process: the initial setup of the solution, the maintenance of the solution, and finally the execution of the solution. Without SRM, the initial configuration and maintenance complexity can be a huge demand on resources. Like any physical environment, the manual recovery of a virtual environment can be complex and error-prone, leading to unwelcome additional downtime. SRM avoids this through the automation and simplicity provided by its highly customizable recovery plans and virtual machine protection groups.

Tested, proven, and documented procedures are imperative for a disaster recovery solution. VMware vCenter Site Recovery Manager offers the ability to create operational procedures that allow for periodic, automated, repeatable and, most importantly, non-disruptive execution of disaster recovery tests. This helps to avoid or reduce the costly downtime to business-critical applications caused by traditional disaster recovery testing, while helping to ensure that actual disaster recoveries execute as effortlessly as possible.

Site Recovery Manager workflow features

VMware vCenter Site Recovery Manager includes six workflow capabilities, each discussed in this section:

- Test Recovery
- Cleanup
- Planned migration
- Reprotection

- Failback
- Disaster recovery event

Test Recovery

In order to assure confidence in a recovery plan that has been configured for failover, it is important to test that recovery plan to make sure all parts are working as expected. The test recovery operation provided by VMware SRM allows administrators to create recovery plans and realistically test them without having to execute an actual failover. VMware SRM, in conjunction with an SRA, allows recovery plans to be executed in test mode and recovers the virtual machines on the recovery site in an isolated environment where they cannot interfere with production applications. The virtual machines are not recovered on the remote site R2 device but instead on a local copy of that R2 device, ensuring that replication and remote protection are not affected during the test.

Cleanup

The cleanup operation simply reverts the environment after a test recovery operation has been executed. Once the test recovery is complete and all aspects of the plan have been approved and verified, the administrator can revert the environment to its previous state quickly and easily by initiating the cleanup operation workflow.

Planned migration

Prior to Site Recovery Manager 5, workflow capabilities included both execution and testing of a recovery plan. With version 5 and higher, VMware introduced a new workflow designed to deliver migration from a protected site to a recovery site through execution of a planned migration workflow. Planned migration ensures an orderly and pretested transition from a protected site to a recovery site while minimizing the risk of data loss. The migration runs in a highly controlled fashion: it will halt the workflow if an error occurs, which differs from a traditional disaster-event recovery plan. With Site Recovery Manager 5 and higher, all recovery plans, whether they are for migration or recovery, run as part of a planned workflow that ensures that systems are properly shut down and that data is synchronized with the recovery site prior to migration of the workloads. This ensures that systems are properly quiesced and that all data changes have been completely replicated prior to starting the virtual machines at the recovery site.

If, however, an error is encountered during the recovery plan execution, planned migration will stop the workflow, providing an opportunity to fix the problem that caused the error before attempting to continue.

Reprotection

After a recovery plan or planned migration has run, there are often cases where the environment must continue to be protected against failure, to ensure its resilience or to meet objectives for disaster recovery. With Site Recovery Manager 5 and higher, reprotection is an extension to recovery plans for use only with array-based replication. It enables the environment at the recovery site to establish synchronized replication and protection of the environment back to the original protected site. After failover to the recovery site, selecting to reprotect the environment will establish synchronization and attempt to replicate data between the protection groups now running at the recovery site and the previously protected, primary site. This capability to reprotect an environment ensures that environments are protected against failure even after a site recovery scenario. It also enables automated failback to a primary site following a migration or failover.

Failback

An automated failback workflow can be run to return the entire environment to the primary site from the secondary site. This happens after reprotection has ensured that data replication and synchronization have been established to the original site. Failback runs the same workflow that was used to migrate the environment to the protected site. It guarantees that the critical systems encapsulated by the recovery plan are returned to their original environment. The workflow will execute only if reprotection has successfully completed. Failback ensures the following:

- All virtual machines that were initially migrated to the recovery site will be moved back to the primary site.
- Environments that require that disaster recovery testing be done with live environments with genuine migrations can be returned to their initial site.
- Simplified recovery processes will enable a return to standard operations after a failure.

- Failover can be done in case of disaster or in case of planned migration.

Disaster recovery event

As previously noted, all recovery plans since Site Recovery Manager 5 include an initial attempt to synchronize data between the protection and recovery sites, even during a disaster recovery scenario. During a disaster recovery event, an initial attempt is made to shut down the protection group's virtual machines and establish a final synchronization between sites. This is designed to ensure that virtual machines are static and quiescent before running the recovery plan, to minimize data loss where possible during a disaster. If the protected site is no longer available, the recovery plan continues to execute and runs to completion even if errors are encountered. This attribute minimizes the possibility of data loss while still enabling disaster recovery to continue, balancing the requirement for virtual machine consistency with the ability to achieve aggressive recovery point objectives.

SRDF Storage Replication Adapter

VMware vCenter Site Recovery Manager leverages storage array-based replication such as EMC Symmetrix Remote Data Facility (SRDF) to protect virtual machines in VMware vCenter vSphere environments. The interaction between VMware Site Recovery Manager and the storage array replication is managed through a well-defined set of specifications. The VMware-defined specifications are implemented by the storage array vendor as a lightweight application referred to as the storage replication adapter, or SRA. The EMC SRDF Adapter is an SRA that enables VMware Site Recovery Manager to interact with an EMC VMAX storage environment. Each version of SRM requires a certain version of the SRDF SRA.

Note: The SRA supports both VMAX/VMAX3/VMAX All Flash homogeneous and VMAX/VMAX3/VMAX All Flash heterogeneous environments.

The EMC SRDF Adapter for VMware Site Recovery Manager, through the use of consistency groups, can also address configurations in which the data for a group of related virtual machines is spread across multiple RDF groups.

Business continuity solutions for VMware vSphere

Providing a business continuity solution for a production environment built on the VMware virtualization platform is, fortunately, not too complicated. In addition to, or in place of, a tape-based disaster recovery solution, EMC SRDF can be used as the mechanism to replicate data from the production data center to the remote data center. The copy of the data in the remote data center can be presented to a VMware ESXi cluster. The vSphere environment at the remote data center thus provides a business continuity solution.

Recoverable vs. restartable copies of data

The VMAX-based replication technologies can generate a restartable or a recoverable copy of the data. The difference between the two types of copies can be confusing; a clear understanding of the differences between the two is critical to ensure that the recovery goals for a vSphere environment can be met.

Recoverable disk copies

A recoverable copy of the data is one in which the application (if it supports it) can apply logs and roll the data forward to a point in time after the copy was created. The recoverable copy is most relevant in the database realm, where database administrators use it frequently to create backup copies of databases, e.g., Oracle. In the event of a failure of the database, the ability to recover the database not only to the point in time when the last backup was taken, but also to roll forward subsequent transactions up to the point of failure, is critical to most business applications. Without that capability, in the event of a failure, there will be an unacceptable loss of all transactions that occurred since the last backup.

Creating recoverable images of applications running inside virtual machines using EMC replication technology requires that the application or the virtual machine be shut down when it is copied. A recoverable copy of an application can also be created if the application supports a mechanism to suspend writes when the copy of the data is created. Most database vendors provide functionality in their RDBMS engines to suspend writes. This functionality has to be invoked inside the virtual machine when EMC technology is deployed to ensure that a recoverable copy of the data is generated on the target devices.

Restartable disk copies

If a copy of a running virtual machine is created using EMC Consistency technology (see Copying running virtual machines using EMC Consistency technology for more detail) without any action inside the virtual machine, the copy is normally a restartable image of the virtual machine. This means that when the data is used on cloned virtual machines, the operating system and/or the application goes into crash recovery. The exact implications of crash recovery in a virtual machine depend on the application that the machine supports:

- If the source virtual machine is a file server or runs an application that uses flat files, the operating system performs a file-system check and fixes any inconsistencies in the file system. Modern file systems such as Microsoft NTFS use journals to accelerate the process.
- When the virtual machine runs a database or application with a log-based recovery mechanism, the application uses the transaction logs to bring the database or application to a point of consistency. The process deployed varies depending on the database or application, and is beyond the scope of this document.

Most applications and databases cannot perform roll-forward recovery from a restartable copy of the data. Therefore, in most cases a restartable copy of data created from a virtual machine that is running a database engine is inappropriate for performing backups. However, applications that use flat files or virtual machines that act as file servers can be backed up from a restartable copy of the data. This is possible since none of the file systems provide a roll-forward logging mechanism that enables recovery.

SRDF/S

Synchronous SRDF (SRDF/S) is a method of replicating production data changes between locations less than 200 km apart. Synchronous replication takes writes that are inbound to the source VMAX and copies them to the target VMAX. The resources of the storage arrays are exclusively used for the copy. The write operation from the virtual machine is not acknowledged back to the host until both VMAX arrays have a copy of the data in their cache. Readers should consult support.emc.com for further information about SRDF/S.

Figure 257 is a schematic representation of the business continuity solution that integrates a VMware environment and SRDF technology. The solution shows two virtual machines accessing devices on the VMAX storage arrays as RDMs.

Figure 257 Business continuity solution using SRDF/S in a VMware environment using RDM

While the above solution uses TimeFinder/Clone, in a VMAX3 environment it would be preferable to use TimeFinder/SnapVX. An equivalent solution utilizing VMFS is depicted in Figure 258.

The proposed solution provides an excellent opportunity to consolidate the vSphere environment at the remote site. It is possible to run virtual machines on any VMware ESXi host in the cluster. This capability also allows the consolidation of the production VMware ESXi hosts to fewer VMware ESXi hosts at the recovery site. However, by doing so, there is a potential for duplicate virtual machine IDs when multiple virtual machines are consolidated at the remote site. If this occurs, the virtual machine IDs can be easily changed at the remote site.
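Before the remote copies are put to use, the state of the synchronous replication can be verified with Solutions Enabler. This is a hedged sketch; prod_dg is a hypothetical RDF1 device group containing the R1 devices backing the replicated datastores.

```
# Confirm the R1/R2 pairs are in the Synchronized state.
symrdf -g prod_dg query

# If the pairs are split or suspended, re-establish synchronous replication.
symrdf -g prod_dg establish
```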

Figure 258 Business continuity solution using SRDF/S in a VMware environment with VMFS

SRDF/A

SRDF/A, or asynchronous SRDF, is a method of replicating production data changes from one VMAX to another using delta set technology. Delta sets are collections of changed blocks grouped together by a time interval that can be configured at the source site. The default time interval is 30 seconds. The delta sets are then transmitted from the source site to the target site in the order in which they were created. SRDF/A preserves the dependent-write consistency of the database at all times at the remote site. Further details about SRDF/A and the differences between the VMAX and VMAX3/VMAX All Flash can be obtained on support.emc.com.

The distance between the source and target VMAX is unlimited and there is no host impact. Writes are acknowledged immediately when they hit the cache of the source VMAX. Figure 259 shows the SRDF/A process as applied to a VMware environment using RDM.

Figure 259 Business continuity solution using SRDF/A in a VMware environment with RDM

A similar process, as shown in Figure 260, can also be used to replicate disks containing VMFS.

Figure 260 Business continuity solution using SRDF/A in a VMware environment with VMFS

Before the asynchronous mode of SRDF can be established, an initial copy of the production data has to occur. In other words, a baseline copy of all the volumes that are going to participate in the asynchronous replication must be completed first. This is usually accomplished using the adaptive-copy mode of SRDF.
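A hedged Solutions Enabler sketch of that sequence follows; prod_dg is a hypothetical RDF1 device group.

```
# Perform the baseline copy in adaptive copy disk mode.
symrdf -g prod_dg set mode acp_disk
symrdf -g prod_dg establish

# Monitor until the pairs are (near) fully synchronized...
symrdf -g prod_dg query

# ...then switch the group to asynchronous mode.
symrdf -g prod_dg set mode async
```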

SRDF/Metro

SRDF/Metro is a feature available on the VMAX3 and VMAX All Flash arrays, starting with HYPERMAX OS, which provides active/active access to the R1 and R2 of an SRDF configuration. In traditional SRDF, R1 devices are Read/Write accessible while R2 devices are Read Only/Write Disabled. In SRDF/Metro configurations, both the R1 and R2 are Read/Write accessible. This is accomplished by having the R2 take on the personality of the R1 in terms of geometry and, most importantly, the WWN. By sharing a WWN, the R1 and R2 appear as a single shared virtual device across the two arrays for host presentation. A host, or typically multiple hosts in a cluster, can read and write to both the R1 and R2. SRDF/Metro ensures that each copy remains current and consistent and addresses any write conflicts which might arise. A simple diagram of the feature is shown in Figure 261.

Figure 261 SRDF/Metro

Figure 262 is a schematic representation of the business continuity solution that integrates a VMware environment and SRDF/Metro. The solution shows two virtual machines accessing devices on VMAX All Flash storage arrays on VMFS.

Figure 262 Business continuity solution using SRDF/Metro in a VMware environment with VMFS

In an SRDF/Metro environment, the VMFS datastores are the same on both sites, which permits different VMs to run at each site on the same datastore, depicted in Figure 262 as VMs A on the R1 and VMs B on the R2. If copies of the A VMs are desired at the remote site, TimeFinder SnapVX could be used to take snapshots of the R2 devices and link them to target devices; after resignaturing and mounting, the A VMs could be registered for testing. One advantage that SRDF/Metro has over other SRDF modes is that, because the same device/datastore is available in an active-active state at both sites, VMware technology can be used to create copies rather than EMC technology. Clones of the A VMs can be taken and placed on ESXi hosts at the R2 site in the same datastore as the source. This alleviates the need for additional target devices.
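A hedged SnapVX sketch of the snapshot-and-link approach mentioned above; the array ID, storage group names, and snapshot name are hypothetical, and the linked target storage group is assumed to already be masked to the remote ESXi hosts.

```
# Take a point-in-time snapshot of the storage group containing the R2 devices.
symsnapvx -sid 0123 -sg Metro_R2_SG establish -name vmfs_copy

# Link the snapshot to a target storage group so the copy becomes host-addressable;
# -copy requests a full background copy rather than a space-efficient link.
symsnapvx -sid 0123 -sg Metro_R2_SG -snapshot_name vmfs_copy link -lnsg Mount_SG -copy
```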

While VM clones are still possible in other SRDF modes, there must be a target datastore available at both sites that is distinct from the datastores participating in the SRDF relationships. For more detail on SRDF/Metro configurations, see the white paper Using SRDF/Metro in a VMware Metro Storage Cluster Running Oracle E-Business Suite and 12C RAC, available on support.emc.com.

SRDF Automated Replication

SRDF Automated Replication, or SRDF/AR, is a continuous movement of dependent-write consistent data to a remote site using SRDF adaptive copy mode and TimeFinder consistent split technology. TimeFinder copies are used to create a dependent-write consistent point-in-time image of the data to be replicated. The BCVs also have an R1 personality, which means that SRDF in adaptive-copy mode can be used to replicate the data from the BCVs to the target site. Since the BCVs are not changing, replication completes in a finite length of time. The time required for replication depends on the size of the network pipe between the two locations and the amount of data to copy.

On the remote VMAX, another BCV copy of the data is made using the data on the R2s. This is necessary because of the behavior of SRDF adaptive copy mode, which does not maintain write order when moving data. The copy of data on the R2s is inconsistent until all of the tracks owed from the BCVs are copied. Without a BCV at the remote site, if a disaster were to occur while the R2s were synchronizing with the BCV/R1s, no valid copy of the data would be available at the DR site. The BCV copy of the data in the remote VMAX is commonly called the gold copy of the data. The whole process then repeats. With SRDF/AR, there is no host impact. Writes are acknowledged immediately when they hit the cache of the source VMAX.

A solution using SRDF/AR to protect a VMware environment is depicted in Figure 263.

Figure 263 Business continuity solution using SRDF/AR in a VMware environment with VMFS

Cycle times for SRDF/AR are usually in the minutes-to-hours range. The RPO is double the cycle time in a worst-case scenario. This may be a good fit for customers who do not require RPOs measured in seconds but also cannot afford to have RPOs measured in days. An added benefit of a longer cycle time is the increased probability of a track being updated more than once within the interval; for instance, an hour-long interval rather than a 30-second interval. Since such a multi-changed track is sent only once in that hour, rather than multiple times if the interval were 30 seconds, the SRDF/AR solution has reduced bandwidth requirements.

A variation of the SRDF/AR solution presented in Figure 263 is the SRDF/AR multi-hop solution. The variation enables long-distance replication with zero seconds of data loss. This is achieved through the use of a bunker VMAX.

Production data is replicated synchronously to the bunker VMAX, which is within 200 km of the production VMAX. The distance between the production and bunker VMAX is far enough to avoid potential disasters at the primary site, but is also close enough to operate SRDF in synchronous mode. The data on the R2s at the bunker site is replicated to another VMAX located thousands of miles away using the process shown in Figure 263. Readers should consult the Symmetrix Remote Data Facility (SRDF) Product Guide on support.emc.com for more detailed discussions of SRDF/AR technology.

SRDF/Star overview

The SRDF/Star disaster recovery solution provides advanced multi-site business continuity protection for enterprise environments. It combines the power of Symmetrix Remote Data Facility (SRDF) synchronous and asynchronous replication, enabling the most advanced three-site business continuance solution available today. SRDF/Star enables concurrent SRDF/S and SRDF/A operations from the same source volumes, with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage, a capability only available through SRDF/Star software. This capability takes the promise of concurrent synchronous and asynchronous operations (from the same source device) to its logical conclusion. SRDF/Star allows you to quickly re-establish protection between the two remote sites in the event of a primary site failure, and then just as quickly restore the primary site when conditions permit. With SRDF/Star, enterprises can quickly resynchronize the SRDF/S and SRDF/A copies by replicating only the differences between the sessions, allowing for much faster resumption of protected services after a source site failure.

Note: SRDF/Metro does not support SRDF/Star.

Concurrent SRDF (3-site)

Concurrent SRDF allows the same source data to be copied concurrently to VMAX arrays at two remote locations. As Figure 264 shows, the capability of a concurrent R1 device to have one of its links synchronous and the other asynchronous is supported as an SRDF/Star topology. Additionally, SRDF/Star allows the reconfiguration between concurrent and cascaded modes dynamically.

Figure 264 Concurrent SRDF

Note: SRDF/Metro supports 3-site Concurrent SRDF beginning with the HYPERMAX OS Q3 SR.

Cascaded SRDF (3-site)

Introduced with Enginuity 5773, Cascaded SRDF allows a device to be both a synchronous target (R2) and an asynchronous source (R1), creating an R21 device type. SRDF/Star supports the cascaded topology and allows dynamic reconfiguration between cascaded and concurrent modes. See Figure 265 for a representation of the configuration.

Figure 265 Cascaded SRDF

Note: SRDF/Metro supports 3-site Cascaded SRDF beginning with the HYPERMAX OS Q3 SR.

Extended Distance Protection

The SRDF/Extended Distance Protection (EDP) functionality is a licensed SRDF feature that offers a long-distance disaster recovery (DR) solution. Figure 266 shows an SRDF/EDP configuration. SRDF/EDP is a three-site configuration that requires Enginuity version 5874 or higher running on Symmetrix B and Enginuity version 5773 or higher running on Symmetrix A and Symmetrix C.

Figure 266 SRDF/EDP basic configuration

This is achieved through a Cascaded SRDF setup, where a VMAX system at a secondary site uses DL R21 devices to capture only the differential data that would be owed to the tertiary site in the event of a primary site failure. It is the data on the diskless R21 devices that helps these configurations achieve a zero Recovery Point Objective (RPO). Diskless R21 devices (DL R21) operate like R21 volumes in Cascaded SRDF except that DL R21 devices do not store full copies of data and have no local mirrors (RAID groups) configured to them. Unlike ordinary R21 volumes, DL R21 devices are not VMAX logical volumes. The DL R21 devices must be preconfigured within the VMAX system prior to being placed in an SRDF relationship.

Note: Diskless R21 devices are not supported on the VMAX3/VMAX All Flash.

Configuring remote site virtual machines on replicated VMFS volumes

The process of creating matching virtual machines at the remote site is similar to the process that was presented for cloning virtual machines in VMware environments in Chapter 5, Cloning of vSphere Virtual Machines. Inherent in ESXi 5 and 6 is the ability to handle replicas of source volumes. Since storage array technologies create exact replicas of the source volumes, all information, including the unique VMFS signature (and label, if applicable), is replicated. If a copy of a VMFS volume is presented to any VMware ESXi 5 or 6 cluster, the VMware ESXi automatically masks the copy. The device that holds the copy is determined by comparing the signature stored on the device with the computed signature. R2s, for example, have different unique IDs from the R1 devices with which they are associated. Therefore, the computed signature for an R2 device differs from the one stored on it. This enables the VMware ESXi host to always identify the copy correctly.

vSphere 5 and 6 include the ability to individually resignature and/or mount VMFS volume copies through the use of the vSphere Client or with the remote CLI utility vicfg-volume. The resignature and force-mount are volume-specific, which allows for greater granularity in the handling of snapshots. This feature is very useful when creating and managing volume copies created by local replication products such as TimeFinder, and, in this particular case, remote replication technologies such as Symmetrix Remote Data Facility (SRDF).
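A minimal sketch of the per-volume handling with the remote CLI utility named above; the server name and the datastore label are hypothetical placeholders.

```
# List VMFS copies detected by the host (run from a vCLI or vMA installation).
vicfg-volume --server esxi-dr-01.example.com -l

# Either resignature the copy...
vicfg-volume --server esxi-dr-01.example.com -r "Prod_Datastore"

# ...or force mount it persistently with its existing signature.
vicfg-volume --server esxi-dr-01.example.com -M "Prod_Datastore"
```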

Figure 267 shows an example of this capability. In the screenshot, there are two highlighted volumes, VMFS_Datastore_VMAX and snap-6e6918f4-VMFS_Datastore_VMAX. These volumes are hosted on two separate devices; one device is a copy of the other created with the TimeFinder software.

Figure 267 Individually resignature a volume

The vSphere Client was used to resignature the volume with the name snap-6e6918f4-VMFS_Datastore_VMAX. The Add Storage wizard in Figure 268 demonstrates how the volume was resignatured.

Figure 268 Add Storage wizard used to resignature a volume

Creating virtual machines at the remote site

The following steps describe the process of creating virtual machines at the remote site for ESXi 5 or 6.

1. The first step to create virtual machines at the remote site is to enable access to the R2 devices for the VMware ESXi cluster at the remote data center.

2. The vCenter Server does not allow duplication of objects in a Datacenter object. If the same vCenter is used to manage the VMware ESXi hosts at the production and remote sites, the servers should be added to different Datacenter constructs in vCenter Server.

3. The SCSI bus should be scanned after ensuring the R2 devices are split from the R1 devices once the device pairs are in a synchronized or consistent state. The scanning of the SCSI bus can be done either using the service console or the vSphere Client. The devices that hold the copy of the VMFS are then displayed on the VMware ESXi cluster at the remote site.

4. In vSphere 5 and 6 environments, when a virtual machine is created, all files related to the virtual machine are stored in a directory on a datastore. This includes the configuration file and, by default, the virtual disks associated with the virtual machine. Thus, the configuration files automatically get replicated with the virtual disks when storage array-based copying technologies are leveraged. The registration of the virtual machines from the target device can be performed using the vSphere CLI tools or the vSphere Client. The re-registration of cloned virtual machines is not required when the configuration information of the production virtual machine changes. The changes are automatically propagated and used when needed. As recommended in step 2, if the VMware ESXi hosts at the remote site are added to a separate Datacenter construct, the names of the virtual machines at the remote Datacenter match those of the production Datacenter.

5. The virtual machines can be started on the VMware ESXi hosts at the remote site without any modification if the following requirements are met:

- The target VMware ESXi hosts have the same virtual network switch configuration, i.e., the name and number of virtual switches are duplicated from the source VMware ESXi cluster.
- All VMFS volumes used by the source virtual machines are replicated. Furthermore, the VMFS labels are unique on the target VMware ESXi hosts.

- The minimum memory and processor resource reservation requirements of all cloned virtual machines can be supported on the target VMware ESXi host(s). For example, if ten source virtual machines, each with a memory resource reservation of 256 MB, need to be cloned, the target VMware ESXi cluster should have at least 2.5 GB of physical RAM allocated to the VMkernel.
- Virtual devices such as CD-ROM and floppy drives are attached to physical hardware, or are started in a disconnected state when the virtual machines are powered on.

6. The cloned virtual machines can be powered on with the vSphere Client or through command line utilities when required.

Configuring remote site virtual machines with replicated RDMs

When an RDM is generated, a virtual disk is created on a VMFS pointing to the physical device that is mapped. The virtual disk that provides the mapping also includes the unique ID of the device it is mapping. The configuration file for the virtual machine using the RDM contains an entry that includes the label of the VMFS holding the RDM and the name of the RDM. If the VMFS holding the information for the virtual machines is replicated and presented to the VMware ESXi hosts at the remote site, the virtual disks that provide the mapping are also available in addition to the configuration files. However, the mapping files cannot be used on the VMware ESXi hosts at the remote site since they point to non-existent devices. Therefore, EMC recommends using a copy of the source virtual machine's configuration file instead of replicating the VMFS.

The following steps create copies of production virtual machines using RDMs at the remote site (a command-line sketch of steps 3 and 4 follows the list):

1. On the VMware ESXi host cluster at the remote site, create a directory on a datastore to hold the files related to the cloned virtual machine. A VMFS on internal disk, unreplicated SAN-attached disk, or NAS-attached storage should be used for storing the files for the cloned virtual disk. This step has to be performed only once.

2. Copy the configuration file for the source virtual machine to the directory created in step 1. This step has to be repeated only if the configuration of the source virtual machine changes.

3. Register the cloned virtual machine using the vSphere Client or the VMware remote CLI tools. This step does not need to be repeated.

4. Generate RDMs on the target VMware ESXi host in the directory created in step 1. The RDMs should be configured to address the R2.

5. The virtual machine at the remote site can be powered on using either the vSphere Client or the VMware remote CLI tools when needed.

Note: The process listed in this section assumes the source virtual machine does not have a virtual disk on a VMFS. The process to clone virtual machines with a mix of RDMs and virtual disks is complex and beyond the scope of this document.
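The registration and RDM generation in steps 3 and 4 might look like the following from the ESXi shell. This is a hedged sketch: the datastore name, directory, virtual machine name, and naa identifier are all hypothetical placeholders.

```
# Step 4: create an RDM mapping file pointing at the R2 device
# (-z = physical/pass-through compatibility; use -r for virtual compatibility).
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx \
    /vmfs/volumes/local_ds/clone_vm/clone_vm_rdm.vmdk

# Step 3: register the cloned virtual machine from its copied .vmx file,
# then power it on when needed using the VM ID returned by registervm.
vim-cmd solo/registervm /vmfs/volumes/local_ds/clone_vm/clone_vm.vmx
vim-cmd vmsvc/power.on <vmid>
```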

In the Add Storage Wizard, the slow scan manifests itself in the second screen when ESXi is attempting to display available devices. The screen will say Loading until it completes reading through these write-disabled devices, which can take more than 25 seconds per device as seen in Figure 269. Normally ESXi would spend on the order of milliseconds reading a device.

Figure 269 Device query screen in the Add Storage Wizard

During the device query operation of the wizard, the hostd process examines all of the devices that do not fail the initial SCSI open() call, such as the SRDF R2 volumes. Even though write-disabled, these devices still have a status of Ready and therefore succeed the call. The ESXi host therefore can detect that they host a VMFS volume (since they are only write-disabled, not read/write disabled). Furthermore, ESXi detects that the volumes have an invalid signature. After the ESXi host detects this situation it attempts to update metadata on the unresolved VMFS volume. This attempt fails because the SRDF R2 device is protected and write-disabled by the active SRDF session. Nevertheless, ESXi will attempt this metadata update numerous times before excluding it as a candidate and moving on. These retries greatly increase the time to discover all of the candidate devices in the Add Storage Wizard.

Note: Invalid signatures are caused when a VMFS volume is exactly copied from a source device to a target device. Since the WWN, and therefore the Network Address Authority (NAA) identifier, is different on the source and target device, the VMFS signature, which is based on the device NAA, is consequently invalid on the target device.

In addition to the increased scan times, the vmkernel.log will contain messages similar to the following:

Jul 20 10:05:25 VMkernel: 44:00:29: cpu46:4142)ScsiDeviceIO: 1672: Command 0x2a to device "naa.xxxxxx" failed H:0x0 D:0x2 P:0x3 Possible sense data: 0x7 0x27 0x0.

In order to avoid this retry situation and thereby reduce scan times, users can use SYMCLI (Figure 270) or Unisphere for VMAX (Figure 271) to set the write-disabled devices to a state of Not Ready. When a device is Not Ready, ESXi fails the SCSI open() call and immediately skips to the next device. Setting devices to a status of Not Ready will therefore remediate the Add Storage Wizard slowdown.

Figure 270 Setting devices not ready with Solutions Enabler CLI

Note: The SRDF Storage Replication Adapter for VMware SRM does not support devices being set to Not Ready by default. In order for the SRA to manage these devices, the advanced SRA option SetReplicaDeviceToReady must be enabled.
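To complement the SYMCLI operation referenced in Figure 270, the following is a minimal Solutions Enabler sketch of setting an R2 device Not Ready and later returning it to Ready; the array ID 1234 and device number 00ABC are hypothetical placeholders, and exact option placement can vary by Solutions Enabler version:

   # Set a hypothetical R2 device Not Ready so ESXi skips it during device scans
   symdev -sid 1234 not_ready 00ABC -noprompt

   # Return the same device to Ready (for example, ahead of an SRDF failover)
   symdev -sid 1234 ready 00ABC -noprompt

The Unisphere for VMAX procedure shown in Figure 271 below accomplishes the same state change through the GUI.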

Figure 271 Setting devices not ready with Unisphere for VMAX

The other operation that is subject to long rescan times is the HBA storage rescan. Fortunately, setting the devices to Not Ready will also prevent issues with this operation in all versions of ESX(i).
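As a quick way to verify how a host views the replicated copies, the unresolved VMFS volumes that ESXi has detected can be listed from the ESXi shell. This is a minimal sketch, assuming an ESXi 5.x or later host; the command only reports the copies and performs no resignaturing or mounting:

   # List unresolved VMFS volumes (unmounted VMFS copies with invalid signatures),
   # such as those residing on readable, write-disabled SRDF R2 devices
   esxcli storage vmfs snapshot list

Because a device set to Not Ready fails the SCSI open() call, its VMFS copy should no longer appear in this output, which is a simple indication that the remediation described above is in effect.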
