EMC VPLEX VIRTUAL EDITION: USE CASES AND PERFORMANCE PLANNING


Abstract

This white paper provides an overview of VPLEX/VE use cases and performance characteristics.

Copyright 2014 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication require an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part Number H13216.1

For more information: Explore and compare the latest VPLEX products in the EMC Store

Table of Contents

Executive summary
Document scope and limitations
Audience
Introduction
VPLEX/VE Overview
Architecture and Deployment
VPLEX Witness
VPLEX/VE vApp Architecture
VPLEX/VE Storage Concepts
VPLEX/VE Use Cases
Block Storage High Availability
Application HA
Migration
Dynamic Workload Load Balancing across Sites
Immediate Cross-Site vMotion
VPLEX/VE Performance Planning
Summary
Background
Hardware Configuration
Key Performance Indicators (KPIs)
Simulated Application Profiles
I/O Profile of Simulated Applications
Notes on Selecting the Application Profile
Testing Methodology
Type of Measurements
Data Charts
Conclusion
VPLEX References
Supplemental VMware References

Executive summary

EMC VPLEX/VE delivers software-defined data mobility and availability within and across sites for VMware-based infrastructures. VPLEX/VE is a unique virtual storage technology that enables mission-critical applications to remain up and running during a variety of planned and unplanned downtime scenarios. VPLEX/VE permits fluid, non-disruptive data movement, taking technologies that were built assuming a single block storage instance and enabling them to function across disparate iSCSI arrays and across distance. EMC VPLEX/VE leverages the proven advantages of the EMC VPLEX family and puts that power in a customer-installable configuration built on commercial off-the-shelf (COTS) hardware, managed using standard VMware workflows and interfaces.

Document scope and limitations

The use cases and performance overview discussed in this white paper are applicable to the VPLEX/VE 2.1 release. The use cases and performance details provided in this white paper apply only to environments consisting of the following elements:

- ESXi 5.1 or 5.5
- VMware Enterprise Plus licensing
- EMC VPLEX/VE 2.1
- Two or more EMC VNXe 3100 and 3200 series iSCSI arrays
- 8-32 ESXi hosts (see the EMC VPLEX/VE Product Guide, available at http://support.emc.com, for detailed host requirements)

Please consult with EMC sales and support representatives if there is uncertainty as to the applicability of this information for specific VPLEX/VE environments.

Audience

This white paper is intended for technology architects, storage administrators, and VMware system administrators who use or are planning to use EMC VPLEX/VE technology. It is assumed that the reader is familiar with VMware and iSCSI storage array technologies.

Introduction

Data center infrastructure is undergoing a massive shift. Virtualization in the data center has had a profound impact on customer expectations of flexibility and agility. Especially as customers become 70+% virtualized, they have the potential to realize tremendous operational savings by consolidating management in their virtualization framework. In this state, customers typically do not want to deploy physical appliances and want everything handled from their virtualization context. Similar changes in networking and storage have meant that the basic infrastructure now runs completely in software on generic hardware. This is the software-defined data center.

VPLEX has been no stranger to this conversation. Given the very strong affinity of VPLEX to VMware use cases, customers have been asking for a software-only version of VPLEX. That is precisely what is delivered with the VPLEX/VE 2.1 software release.

This white paper reviews the following topics:

- VPLEX/VE Technology Overview
- VPLEX/VE Architecture and Deployment
- VPLEX/VE Use Cases
- VPLEX/VE Performance Planning

VPLEX/VE Overview

EMC VPLEX/VE represents a next-generation architecture for data mobility and information access. This architecture is based on EMC's 20+ years of expertise in designing, implementing, and perfecting enterprise-class intelligent cache and distributed data protection solutions.

VPLEX/VE for VMware vSphere provides simplified block storage management and data mobility for VMware vSphere clusters. VPLEX/VE is managed exclusively through a plug-in to the VMware vSphere Web Client. The VPLEX/VE virtual appliance runs at each site or fault domain within an ESXi cluster or VMware vSphere Metro Storage Cluster (vMSC). VPLEX/VE enables automatic data sharing and workload balancing along with HA and non-disruptive application mobility across sites.

These benefits are achieved by providing a highly available shared block storage device to the ESXi cluster. By default, the shared block storage device provided by VPLEX/VE is formatted with VMFS and then consumed as a Distributed Datastore by the ESXi cluster. By eliminating the need to move data between datastores before migrating a virtual machine from one ESXi host to another, vMotion and HA are now enabled across separate fault domains or sites with up to 10 ms RTT latency between them.

Figure 1: VPLEX/VE within a two-site vSphere cluster (each site runs a VPLEX/VE vApp consisting of a vSMS and vDirectors on ESXi hosts, backed by VNXe 3150/3200 arrays over an iSCSI IP SAN)

VPLEX/VE is a two-site (fault domain) solution. Comparing the hardware version of VPLEX Metro to VPLEX/VE, the VPLEX directors have been converted into vDirectors. The VPLEX/VE vDirector configuration is called a "4x4," meaning four vDirectors are deployed at each site. From a configuration standpoint, that is analogous to two VPLEX engines on each site of a VPLEX Metro.

Architecture and Deployment

VPLEX/VE is deployed as a vApp into an ESXi cluster managed by a single vCenter Server instance. The VMware vSphere Web Client and Desktop Client have no concept of physical sites or fault domains within a cluster, so it is incumbent on the VMware administrator to know the physical location of each ESXi host in the cluster during VPLEX/VE deployment. One suggestion is to use host names that indicate the Site in which each host is located.

The VPLEX/VE installation wizard organizes the vDirectors and vManagement Server as follows (Figure 2: VPLEX/VE vDirector and vManagement Server Deployment). Within the ESXi cluster:

- ESXi hosts are organized into two Sites, each consisting of four or more hosts
- One VPLEX/VE vApp is deployed per Site
- Each VPLEX/VE Site consists of four director virtual machines (vDirectors)
- Each VPLEX/VE Site contains one management server virtual machine
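The layout rules above lend themselves to a quick pre-deployment sanity check. The following is a minimal illustrative sketch, not an EMC tool; the site names, host inventory, and checks are assumptions standing in for a real environment:

```python
# Hypothetical pre-deployment sanity check for a planned VPLEX/VE layout.
# The site names and host inventory below are illustrative assumptions.

SITES_REQUIRED = 2       # VPLEX/VE is a two-site solution
MIN_HOSTS_PER_SITE = 4   # each Site needs four or more ESXi hosts
                         # (one vDirector runs on each of four hosts)

def check_layout(site_hosts: dict[str, list[str]]) -> list[str]:
    """Return a list of rule violations for a planned {site: [hosts]} map."""
    problems = []
    if len(site_hosts) != SITES_REQUIRED:
        problems.append(f"expected {SITES_REQUIRED} sites, found {len(site_hosts)}")
    for site, hosts in site_hosts.items():
        if len(set(hosts)) != len(hosts):
            problems.append(f"{site}: duplicate host names")
        if len(hosts) < MIN_HOSTS_PER_SITE:
            problems.append(f"{site}: {len(hosts)} hosts, need {MIN_HOSTS_PER_SITE}+")
    return problems

plan = {
    "Site-1": ["esxi-1", "esxi-2", "esxi-3", "esxi-4"],
    "Site-2": ["esxi-5", "esxi-6", "esxi-7", "esxi-8"],
}
for msg in check_layout(plan) or ["layout satisfies the documented rules"]:
    print(msg)
```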

vcenter Server ESXi Cluster Site-1 ESXi Hosts Site-2 ESXi Hosts Site-1 vapp Site-1 Virtual Directors Site-1 Virtual Mgmt Server Site-2 vapp Site-2 Virtual Directors Site-2 Virtual Mgmt Server Figure 3: VPLEX/VE within vsphere Web Client VPLEX Witness VPLEX/VE provides an intelligent quorum mechanism to allow VPLEX/VE to make a distinction between a site loss and a wan partition. This mechanism is known as VPLEX Witness. It consists of a single virtual machine that is deployed within a 3 rd fault domain. VPLEX Witness communicates with Site 1 and Site 2 via independent IP networks. Figure 4: VPLEX Witness Deployed in a 3rd Fault Domain Technical Note: The VPLEX Witness virtual machine must reside on an ESXi host outside of the ware cluster that contains the two VPLEX/VE Sites. The design goal is to run the VPLEX Witness in an isolated 3 rd fault domain up to 1000ms (1 second) RTT from Site1 and Site 2. 8

VPLEX/VE allows the VMware administrator to select which Site will continue I/O operations during a dual WAN-partition event. Setting the affinity of the virtual machines that comprise an application to align with this Site Bias enables them to ride through dual WAN-partition events non-disruptively. For Site failures, VPLEX/VE automatically guides the surviving, healthy site to continue I/O regardless of the Site Bias setting: no matter which site fails, I/O carries on at the surviving site.

VPLEX/VE vApp Architecture

Each VPLEX/VE Site consists of four vDirectors running on independent ESXi hosts and one management server (vSMS). Running one vDirector per ESXi host ensures the VPLEX/VE vApp can survive at least two ESXi host failures before a Site is lost.

Figure 5: vDirector Deployment across ESXi Hosts (one logical group of hosts spanning two physical locations/fault domains, with a vSMS and four vDirectors per Site)

VPLEX/VE Storage Concepts

Without VPLEX/VE, ESXi hosts in a VMware cluster consume shared block storage devices from physical storage arrays within the same physical site. The ESXi hosts and the virtual machines at one site do not typically consume the iSCSI storage or datastores from another site. When attempting to vMotion a virtual machine across sites, the block storage device containing the datastore and the corresponding files that make up the virtual machine would not be available to the ESXi host at the second site. This creates the need for Storage vMotion to move the data across sites before using vMotion to move the virtual machine.

VPLEX/VE creates Distributed Datastores with backing physical block storage devices at both sites. The VPLEX/VE distributed cache coherence feature ensures that the data is consistent and available in an active-active configuration at both sites. These Distributed Datastores are backed by distributed virtual volumes within VPLEX/VE and are accessible to all ESXi hosts within the vSphere cluster.

Virtual machines provisioned on Distributed Datastores can be moved across sites with vMotion, with no limitation imposed by site-local physical storage.

Figure 6: VPLEX/VE Storage I/O Path
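Because a Distributed Datastore is backed by a RAID-1 mirror across sites with write-through caching (noted in the performance section later in this paper), a write completes only after both legs of the mirror have committed it, while reads are served locally. The toy model below, with assumed and purely illustrative latency numbers, shows why the inter-site RTT adds to write latency but not to read latency:

```python
# Toy latency model of a cross-site, write-through RAID-1 mirrored volume.
# Both numbers are assumptions for illustration, not VPLEX/VE measurements.

LOCAL_ARRAY_MS = 1.5   # assumed service time of the local iSCSI array leg
WAN_RTT_MS = 5.0       # assumed inter-site round-trip time

def write_latency_ms() -> float:
    # Write-through RAID-1: the write is acknowledged only after both legs
    # commit it, so the remote leg adds a WAN round trip to its array
    # service time. The two legs are written in parallel.
    local_leg = LOCAL_ARRAY_MS
    remote_leg = WAN_RTT_MS + LOCAL_ARRAY_MS
    return max(local_leg, remote_leg)

def read_latency_ms() -> float:
    # Reads are served from the local leg of the mirror.
    return LOCAL_ARRAY_MS

print(f"read ~{read_latency_ms():.1f} ms, write ~{write_latency_ms():.1f} ms")
```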

VPLEX/VE Use Cases

VPLEX/VE allows VMware administrators to combine ESXi infrastructure at disparate data centers or fault domains into a single pool of resources. Storage, CPU, memory, and network capacity at two locations become a single, larger, more resilient pool of IT infrastructure.

Block Storage High Availability

The AccessAnywhere feature of VPLEX/VE ensures cache-coherent, active-active access to datastores across VPLEX/VE sites. The features of VPLEX/VE increase resiliency in the event of a site outage: data is protected against disasters and against failure of components within the data centers. With VPLEX/VE, applications can withstand failures of storage arrays and site components. The VPLEX/VE components are not disrupted by a sequential failure of up to two vDirectors in a site. The failure of a VPLEX/VE site or a dual WAN-partition event is tolerated to the extent that the site configured with the site bias continues to access the storage infrastructure. This means that if a storage array is unavailable, another storage array configured under VPLEX/VE continues to serve the I/O.

Figure 7: VPLEX/VE Provides RAID-1 Mirroring across Sites

Application HA

VPLEX/VE provides a shared block storage device that enables a single datastore to be accessible across sites. In doing so, the ESXi cluster can leverage all of the traditional functionality found within a single site across two sites separated by up to 10 ms RTT latency. This means VMware HA can be applied to virtual machines within the ESXi cluster to provide automatic (hands-off) virtual machine restart in the event of a site loss or hardware failure.

Migration

VPLEX/VE simplifies data center block storage management and eliminates outages during data migrations between arrays or during array technology upgrades and maintenance. With VPLEX/VE, a VMware administrator is able to:

- Perform non-disruptive array technology refresh tasks using the two-way data exchange between locations
- Create active-active ESXi cluster configurations to achieve active use of resources at both sites
- Provide instant access to data between data centers
- Non-disruptively move workloads between data centers with up to 10 ms RTT latency

Figure 8: Non-Disruptive Array Technology Refresh

Dynamic Workload Load Balancing across Sites

- Dynamically move storage from busy arrays to idle arrays for better asset utilization
- Use VMware DRS to balance virtual machine workloads within and across data centers

Figure 9: Cross-Site DRS and vMotion

Immediate Cross-Site vMotion

Leverage vMotion without the need to use Storage vMotion when moving virtual machines between sites.

VPLEX/VE Performance Planning

Summary

This section provides information about VPLEX/VE performance along with some sample performance metrics. The information is straightforward but deliberately simplified; a number of simplifying assumptions have been made. Individual results WILL vary.

Background

Adding VPLEX/VE to a vSphere cluster provides increased block storage availability, flexibility, and capability. These benefits come at the cost of added processing overhead and latency on the ESXi hosts running VPLEX/VE as they service I/O. The performance of the host system hardware and the IP network infrastructure is therefore critical to the success of a VPLEX/VE deployment.

A representative vSphere cluster environment, along with the corresponding performance data charts and tables, is provided to illustrate what VPLEX/VE was able to achieve for various I/O workloads. All numbers are based on measurements from the test environment with VPLEX/VE running against common I/O profiles.

Hardware Configuration

The example vSphere cluster test configuration, which includes the VPLEX/VE systems under test (SUTs), has the following components:

- A 4x4 VPLEX/VE Metro configuration, with each vDirector running on an isolated ESXi server (see the table below for performance information on the vDirector hosts)
- Cluster storage: VNXe iSCSI arrays with LUNs evenly distributed across the two clusters
- An IP WAN network consisting of two iSCSI Ethernet networks

Performance Attribute    Description
CPU cores                12 CPUs x 2.5 GHz
Processor type           Intel Xeon CPU E5-2640 at 2.5 GHz
Processor sockets        2
Cores per socket         6
Logical processors       24
Hyper-Threading          Active
Number of 1 GbE NICs     11

Figure 10: ESXi Test Host Specifications

Key Performance Indicators (KPIs)

Five simulated application workloads were measured. These are the same application I/O footprints that are part of the key performance indicators (KPIs) for our VPLEX appliance characterization.

Simulated Application Profiles

The following simulated application profiles were tested:

1. OLTP1 (mail application)
2. OLTP2 (small Oracle application / database transactional)
3. OLTP2-HW (large Oracle application / heavy-weight transactions)
4. DSS2 (database decision support)
5. DSS128K (multimedia streaming / large block reads and writes)

These simulated application profiles are each a composite of five simple I/O profiles: Random Read Hits (rrh), Random Reads (Miss) (rr), Random Writes (rw), Sequential Reads (sr), and Sequential Writes (sw).

I/O Profile of Simulated Applications

The I/O size and proportion of each component I/O profile vary across the application profiles, as detailed in the following tables.

OLTP1
  I/O Profile   I/O Size (KB)   % of Total
  rrh           4               40
  rr            4               24
  rw            4               16
  sr            4               10
  sw            4               10

OLTP2
  I/O Profile   I/O Size (KB)   % of Total
  rrh           8               20
  rr            8               45
  rw            8               15
  sr            64              10
  sw            64              10

OLTP2HW
  I/O Profile   I/O Size (KB)   % of Total
  rrh           8               10
  rr            8               35
  rw            8               35
  sr            64              5
  sw            64              15

DSS2
  I/O Profile   I/O Size (KB)   % of Total
  rrh           4               0
  rr            4               15
  rw            4               5
  sr            64              70
  sw            64              10

DSS128K
  I/O Profile   I/O Size (KB)   % of Total
  rrh           64              18
  rr            64              18
  rw            64              4
  sr            128             48
  sw            128             12

Notes on Selecting the Application Profile

This section serves as a mini guide to selecting the application profile that is most like the application that will be used with VPLEX/VE. The subtitles of the five profiles reflect their origins: OLTP1 is based on Exchange; OLTP2 on a typical Oracle OLTP application; OLTP2HW on the TPC-C benchmark (which emulates a heavy-weight OLTP application); DSS2 on a typical decision-support application; and DSS128K on a multimedia distribution application.
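The tables above translate directly into a small sizing aid. In this illustrative sketch the profile mixes are copied from the tables, and the conversion uses the standard weighted-average relationship between IOps and KBps (it is not an EMC formula):

```python
# Profile mixes copied from the tables above: (I/O size in KB, fraction).
PROFILES = {
    "OLTP1":   [(4, .40), (4, .24), (4, .16), (4, .10), (4, .10)],
    "OLTP2":   [(8, .20), (8, .45), (8, .15), (64, .10), (64, .10)],
    "OLTP2HW": [(8, .10), (8, .35), (8, .35), (64, .05), (64, .15)],
    "DSS2":    [(4, .00), (4, .15), (4, .05), (64, .70), (64, .10)],
    "DSS128K": [(64, .18), (64, .18), (64, .04), (128, .48), (128, .12)],
}

def avg_io_kb(profile: str) -> float:
    """Weighted average I/O size for a profile, in KB."""
    return sum(size * frac for size, frac in PROFILES[profile])

def iops_to_kbps(profile: str, iops: float) -> float:
    """Convert an IOps demand into the equivalent KBps for a profile."""
    return iops * avg_io_kb(profile)

for name in PROFILES:
    print(f"{name:8s} avg I/O {avg_io_kb(name):6.1f} KB; "
          f"10,000 IOps = {iops_to_kbps(name, 10_000):9,.0f} KBps")
```

Run against the tables, OLTP1 averages 4 KB per I/O while DSS128K averages roughly 102 KB, which is why a fixed KBps demand produces far more IOps (and earlier saturation) under OLTP1, as discussed below.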

OLTP1 is the lightest application type; that is, an OLTP1 transaction takes the fewest resources to execute. Yet if the workload is specified in terms of KBps, OLTP1 is also the densest application: it can saturate the CPU resources with the smallest bandwidth, because OLTP1 is made up exclusively of small (4 KB) I/Os. When comparing a given bandwidth's worth of application demand, it should not be surprising that OLTP1 transactions saturate the system earlier than OLTP2HW transactions; the former generate many more IOps.

The correct application profile to select is the one that is predominantly like the application workload that will be deployed on VPLEX/VE storage, and the correct metric for sizing the workload (IOps or KBps) is the one for which the most environment-specific data is available. Experience shows that application estimates tend to be too conservative: pick the average or, better, the median case, not the one-time highest peak ever seen in a one-minute interval.

Testing Methodology

Type of Measurements

The performance measurements presented in this document are called steady-state performance or steady-state latency curves. An initial peak workload is used to determine the workload range ("peak" = 100% duty cycle). A series of offered workloads at various duty cycles (10%, 30%, 50%, 70%, and 99% of peak) is then run, attaining a steady-state measurement for each. This type of run is useful for creating more realistic workloads, or at least for measuring latencies, by plotting the observed load against the observed latency. The latency measured at peak is not steady-state and is therefore not a real or useful latency.

Data Charts

The charts provide an easy way to look up the vApp vCPU utilization (%) and the latencies (average response time) by picking either the I/Os per second or the KB per second on the X-axis and following up to the relevant curve and across to the relevant Y-axis. The default IP network MTU of 1500 is shown; testing did not show a significant difference in performance when using an MTU of 9000.

For example: 65,000 OLTP1 I/Os per second would run at approximately 40% utilization (green line) with about a 2.3 ms response time (red line).

Technical Note: For the following charts, the WAN RTT latency between Sites was set to 0 milliseconds. VPLEX/VE uses write-through caching, so the actual WAN RTT latency for the environment should be added to each of the Average RT (ms) data points on the chart for the best overall estimates.
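The adjustment described in the technical note is a straightforward addition. A minimal sketch, using the example chart reading from the text and an assumed 5 ms WAN RTT:

```python
# Per the technical note: the charts were measured at 0 ms WAN RTT, so add
# the environment's actual RTT to each charted Average RT value. The RTT
# below is an illustrative assumption; the chart reading is the example
# point quoted in the text.

WAN_RTT_MS = 5.0  # assumed inter-site round-trip time for this environment

chart_iops, chart_avg_rt_ms = 65_000, 2.3  # example OLTP1 chart reading

estimated_rt_ms = chart_avg_rt_ms + WAN_RTT_MS
print(f"{chart_iops:,} OLTP1 IOps: ~{estimated_rt_ms:.1f} ms estimated "
      f"response time at {WAN_RTT_MS:.0f} ms WAN RTT")
```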

[Chart: OLTP1 — percent utilization and average response time (ms) versus I/Os per second and KB per second, MTU 1500]

[Chart: OLTP2 — percent utilization and average response time (ms) versus I/Os per second and KB per second, MTU 1500]

[Chart: OLTP2HW — percent utilization and average response time (ms) versus I/Os per second and KB per second, MTU 1500]

[Chart: DSS2 — percent utilization and average response time (ms) versus KB per second and I/Os per second, MTU 1500]

Conclusion

Adding VPLEX/VE to a new or existing vSphere cluster provides increased storage resiliency, flexibility, and functionality, but it does come with a cost. Though minor, the added I/O processing overhead and latency incurred while servicing I/O must be accounted for during the initial planning process. The performance of the underlying host hardware is the key to getting the desired result.

This document reviewed VPLEX/VE architecture, use cases, and performance characteristics. Data charts and a test methodology were provided that can be used to estimate VPLEX/VE's ability to support a given I/O workload. The estimates are based on measurements from a representative host hardware and IP network configuration running various well-known I/O profiles. A number of environmental and application assumptions were made to simplify the overall discussion. For this reason, the results indicated are only a guide and may not represent the actual results obtained in specific VPLEX/VE environments.

VPLEX References

The following reference documents are available at http://support.emc.com:

- EMC VPLEX/VE Site Preparation Guide
- EMC VPLEX/VE Release Notes
- EMC VPLEX/VE Security Configuration Guide
- EMC VPLEX/VE Configuration Worksheet
- EMC VPLEX/VE CLI Guide
- EMC VPLEX/VE Product Guide
- EMC VMware ESXi Host Connectivity Guide

Supplemental VMware References

- The CPU Scheduler in VMware vSphere 5.1: http://www.vmware.com/files/pdf/techpaper/VMware-vSphere-CPU-Sched-Perf.pdf
- Performance Best Practices for VMware vSphere 5.1: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf
- Best Practices for Running VMware vSphere on iSCSI: http://www.vmware.com/files/pdf/iSCSI_design_deploy.pdf
- Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs: http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf
- Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5 - Performance Study: http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf

For more information: Explore and compare the latest VPLEX products in the EMC Store