EMC VSPEX WITH EMC XTREMSF AND EMC XTREMCACHE


DESIGN GUIDE

EMC VSPEX WITH EMC XTREMSF AND EMC XTREMCACHE

EMC VSPEX

Abstract

This guide describes how to use EMC XtremSF and EMC XtremCache in a virtualized environment with an EMC VSPEX Proven Infrastructure for VMware vSphere or Microsoft Hyper-V. It also illustrates how to configure XtremSF, allocate XtremCache resources following best practices for maximum effectiveness, and use all the benefits that XtremCache offers.

December 2013

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published December 2013. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. Part Number H

Contents

Chapter 1: Introduction  9
    Purpose
    Business value
    Scope
    Audience
    Terminology

Chapter 2: Before You Start  13
    Deployment workflow overview
    Essential reading
        VSPEX Solution Overviews
        VSPEX Implementation Guides
        VSPEX Proven Infrastructures

Chapter 3: Solution Overview  17
    Introduction
    EMC VSPEX Proven Infrastructure
    EMC XtremCache: The business case
    XtremSF and XtremCache
        XtremSF
        XtremCache
    Business benefits of XtremSF and XtremCache
        XtremSF
        XtremCache
    XtremCache features
        XtremCache management
        VNX integration
        Oracle RAC support
        Software-only feature
        AIX support
    Solution architecture
        How XtremCache works
        XtremCache in a virtualized environment

Chapter 4: Solution Design Considerations and Best Practices  39
    Overview
    XtremCache Performance Predictor
        Requirements
        Sample output from XtremCache Performance Predictor
    VSPEX environments that can benefit from XtremCache
    Selecting an XtremSF card
        Design best practices
        MLC versus SLC
    Virtualization design considerations
        Sizing recommendations
        Performance recommendations
        XtremCache placement considerations
        Flexibility
        Design best practices
        VMware considerations
        Hyper-V considerations

Chapter 5: XtremCache Solution for Applications  54
    Overview
        Architecture of XtremCache deployment on VMware
        Architecture of XtremCache deployment on Hyper-V
    XtremCache for SQL Server OLTP database
        Benefits of XtremCache in a SQL Server OLTP environment
        Best practices
        Use case design and deployment
        Configuration of XtremCache in the VMware environment
        Test results
    XtremCache for Exchange Server
        Benefits of XtremCache in an Exchange environment
        Best practices
        Use case design and deployment
        Configuration of XtremCache in the VMware environment
        Test results
    XtremCache for SharePoint
        Benefits of XtremCache in a SharePoint environment
        Best practices
        Use case design and deployment
        Configuration of XtremCache in the VMware environment

        Test results
    XtremCache for Oracle OLTP database
        Benefits of XtremCache in an Oracle environment
        Best practices
        Use case design and deployment
        Test results
    XtremCache for private cloud
        Benefits of XtremCache in a private cloud environment
        Best practices
        Use case design and deployment
        Configuration of XtremCache in the VMware environment
        Test results

Chapter 6: References  90
    EMC documentation
    Other documentation
    Links

Appendix A: Ordering Information  94
    Ordering XtremSF and XtremCache

Figures

Figure 1.  VSPEX Proven Infrastructure
Figure 2.  I/O gap between the processor and storage subsystems
Figure 3.  VMware live migration
Figure 4.  XtremCache data deduplication
Figure 5.  XtremCache data deduplication architecture overview
Figure 6.  Split-card mode used for SQL Server configuration
Figure 7.  XtremCache Management Center
Figure 8.  XtremCache deployment in an Oracle RAC environment
Figure 9.  Read Hit example with XtremCache
Figure 10. Read Miss example with XtremCache
Figure 11. Write example with XtremCache
Figure 12. XtremCache implementation in a VMware environment
Figure 13. XtremCache in a VMware environment
Figure 14. XtremCache in a Hyper-V environment
Figure 15. XtremCache Performance Predictor sample output: collecting performance data
Figure 16. XtremCache Performance Predictor sample output: I/O size distribution
Figure 17. XtremCache Performance Predictor sample output: predicting the cache hit rate
Figure 18. XtremCache Performance Predictor sample output: disk latency prediction
Figure 19. XtremCache use cases
Figure 20. Comparison between SLC and MLC flash cell data storage
Figure 21. Cache device configuration screen
Figure 22. XtremCache configuration using EMC VSI plug-in
Figure 23. XtremCache implementation in VMware environment for VSPEX
Figure 24. XtremCache implementation in Hyper-V environment for VSPEX
Figure 25. Architecture of the VSPEX Proven Infrastructure for XtremCache deployment on VMware
Figure 26. Architecture of the VSPEX Proven Infrastructure for XtremCache deployment on Hyper-V
Figure 27. Architecture design for XtremCache-enabled SQL Server virtual environment
Figure 28. SQL Server AlwaysOn XtremCache deployment
Figure 29. Performance boost after enabling XtremCache
Figure 30. Architecture design for XtremCache-enabled Exchange virtual environment
Figure 31. XtremCache deployment for Exchange 2010 on vSphere
Figure 32. Enabling data deduplication on the XtremCache device

Figure 33. Exchange 2010 performance with XtremCache and LoadGen workload
Figure 34. XtremCache statistics with data deduplication
Figure 35. Exchange server CPU utilization with XtremCache data deduplication
Figure 36. Exchange server disk latencies with XtremCache data deduplication
Figure 37. Exchange database LUN performance with XtremCache data deduplication
Figure 38. Architecture design for XtremCache-enabled SharePoint environment
Figure 39. XtremCache deployment for SharePoint 2010 on vSphere
Figure 40. Content database latency dropped after enabling XtremCache
Figure 41. Full crawl performance improved after enabling XtremCache
Figure 42. Architecture design for XtremCache-enabled Oracle 11g R2 environment
Figure 43. XtremCache deployment for Oracle 11g R2 on vSphere
Figure 44. OLTP TPM improvement
Figure 45. Architecture design for XtremCache-enabled private cloud environment with multiple applications
Figure 46. Deduplication statistics for SQL Server OLTP

Tables

Table 1.  Terminology
Table 2.  Deployment process: XtremSF and XtremCache overlay on VSPEX Proven Infrastructure
Table 3.  Performance characteristics of selected XtremSF cards
Table 4.  XtremSF device card group for cache pool in ESXi environment
Table 5.  XtremCache management utilities
Table 6.  XtremCache management utilities
Table 7.  SLC and MLC flash comparison
Table 8.  Recommended cache for each application
Table 9.  Performance data with OLTP load
Table 10. XtremCache deployment in a private cloud environment
Table 11. Performance summary for the private cloud environment


Chapter 1: Introduction

This chapter presents the following topics:

- Purpose
- Business value
- Scope
- Audience
- Terminology

Purpose

EMC VSPEX Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides partners with the ability to plan and design the virtual assets to support applications such as Microsoft SQL Server, Microsoft SharePoint, Microsoft Exchange, and Oracle Database on a VSPEX private cloud.

The EMC VSPEX with EMC XtremSF and EMC XtremCache solution provides partners with a server-based caching solution that reduces application latency and increases throughput. The solution runs on a VMware vSphere or Microsoft Hyper-V virtualization layer, backed by the highly available EMC VNX family of storage systems. The computing and network components, while vendor-definable, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

This design guide describes how to select and configure XtremCache resources for a VSPEX Proven Infrastructure and includes best practices and the results of use case testing.

Business value

IT administrators are often challenged to improve the performance of applications running heavy input/output (I/O) loads while minimizing the cost of the supporting systems. These I/O-sensitive applications are typically limited by storage latency and response times.

XtremCache is intelligent caching software that uses server-based flash technology to reduce latency and accelerate throughput for dramatic application performance improvement. XtremCache accelerates read performance by putting the data closer to the application. It also protects data by using a write-through cache to the networked storage array to deliver persistent high availability (HA), integrity, and disaster recovery. XtremCache, coupled with array-based EMC FAST software, creates the most efficient and intelligent I/O path from the application to the datastore. The result is a networked infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.

Scope

This design guide describes an overlay solution: how to select and deploy XtremCache resources on a VSPEX Proven Infrastructure for VMware vSphere or Microsoft Hyper-V. It also presents best practices and recommendations for using XtremCache to improve the performance of virtualized applications running on a VSPEX Proven Infrastructure.

Audience

This guide is intended for qualified EMC VSPEX partners. The guide assumes that VSPEX partners who intend to deploy XtremSF and XtremCache on their applications are:

- Qualified to sell and implement the application(s) that will be used in conjunction with the XtremCache solution
- Qualified by EMC to sell, install, and configure the EMC VNX family of storage systems
- Certified to sell VSPEX Proven Infrastructures
- Qualified to sell, install, and configure the network and server products required for VSPEX Proven Infrastructures
- Trained in and familiar with EMC's XtremSF hardware and XtremCache software

Readers must also have the necessary technical training and background to install and configure:

- EMC VSPEX private cloud solutions for VMware vSphere or Microsoft Hyper-V, depending on the hypervisor in use
- Windows Server 2012 with Hyper-V or VMware vSphere as the virtualization platform

External references are provided where applicable, and EMC recommends that readers become familiar with these documents. For details, see Essential reading.

Terminology

Table 1 defines the terminology used in this guide.

Table 1.  Terminology

- Cache page size: The smallest unit of allocation inside the cache, typically a few kilobytes in size. The default XtremCache page size is 8 KB.
- CSV: Cluster-shared volume. A Windows Server clustering feature that enables multiple clustered virtual machines to use the same logical unit number (LUN).
- DAS: Direct-attached storage
- DSS: Decision support system
- IOPS: Input/output operations per second
- MLC: Multi-level cell flash. A flash memory technology that uses multiple levels per cell to allow more bits to be stored using the same number of transistors.
- NFS: Network File System

- PCIe: Peripheral Component Interconnect Express
- SLC: Single-level cell flash. A type of solid-state storage (SSD) that stores one bit of information per cell of flash media.
- tempdb: A system database used by Microsoft SQL Server as a temporary working area during processing.
- VHDX: Hyper-V virtual hard disk format
- VMDK: VMware virtual machine disk format
- Working set: The frequently accessed data that is likely to be promoted to XtremCache
- XtremCache: EMC server flash-caching software
- XtremSF: EMC Peripheral Component Interconnect Express (PCIe) flash cards with industry-leading performance

Chapter 2: Before You Start

This chapter presents the following topics:

- Deployment workflow overview
- Essential reading

Deployment workflow overview

EMC recommends that you follow the process flow in Table 2 to design and implement your XtremSF and XtremCache overlay on the VSPEX Proven Infrastructure.

Table 2.  Deployment process: XtremSF and XtremCache overlay on VSPEX Proven Infrastructure

- Step 1: Review the Xtrem products and features. (Reference: EMC documentation)
- Step 2: Determine if the XtremCache solution is appropriate for your application. (Reference: Solution Design Considerations and Best Practices)
- Step 3: Select and order the right VSPEX Proven Infrastructure. (Reference: VSPEX Proven Infrastructures)
- Step 4: Select the required XtremCache hardware and determine where to place the cards. (Reference: XtremCache Solution for Applications)
- Step 5: Deploy and test your virtualized applications. (Reference: VSPEX Implementation Guides)

Essential reading

EMC recommends that you read the following documents, available from the VSPEX space in the EMC Community Network or from the VSPEX Enablement Center.

VSPEX Solution Overviews

Refer to the following VSPEX Solution Overview documents:

- EMC VSPEX Server Virtualization for Midmarket Businesses
- EMC VSPEX Server Virtualization for Small and Medium Businesses

VSPEX Implementation Guides

Refer to the following VSPEX Implementation Guides:

- EMC VSPEX for Virtualized Microsoft Exchange 2010 with Microsoft Hyper-V
- EMC VSPEX for Virtualized Microsoft Exchange 2010 with VMware vSphere
- EMC VSPEX for Virtualized Microsoft Exchange 2013 with Microsoft Hyper-V
- EMC VSPEX for Virtualized Microsoft Exchange 2013 with VMware vSphere
- EMC VSPEX for Virtualized Microsoft SharePoint 2010 with Microsoft Hyper-V
- EMC VSPEX for Virtualized Microsoft SharePoint 2010 with VMware vSphere
- EMC VSPEX for Virtualized Microsoft SharePoint 2013 with Microsoft Hyper-V
- EMC VSPEX for Virtualized Microsoft SharePoint 2013 with VMware vSphere
- EMC VSPEX for Virtualized Microsoft SQL Server 2012 with Microsoft Hyper-V
- EMC VSPEX for Virtualized Microsoft SQL Server 2012 with VMware vSphere

- EMC VSPEX for Virtualized Oracle Database 11g OLTP

VSPEX Proven Infrastructures

Refer to the following VSPEX Proven Infrastructures:

- EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 100 Virtual Machines
- EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 500 Virtual Machines
- EMC VSPEX Private Cloud: VMware vSphere 5.1 for up to 1,000 Virtual Machines
- EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines
- EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 100 Virtual Machines
- EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 500 Virtual Machines
- EMC VSPEX Private Cloud: Microsoft Windows Server 2012 with Hyper-V for up to 1,000 Virtual Machines


Chapter 3: Solution Overview

This chapter presents the following topics:

- Introduction
- EMC VSPEX Proven Infrastructure
- EMC XtremCache: The business case
- XtremSF and XtremCache
- Business benefits of XtremSF and XtremCache
- XtremCache features
- Solution architecture

Introduction

This design guide describes the requirements and process for deploying EMC XtremSF and XtremCache on VSPEX Proven Infrastructures. The guidance applies to all VSPEX Proven Infrastructures unless specifically stated otherwise.

This chapter provides an overview of the VSPEX Proven Infrastructure, of XtremSF and XtremCache, and of the key technologies used in the XtremSF and XtremCache overlay for the VSPEX Proven Infrastructure. A VSPEX Proven Infrastructure includes server, storage, network, and application components that focus on small and medium business private cloud environments. The XtremSF and XtremCache overlay reduces latency and accelerates throughput for dramatic application performance improvement.

EMC VSPEX Proven Infrastructure

A VSPEX Proven Infrastructure, as shown in Figure 1, is a modular, virtualized infrastructure validated by EMC and delivered by EMC partners. VSPEX includes components supporting virtualization, servers, network, storage, and backup, designed by EMC to deliver reliable and predictable performance. VSPEX enables businesses to transform their IT, application, and end-user computing environments by providing complete virtualization solutions that have been sized and tested by EMC.

Figure 1.  VSPEX Proven Infrastructure

VSPEX provides the flexibility to choose network, server, and virtualization technologies that fit a customer's environment to create a complete virtualization solution. VSPEX delivers faster deployment for EMC partner customers, with greater simplicity and efficiency, more choice, and lower risk to a customer's business.

EMC XtremCache: The business case

The capabilities of modern processors continue to widen the performance gap between CPUs and disks, and the disk datastore often becomes the bottleneck in a deployed solution. As processing capacity and workloads increase, the storage system is challenged to keep pace with growing I/O demands. The performance of the magnetic disk remains relatively flat while CPU performance improves 100-fold every decade, as shown in Figure 2. XtremSF flash drives can help to close this gap.

Figure 2.  I/O gap between the processor and storage subsystems

Flash technology can be used in different ways in the storage environment to compensate for the performance limitations of disk-based storage. EMC's architectural approach is to use the right technology in the right place at the right time. This includes using flash in the following ways:

- In the storage array
- As an array-side cache
- As a server-side cache
- As a tier
- As the storage for the entire application

XtremSF and XtremCache

XtremCache (formerly known as VFCache or EMC XtremSW Cache) is the first step in EMC's long-term server flash strategy. This strategy delivers a server-side storage product that combines intelligent caching software (XtremCache) with server-based Peripheral Component Interconnect Express (PCIe) flash hardware (XtremSF). XtremCache software turns the XtremSF card into a caching device that enhances the performance of a wide variety of critical transactional and decision-support applications. XtremCache can run with a wide variety of multi-level cell (MLC) and single-level cell (SLC) XtremSF flash cards. VSPEX partners can order XtremCache software and XtremSF hardware through Channel Express. For ordering information, refer to Appendix A: Ordering Information.

XtremSF

XtremSF is a single, low-profile server flash card that fits in any rack-mounted server within the power envelope of a single PCIe slot, and is available in a broad set of MLC and SLC capacities. It can be deployed:

- As local storage that sits within the server to deliver high performance
- In combination with XtremCache software to improve network storage array performance, while maintaining the level of protection required by critical application environments

XtremCache

You can use EMC XtremCache software to create a server-side cache for data. XtremCache is designed with the following basic principles:

- Performance: Reduce latency and increase throughput to dramatically improve application performance.
- Intelligence: Add another tier of intelligence by extending FAST array-based technology into the server.
- Protection: Deliver performance with protection by using the high availability and disaster recovery features of EMC networked storage.

Table 3 shows the performance characteristics of some selected XtremSF cards.

Table 3.
Performance characteristics of selected XtremSF cards

    Card         Read BW   Write BW   Rnd 4K     Rnd 4K      Rnd 4K      Read Lat.   Write Lat.
                 (MB/s)    (MB/s)     Read IOPS  Write IOPS  Mixed IOPS  (4 KB, μs)  (4 KB, μs)
    350 GB MLC   3,175                           23 K        105 K
    550 GB MLC   1,555                175 K      50 K        110 K
    700 GB MLC   3,215                750 K      50 K        190 K
    1.4 TB MLC   3,215                750 K      95 K        200 K
    2.2 TB MLC   2,600                340 K      110 K       220 K
    350 GB SLC   3,215                715 K      95 K        415 K
    700 GB SLC                        750 K      205 K       415 K

Business benefits of XtremSF and XtremCache

XtremSF

XtremSF delivers extremely high performance with low latency and enables applications to achieve memory-class performance. It eliminates the need for additional memory or storage capacity purchases, and thereby helps reduce the overall deployment footprint. The XtremSF family of server-based PCIe flash cards offers customers the following benefits:

- Leading performance: XtremSF flash devices are proven to deliver a record 1.13 million IOPS in a standard form factor, an achievement unmatched in the industry. The XtremSF device's next-generation design delivers twice the throughput of other offerings in the market to enhance real-world workloads in Web-scale and other applications.
- Unmatched flexibility: The XtremSF flash device is available in a broad range of eMLC (from 350 GB up to 2.2 TB) and SLC (350 GB and 700 GB) capacities. In addition, when deployed with XtremCache, XtremSF devices can be used as caching devices for accelerated performance with array protection for applications such as Oracle, Microsoft SQL Server, and Microsoft Exchange.
- New levels of efficiency: XtremSF flash devices deliver the industry's lowest total cost of ownership (TCO), up to 58 percent better than other offerings. All XtremSF products are standard half-height, half-length, 25 W PCIe cards, providing the highest storage capacity in the smallest footprint for maximum performance, best density, and lowest power consumption, reducing CPU utilization by up to 50 percent.
XtremCache

XtremCache delivers the following major benefits:

- Provides performance acceleration for read-intensive workloads
- As a write-through cache, enables accelerated performance with the protection of the back-end networked storage array
- Provides an intelligent path for the I/O and ensures that the right data is in the right place at the right time

- In split-card mode, enables you to use part of the server flash for cache and the other part as DAS for temporary data
- By offloading flash and wear-level management onto the PCIe card, uses minimal CPU and memory resources from the server
- Achieves greater economic value when data deduplication is enabled, by providing an effective cache size larger than the physical size and a longer card life expectancy
- Works in both physical and virtual environments
- Integrates with EMC Virtual Storage Integrator (VSI) plug-ins for vSphere, which makes it simple to manage and monitor XtremCache in a VMware environment
- Works in active/passive clustering environments
- Works with VMware live migration
- Provides a highly scalable performance model in the storage environment

XtremCache features

Server-side flash caching for maximum speed

XtremCache software caches the most frequently referenced data on the server-based PCIe card (XtremSF), thereby putting the data closer to the application. The caching optimization automatically adapts to changing workloads by determining the most frequently referenced data and promoting it to the server flash cache. This means that the hottest (most active) data automatically resides on the PCIe card in the server for faster access.

Write-through caching for total protection

XtremCache accelerates reads and protects data by using a write-through cache to the storage array to deliver persistent high availability, integrity, and disaster recovery.

Application and storage agnostic

XtremCache is transparent to applications, so no rewriting, retesting, or recertification is required to deploy XtremCache in the environment. XtremCache works with any storage array in the environment. Regardless of the vendor or type of storage, it works seamlessly to improve the performance of the storage array.
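The promote-on-read, write-through behavior described above can be sketched in a few lines. This is a conceptual illustration only, not EMC's implementation; the class and method names are invented for the example. Reads are served from flash on a hit, misses promote the page, and every write goes to the backing array first so the array always holds the authoritative copy:

```python
from collections import OrderedDict

class WriteThroughCache:
    """Toy model of a server-side, write-through read cache."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # page_id -> data, kept in LRU order

    def read(self, page_id, array):
        if page_id in self.pages:            # hit: served from server flash
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        data = array[page_id]                # miss: read from the array...
        self._promote(page_id, data)         # ...then promote the hot page
        return data

    def write(self, page_id, data, array):
        array[page_id] = data                # write-through: array holds the
        if page_id in self.pages:            # authoritative copy; keep any
            self._promote(page_id, data)     # cached copy coherent

    def _promote(self, page_id, data):
        self.pages[page_id] = data
        self.pages.move_to_end(page_id)
        while len(self.pages) > self.capacity:
            self.pages.popitem(last=False)   # evict the least recently used
```

Because writes are acknowledged only after the array is updated, a flash card failure can cost performance but never data, which is the protection property the section describes.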
XtremCache offloads much of the read traffic from the storage array, which allows the array to allocate greater processing power to other applications. While one application is accelerated with XtremCache, the array's performance for other applications is maintained or even slightly enhanced.

XtremCache vSphere integration

XtremCache enhances both virtualized and physical environments. Integration with VSI plug-ins for vSphere makes it simple to manage and monitor XtremCache.

VMware automated live migration

XtremCache supports live virtual machine migration (vMotion), HA, DRS, and SRM. You can continue to use these technologies exactly as you would without XtremCache. During the migration process, the virtual machine remains operational; the cache is purged, with a temporary I/O performance impact.

XtremCache software must be installed on the virtual machines and the ESX host. The XtremCache device is created as an RDM device in the XtremCache pool and is passed through to the assigned virtual machine. The cache device appears to the source and target ESX hosts as a shared resource with a multipath plug-in (MPP) over RDM. On each virtual machine, a virtual SCSI device is created with a fixed ID. This ID is the same on all ESX hosts within the cluster. The virtual machine accesses the flash device using an RDM disk over that SCSI device. Write activity to the cache flows through the RDM disk to the MPP on the ESX server and from there straight to the flash device.

Figure 3 illustrates live migration with XtremCache.

Figure 3.  VMware live migration

Post migration

After the migration, the cache, which always starts cold, must warm up again because the virtual machine now uses a new physical device. This warm-up process also prevents data on the source device from becoming out of sync with the cached data. The migrated virtual machine can then follow the HA/DRS policies without any problems, regardless of XtremCache availability on the new ESXi server.

Integration with Hyper-V

XtremCache works seamlessly with the Windows Hyper-V host and the virtual machines deployed from it.
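The cold-cache behavior described for live migration can be modeled in a few lines. This is an illustrative sketch with invented names, not VMware or EMC code; it shows why I/O is temporarily slower after a migration: the purged cache misses until the working set is promoted again, while the array copy remains authoritative throughout:

```python
class CachedVM:
    """Toy model: the server-side cache is purged on live migration and
    re-warms through normal promote-on-miss behavior afterwards."""

    def __init__(self):
        self.cache = {}     # page -> data held on the local flash card
        self.hits = 0
        self.misses = 0

    def read(self, page, array):
        if page in self.cache:
            self.hits += 1
        else:
            self.misses += 1
            self.cache[page] = array[page]   # promote on miss
        return self.cache[page]

    def migrate(self):
        # The target host's flash device holds none of this VM's pages,
        # so the cache starts cold; no stale data can ever be served.
        self.cache.clear()
```

Running a read before and after `migrate()` shows the same page missing twice: once on first access and once after the move, which is the temporary performance impact the section mentions.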

Minimum impact on system resources

XtremCache does not require a significant amount of memory or CPU cycles, because all flash and wear-level management is done on the PCIe card; unlike other PCIe solutions, it does not consume server resources for these tasks. XtremCache creates the most efficient and intelligent I/O path from the application to the datastore, which results in an infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.

Data deduplication

Currently, EMC is the only vendor to provide customers with a deduplication option on a server cache flash card. Deduplication provides the following benefits:

- Better cost per gigabyte: The effective cache size is larger than the physical cache size.
- Longer card life expectancy: Fewer write operations to the flash card result in slower wear-out.

Data deduplication eliminates redundant data by storing only a single copy of identical chunks of data while keeping that data addressable. As shown in Figure 4, when deduplication is enabled, only one copy of the data is actually stored in XtremCache. With some additional memory space for pointers, the amount of data that can be cached increases dramatically.

Figure 4.  XtremCache data deduplication

Data deduplication uses server memory to run the deduplication function and maximize the capacity of XtremCache. You can enable or disable this function as needed. Figure 5 shows the deduplication architecture in XtremCache.
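The pointer-based scheme shown in Figure 4 can be sketched as follows. This is an illustrative model with invented names, not the product's algorithm: identical pages hash to the same fingerprint, so several logical pages can share one physical copy on flash, and writing a duplicate page costs no additional flash write:

```python
import hashlib

class DedupCache:
    """Toy model of content-based deduplication: identical chunks share
    one physical copy; logical pages keep only an in-memory pointer."""

    def __init__(self):
        self.index = {}   # logical page id -> fingerprint (pointer, in RAM)
        self.store = {}   # fingerprint -> (chunk, refcount), on flash

    def put(self, page_id, chunk):
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.store:                   # duplicate chunk: no new
            data, refs = self.store[fp]        # flash write, just bump
            self.store[fp] = (data, refs + 1)  # the reference count
        else:
            self.store[fp] = (chunk, 1)        # first copy: one flash write
        self.index[page_id] = fp

    def get(self, page_id):
        return self.store[self.index[page_id]][0]

    def logical_pages(self):
        return len(self.index)   # effective (deduplicated) cache size

    def physical_pages(self):
        return len(self.store)   # flash capacity actually consumed
```

Caching three 8 KB pages of which two are identical leaves three logical pages backed by only two physical pages: the gap between `logical_pages()` and `physical_pages()` is the "effective cache size larger than the physical size" benefit, and the avoided flash write is the source of the longer card life expectancy.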

Figure 5.  XtremCache data deduplication architecture overview

Active/passive clustering support

XtremCache supports several common types of active/passive native operating system clustering.

Supported active/passive clustering

Some environments (RHEL Cluster Suite, Veritas Cluster Server, and AIX PowerHA) require configuring the supplied XtremCache clustering scripts to ensure that stale data is never retrieved. The scripts use cluster management events that relate to an application service start/stop transition to trigger a mechanism that purges the cache. Other environments, such as Microsoft Cluster Service and Oracle Real Application Clusters, do not require script configuration.

Note: When you use XtremCache in a cluster, do not define quorum disks as source devices.

Microsoft active/passive cluster support

For XtremCache version 2.0 and higher, multiple applications in a cluster can use XtremCache in Microsoft Cluster Server environments. The required scripts are automatically installed during XtremCache installation. Cluster resources are automatically defined when you define a source device. Microsoft Cluster Service requires the following:

- Windows PowerShell must be installed on the cluster nodes. PowerShell is usually installed by default during a typical Windows installation.

- The XtremCache driver must be installed on all nodes in the cluster, including nodes without any server devices.
- Applications and shared disks with dependencies must be defined before you add or start the XtremCache source device. Resources appear automatically in the Microsoft Cluster Services (MSCS) window after sources are defined.

In Microsoft active/passive clusters, when the passive node of one cluster is also configured as the active node of another database cluster, XtremCache supports that configuration if you specify different XtremCache devices for the two clusters on the two nodes.

Multiple cards per server

You can install multiple XtremSF cards on a single server and configure them as cache devices to improve application performance.

XtremCache pool in an ESXi server

In VMware environments, each ESXi server can have one or more XtremCache pools. You can add devices from a specific vendor and model to the same cache pool. You can use a flash card in ESXi environments for DAS or for caching, but not for both (split-card mode is not available). When you add cards to the local cache pool, all cards from the same group are added, according to the groups defined in Table 4:

Table 4.  XtremSF device card group for cache pool in ESXi environment

- Group A: XtremSF550, XtremSF2200
- Group B: XtremSF300S
- Group C: XtremSF700, XtremSF1400, XtremSF350S, XtremSF700S

Using flash cards for DAS

If you use a flash card for DAS, any card from the same group is treated as a DAS-intended card and is not used for caching. For example:

- For an ESX host on which XtremSF550 and XtremSF2200 are installed, because both cards are from the same group, both cards must be used for caching or both must be used for DAS.
- For an ESX host on which XtremSF550 and XtremSF700 are installed, because the cards are from different groups, the cards can be used in any combination of caching and DAS with no limitations.
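The two examples above reduce to a single rule: all cards in one group must share one role. The helper below is hypothetical (the function name and the exact card-to-group mapping are assumptions based on Table 4; check the current EMC documentation for the authoritative mapping), but its verdicts match both worked examples:

```python
# Assumed card-to-group mapping, reconstructed from Table 4 (illustrative).
CARD_GROUP = {
    "XtremSF550": "A", "XtremSF2200": "A",
    "XtremSF300S": "B",
    "XtremSF700": "C", "XtremSF1400": "C",
    "XtremSF350S": "C", "XtremSF700S": "C",
}

def roles_allowed(assignment):
    """assignment maps card model -> 'cache' or 'das' for one ESXi host.
    Valid only if every group is used for a single role."""
    role_by_group = {}
    for card, role in assignment.items():
        group = CARD_GROUP[card]
        # setdefault records the first role seen for the group; any later
        # card in the same group must request the same role.
        if role_by_group.setdefault(group, role) != role:
            return False
    return True
```

So mixing roles between an XtremSF550 and an XtremSF2200 (same group) is rejected, while mixing roles between an XtremSF550 and an XtremSF700 (different groups) is allowed.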

Split-card mode support

XtremCache includes a unique software option that enables you to split the XtremSF card between cache and local storage. You can simultaneously use part of the card as a caching device for critical data and part as a read/write storage device for temporary data. You can fully optimize your workload by adjusting the caching or storage allocation without having to change your card deployment.

With this feature, both read and write operations from the application to the local storage are performed directly on the flash capacity in the server. Because the data on the local flash storage does not persist in any storage array, it is best used for ephemeral data only, such as operating system swap space and temporary file space.

Figure 6 shows an example use case for the split-card mode of XtremCache. In a SQL Server environment, where tempdb needs acceleration for both read and write operations but the database file needs only read acceleration, XtremSF can be configured so that part of the card is used as local storage for tempdb and part is used as a cache. However, this configuration has a limitation: vMotion is not viable when the tempdb storage is local.

Figure 6.  Split-card mode used for SQL Server configuration

XtremCache management
XtremCache includes the management utilities described in Table 5.

Table 5. XtremCache management utilities
Command Line Interface (CLI): You can use the CLI to execute vfcmt commands to configure and manage XtremCache. It is installed with the XtremCache installation.
VSI Plug-in: The EMC VSI Storage Viewer for VMware vSphere (VSI) is a plug-in to VMware's vSphere Client that provides a single management interface for managing EMC storage, including XtremCache, within the vSphere environment.
Lite Client: Xtrem Lite Client enables you to view, manage, and monitor the XtremCache of a physical or virtual machine or ESX host. You can also use Lite Client to manage the XtremCache of an individual system. Communication between the Lite Client and managed systems uses the CIM/XML protocol over HTTPS.
Management Center: The XtremCache Management Center provides all the functionality of the Lite Client. In addition, it retains machine history and enables you to manage multiple machines (physical, virtual, and ESX hosts) from a single view.

VNX users can benefit from the integration of the Management Center with Unisphere Remote for VNX. For VNX LUNs being accelerated by XtremCache, this integration simplifies cache performance monitoring by displaying the information directly on the Unisphere Remote management screens. In addition, you can see the health information of XtremSF flash cards that are managed by the Management Center from Unisphere Remote. To enable this integration, register the XtremCache Management Center in Unisphere Remote by providing the IP addresses and credentials of the Management Center.

Figure 7 shows the XtremCache Management Center's performance view.

Figure 7. XtremCache Management Center

Table 6 shows the differences between the XtremCache management utilities so that you can choose the one that fits the specific needs of your environment.

Table 6. XtremCache management utilities

CLI
Environment: All
Installation: Installed by default with the caching software; runs on the server
Scale: Manages a single machine
Recommended for: Scripting, when GUI access isn't needed, or for AIX

VSI plug-in
Environment: VMware
Installation: VMware plug-in for the vSphere Client
Scale: Manages multiple machines
Recommended for: Management of multiple accelerated virtual machines

Lite Client
Environment: Physical, except AIX
Installation: Desktop client installed and run on a Windows machine
Scale: Manages a single machine
Recommended for: GUI access to a single machine, with minimal setup costs

Management Center
Environment: All, except AIX
Installation: Runs as a virtual appliance (vApp) with a web interface; no client installation
Scale: Manages multiple machines
Recommended for: Managing multiple machines, or when history and audit of changes is important
External API: REST API

VNX integration
If VNX Unisphere Remote is deployed, XtremCache can be managed and monitored directly through Unisphere Remote. Configuration and monitoring, including link-and-launch capabilities to drill down and configure any cache device in the system, can all be done in a single management panel with:
LUN selection based on VNX trending analysis
Performance and health monitoring
Discovery and configuration

Oracle RAC support
XtremCache support for Oracle RAC enables active/active shared storage in an Oracle environment with a distributed cache coherency algorithm. XtremCache supports Oracle RAC in the following environment:
Oracle 11g on Windows, RHEL, or OEL (running the same OS versions that are supported by XtremCache)
Running Oracle Clusterware 11g with Ethernet interconnect
Up to eight nodes per cluster

At installation time, XtremCache automatically recognizes the presence of Oracle RAC and switches operation to clustering mode. When a block of information is overwritten in shared storage and on a cache device, the other cache devices in the cluster delete that block from their caches to prevent the use of invalid data. When a node joins the cluster, XtremCache must know about it before that node can modify shared storage. The integration with Oracle cluster management uses SCSI-3 Persistent Reservations to make the back-end storage wait until XtremCache approves the joining node before that node can access the storage. When a node leaves the cluster, all cache devices change to pass-through mode and are purged to ensure coherency.

We recommend using this feature to cache your data file LUNs. Do not use it to cache redo logs, archives, temporary data, or grid data.

Note: XtremCache is supported in AIX environments, but we do not support XtremCache for Oracle RAC in AIX environments.
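The coherency rule described above (an overwrite on one node invalidates the cached copies on every other node, so no node can serve stale data) can be modeled with a toy two-node cache. This is an illustrative sketch, not EMC code; class and variable names are invented.

```python
# Toy model of the distributed cache-coherency rule described above: a
# write-through on one node invalidates peers' cached copies of that block.
class NodeCache:
    def __init__(self):
        self.cache = {}

class Cluster:
    def __init__(self, num_nodes):
        self.nodes = [NodeCache() for _ in range(num_nodes)]

    def read(self, node_id, block, backend):
        node = self.nodes[node_id]
        if block not in node.cache:              # read miss: promote
            node.cache[block] = backend[block]
        return node.cache[block]

    def write(self, node_id, block, value, backend):
        backend[block] = value                   # write-through to shared storage
        self.nodes[node_id].cache[block] = value
        for i, peer in enumerate(self.nodes):    # invalidate peers' copies
            if i != node_id:
                peer.cache.pop(block, None)

backend = {"blk1": "v1"}
cluster = Cluster(2)
cluster.read(0, "blk1", backend)         # node 0 promotes blk1
cluster.read(1, "blk1", backend)         # node 1 promotes blk1
cluster.write(0, "blk1", "v2", backend)  # node 1's copy is invalidated
print(cluster.read(1, "blk1", backend))  # re-fetches "v2", never stale "v1"
```

The final read on node 1 misses (its copy was invalidated) and re-promotes the fresh value, which is the behavior the text describes.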

Figure 8 shows how an Oracle RAC environment deploys XtremCache.

Figure 8. XtremCache deployment in an Oracle RAC environment

Software-only feature
XtremCache's software-only feature enables you to use XtremCache as a cache device with other flash devices. For example, you can use it in blade servers as well as with many other device forms, including SATA or SAS SSD devices and PCIe cards such as HHHL and HHFL. You can also use devices with SATA, ATA, or SCSI bus configurations. VMware environments support SCSI devices only.

AIX support
XtremCache versions 2.0 and above support IBM Power7 servers with AIX 6.1 and 7.1. The standard edition of PowerVM, native clustering (PowerHA active/passive), and certified AIX SSDs are supported as underlying hardware.

Solution architecture

How XtremCache works
If an application I/O is for a source volume on which XtremCache has not been enabled, the XtremCache driver is transparent to that I/O and behaves as if there were no XtremCache driver in the server I/O stack. In the following examples, the application I/O is assumed to be for a source volume that is being accelerated by XtremCache.

Read Hit example
In this example, XtremCache has been running for some time and the application working set has already been promoted into XtremCache. The application issues a read request, and the data is present in XtremCache. This process is called a Read Hit, as shown in Figure 9.

Figure 9. Read Hit example with XtremCache

The sequence of steps in Figure 9 is:
1. The application issues a read request that is intercepted by the XtremCache driver.
2. Because the application working set has already been promoted into XtremCache, the XtremCache driver determines that the data being requested by the application already exists in XtremCache. Therefore, the read request is sent to the PCIe XtremSF card rather than to the back-end storage.
3. Data is read from XtremCache and returned to the application.

A Read Hit provides the full throughput and latency benefits of XtremCache to the application because the read request is fulfilled within the server rather than incurring the latency of going over the network to the back-end storage.

Read Miss example
In this example, the application issues a read request and the data is not present in XtremCache. This process is called a Read Miss, as shown in Figure 10. The data is not present in XtremCache either because the card has just been installed in the server or because the application working set has changed and the application has not yet referenced this data.

Figure 10. Read Miss example with XtremCache

The sequence of steps in Figure 10 is:
1. The application issues a read request that is intercepted by the XtremCache driver.
2. The XtremCache driver determines that the requested data is not in XtremCache and forwards the request to the back-end storage.
3. The data is read from the back-end storage and returned to the application.
4. Once the application read request is completed, the XtremCache driver writes the requested data to the XtremSF card. This process is called promotion. When the application reads the same data again, it will be a Read Hit, as described in the previous example.

If all the cache pages in XtremCache are already used, XtremCache uses a least-recently-used (LRU) algorithm to make room for new data: the data that is least likely to be used in the future is discarded first to create space for new XtremCache promotions.
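The Read Hit, Read Miss, promotion, and LRU eviction flow above can be sketched as a minimal read cache. This is an illustrative model only; real XtremCache works on fixed-size pages and a hardware flash card, not Python dictionaries.

```python
# Minimal sketch of the read path described above: hits are served from the
# cache, misses go to the back end and are promoted, and the least-recently-
# used entry is evicted when the cache is full.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # insertion order doubles as LRU order
        self.hits = self.misses = 0

    def read(self, block, backend):
        if block in self.pages:                  # Read Hit: served from flash
            self.hits += 1
            self.pages.move_to_end(block)        # mark as most recently used
            return self.pages[block]
        self.misses += 1                         # Read Miss: go to the array
        data = backend[block]
        if len(self.pages) >= self.capacity:     # evict least-recently-used
            self.pages.popitem(last=False)
        self.pages[block] = data                 # promotion
        return data

backend = {f"b{i}": f"data{i}" for i in range(4)}
cache = ReadCache(capacity=2)
cache.read("b0", backend)   # miss, promoted
cache.read("b0", backend)   # hit
cache.read("b1", backend)   # miss, promoted
cache.read("b2", backend)   # miss; b0 is the LRU entry and is evicted
print(cache.hits, cache.misses)  # 1 3
```

After the sequence, b0 has been evicted, so reading it again would be a Read Miss followed by a fresh promotion, which is the warm-up behavior described later in this section.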

Write example
In this example, the application has issued a write request, as shown in Figure 11.

Figure 11. Write example with XtremCache

The sequence of steps in Figure 11 is:
1. The application issues a write request that is intercepted by the XtremCache driver.
2. Because this is a write request, the XtremCache driver passes it to the back-end storage for completion. In parallel, the data in the write request is written to the XtremCache card. If the application is writing to a storage area that has already been promoted to XtremCache, the copy of that data in XtremCache is overwritten. Therefore, the application never receives a stale version of the data from XtremCache in response to future read requests. XtremCache algorithms ensure that if the application writes some data and then reads the same data later, the read requests will find the requested data in XtremCache.
3. Once the write operation is completed on the back-end storage, an acknowledgment for the write request is sent back to the application.

The process of promoting new data into XtremCache, as described in the previous two examples, is called cache warm-up. Any cache needs to be warmed up with the application working set before the application starts seeing the performance benefits. When the working set of the application changes, the cache automatically warms up with the new data over a period of time.
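The write-through behavior in the steps above can be sketched in a few lines: the write commits to the back-end array, any promoted copy in the cache is updated in parallel, and only then is the write acknowledged. Function and variable names are illustrative, not EMC code.

```python
# Sketch of the write-through path described above: the array remains the
# system of record, and a promoted copy in the cache is overwritten so
# future reads never see stale data.
def write(block, value, cache_pages, backend):
    backend[block] = value        # step 2: write through to back-end storage
    if block in cache_pages:      # parallel update of the promoted copy
        cache_pages[block] = value
    return "ack"                  # step 3: acknowledge after the array commit

backend = {"blk": "old"}
cache_pages = {"blk": "old"}      # this block was promoted earlier
write("blk", "new", cache_pages, backend)
print(cache_pages["blk"])  # "new": a future Read Hit returns fresh data
```

Note that, as the text explains, this is why XtremCache accelerates reads but does not by itself accelerate writes: every write still waits for the array.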

XtremCache in a virtualized environment
The implementation of XtremCache in a virtualized environment differs slightly from an implementation in a physical environment. In a virtualized environment, multiple virtual machines on the same server can share the performance advantages of a single XtremSF card or of multiple XtremSF cards in the XtremCache pool.

VMware environment
Figure 12 shows an XtremCache implementation in a VMware virtualized environment.

Figure 12. XtremCache implementation in a VMware environment

An XtremCache implementation in a VMware environment consists of the following components:
A physical XtremSF card on the VMware ESX server
XtremSF firmware and driver and XtremCache software on the ESX server
XtremCache software in each virtual machine that needs to be accelerated using XtremCache. This includes the XtremCache driver, command line interface (CLI) package, and XtremCache Agent. Only virtual machines that need to be accelerated with XtremCache must have the XtremCache software installed.
The Xtrem VSI plug-in for XtremCache management in the VMware vCenter client

Both raw device mapping (RDM) and Virtual Machine File System (VMFS) volumes are supported with XtremCache. Network File System (NFS) file systems in VMware environments are supported as well.

Figure 13 shows implementation details of a VMware environment.

Figure 13. XtremCache in a VMware environment

The flash device appears to the source and target ESXi hosts as a shared resource through a multipath plug-in (MPP) over RDM. On each virtual machine, a virtual SCSI device with a fixed ID that is identical across all ESX hosts in the cluster is used to access the flash device. XtremCache provides the flexibility to implement its caching capacity for one or many virtual machines in the ESX host from the vCenter server, with the VSI plug-in or XtremCache Management Center providing a single view for configuration and management.

To configure this environment:
1. Create the Xtrem shared datastore (named XtremSW_Cache_DS) on a LUN that is visible to all ESXi hosts in the datacenter that may host a virtual machine with XtremCache on it. The LUN does not need to be larger than 1 GB.
2. Add the XtremSF devices to the ESXi server XtremCache pool. Multiple cards from the same group (see Table 4) on the same ESXi server must be added to the same cache pool.
3. Enable XtremCache remote monitoring from the VSI plug-in.
4. Enable UUID mapping to support vMotion, HA, DRS, and SRM, and create a cache device from the XtremCache pool. You can determine the size of the cache device from the caching requirements of the specific virtual machine. After creating the cache device, you can use it the same way as in a physical environment.
5. Attach a source device to be accelerated. Acceleration starts by default.
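Cache devices are carved out of the ESX host's pool per virtual machine; the guide notes that pool space is granted on a first-come, first-served basis and that a VM whose request cannot be met runs in pass-through mode until space frees up. A toy model of that allocation behavior, with invented names and sizes:

```python
# Illustrative model (not an EMC API) of ESX cache-pool allocation:
# first-come, first-served, with pass-through mode as the fallback when the
# pool cannot satisfy a VM's configured cache size.
class CachePool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.mode = {}   # VM name -> "caching" or "pass-through"

    def vm_started(self, vm, cache_gb):
        if cache_gb <= self.free_gb:
            self.free_gb -= cache_gb
            self.mode[vm] = "caching"
        else:
            # No space: the VM still runs (and vMotion still succeeds),
            # just without acceleration until space becomes available.
            self.mode[vm] = "pass-through"

pool = CachePool(capacity_gb=100)
pool.vm_started("sql-vm", 80)   # fits: 80 GB granted, 20 GB left
pool.vm_started("web-vm", 40)   # does not fit: runs in pass-through mode
print(pool.mode)  # {'sql-vm': 'caching', 'web-vm': 'pass-through'}
```

The pass-through fallback is what allows vMotion to place a VM on a host with insufficient (or failed) cache hardware without breaking the VM.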

The cache space in the ESX cache pool is consumed only when the virtual machine is active and there is enough space on that ESX server. vMotion activity will succeed even if there is no cache, or not enough cache space, on the target ESX server. The cache space for XtremCache is allocated on a first-come, first-served basis. If there is no space when the virtual machine becomes active, XtremCache operates in pass-through mode (as if there were no XtremCache) until space becomes available. This allows vMotion to move virtual machines to another ESX server even if there is not enough cache space (or if the cache card has failed for any reason).

Hyper-V environment
Figure 14 shows an implementation in a Hyper-V virtualized environment.

Figure 14. XtremCache in a Hyper-V environment

An XtremCache implementation in a Hyper-V environment consists of the following components:
A physical XtremSF card on the Windows Hyper-V server
XtremSF driver and firmware on the Windows Hyper-V server
XtremCache software on the Windows Hyper-V server

In a Hyper-V environment, all the devices that need to be accelerated are configured at the Hyper-V root server level. The installation procedure is identical to the procedure for a physical Windows server. Unlike the VMware implementation, all virtual machines in the Hyper-V environment share the same physical XtremSF card installed on the Hyper-V server. Caching is provided through the Hyper-V host.

In the Hyper-V environment, XtremCache provides the caching capacity to support one or many virtual machines in the Hyper-V host:
Virtual disks can be defined either before or after configuring the LUN as a source device.
All virtual disks allocated on a source device LUN will be accelerated.
NFS, Hyper-V virtual hard disk (VHDX), and physical pass-through disk types are all supported.
Currently, Cluster Shared Volumes (CSV) are not supported.

Chapter 4 Solution Design Considerations and Best Practices

This chapter presents the following topics:
Overview
XtremCache Performance Predictor
VSPEX environments that can benefit from XtremCache
Selecting an XtremSF card
Virtualization design considerations
XtremCache placement considerations
VMware considerations
Hyper-V considerations

Overview
This chapter provides best practices and considerations for the XtremCache implementation within the VSPEX Proven Infrastructure for various applications. We(1) considered the following aspects during the solution design:
XtremCache Performance Predictor
XtremCache remote management console
XtremSF card selection
XtremCache layout design
Virtualization design

XtremCache Performance Predictor
XtremCache Performance Predictor is a free tool available on EMC Online Support. You can use this tool to estimate the benefits of implementing XtremCache in a specific environment. The tool collects data on the host side using common trace collection tools; trace analysis can run on the host or on any laptop that meets the system requirements. The tool simulates XtremCache's operations and generates a PDF output file describing the benefits.

Requirements
This tool requires no card or software purchase and runs on all XtremCache-supported operating systems. The tool creates a set of charts and graphics that show whether the environment can benefit from XtremCache, and provides an estimate of the possible performance improvement based on:
Observed host response time
Capacity used by the host
Skew level

(1) In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.

Sample output from XtremCache Performance Predictor
This section provides sample output from the XtremCache Performance Predictor tool. Figure 15 shows the performance collection and the cache configuration from a sample PDF output of the tool.

Figure 15. XtremCache Performance Predictor sample output: collecting performance data

Figure 16 shows the tool's output regarding the disk I/O size distribution. If required, you can use this information to set the page size and maximum I/O size of the actual XtremCache for better performance (the default page size is 8 KB and the default maximum I/O size is 64 KB).

Figure 16. XtremCache Performance Predictor sample output: I/O size distribution
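An I/O size distribution like the one in Figure 16 can be used for a quick sanity check of the cache settings, because I/Os larger than the maximum I/O size are not promoted into the cache. The helper below is an illustrative sketch (the trace values are invented); the 8 KB and 64 KB defaults come from the text.

```python
# Sketch: what fraction of observed I/Os is small enough to be promoted,
# given the maximum I/O size? Defaults from the text: 8 KB page size,
# 64 KB maximum I/O size. The trace is hypothetical.
PAGE_SIZE_KB = 8
MAX_IO_KB = 64

def promotable_fraction(io_sizes_kb, max_io_kb=MAX_IO_KB):
    """Fraction of observed I/Os at or under the promotion size cap."""
    eligible = sum(1 for size in io_sizes_kb if size <= max_io_kb)
    return eligible / len(io_sizes_kb)

trace = [8, 8, 16, 64, 128, 256, 8]   # I/O sizes in KB from a sample trace
print(promotable_fraction(trace))     # 5 of 7 I/Os fit under the 64 KB cap
```

If a large share of the trace exceeds the cap, raising the maximum I/O size (as discussed later in this chapter) may be worthwhile.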

Figure 17 shows the cache read analysis. If the tool indicates a very high cache hit rate, the device under load is a good candidate for XtremCache acceleration.

Figure 17. XtremCache Performance Predictor sample output: predicting the cache hit rate

Figure 18 shows an estimate of the performance improvement gained by the disk from XtremCache acceleration. This is a simulated result and serves as a good reference for how well the application will benefit from XtremCache acceleration.

Figure 18. XtremCache Performance Predictor sample output: disk latency prediction

For best results, use XtremCache Performance Predictor as a planning tool when including XtremCache in a VSPEX environment.

VSPEX environments that can benefit from XtremCache
Workload environments with these characteristics can generally benefit from XtremCache:
A high read-to-write ratio. Effectiveness is greatest where the same data blocks are read many times and seldom written.
A small working set, which receives the maximum possible boost.
Predominantly random workloads. Sequential workloads (such as data warehousing) tend to have an active dataset that is significantly larger in proportion to the available XtremCache size, and so do not benefit greatly from XtremCache.
A high degree of I/O concurrency (that is, multiple I/O threads).
Smaller I/O sizes (8 KB or lower). Applications that generate larger I/O, such as Exchange Server 2010, can still benefit.
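The characteristics above can be folded into a rough screening heuristic. The thresholds below are invented for illustration and are not EMC sizing rules; use XtremCache Performance Predictor for any real decision.

```python
# Rough screening heuristic built from the characteristics listed above:
# read-heavy, working set that fits in cache, mostly random, small I/Os.
# All thresholds are illustrative assumptions, not EMC guidance.
def likely_to_benefit(read_ratio, working_set_gb, cache_gb,
                      random_fraction, avg_io_kb):
    return (read_ratio >= 0.7            # high read-to-write ratio
            and working_set_gb <= cache_gb   # working set fits in cache
            and random_fraction >= 0.5   # predominantly random I/O
            and avg_io_kb <= 64)         # small I/O sizes

# OLTP-style workload: read-heavy, small hot set, random, 8 KB I/Os.
print(likely_to_benefit(0.9, 80, 100, 0.9, 8))      # True
# Data-warehouse-style workload: huge sequential scans with large I/Os.
print(likely_to_benefit(0.8, 2000, 100, 0.1, 256))  # False
```

The two sample calls mirror the OLTP versus data-warehouse contrast drawn elsewhere in this chapter.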

The XtremCache software enables you to tune features such as the page size and maximum I/O size, which helps greatly in these environments to continue accelerating particular I/O activity while avoiding other I/O activity (such as backup read I/O).

As explained in Chapter 3: Solution Overview, XtremCache accelerates read operations, while all write operations are written to the storage array and are not affected by XtremCache. In many cases, an improvement in write-throughput performance can be observed as a side benefit, because XtremCache offloads the read operations, enabling the array to handle more write operations.

XtremCache may not be suitable for more write-intensive or sequential applications such as data warehousing, streaming media, or Big Data applications. Figure 19 shows these use cases.

Figure 19. XtremCache use cases

The horizontal axis represents a typical read-to-write ratio for an application workload. The left side represents write-heavy applications such as backups; the right side represents read-heavy applications such as reporting tools. The vertical axis represents the working set of the application's workload. The lower end represents applications that have a very large working set, and the top of the chart represents applications with a small working set, where the majority of the I/O goes to a very small set of data. Typically, applications with a small working set occupy less space in XtremCache. The greatest performance improvement can be achieved with XtremCache in high-read applications with a highly concentrated, small working set of data.

Selecting an XtremSF card
To summarize, you can use XtremSF as local storage for read and write acceleration, temporary data, and large working sets, while XtremSF with XtremCache can be used for read acceleration of mission-critical data with small working sets that require data protection.

In general, the two major technologies used in flash drives are:
Single-level cell (SLC) NAND-based flash
Multi-level cell (MLC) NAND-based flash

This section discusses which card to select when designing an XtremCache solution. EMC XtremSF offers both SLC and MLC cards in different sizes to fit the different needs of a customer environment. For more information about XtremSF card sizes, see Table 3 on page 20.

Design best practices
Flash storage devices store information in a collection of flash cells made from floating-gate transistors. SLC devices store only one bit of information in each flash cell (binary). MLC devices store more than one bit per flash cell by choosing between multiple levels of electrical charge to apply to the floating gates in the transistors, as shown in Figure 20.

Figure 20. Comparison between SLC and MLC flash cell data storage

MLC versus SLC
Because each cell in MLC flash holds more information bits, an MLC flash-based storage device offers increased storage density compared to an SLC flash-based version. However, MLC NAND has lower performance and endurance because of its inherent architectural tradeoffs, which makes it necessary to implement more advanced flash management algorithms and controllers. Table 7 compares SLC and MLC flash characteristics with some typical values.

Table 7. SLC and MLC flash comparison
Feature | MLC | SLC
Bits per cell | 2 | 1
Endurance (erase/write cycles) | About 10,000 | About 100,000
Read service time (avg.) | 129 μs | 38 μs
Write service time (avg.) | 1,375 μs | 377 μs
Block erase (avg.) | 4,500 μs | 1,400 μs

Although SLC NAND flash offers a lower density, it provides an enhanced level of performance in the form of faster reads and writes. Because SLC NAND flash stores only one bit per cell, the need for error correction is reduced. SLC also allows for higher write and erase cycle endurance, making it a better fit for applications that require increased endurance and viability over multiyear product life cycles.

SLC and MLC NAND serve two different types of applications: those requiring high performance at an attractive cost per bit (MLC), and those that are less cost sensitive and seek even higher performance over time (SLC).

Virtualization design considerations
XtremCache is fully supported when deployed in a virtual environment with VMware vSphere ESXi technology or Windows Server Hyper-V technology. The best practices and design considerations for XtremCache in virtualized environments are:
Identify the virtual machines on the ESXi server that are good candidates for XtremCache to accelerate their performance at reasonable cost.
Calculate the total capacity needed for XtremCache.
If needed, adjust the placement of the virtual machines in the environment to best utilize XtremCache.
Select the appropriate XtremSF card for both capacity and performance.

Sizing recommendations
Sizing recommendations are available for each application type, and the implementation varies for each environment. Table 8 shows the minimum configurations recommended for each application, based on our testing in a controlled environment with typical database and application workloads.
Use the numbers provided as a guideline. To determine the sizing that best fits a specific application and environment, consider both the performance level you need and the cost you can afford. In most cases, adding more XtremCache gives better performance until the size of the cache is equal to or greater than the working set.

Table 8 provides XtremCache recommendations for each application. The cache-to-storage ratio (the ratio of cache size to database storage size; a 1:10 ratio represents 1 GB of XtremCache for each 10 GB of data) largely depends on the active working set of the database and will change based on actual usage.

Table 8. Recommended cache for each application
Application | Database type | Read-to-write ratio | Recommended XtremCache-to-storage ratio(2)
SQL Server/Oracle | OLTP | 90:10 | 1:10
SQL Server/Oracle | OLTP | 70:30 | 1:5
SharePoint Server | Content/crawl | 100% read | 1:5
Exchange Server | Mailbox | 60:40 | 1:100

Performance recommendations
For Oracle or SQL Server online analytical processing (OLAP) applications, such as a data warehouse environment, eMLC XtremSF (alone, or in split-card mode) can be used as the tempdb to improve query performance. Allow at least 200 GB of tempdb space for every 1 TB of database.

XtremCache placement considerations
EMC XtremCache can accelerate performance on demand for applications in a VSPEX Proven Infrastructure.

Flexibility
The flexibility of an XtremCache implementation enables you to place XtremSF on the server that hosts the specific virtual machines requiring performance acceleration. In those virtual machines, you enable only the specific storage LUNs that need XtremCache. To ensure that those virtual machines continue to have access to XtremCache acceleration, set the appropriate affinity rules for the hypervisor so the virtual machines can reside only on servers that are accelerated with XtremSF cache. Alternatively, you can install XtremSF flash cards in all physical servers in the server infrastructure, and then install and enable XtremCache across all servers.

Design best practices
Working from the base VSPEX configuration, for each application you intend to run within the environment, determine which applications need XtremCache acceleration by using XtremCache Performance Predictor, which estimates the benefits of adding XtremCache to the environment.
(2) The XtremCache-to-storage ratio is the ratio of cache size to database storage size. If the ratio is 1:10, then for each 10 GB of data, provide at least 1 GB of XtremCache.
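The Table 8 ratios can be applied as a quick minimum-sizing check. The ratios are copied from the table; the helper and workload keys are illustrative.

```python
# Minimum XtremCache sizing sketch using the Table 8 ratios. The ratio
# values come from the table; the function and keys are illustrative only,
# and actual sizing depends on the active working set.
RATIOS = {
    "oltp_90_10": 1 / 10,    # SQL Server/Oracle OLTP, 90:10 read-to-write
    "oltp_70_30": 1 / 5,     # SQL Server/Oracle OLTP, 70:30 read-to-write
    "sharepoint": 1 / 5,     # SharePoint Server content/crawl, 100% read
    "exchange":   1 / 100,   # Exchange Server mailbox, 60:40 read-to-write
}

def min_cache_gb(workload, storage_gb):
    """Minimum recommended XtremCache capacity for a given database size."""
    return storage_gb * RATIOS[workload]

# A 500 GB OLTP database at a 90:10 ratio needs at least 50 GB of cache.
print(min_cache_gb("oltp_90_10", 500))   # 50.0
# A 1 TB Exchange mailbox store needs at least 10 GB of cache.
print(min_cache_gb("exchange", 1000))    # 10.0
```

As the text notes, treat these as floors: more cache keeps helping until the cache reaches the size of the working set.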

Consider the following guidelines when placing the flash card within the server infrastructure:
Use XtremCache Performance Predictor to estimate the benefits of adding XtremCache to the environment.
Use XtremCache for read acceleration of mission-critical data with small working sets that require data protection.
Put at least two XtremSF cards within your VSPEX server infrastructure when redundancy is required.
If vMotion is required, calculate the XtremSF capacity and placement so that the remaining server and XtremSF capacity can still serve the configured XtremCache settings of all virtual machines when vMotion takes place. For example, if you configure 10 virtual machines to use 100 GB of XtremCache each, a total of 1 TB of XtremCache capacity is required; the remaining servers in the virtualized cluster with XtremCache must provide at least 1 TB of cache space.
If applications need only a small part of the XtremSF card capacity for each virtual machine, the virtual machines with these applications can share the same physical card. You can place them on the same ESXi or Hyper-V host.
If a certain application demands all the available capacity of the XtremSF card, the host should dedicate that card to the virtual machine.
You can install multiple XtremSF cards on the same server, if required. You can also configure multiple XtremSF cards in the same hypervisor cache pool to create larger cache capacity for virtual machines.
The XtremCache page size is the smallest unit of allocation inside the cache. The default page size is 8 KB. The XtremCache maximum I/O size is the largest I/O that will be promoted into the cache. The default maximum I/O size is 64 KB.
Determine the I/O size distribution of all applications selected for acceleration. If an application generates significantly large I/O sizes (such as Exchange Server), this may warrant changing the default page size and maximum I/O size configurations for XtremCache. Figure 21 shows the configuration screen in the VSI plug-in used to change these settings.
The minimum size for an XtremCache device is 20 GB for any virtual machine that needs flash cache acceleration.
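The vMotion capacity guideline above (the surviving hosts must still cover the configured cache of every accelerated virtual machine) can be sketched as a simple check. Host names and capacities are hypothetical.

```python
# Sketch of the vMotion capacity rule above: after losing any one host, the
# remaining hosts' XtremSF capacity must still cover the total configured
# XtremCache of all accelerated VMs. Names and sizes are hypothetical.
def survives_host_failure(host_cache_gb, total_vm_cache_gb):
    """host_cache_gb: XtremSF capacity per host, e.g. {'esx1': 700, ...}."""
    for failed in host_cache_gb:
        remaining = sum(gb for host, gb in host_cache_gb.items()
                        if host != failed)
        if remaining < total_vm_cache_gb:
            return False
    return True

# 10 VMs x 100 GB = 1 TB of configured cache, as in the example above.
print(survives_host_failure({"esx1": 700, "esx2": 700, "esx3": 700}, 1000))  # True
print(survives_host_failure({"esx1": 700, "esx2": 700}, 1000))               # False
```

With three 700 GB hosts, any two survivors still provide 1.4 TB; with only two hosts, a single failure leaves 700 GB, which cannot cover the 1 TB of configured cache.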

Figure 21. Cache device configuration screen

There is minimal resource consumption (overhead) for virtual machines using XtremCache to accelerate application performance, except when the deduplication feature is enabled. Resource consumption, including CPU and memory, depends on the application and especially on the size of the working set. Deduplication introduces very limited memory and CPU consumption when enabled in an environment with a small working set and high skew. This is detailed in the Exchange solution example; for more information, see XtremCache for Exchange Server.

XtremCache can be disabled or enabled at any time once the XtremSF card is installed on the physical host and configured for the virtual machine.

VMware considerations
This section provides the most common and important design considerations for implementing XtremCache in a VSPEX with VMware environment. The VMware environment in a VSPEX Proven Infrastructure should follow the general VSPEX design principles and best practices for specific applications on VMware, as detailed in the VSPEX Implementation Guides.

XtremSF should be installed on each ESXi server with virtual machines that require XtremCache acceleration, as determined by the customer's performance and cost analysis.

After installing the XtremSF flash card, you can configure the XtremCache pool within the ESXi cluster using the VSI plug-in or the XtremCache Management Center, as shown in Figure 22.

Figure 22. XtremCache configuration using the EMC VSI plug-in

You can use multiple XtremCache devices in a single cache pool to support larger cache capacity for certain virtual machines. A single XtremSF cache device can also support the cache needs of multiple virtual machines, as shown in Figure 23.

Figure 23. XtremCache implementation in a VMware environment for VSPEX

The size of the XtremCache should follow the best practices for each application, as previously described in the Sizing recommendations section. For multiple applications or database LUNs, simply add up the required XtremCache device sizes and create a single XtremCache device for the virtual machine, as shown in Figure 23. The only exception is when there is a need to segregate the I/O traffic, or when one XtremSF card is not big enough for the virtual machine; in those cases, multiple cache devices are needed.

Since each virtual machine in the VMware environment has its own XtremCache cache device, there is no contention among different virtual machines for XtremCache caching. Each deployment should carefully balance performance and cost considerations.

As previously noted, virtual machines are expected to migrate across the VMware cluster. Ensure that sufficient XtremCache capacity is available on other nodes to accept an incoming virtual machine configured for acceleration. For example, if you want to move SQLVM1 (configured with a 50 GB cache) from the host ESXServer1 to the host ESXServer2 through a vMotion migration, ensure that ESXServer2 has at least 50 GB of free XtremCache capacity available.

Hyper-V considerations
This section provides the most common and important design considerations for implementing XtremCache in a Hyper-V environment. The Hyper-V environment in a VSPEX implementation should follow the general VSPEX design best practices for the specific application in the Hyper-V environment, as detailed in the VSPEX Implementation Guides.

As shown in Figure 24, install XtremSF on each Hyper-V server with virtual machines that require XtremCache acceleration, as determined by the customer's performance and cost analysis. Once the XtremSF card is installed, configure it as the XtremCache target device on the Hyper-V server.
From the Hyper-V server, configure all the LUNs requiring XtremCache acceleration as source LUNs for the XtremCache target device. As shown in Figure 24, all VHDXs for the different virtual machines, as well as the physical pass-through disks on those LUNs configured as XtremCache source LUNs, are accelerated by XtremCache. 52

Figure 24. XtremCache implementation in a Hyper-V environment for VSPEX

Because XtremCache in a Hyper-V environment works at the Hyper-V level, all the source devices from the different virtual machines are accelerated by the same XtremCache target. This means:

- Applications may enjoy a higher level of service from XtremCache when other virtual machines on the same Hyper-V server are less active. This is because a source device is not limited to its calculated share of the XtremCache and can potentially use all the available cache capacity.
- There may be contention between virtual machines if the workload and active data set (the hot data) on one of the virtual machines is overwhelmingly high and uses more than its quota. To avoid contention, place applications that put a high demand on XtremCache on different Hyper-V servers, or configure them with a different XtremSF card on the same Hyper-V server.

Currently, CSV volumes are not supported with XtremCache 1.5.x software. CSV volumes will be supported in future releases.

Note: Volumes in a Hyper-V cluster do not need to be CSVs to benefit from Live Migration or other advanced Hyper-V features. Also, where Tier-1 applications require acceleration, it may be best not to enable CSV on those volumes and to keep them dedicated to the application, from the volume to the LUN to the storage array disks.

When using VHDX, all VHDXs on a LUN configured with XtremCache are accelerated. When designing the storage layout, consider placing only the VHDXs that require XtremCache acceleration on the LUNs that are configured with XtremCache.
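As a rough illustration of this contention concern, the following sketch checks whether the combined hot-data estimates of the virtual machines sharing one XtremSF card exceed the card's capacity. All sizes are hypothetical placeholders, not values from this solution.

```shell
# Illustrative over-subscription check for a shared Hyper-V cache target:
# flag the card when per-VM hot-data estimates exceed its capacity.
# All numbers below are hypothetical.
card_capacity_gb=200
total_demand_gb=0
for demand in 50 80 120; do            # estimated hot data per VM, in GB
  total_demand_gb=$(( total_demand_gb + demand ))
done
if [ "$total_demand_gb" -gt "$card_capacity_gb" ]; then
  echo "over-subscribed by $(( total_demand_gb - card_capacity_gb )) GB: split VMs across servers or add an XtremSF card"
else
  echo "fits within the card"
fi
```

If the check reports over-subscription, apply the guidance above: move the most demanding virtual machines to a different Hyper-V server, or give them their own XtremSF card.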

Chapter 5: XtremCache Solution for Applications

This chapter presents the following topics:

- Overview
- Architecture of XtremCache deployment on VMware
- Architecture of XtremCache deployment on Hyper-V
- XtremCache for SQL Server OLTP database
- XtremCache for Exchange Server
- XtremCache for SharePoint
- XtremCache for Oracle OLTP database
- XtremCache for private cloud

Overview

Any VSPEX Proven Infrastructure that needs to boost the performance of applications such as Oracle and SQL Server OLTP applications, web applications, financial trading applications, and Exchange can benefit from XtremCache. XtremCache can be considered an upgrade or add-on feature for a larger cloud solution.

This chapter describes application use cases where XtremCache provides value. It includes the best practices, deployment scenarios, and expected benefits for the following application use cases:

- SQL Server
- Exchange
- SharePoint
- Oracle
- Private cloud

Architecture of XtremCache deployment on VMware

Figure 25 shows the validated architecture for an XtremCache deployment on a VSPEX Private Cloud with VMware. The XtremSF card is installed on the physical VMware ESXi server and placed into an XtremCache pool. The XtremCache device created in that cache pool is assigned to the virtual machines hosting the application that needs to be accelerated. The cache device can use part or all of the available storage in the XtremCache pool.

On each virtual machine, we configured the LUNs to be accelerated by XtremCache as source LUNs for the XtremCache device. After they are enabled, data is cached just as it is in a physical environment. The source LUN can be any LUN in the virtual machine, such as a VMware virtual machine disk (VMDK) or an RDM.

Figure 25. Architecture of the VSPEX Proven Infrastructure for XtremCache deployment on VMware

Architecture of XtremCache deployment on Hyper-V

Figure 26 shows the validated architecture for an XtremCache deployment on a VSPEX Private Cloud with Hyper-V. In a Hyper-V environment, XtremCache is deployed on the Hyper-V host and managed at that level. The I/O issued by the virtual machines is accelerated at the Hyper-V level. If there are multiple VHDXs on the same LUN on the Hyper-V host, they are all accelerated, because the XtremCache source LUN is configured at the Hyper-V host level. If VHDX is used in Hyper-V, a source LUN for XtremCache on the Hyper-V host should contain only VHDXs that need to be accelerated.

Figure 26. Architecture of the VSPEX Proven Infrastructure for XtremCache deployment on Hyper-V

XtremCache for SQL Server OLTP database

In a SQL Server environment, the storage LUNs that host the data files for the OLTP database are the most likely to benefit from XtremCache acceleration. The read-to-write ratio of a typical SQL Server OLTP database data file ranges from 70:30 to 90:10, making the database data file LUN ideal for XtremCache acceleration. In the example use case described in this section, we tested an active OLTP database with a read-to-write ratio of 90:10. Using about a 100 GB cache to accelerate a 1 TB OLTP database reduced the read latency by more than half.

Benefits of XtremCache in a SQL Server OLTP environment

XtremCache is proven to be highly scalable and reliable. It can relieve I/O processing pressure on the storage system and boost the disk read operations driven by the host, even in virtualized ESXi-based environments. XtremCache increases the overall transaction rate of SQL Server and significantly reduces disk latencies with minimal impact on system resources. XtremCache in SQL Server OLTP environments provides the following benefits:

- XtremCache can reduce SQL Server storage response time.
- The XtremCache host driver has minimal impact on server and virtual machine system resources. In testing, system resources were mostly consumed by the SQL Server workload; the XtremCache driver overhead was negligible, at 0.4 percent CPU usage in this example use case.
- With a highly optimized, multitier storage system, XtremCache can offload read I/O processing from the storage array while reducing disk latencies, enabling higher transactional throughput and allowing the EMC storage array to absorb even more workload.
- With less optimized, two-tier storage configurations, XtremCache can significantly boost SQL Server transactions and lower overall host disk latency. It can address hot spots in the datacenter and alleviate possible storage bottlenecks.
- We observed a performance boost immediately after the LUNs were added to the XtremCache pool. Performance reached a steady state in approximately one hour for all 16 LUNs hosting a 3 TB database file.
- XtremCache is a server-based cache. Introducing XtremCache to a storage environment does not require any changes to the application or storage system layouts.
- Because XtremCache is a caching solution rather than a storage solution, there is no need to move data. Therefore, you do not risk having inaccessible data if the server or the PCIe card fails.
- XtremCache minimizes CPU overhead on the server by offloading flash management operations from the host CPU onto the PCIe card.
- Managing and monitoring XtremCache in a vSphere environment is easy. After configuration, XtremCache requires no user intervention and continuously adjusts to meet the needs of the application workload.
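The sizing rule observed in this use case, roughly 100 GB of XtremCache per 1 TB of OLTP data at a 90:10 read-to-write ratio, can be expressed as a simple helper. This is a sketch of the rule of thumb from this section, not a universal formula; validate any sizing with the XtremCache Performance Predictor.

```shell
# Rule of thumb from this use case: ~100 GB of cache per 1 TB of OLTP
# database data with a 90:10 read-to-write ratio.
sql_oltp_cache_gb() {
  local db_size_tb=$1
  echo $(( db_size_tb * 100 ))
}
sql_oltp_cache_gb 1   # 1 TB database -> 100 (GB)
sql_oltp_cache_gb 3   # 3 TB database -> 300 (GB)
```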

Best practices

Based on the XtremCache Performance Predictor, in a SQL Server environment running a heavy OLTP workload, the primary database LUNs can benefit most from XtremCache acceleration. The log LUNs and tempdb LUNs are write-heavy and should not be used with XtremCache. In summary, in a typical SQL Server OLTP environment:

- Use XtremCache Performance Predictor to estimate the benefits of adding XtremCache to the SQL Server environment.
- Read-intensive database data file LUNs generally carry a heavy workload, are subject to a high read skew, and are good candidates for XtremCache. SQL Server OLTP data files experience constant random reads, which contribute to overall transaction times. Data files also experience regular bursts of write activity during checkpoint operations. Using XtremCache to serve reads from cache and avoid that I/O workload on the EMC array enables the array to absorb those burst writes faster and avoid read delays for transactions.
- Log LUNs and tempdb LUNs in OLTP databases are write-intensive and typically do not benefit from XtremCache.
- In SQL Server AlwaysOn environments, the secondary databases do not need to be accelerated unless a specific performance requirement justifies the use of XtremCache.
- Set the page size to 64 KB in XtremCache to accommodate the large I/Os of the SQL Server database.
- If the workload is not expected to increase after deploying XtremCache in the VSPEX Proven Infrastructure, there is no need for additional system resources such as memory or CPU.
- With a read-to-write ratio of 90:10 on the OLTP database LUNs, for each 1 TB of database, an XtremCache of 100 GB or more significantly improves OLTP query performance and read operations.

Use case design and deployment

The example use case deployed XtremCache to accelerate OLTP performance for a multiuser SQL Server 2012 database virtualized in a VMware environment. Two ESXi servers each hosted one SQL Server virtual machine. One of the SQL Server virtual machines used a 700 GB SLC XtremSF card; the other server did not have XtremCache configured. The environment is based on a multitier storage solution that is controlled and optimized by EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP). The solution design includes the following components and features, as shown in Figure 27:

- Two vSphere ESXi servers, each hosting one SQL Server virtual machine
- XtremCache enabled on the primary SQL Server virtual machine

Figure 27. Architecture design for the XtremCache-enabled SQL Server virtual environment

Deployment scenarios

Figure 28 shows the XtremCache deployment for this use case. All the database file LUNs on the primary server are configured as source LUNs for XtremCache acceleration; tempdb LUNs and log LUNs are excluded. The secondary server does not have XtremCache configured.

Figure 28. SQL Server AlwaysOn XtremCache deployment

Configuration of XtremCache in the VMware environment

In this solution, we configured one 278 GB XtremCache device. All 16 source data devices were associated with the cache device, as shown in Figure 28. Configuration is straightforward using the wizards in the integrated VSI plug-in. If preferred, you can use the command line from the Windows virtual machine.

Perform the following steps to configure XtremCache for the database LUNs in the virtual machine:

1. Use vCenter Server to create a VMFS datastore, and then create the XtremCache pool with the XtremSF card in the ESXi server.
2. Create the XtremCache device from the cache pool and assign it to the virtual machines through the VSI plug-in for XtremCache.
3. Add the source devices to the enabled XtremCache device to accelerate their performance. Any source device can be stopped temporarily or removed from the caching operation without affecting the other source devices.

Test results

XtremCache boosts system performance

After enabling XtremCache for the first time, the performance boost was visible immediately. XtremCache started to take effect as soon as it was enabled and the devices needing a performance boost were added to the cache pool. It took approximately one hour in this environment to reach the maximum performance boost.

We tested XtremCache for SQL Server in both a two-tier and a three-tier configuration. Figure 29 shows the read and write IOPS for the primary SQL Server before and after enabling XtremCache in a two-tier storage system.

[Chart: IOPS and latency change after enabling XtremCache — baseline, XtremCache enabled, and steady state]

Figure 29. Performance boost after enabling XtremCache

After the system reached the steady state, system performance remained stable during the 24-hour test period.

XtremCache reduces SQL Server response time

XtremCache significantly reduced SQL Server response times for high-response-time transactions in both the two-tier and three-tier configurations. The XtremCache host driver had minimal impact on server and virtual machine system resources. Read latency was reduced by approximately 50 to 70 percent after we enabled XtremCache. We observed a similar result for transaction latency, where XtremCache also significantly lowered the response time of high-latency transactions.

Without XtremCache, the two-tier configuration could support only 14,000 IOPS. With XtremCache, it fully supported a 24,000 IOPS load with a 90:10 read-to-write ratio. XtremCache significantly lowered the I/O activity on the storage array (by about 10,000 IOPS) in the three-tier configuration, enabling the storage system to support more server I/O requests.

Table 9 shows the detailed test results for all the test scenarios in this solution.

Table 9. Performance data with OLTP load

                                         Three-tier storage           Two-tier storage
Performance                              Without       With          Without       With
                                         XtremCache    XtremCache    XtremCache    XtremCache
SQL Server virtual machine CPU           67.45%        67.85%        15.50%*       51.43%
ESXi CPU                                 77.80%        78.20%        24.63%*       65.57%
Client transactions per second (TPS)     2,193         2,585         1,225         2,229
SQL Server virtual machine IOPS          23,938        23,916        14,123        23,602
Array front-end IOPS                     24,698        14,987        15,475        13,798
Latency (ms) (read/write/transfer)       4/1/4         2/2/2         11/1/10       4/3/4

* CPU usage was lower because the storage bottleneck created in this test limited the client load that could be pushed to the system.

XtremCache for Exchange Server

In an Exchange Server environment, the Exchange database LUNs are the most likely to benefit from XtremCache acceleration. In the example use case described in this section, database performance was improved by using 10 GB of XtremCache for each 1 TB of Exchange data in the Mailbox server virtual machines. Even though a typical Exchange Mailbox workload has about a 60:40 read-to-write ratio and a large I/O size, the working set of the Exchange databases is very small. This means that Mailbox workload performance can improve dramatically when a small slice of XtremSF is configured as XtremCache for the Mailbox database LUNs. The high I/O skew in this use case also makes it a good candidate for deduplication, with limited memory and CPU consumption.

Benefits of XtremCache in an Exchange environment

Using XtremCache in an Exchange environment offers many benefits:

- XtremCache improves Exchange performance by reducing read latencies and offloading read operations from the back-end storage.
- XtremCache helps to maximize I/O throughput for Exchange workloads without changing or adding storage resources.
- XtremCache reduces bandwidth requirements through its deduplication features, offloading write processing from the Exchange back-end storage.

- XtremCache integrates with vSphere to support migration of virtual machines that have an XtremCache device attached. With proper configuration, applications can resume their accelerated state after a virtual machine is automatically migrated.
- XtremCache has little impact on system resources such as CPU and memory.
- The initial warm-up period for XtremCache with Exchange-simulated workloads varies for each environment. In this solution, the effect of XtremCache was observed immediately after it was enabled, and it reached a steady state in approximately 30 minutes for all accelerated Exchange database LUNs holding 15 TB of data.
- Integration with the VSI plug-in for VMware makes XtremCache easy to manage and monitor in a virtualized environment.
- XtremCache is designed to minimize CPU overhead on the server by offloading flash management operations from the host CPU onto the XtremSF PCIe card.
- With an Exchange workload, XtremCache can relieve I/O processing pressure on the storage system and boost the disk read operations driven by the host. XtremCache increases overall Exchange application IOPS and significantly reduces disk latencies with minimal impact on system resources.
- Using XtremCache enables customers to configure Exchange for high performance and low cost without making trade-offs.
- Managing and monitoring XtremCache in a vSphere environment is easy. After configuration, XtremCache requires no user intervention and continuously adapts to meet the application workload requirements.

Best practices

In an Exchange environment configured with Database Availability Groups (DAGs), and based on the XtremCache Performance Predictor tool results, the database LUNs (for both active and passive DAG copies) can benefit most from XtremCache acceleration. More importantly, the working set of an Exchange database is relatively small, so the XtremCache size needed for Exchange Server acceleration is also small. In this use case, every 1 TB of Exchange data required only about 10 GB of XtremCache.

Enabling XtremCache acceleration for both active and passive databases also improves performance. If a DAG failover occurs, XtremCache is already warm, and the Exchange environment shows almost no performance impact. The database log LUNs should be excluded because of their sequential workload.

In summary, in a typical Exchange environment:

- Use XtremCache Performance Predictor to estimate the benefits of adding XtremCache to the Exchange environment.
- In Mailbox virtual machines, both active and passive database file LUNs with a heavy workload are typically good candidates for XtremCache source LUNs. XtremCache also helps to maintain performance in a DAG failover scenario.

- Typically exclude log LUNs from XtremCache.
- Set the page size to 64 KB in XtremCache to accommodate the large I/O size of Exchange Server.
- For each Exchange virtual machine, configure about 10 GB of XtremCache for every 1 TB of Exchange data to significantly improve Mailbox server performance.

Use case design and deployment

The example use case deployed XtremCache to accelerate the performance of Exchange 2010 in a DAG configuration with two database copies, virtualized in a VMware environment. We installed two 700 GB SLC XtremSF cards on the vSphere ESXi servers hosting six Exchange Mailbox server virtual machines. In testing, system IOPS improved by over 26 percent, and read latencies decreased by about 50 to 70 percent. We also tested the environment for deduplication, which consumed few additional system resources. Enabling XtremCache deduplication for Exchange Server can reduce CPU usage by up to 50 percent in certain workloads, with a drop of up to 30 percent in the write IOPS sent to the back-end array.

Figure 30 shows the solution design, which included the following components:

- A vSphere HA cluster consisting of two vSphere ESXi servers, each hosting three Exchange Mailbox server virtual machines
- Two copies of the DAG database configured on different Mailbox servers
- XtremSF installed on both ESXi servers in the HA cluster
- Each Exchange Mailbox server virtual machine configured with a 50 GB XtremCache device for its 5 TB of databases (including both active and passive DAG copies)
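The Exchange sizing guidance above, about 10 GB of XtremCache per 1 TB of Exchange data, can be sketched the same way. The 50 GB cache configured for each 5 TB Mailbox virtual machine in this use case follows directly from the rule; treat it as a starting point, not a guarantee.

```shell
# Guidance from this use case: ~10 GB of XtremCache per 1 TB of Exchange
# data, covering both active and passive DAG database copies.
exchange_cache_gb() {
  local data_tb=$1
  echo $(( data_tb * 10 ))
}
exchange_cache_gb 5   # 5 TB per Mailbox VM -> 50 (GB), as configured here
```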

Figure 30. Architecture design for the XtremCache-enabled Exchange virtual environment

Deployment scenarios

Figure 31 shows the XtremCache deployment for the Exchange use case. We configured all database LUNs for active and passive copies on the virtual machines as source LUNs for XtremCache acceleration. The log LUNs were excluded, mostly because of their write-heavy, sequential I/O.

Figure 31. XtremCache deployment for Exchange 2010 on vSphere

In this deployment, for each virtual machine with 5 TB of storage, we deployed 50 GB of XtremCache. We reserved the rest of the XtremCache capacity to support vMotion failover.

Configuration of XtremCache in the VMware environment

The configuration of XtremCache for an Exchange Mailbox server in a VMware environment is similar to the SQL Server configuration shown previously in Figure 28. In addition, for this use case, we configured deduplication and vMotion migration.

You can configure the XtremCache data deduplication feature for the Exchange Mailbox server virtual machines. Data deduplication eliminates redundant data by storing only a single copy of identical chunks of data while still providing access to that data from the cache. Deduplication also helps to reduce storage and bandwidth requirements and extends the life expectancy of the cache device.

Configuring the XtremCache device with data deduplication

To enable data deduplication for the XtremCache device, follow these steps:

1. Select the Use Data Deduplication checkbox in the Add XtremCache Device wizard when adding the XtremCache device to a virtual machine.

2. Select the expected data deduplication gain based on your Exchange workload type, as shown in Figure 32.

Figure 32. Enabling data deduplication on the XtremCache device

You can also enable data deduplication using the XtremCache CLI on the Windows client machine by running the following command:

vfcmt add -cache_dev harddisk13 set_page_size 64 set_max_io_size 64 enable_ddup ddup_gain 20

Where:
- harddisk13 is a configured operating-system cache device for the virtual machine
- ddup_gain 20 is the deduplication gain percentage for the system cache device on the virtual machine

After adding the deduplication-enabled XtremCache device, add the Exchange database LUNs as source devices to the XtremCache device for performance acceleration.

To determine the appropriate data deduplication gain for your Exchange workload, review the XtremCache statistics in the XtremCache VSI plug-in or use the CLI on the Windows server. After the cache warm-up, follow these recommendations:

- Calculate the observed deduplication hit ratio and compare it with the configured ratio. Calculate the observed ratio by dividing the Write Hits by the Writes Received; this is the amount of duplicated data in the cache.
- If the observed ratio is less than 10 percent, turn off deduplication or reconfigure the deduplication gain to zero percent. To benefit from the extended life of the cache device, keep deduplication enabled.

- If the observed ratio is over 35 percent, raise the deduplication gain to match the observed ratio.
- If the observed ratio is between 10 and 35 percent, leave the deduplication gain as it is.

To change the configured ratio, remove the XtremCache device from the Exchange Mailbox server virtual machine and add it back with a new deduplication percentage value. To do this, use the VSI plug-in or the CLI command (vfcmt add -cache_dev), as described previously in this section.

Migrating an Exchange virtual machine with an XtremCache device

It is possible to move an Exchange virtual machine that has an XtremCache disk from one vSphere host to another. In a typical scenario without an XtremCache device, you can use the native vSphere migrate command to move a virtual machine from one host to another, because the virtual machine's datastores and RDMs are shared resources. In the XtremCache environment, however, the XtremCache datastore is mapped to the local host's flash drive. Consequently, this datastore is accessible only to that host, and the native vSphere migrate command is not supported. Instead, use the EMC XtremCache VSI plug-in to perform the virtual machine migration with the XtremCache device attached.

Multiple forms of migration are available, and the form you choose determines the steps you perform to complete the migration. Before you begin, ensure that your system meets the following prerequisites:

- The target datastore has enough available capacity for the new device.
- There are no additional DAS flash-based devices for the host virtual machine.
- Only one XtremCache device is configured on the host virtual machine.
- The virtual machine you want to migrate is not currently being migrated.
- The source host and the target host can communicate with each other, so ensure that the IP and Domain Name System (DNS) settings are properly configured.
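The deduplication tuning recommendations above can be summarized as a small decision helper. This is a sketch using integer arithmetic; Write Hits and Writes Received are the counters reported in the XtremCache statistics.

```shell
# Recommend a deduplication action from observed cache statistics:
# observed ratio (%) = Write Hits * 100 / Writes Received.
ddup_recommendation() {
  local write_hits=$1 writes_received=$2
  local ratio=$(( write_hits * 100 / writes_received ))
  if [ "$ratio" -lt 10 ]; then
    echo "ratio ${ratio}%: set the deduplication gain to 0 (or disable)"
  elif [ "$ratio" -gt 35 ]; then
    echo "ratio ${ratio}%: raise the gain to match the observed ratio"
  else
    echo "ratio ${ratio}%: leave the configured gain as it is"
  fi
}
ddup_recommendation 300 1000   # 30% observed -> leave the gain as it is
```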
Recovering Exchange data from a snapshot

If you use backup software that takes snapshots of Exchange LUNs accelerated by XtremCache, follow specific procedures when restoring data from those snapshots to ensure data integrity. If an Exchange LUN snapshot is taken on the array and later used to roll back changes on the source LUN, the server-side cache is not updated with those changes. This could result in the cache supplying data that has not been updated with the contents of the snapshot. To prevent this, perform the following steps when recovering from a snapshot:

1. Quiesce the application that is accessing the source volume, using application-specific tools such as EMC Replication Manager.

2. Flush the data in the host buffers using an appropriate command, such as admsnap flush, and unmount the file system.
3. Invalidate the contents of the source device by using the purge -source_dev command.
4. Perform the snapshot restore operations on the array.
5. After the restore is complete, remount the file system as necessary.

Test results

XtremCache acceleration test results

We observed consistent reductions in read latencies and increased user IOPS across all workload types when we enabled XtremCache to accelerate the database LUNs. Even 300-message workloads that experienced read latencies of over 20 ms without XtremCache became normal, steady workloads with reduced latencies and increased IOPS once XtremCache was enabled. This extreme workload had been expected to fail, as the storage and Exchange virtual machine resources were originally designed for 150-message workloads.

Figure 33 provides additional details for each test performed. Highlights of the observed test results include:

- A 150-message per user per day workload achieved a 51 percent reduction in read latencies (by 6.4 ms) and a 14.6 percent increase in user IOPS (by 224 IOPS).
- A 250-message per user per day workload achieved a 69.3 percent reduction in read latencies (by 11.1 ms) and a 12.8 percent increase in user IOPS (by 275 IOPS).
- A 300-message per user per day workload achieved a 56.8 percent reduction in read latencies (by 12.5 ms) and a 12 percent increase in user IOPS (by 346 IOPS).

Figure 33. Exchange 2010 performance with XtremCache and LoadGen workload

Performance with XtremCache data deduplication

To validate Exchange performance with XtremCache inline data deduplication, we tested one Exchange virtual machine with 5,000 users. We ran a series of Microsoft Exchange Load Generator (LoadGen) tests, each lasting eight hours and using multiple workload profiles, to see the effect of data deduplication. We monitored the XtremCache statistics to determine the appropriate deduplication ratio for each workload. With the LoadGen workloads we generated, we observed that a 30 percent deduplication ratio was more effective than the default 20 percent. Figure 34 shows the deduplication ratio observed during testing.

Figure 34. XtremCache statistics with data deduplication

Note: The LoadGen workload does not represent the actual workload in your specific production environment. The results and recommendations provided here are based on our lab configuration only. Configure your environment based on your particular workload requirements and characteristics.

Deduplication test results summary

In Figure 35 and Figure 36, the XtremCache data deduplication test results with multiple workload profiles for the Exchange 2010 Mailbox server show:

- Decreased Exchange Server CPU utilization with each workload
- Slightly increased write latencies, due to XtremCache analysis and processing of the duplicated data

Figure 35. Exchange Server CPU utilization with XtremCache data deduplication

Figure 36. Exchange Server disk latencies with XtremCache data deduplication

Analysis of the back-end VNX storage array shows that when we enabled deduplication on the server, writes to the VNX array were reduced. In Figure 37, the write activity for one of the database LUNs was reduced from 90 IOPS to around 65 IOPS, a difference of about 27.7 percent.

Figure 37. Exchange database LUN performance with XtremCache data deduplication

XtremCache for SharePoint

In a SharePoint environment, the content database and crawl database LUNs are the most suitable for XtremCache acceleration. A typical SharePoint content database workload has a 70:30 read-to-write ratio, making it an ideal candidate for XtremCache acceleration. With two 600 GB XtremCache devices configured on two 700 GB XtremSF cards, the database latency during a full crawl can drop to less than one third of its previous value.

Benefits of XtremCache in a SharePoint environment

This use case demonstrates the following results:

- XtremCache offloads the read workload of the SharePoint content database during the crawl process from the storage array to the server.
- XtremCache improves crawl performance by lowering the latencies of the content database in a virtualized SharePoint farm.
- XtremCache has little impact on system resources such as CPU and memory.
- Integration with the VSI plug-in for VMware vSphere vCenter makes XtremCache easy to manage and monitor in a virtualized environment.

Best practices

In a SharePoint environment, based on the XtremCache Performance Predictor tool, the LUNs for the content databases can benefit most from XtremCache acceleration during the crawl process. The database file LUNs for the content database are, therefore, good candidates for XtremCache source LUNs. Exclude the log LUNs and tempdb LUNs from XtremCache, as they are mostly write-heavy. In summary, in a typical SharePoint farm:

- Use XtremCache Performance Predictor to estimate the benefits of adding XtremCache to the SharePoint farm.

- Content database file LUNs and crawl database LUNs with a heavy workload are good candidates for XtremCache source LUNs.
- Log LUNs and tempdb LUNs in the SharePoint farm are excluded from XtremCache acceleration.
- Set the page size to 64 KB and the maximum I/O size to 128 KB in XtremCache to accommodate the large I/O size of the content and crawl databases, especially when NFS is in use.
- For each 1 TB of content database, an XtremCache of 200 GB or more can significantly improve query performance.

Use case design and deployment

The example use case deployed a virtualized SharePoint 2010 farm with 1.8 TB of content databases in one SQL Server 2012 virtual machine, in a vSphere 5.1 virtualized environment configured with two 700 GB XtremSF cards. You can improve the performance of the SharePoint crawl by:

- Deploying 600 GB of XtremCache in the SQL Server virtual machine
- Configuring all the content database file LUNs and the crawl database file LUNs to be accelerated by XtremCache

The latency for these LUNs decreases dramatically, and crawl performance improves by more than 20 percent. The configuration of XtremCache for SharePoint in a VMware environment is similar to the SQL Server configuration. Only the SQL Server virtual machine in the SharePoint farm needs XtremCache acceleration.

The solution design includes the following components, as shown in Figure 38:

- XtremSF installed on the vSphere ESXi servers hosting the SQL Server virtual machine for SharePoint Server
- XtremCache enabled on the SQL Server virtual machine, configured only for the content databases and the crawl databases
- Storage tiers with FAST VP enabled

Figure 38. Architecture design for XtremCache-enabled SharePoint environment

Deployment scenarios

Figure 39 shows the XtremCache deployment for this use case. All the content database file LUNs are configured as source LUNs for XtremCache acceleration, but the tempdb LUNs and log LUNs are excluded.

Figure 39. XtremCache deployment for SharePoint 2010 on vSphere

Configuration of XtremCache in the VMware environment

In this solution, two cache devices, each with a usable size of 600 GB on a 700 GB XtremSF card, are configured for the content database virtual machine. All the LUNs for the content database data files and the crawl database data file are associated with the two cache devices. During the crawl process, the content database data file workload is 100 percent random read and the crawl database data file workload is around 60 percent read. Set the I/O page size for the cache devices to 64 KB (the default is 8 KB) and the maximum I/O size to 128 KB (the default is 64 KB).

Test results

During a full crawl, the read hit rate is over 70 percent (about 70 to 75 percent) for the content databases and around 40 percent for the crawl database. The latency of the content databases and the crawl database dropped dramatically after we enabled the cache devices, as shown in Figure 40. Note that the property database is not configured as a source device for caching, but it resides in the same disk pool on the storage array. As the back-end I/O for the content and crawl databases was offloaded to

the XtremCache, a side effect was an improvement in the latency for the property database.

Figure 40. Content database latency dropped after enabling XtremCache

The full crawl duration decreased by 21.2 percent when XtremCache was enabled, as shown in Figure 41.

Figure 41. Full crawl performance improved after enabling XtremCache
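As a rough illustration of why these read hit rates translate into lower array latency, the share of reads that still reach the back-end array can be estimated from the hit rate. This is a simplified model with a hypothetical example load, not measured data from this solution:

```python
def backend_read_iops(frontend_read_iops: float, read_hit_rate: float) -> float:
    """Estimate reads that miss the server-side cache and still reach the array."""
    if not 0.0 <= read_hit_rate <= 1.0:
        raise ValueError("hit rate must be between 0 and 1")
    return frontend_read_iops * (1.0 - read_hit_rate)

# With the ~70 percent read hit rate observed for the content databases,
# only ~30 percent of read I/O still hits the array (example: 1,000 IOPS load).
print(round(backend_read_iops(1000, 0.70)))  # → 300
```

Because the remaining disk-pool load drops this sharply, even LUNs that are not cached (such as the property database) see a latency side benefit.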

XtremCache for Oracle OLTP database

In the VSPEX for virtualized Oracle environment, the database LUNs of the OLTP database are the most likely to benefit from XtremCache acceleration. We tested a database with a read-to-write ratio of 70:30. With an XtremCache of 200 GB accelerating 1 TB of database LUNs, the transaction rate almost doubled.

Benefits of XtremCache in an Oracle environment

Similar to the application environments described previously, the VSPEX for virtualized Oracle environment benefits from XtremCache as a server-based cache. Introducing XtremCache into the virtual infrastructure does not require any changes to the application or storage system layouts. Because XtremCache is a caching solution rather than a storage solution, there is no need to move data, and your data is not at risk of becoming inaccessible if the server or the PCIe card fails. XtremCache is designed to minimize CPU overhead in the server by offloading flash management operations from the host CPU to the PCIe card.

In a virtualized Oracle OLTP environment, XtremCache:
- Delivers an 80 percent improvement in transactions per minute (TPM) compared to the baseline, without any changes to applications
- Maintains the integrity of, and protects, the data

Best practices

In an Oracle Database 11g R2 environment, based on the XtremCache Performance Predictor tool, the database file LUNs benefit most from XtremCache acceleration and are good candidates for the XtremCache source LUNs.

In summary, in a typical Oracle OLTP environment:
- Use the XtremCache Performance Predictor tool to estimate the benefits of adding XtremCache to the environment.
- The database file LUNs with a heavy workload are good candidates for the XtremCache source LUNs.
- Exclude the log LUNs and tempdb LUNs in the OLTP databases from XtremCache acceleration.
- For each 1 TB of database with a read-to-write ratio of 70:30, an XtremCache of 200 GB or more can significantly improve database performance.

Use case design and deployment

The example use case deployed a standard TPC-C-like OLTP workload, with a 1.2 TB database and a 70:30 read/write mix, on Oracle Database 11g R2 on a Red Hat Enterprise Linux 5 virtual machine virtualized with vSphere 5.1. By deploying 250 GB of usable XtremCache in the Oracle virtual machine from a single 350 GB XtremSF card, the performance of the workload improved dramatically: transactions per minute improved by 80 percent compared with the same environment without XtremCache.

The solution design includes the physical components shown in Figure 42:
- A single vSphere ESXi server hosting one Oracle Database 11g R2 server on a Red Hat Enterprise Linux 5 virtual machine
- A 1.2 TB database on eight VMDK LUNs for the database files and two VMDK LUNs for the logs
- XtremSF installed on the ESXi server, with a 250 GB XtremCache configured for the Oracle virtual machine

We configured only the database VMDKs as source LUNs for XtremCache. We excluded the log LUNs and the tempdb LUNs.

Figure 42. Architecture design for XtremCache-enabled Oracle 11g R2 environment

Deployment scenarios

Figure 43 shows the XtremCache deployment for the Oracle use case. We configured all of the database VMDK LUNs on the virtual machine as source LUNs for XtremCache acceleration, and we excluded the log LUNs because of their write-intensive nature. In this deployment, we configured 250 GB of XtremCache to cache the 1.2 TB OLTP database.

Figure 43. XtremCache deployment for Oracle 11g R2 on vSphere

Configuring XtremCache for Oracle in a VMware environment is similar to configuring the other application environments described in the previous sections.

Test results

Figure 44 compares the overall system throughput (in TPM) of the baseline and XtremCache-enabled environments. The availability of the hot data in the server's XtremCache resulted in an 80 percent improvement in transactions per minute.

Figure 44. OLTP TPM improvement

XtremCache for private cloud

This use case deployed XtremCache to accelerate the performance of the following applications in a private cloud environment virtualized with VMware:
- Oracle Database 11g R2 OLTP database
- SQL Server OLTP database
- SQL Server decision support system (DSS) database
- SQL Server 2012 cluster

In the Oracle and SQL Server OLTP virtual machines, we configured XtremCache based on the principles described in the previous application-specific sections. The cluster setup configured XtremCache for both the active and passive databases. The SQL Server DSS virtual machine uses XtremSF storage in a split-card configuration for the tempdb of the DSS database.

In this comprehensive private cloud environment, XtremCache and XtremSF proved flexible and delivered the expected performance improvement for all the applications in their different configurations. XtremCache complements FAST VP for performance improvement of both the SQL Server and Oracle OLTP databases. The tempdb of the DSS database, backed by XtremSF, gets a performance boost from the XtremSF card.

Benefits of XtremCache in a private cloud environment

This EMC solution demonstrates the implementation of multiple critical applications in a VMware private cloud environment, supported by XtremSF and XtremCache. Each application had different workload characteristics and placed varying demands on the underlying storage. XtremCache provided better performance for the applications that involve heavy read I/O.

The benefits of XtremCache in a private cloud environment include the following:
- Performance optimization, accelerating application-specific performance at the host level using EMC XtremSF cards:
  - With a three-tier FAST VP configuration, XtremCache significantly offloads IOPS from the array, freeing the array for other I/O requests.
  - With a two-tier FAST VP configuration, XtremCache reduces disk latencies and response times, enabling higher transaction throughput by offloading much of the read I/O traffic from the storage array.
- XtremCache caches read I/O, so the data is not at risk of becoming inaccessible if the server or the XtremCache card fails.
- Using XtremSF storage in a split-card configuration for the tempdb of the DSS database boosts tempdb performance.
- XtremCache in a virtualized environment is easy to manage and monitor due to its integration with the VSI plug-in for VMware vSphere vCenter.
- XtremCache deduplication helps to reduce the bandwidth footprint.

In this private cloud environment, XtremCache demonstrated both flexibility and ease of management in a comprehensive configuration, improving performance while having little impact on system resource consumption.

Best practices

In a private cloud environment, you need to consider multiple applications.
Follow the application-specific best practices, particularly for the deployment of XtremCache in a heterogeneous environment:
- Always use the XtremCache Performance Predictor tool to determine which applications can benefit most from XtremCache.
- Allocate XtremCache to the most critical application virtual machine first, and then consider the remaining virtual machines.
- Consider placing virtual machines on different physical servers to optimize the capacity of XtremSF.
- MLC XtremSF (alone or in split-card mode) can be used as tempdb storage for data warehouse or DSS databases. To improve query performance, consider allowing at least 200 GB of tempdb space for every 1 TB of database.

Use case design and deployment

In the example use case, Microsoft SQL Server 2012 (two OLTP and one DSS), Oracle Database 11g R2 (OLTP), and a Microsoft SQL Server failover cluster all run in the virtualized environment. These applications ran on virtual machines in a VMware vSphere 5 environment on FAST VP-enabled EMC storage, which continually monitors and tunes performance by relocating data across tiers based on access patterns and predefined FAST policies.

We deployed XtremSF on both ESXi servers, one configured in split-card mode. We configured XtremCache to support the OLTP databases for caching purposes, while using the remaining XtremSF capacity as storage for the tempdb of the DSS database. Load generation tools drove these applications simultaneously to validate the infrastructure and the effectiveness of XtremCache acceleration of the data LUNs of the OLTP applications.

The solution design included the following components, as shown in Figure 45:
- Two vSphere ESXi servers: one hosting the Oracle Database 11g R2 server and a SQL Server virtual machine that is part of the Microsoft failover cluster; the other hosting the second SQL Server of the cluster, two SQL Servers with OLTP workloads, and one SQL Server with a DSS workload
- XtremSF configured in split-card mode, used as tempdb storage for the SQL Server virtual machine with the DSS workload
- XtremCache enabled on all other virtual machines
- FAST VP-enabled storage tiers

Figure 45. Architecture design for XtremCache-enabled private cloud environment with multiple applications

Deployment scenarios

Table 10 shows the XtremCache deployment for the private cloud use case. The configuration of the database LUNs follows the same best practices as the application-specific use cases, such as configuring them as source LUNs for XtremCache acceleration. We excluded the log LUNs because their I/O is mostly sequential writes. We used XtremSF in split-card mode as the tempdb store to accelerate the DSS workload.

Table 10. XtremCache deployment in a private cloud environment

Application/virtual machine   ESXi 01 (GB)   ESXi 02 (GB)   Configuration details
Oracle OLTP                                                 TB database under 1,800 Swingbench sessions
DSS                                                         TB database with DSS workload
SQL OLTP                                                    TB with OLTP workload
SQL OLTP                                                    TB with OLTP workload
Total

Configuration of XtremCache in the VMware environment

The configuration of XtremCache for a private cloud in a VMware environment needs to follow all the guidelines for each individual application, such as SharePoint, SQL Server, Exchange, and Oracle. For more information, refer to the EMC XtremCache Installation and Administration Guide.

Test results

Test results for XtremSF in split-card mode used as the SQL Server tempdb for a DSS workload

In this solution, 200 GB was taken from the 700 GB XtremSF card and used as storage for the tempdb data and log files to accelerate performance. The SQL Server tempdb was heavily used as a temporary table store for sorting, row versioning, and so on. As the tempdb store for a DSS workload, the XtremSF DAS can:
- Lower the peak latency of the tempdb data LUN from tens of milliseconds to less than 20 ms
- Lower the average latency of the tempdb data LUN from tens of milliseconds to one ms

Test results for XtremCache deduplication

The test results show:
- The Oracle deduplication hit ratio was about 4 percent.
- The SQL OLTP deduplication hit ratio was about 3 percent.

The recommended deduplication settings for a structured database such as Oracle or SQL Server are:
- If the observed ratio is less than 10 percent, turn off deduplication or set the deduplication gain to zero percent, to benefit from extended cache device life.

- If the observed ratio is over 35 percent, raise the deduplication gain to match the observed deduplication ratio.
- If the observed ratio is between 10 and 35 percent, leave the deduplication gain as it is.

Figure 46 shows that the deduplication hit ratio of SQL Server is 3 percent.

Figure 46. Deduplication statistics for SQL Server OLTP

Test results for two-tier storage

Table 11 shows the performance summary for the private cloud environment. For Oracle, the response time dropped from 35 ms to 3 ms. For SQL Server, the response time dropped from over 20 ms to 3 ms. All database transaction rates improved, with the SQL Server OLTP gaining the most: a threefold increase in transaction rate using part of the 700 GB caching space. The increased CPU usage was largely due to the increased workload; when the workload stays the same or does not increase greatly, CPU usage does not increase much. This is seen in the case of ESXi 01, which hosts Oracle: with only a moderate increase in the workload, CPU usage did not greatly increase.
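The deduplication-gain guidance above reduces to a simple threshold rule. The following is a minimal sketch of that rule; the function name and percentage representation are assumptions for illustration, not the XtremCache management API:

```python
def adjust_dedup_gain(observed_ratio_pct: float, current_gain_pct: float) -> float:
    """Apply this guide's thresholds for the deduplication gain setting."""
    if observed_ratio_pct < 10:
        # Low deduplication benefit: set the gain to zero (effectively off)
        # to benefit from extended cache device life.
        return 0.0
    if observed_ratio_pct > 35:
        # High deduplication benefit: raise the gain to match the observed ratio.
        return float(observed_ratio_pct)
    # Between 10 and 35 percent: leave the gain unchanged.
    return float(current_gain_pct)

# The ~3 to 4 percent ratios observed for the Oracle and SQL OLTP databases
# fall in the first band, so deduplication would be turned off here.
print(adjust_dedup_gain(4, 10))  # → 0.0
```

Applied to the measured results, both the Oracle (4 percent) and SQL OLTP (3 percent) databases land in the "turn off" band, matching the recommendation in this chapter.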


EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Reference Architecture EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Milestone multitier video surveillance storage architectures Design guidelines for Live Database and Archive Database video storage EMC

More information

MICROSOFT APPLICATIONS

MICROSOFT APPLICATIONS MICROSOFT APPLICATIONS Speed The Journey To Your Virtual Private Cloud 1 Business Drivers Increase Revenue INCREASE AGILITY Lower Operational Costs Reduce Risk 2 Cloud Transforms IT Infrastructure

More information

Native vsphere Storage for Remote and Branch Offices

Native vsphere Storage for Remote and Branch Offices SOLUTION OVERVIEW VMware vsan Remote Office Deployment Native vsphere Storage for Remote and Branch Offices VMware vsan is the industry-leading software powering Hyper-Converged Infrastructure (HCI) solutions.

More information

VMware Virtual SAN Technology

VMware Virtual SAN Technology VMware Virtual SAN Technology Today s Agenda 1 Hyper-Converged Infrastructure Architecture & Vmware Virtual SAN Overview 2 Why VMware Hyper-Converged Software? 3 VMware Virtual SAN Advantage Today s Agenda

More information

EMC Innovations in High-end storages

EMC Innovations in High-end storages EMC Innovations in High-end storages Symmetrix VMAX Family with Enginuity 5876 Sasho Tasevski Sr. Technology consultant sasho.tasevski@emc.com 1 The World s Most Trusted Storage System More Than 20 Years

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MISSION CRITICAL APPLICATIONS 2 Application Environments Historically Physical Infrastructure Limits Application Value Challenges Different Environments Limits On Performance Underutilized

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and Microsoft Hyper-V Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user

More information

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software

Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Virtualized SQL Server Performance and Scaling on Dell EMC XC Series Web-Scale Hyper-converged Appliances Powered by Nutanix Software Dell EMC Engineering January 2017 A Dell EMC Technical White Paper

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

Modern hyperconverged infrastructure. Karel Rudišar Systems Engineer, Vmware Inc.

Modern hyperconverged infrastructure. Karel Rudišar Systems Engineer, Vmware Inc. Modern hyperconverged infrastructure Karel Rudišar Systems Engineer, Vmware Inc. 2 What Is Hyper-Converged Infrastructure? - The Ideal Architecture for SDDC Management SDDC Compute Networking Storage Simplicity

More information

VMWARE VSAN LICENSING GUIDE - MARCH 2018 VMWARE VSAN 6.6. Licensing Guide

VMWARE VSAN LICENSING GUIDE - MARCH 2018 VMWARE VSAN 6.6. Licensing Guide - MARCH 2018 VMWARE VSAN 6.6 Licensing Guide Table of Contents Introduction 3 License Editions 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster with

More information

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c

Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c White Paper Deploy a High-Performance Database Solution: Cisco UCS B420 M4 Blade Server with Fusion iomemory PX600 Using Oracle Database 12c What You Will Learn This document demonstrates the benefits

More information

Thinking Different: Simple, Efficient, Affordable, Unified Storage

Thinking Different: Simple, Efficient, Affordable, Unified Storage Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat

More information

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vsphere with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection EMC VSPEX Abstract This describes the

More information

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible

More information

Installation and Cluster Deployment Guide for VMware

Installation and Cluster Deployment Guide for VMware ONTAP Select 9 Installation and Cluster Deployment Guide for VMware Using ONTAP Select Deploy 2.6 November 2017 215-12636_B0 doccomments@netapp.com Updated for ONTAP Select 9.3 Table of Contents 3 Contents

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 BACKUP BUILT FOR VMWARE Mark Twomey Technical Director, The Office Of The CTO 2 Agenda Market Forces Optimized VMware Backup Backup And Recovery For VCE Vblock Protecting vcloud Director Customer Success

More information

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO

EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere with EMC XtremIO Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes the high-level steps

More information

EMC Virtual Architecture for Microsoft SharePoint Server Reference Architecture

EMC Virtual Architecture for Microsoft SharePoint Server Reference Architecture EMC Virtual Architecture for Microsoft SharePoint Server 2007 Enabled by EMC CLARiiON CX3-40, VMware ESX Server 3.5 and Microsoft SQL Server 2005 Reference Architecture EMC Global Solutions Operations

More information

NEXT GENERATION UNIFIED STORAGE

NEXT GENERATION UNIFIED STORAGE 1 NEXT GENERATION UNIFIED STORAGE VNX Re-defines Midrange Price/ Performance VNX and VNXe: From January 2011 through Q12013 2 VNX Family Widely Adopted >63,000 Systems Shipped >3,600 PBs Shipped >200K

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 BUILDING AN EFFICIENT AND FLEXIBLE VIRTUAL INFRASTRUCTURE Umair Riaz vspecialist 2 Waves Of Change Mainframe Minicomputer PC/ Microprocessor Networked/ Distributed Computing Cloud Computing 3 EMC s Mission

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3. EMC Backup and Recovery for Microsoft Exchange 2007 SP1 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.5 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005

EMC CLARiiON CX3-80. Enterprise Solutions for Microsoft SQL Server 2005 Enterprise Solutions for Microsoft SQL Server 2005 EMC CLARiiON CX3-80 EMC Long Distance Recovery for SQL Server 2005 Enabled by Replication Manager and RecoverPoint CRR Reference Architecture EMC Global

More information

Mostafa Magdy Senior Technology Consultant Saudi Arabia. Copyright 2011 EMC Corporation. All rights reserved.

Mostafa Magdy Senior Technology Consultant Saudi Arabia. Copyright 2011 EMC Corporation. All rights reserved. Mostafa Magdy Senior Technology Consultant Saudi Arabia 1 Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 2 IT Challenges: Tougher than Ever Four central

More information

FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC

FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC white paper FlashGrid Software Intel SSD DC P3700/P3600/P3500 Topic: Hyper-converged Database/Storage FlashGrid Software Enables Converged and Hyper-Converged Appliances for Oracle* RAC Abstract FlashGrid

More information

Administering VMware Virtual SAN. Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2

Administering VMware Virtual SAN. Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2 Administering VMware Virtual SAN Modified on October 4, 2017 VMware vsphere 6.0 VMware vsan 6.2 You can find the most up-to-date technical documentation on the VMware website at: https://docs.vmware.com/

More information

Infinio Accelerator Product Overview White Paper

Infinio Accelerator Product Overview White Paper Infinio Accelerator Product Overview White Paper November 2015 Table of Contents Executive Summary.3 Disruptive datacenter trends and new storage architectures..3 Separating storage performance from capacity..4

More information

Microsoft E xchange 2010 on VMware

Microsoft E xchange 2010 on VMware : Microsoft E xchange 2010 on VMware Availability and R ecovery Options This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more

More information

VMware vsan 6.6. Licensing Guide. Revised May 2017

VMware vsan 6.6. Licensing Guide. Revised May 2017 VMware 6.6 Licensing Guide Revised May 2017 Contents Introduction... 3 License Editions... 4 Virtual Desktop Infrastructure... 5 Upgrades... 5 Remote Office / Branch Office... 5 Stretched Cluster... 7

More information

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. This solution guide describes the data protection functionality of the Federation Enterprise Hybrid Cloud for Microsoft applications solution, including automated backup as a service, continuous availability,

More information

Installation and Cluster Deployment Guide for VMware

Installation and Cluster Deployment Guide for VMware ONTAP Select 9 Installation and Cluster Deployment Guide for VMware Using ONTAP Select Deploy 2.8 June 2018 215-13347_B0 doccomments@netapp.com Updated for ONTAP Select 9.4 Table of Contents 3 Contents

More information

Introducing Tegile. Company Overview. Product Overview. Solutions & Use Cases. Partnering with Tegile

Introducing Tegile. Company Overview. Product Overview. Solutions & Use Cases. Partnering with Tegile Tegile Systems 1 Introducing Tegile Company Overview Product Overview Solutions & Use Cases Partnering with Tegile 2 Company Overview Company Overview Te gile - [tey-jile] Tegile = technology + agile Founded

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how to design an EMC VSPEX End-User Computing

More information

SvSAN Data Sheet - StorMagic

SvSAN Data Sheet - StorMagic SvSAN Data Sheet - StorMagic A Virtual SAN for distributed multi-site environments StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical applications

More information

StorMagic SvSAN 6.1. Product Announcement Webinar and Live Demonstration. Mark Christie Senior Systems Engineer

StorMagic SvSAN 6.1. Product Announcement Webinar and Live Demonstration. Mark Christie Senior Systems Engineer StorMagic SvSAN 6.1 Product Announcement Webinar and Live Demonstration Mark Christie Senior Systems Engineer Introducing StorMagic What do we do? StorMagic SvSAN eliminates the need for physical SANs

More information

NEXT GENERATION UNIFIED STORAGE

NEXT GENERATION UNIFIED STORAGE 1 NEXT GENERATION UNIFIED STORAGE VNX Re-defines Midrange Price/ Performance VNX and VNXe: From January 2011 through Q12013 2 VNX Family Widely Adopted >63,000 Systems Shipped >3,600 PBs Shipped >200K

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager

Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Surveillance Dell EMC Storage with Cisco Video Surveillance Manager Configuration Guide H14001 REV 1.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published May 2015 Dell believes

More information

FLASHARRAY//M Smart Storage for Cloud IT

FLASHARRAY//M Smart Storage for Cloud IT FLASHARRAY//M Smart Storage for Cloud IT //M AT A GLANCE PURPOSE-BUILT to power your business: Transactional and analytic databases Virtualization and private cloud Business critical applications Virtual

More information

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Sizing Guide H15052 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published May 2016 EMC believes the information

More information

EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT

EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT White Paper EMC DATA PROTECTION, FAILOVER AND FAILBACK, AND RESOURCE REPURPOSING IN A PHYSICAL SECURITY ENVIRONMENT Genetec Omnicast, EMC VPLEX, Symmetrix VMAX, CLARiiON Provide seamless local or metropolitan

More information

Virtual Desktop Infrastructure (VDI) Bassam Jbara

Virtual Desktop Infrastructure (VDI) Bassam Jbara Virtual Desktop Infrastructure (VDI) Bassam Jbara 1 VDI Historical Overview Desktop virtualization is a software technology that separates the desktop environment and associated application software from

More information

Symantec Reference Architecture for Business Critical Virtualization

Symantec Reference Architecture for Business Critical Virtualization Symantec Reference Architecture for Business Critical Virtualization David Troutt Senior Principal Program Manager 11/6/2012 Symantec Reference Architecture 1 Mission Critical Applications Virtualization

More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information