EMC INFRASTRUCTURE FOR VMWARE VIEW 5.1


Proven Solutions Guide

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.1
EMC VNX Series (FC), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0

Simplify management and decrease TCO
Guarantee a quality desktop experience
Minimize the risk of virtual desktop deployment

EMC Solutions Group

Abstract
This Proven Solutions Guide provides a detailed summary of the tests performed to validate an EMC infrastructure for VMware View 5.1 by using EMC VNX Series (FC), VMware vSphere 5.0, VMware View 5.1, View Storage Accelerator, View Persona Management, and VMware View Composer 3.0. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vSphere, and VMware View.

August 2012

2 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESXi, VMware vcenter, VMware View, and VMware vsphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number: H

3 Table of contents Table of contents 1 Executive Summary Introduction to the EMC VNX series Introduction Software suites available Software packages available Business case Solution overview Key results and recommendations Introduction Document overview Use case definition Purpose Scope Not in scope Audience Prerequisites Terminology Reference Architecture Corresponding reference architecture Reference architecture diagram Configuration Hardware resources Software resources

4 Table of contents 3 VMware View Infrastructure VMware View Introduction Deploying VMware View components View Manager Server View Composer View Storage Accelerator View Persona Management View Composer linked clones VMware vsphere 5.0 Infrastructure VMware vsphere 5.0 overview Desktop vsphere clusters Infrastructure vsphere cluster Windows infrastructure Introduction Microsoft Active Directory Microsoft SQL Server DNS server DHCP server Storage Design EMC VNX series storage architecture Introduction Storage layout Storage layout overview File system layout EMC VNX FAST Cache VSI for VMware vsphere View Linked Clone Storage Layout VNX shared file systems VMware View Persona Management and folder redirection

5 Table of contents EMC VNX for File Home Directory feature Capacity Network Design Considerations Network layout overview Logical design considerations Link aggregation VNX for File network configuration Data Mover ports LACP configuration on the Data Mover Data Mover interfaces VNX for Block Storage Processor configuration Storage Processor interfaces vsphere network configuration NIC teaming Increase the number of vswitch virtual ports Cisco Catalyst 6509 configuration Overview Cabling Server uplinks Cisco Nexus 5020 Ethernet configuration Overview Cabling Enable jumbo frames on Nexus switch vpc for Data Mover ports Cisco Nexus 5020 Fibre Channel configuration Overview Cabling

6 Table of contents Fibre Channel uplinks Installation and Configuration Installation overview VMware View components VMware View installation overview VMware View setup VMware View desktop pool configuration VMware View Persona Management configuration Storage components Storage pools Enable FAST Cache VNX Home Directory feature Testing and Validation Validated environment profile Profile characteristics Use cases Login VSI Login VSI launcher FAST Cache configuration Boot storm results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS vsphere CPU load vsphere disk response time

7 Table of contents Antivirus results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS vsphere CPU load vsphere disk response time Patch install results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS vsphere CPU load vsphere disk response time Login VSI results Test methodology Desktop logon time Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS Data Mover CPU utilization vsphere CPU load vsphere disk response time Recompose results Test methodology

8 Table of contents Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS vsphere CPU load vsphere disk response time Refresh results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS Storage processor utilization FAST Cache IOPS vsphere CPU load vsphere disk response time Conclusion Summary References Supporting documents VMware documents

List of Tables
Table 1. Terminology
Table 2. VMware View Solution hardware
Table 3. VMware View Solution software
Table 4. VNX5500 File systems
Table 5. vSphere Port groups in vswitch0 and vswitch1
Table 6. VMware View environment profile

List of Figures
Figure 1. VMware View Reference architecture
Figure 2. VMware View Linked clones
Figure 3. VMware View Logical representation of linked clone and replica disk
Figure 4. VNX5500 Core reference architecture physical storage layout
Figure 5. VNX5500 Full reference architecture physical storage layout
Figure 6. VNX5500 CIFS file system layout
Figure 7. VMware View Network layout overview
Figure 8. VNX5500 Ports of the two Data Movers
Figure 9. VNX5500 Storage Processors
Figure 10. vSphere vswitch configuration
Figure 11. vSphere Load balancing policy
Figure 12. vSphere vswitch virtual ports
Figure 13. Example Single initiator zoning
Figure 14. VMware View Select Automated Pool
Figure 15. VMware View Select View Composer linked clones
Figure 16. VMware View Select Provision Settings
Figure 17. VMware View vCenter Settings
Figure 18. VMware View Select Linked Clone Datastores
Figure 19. VMware View Select Replica Disk Datastore
Figure 20. VMware View Advanced Storage Options
Figure 21. VMware View Guest Customization
Figure 22. VMware View Persona Management Initial configuration
Figure 23. VMware View Persona Management Folder Redirection policies
Figure 24. VNX5500 Storage pools
Figure 25. VNX5500 FAST Cache tab
Figure 26. VNX5500 Enable FAST Cache
Figure 27. VNX5500 Home Directory MMC snap-in
Figure 28. VNX5500 Sample Home Directory User folder properties
Figure 29. Boot storm Disk IOPS for a single SAS drive
Figure 30. Boot storm Pool LUN IOPS and response time
Figure 31. Boot storm Storage processor total IOPS
Figure 32. Boot storm Storage processor utilization
Figure 33. Boot storm FAST Cache IOPS
Figure 34. Boot storm vSphere CPU load
Figure 35. Boot storm Average Guest Millisecond/Command counter
Figure 36. Antivirus Disk I/O for a single SAS drive
Figure 37. Antivirus Pool LUN IOPS and response time
Figure 38. Antivirus Storage processor IOPS
Figure 39. Antivirus Storage processor utilization
Figure 40. Antivirus FAST Cache IOPS
Figure 41. Antivirus vSphere CPU load
Figure 42. Antivirus Average Guest Millisecond/Command counter
Figure 43. Patch install Disk IOPS for a single SAS drive
Figure 44. Patch install Pool LUN IOPS and response time
Figure 45. Patch install Storage processor IOPS
Figure 46. Patch install Storage processor utilization
Figure 47. Patch install FAST Cache IOPS
Figure 48. Patch install vSphere CPU load
Figure 49. Patch install Average Guest Millisecond/Command counter

Figure 50. Login VSI Desktop login time
Figure 51. Login VSI Disk IOPS for a single SAS drive
Figure 52. Login VSI Pool LUN IOPS and response time
Figure 53. Login VSI Storage processor IOPS
Figure 54. Login VSI Storage processor utilization
Figure 55. Login VSI FAST Cache IOPS
Figure 56. Login VSI Data Mover CPU utilization
Figure 57. Login VSI vSphere CPU load
Figure 58. Login VSI Average Guest Millisecond/Command counter
Figure 59. Recompose Disk IOPS for a single SAS drive
Figure 60. Recompose Pool LUN IOPS and response time
Figure 61. Recompose Storage processor IOPS
Figure 62. Recompose Storage processor utilization
Figure 63. Recompose FAST Cache IOPS
Figure 64. Recompose vSphere CPU load
Figure 65. Recompose Average Guest Millisecond/Command counter
Figure 66. Refresh Disk IOPS for a single SAS drive
Figure 67. Refresh Pool LUN IOPS and response time
Figure 68. Refresh Storage processor IOPS
Figure 69. Refresh Storage processor utilization
Figure 70. Refresh FAST Cache IOPS
Figure 71. Refresh vSphere CPU load
Figure 72. Refresh Average Guest Millisecond/Command counter


13 Chapter 1: Executive Summary 1 Executive Summary This chapter summarizes the proven solution described in this document and includes the following sections: Introduction to the EMC VNX series Business case Solution overview Key results and recommendations Introduction to the EMC VNX series Introduction The EMC VNX series delivers uncompromising scalability and flexibility for the midtier user while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from VNX features such as: Next-generation unified storage, optimized for virtualized applications. Extended cache by using Flash drives with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file. Multiprotocol support for file, block, and object with object access through EMC Atmos Virtual Edition (Atmos VE). Simplified management with EMC Unisphere for a single management framework for all NAS, SAN, and replication needs. Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash. 6 Gb/s SAS back end with the latest drive technologies supported: 3.5 in. 100 GB and 200 GB Flash, 3.5 in. 300 GB, and 600 GB 15k or 10k rpm SAS, and 3.5-in. 1 TB, 2 TB and 3 TB 7.2k rpm NL-SAS 2.5 in. 100 GB and 200 GB Flash, 300 GB, 600 GB and 900 GB 10k rpm SAS Expanded EMC UltraFlex I/O connectivity Fibre Channel (FC), Internet Small Computer System Interface (iscsi), Common Internet File System (CIFS), network file system (NFS) including parallel NFS (pnfs), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet. 13

14 Chapter 1: Executive Summary The VNX series includes five software suites and three software packs that make it easier and simpler to attain the maximum overall benefits. Software suites available Software packages available Business case VNX FAST Suite Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (FAST VP is not part of the FAST Suite for the EMC VNX5100 ). VNX Local Protection Suite Practices safe data protection and repurposing. VNX Remote Protection Suite Protects data against localized failures, outages and disasters. VNX Application Protection Suite Automates application copies and proves compliance. VNX Security and Compliance Suite Keeps data safe from changes, deletions, and malicious activity. VNX Total Efficiency Pack Includes all five software suites (not available for the VNX5100). VNX Total Protection Pack Includes local, remote, and application protection suites. VNX Total Value Pack Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 exclusively supports this package). Customers require a scalable, tiered, and highly available infrastructure to deploy their virtual desktop environment. There are several new technologies available to assist them in architecting a virtual desktop solution. The customers need to know how best to use these technologies to maximize their investment, support servicelevel agreements, and reduce their desktop total cost of ownership. The purpose of this solution is to build a replica of a common customer end-user computing (EUC) environment, and validate the environment for performance, scalability, and functionality. Customers will achieve: Increased control and security of their global, mobile desktop environment, typically their most at-risk environment. Better end-user productivity with a more consistent environment. Simplified management with the environment contained in the data center. Better support of service-level agreements and compliance initiatives. Lower operational and maintenance costs. 14

15 Chapter 1: Executive Summary Solution overview This solution demonstrates how to use an EMC VNX platform to provide storage resources for a robust VMware View 5.1 environment and Windows 7 virtual desktops. Planning and designing the storage infrastructure for VMware View are critical steps as the shared storage must be able to absorb large bursts of input/output (I/O) that occur throughout the course of a day. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can often adapt to slow performance, but unpredictable performance will quickly frustrate them. To provide predictable performance for an EUC environment, the storage must be able to handle peak I/O load from clients without resulting in high response times. Designing for this workload involves deploying several disks to handle brief periods of extreme I/O pressure and it is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required, and thus minimizes the cost. Key results and recommendations EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also effectively supports more virtual desktops on fewer drives, and greater IOPS density with a lower drive requirement. Chapter 7: Testing and Validation provides more details. 15


17 Chapter 2: Introduction 2 Introduction Document overview EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real world deployments in which TCE use cases are developed and executed. These use cases provide EMC with an insight into the challenges currently faced by its customers. This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for VMware View 5.1 solution by using the following products: EMC VNX series VMware View Manager 5.1 VMware View Storage Accelerator VMware View Composer 3.0 VMware View Persona Management VMware vsphere 5.0 This chapter includes the following sections: Document overview Reference architecture Prerequisites and supporting documentation Terminology Use case definition The following seven use cases are examined in this solution: Boot storm Antivirus scan Microsoft security patch install Login storm User workload simulated with Login Consultants Login VSI 3.5 tool View recompose View refresh 17

Chapter 7: Testing and Validation contains the test definitions and results for each use case. Purpose The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by VMware View 5.1, VMware vSphere 5.0, View Storage Accelerator, View Persona Management, View Composer 3.0, EMC VNX series (FC), VNX FAST Cache, and VNX storage pools. This solution includes all the components required to run this environment, such as the infrastructure hardware, software platforms including Microsoft Active Directory, and the required VMware View configuration. Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. Scope This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for VMware View 5.1 solution. The objectives of this testing are to establish: A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution. The best practices for storage configuration that provide optimal performance, scalability, and protection in the context of the midtier enterprise market. Not in scope Implementation instructions are beyond the scope of this document. Information on how to install and configure VMware View 5.1 components, vSphere 5.0, and the required EMC products is outside the scope of this document. References to supporting documentation for these products are provided where applicable. Audience The intended audience for this Proven Solutions Guide is: Internal EMC personnel EMC partners Customers Prerequisites It is assumed that the reader has a general knowledge of the following products: VMware vSphere 5.0 VMware View 5.1 EMC VNX series Cisco Nexus and Catalyst switches

Terminology

Table 1 lists the terms that are frequently used in this document.

Table 1. Terminology

EMC VNX FAST Cache: A feature that enables the use of Flash drives as an expanded cache layer for the array.
Linked clone: A virtual desktop created by VMware View Composer from a writeable snapshot paired with a read-only replica of a master image.
Login VSI: A third-party benchmarking tool developed by Login Consultants that simulates real-world EUC workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.
Replica: A read-only copy of a master image that is used to deploy linked clones.
VMware View Composer: Integrates effectively with VMware View Manager to provide advanced image management and storage optimization.
VMware View Storage Accelerator: Reduces the storage load associated with virtual desktops by caching the common blocks of desktop images into local vSphere host memory. View Storage Accelerator is also referred to as host caching for View.
VMware View Persona Management: Preserves user profiles and dynamically synchronizes them with a remote profile repository.

Reference Architecture

Corresponding reference architecture: This Proven Solutions Guide has a corresponding Reference Architecture document that is available on the EMC Online Support website and EMC.com. EMC Infrastructure for VMware View 5.1 EMC VNX Series (FC), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0 Reference Architecture provides more details. If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 2,000 Windows 7 virtual desktops conforming to the workload described in the Validated environment profile section.

20 Chapter 2: Introduction Reference architecture diagram Figure 1 shows the reference architecture of the midsize solution. Figure 1. VMware View Reference architecture Configuration Hardware resources Table 2 lists the hardware used to validate the solution. Table 2. VMware View Solution hardware Hardware Quantity Configuration Notes EMC VNX Three disk-array enclosures (DAEs) configured with: Thirty-six 300 GB, 15krpm 3.5 in. SAS disks Five 100 GB, 3.5 in. Flash drives VNX for File with Two Data Movers (1 active and 1 standby) VNX shared storage for core solution Optional; for user data 20

21 Chapter 2: Introduction Intel-based servers Cisco Catalyst 6509 Two additional diskarray enclosures (DAEs) configured with: Thirty-four 1 TB, 7,200 rpm 3.5 in. NL-SAS disks One additional diskarray enclosure (DAEs) configured with: Five additional 300 GB, 15k-rpm 3.5 in. SAS disks 14 Memory: 144 GB of RAM CPU: Two Intel Xeon E GHz decacore processors Internal storage: Two 73 GB internal SAS disks External storage: VNX5500 (FC) NIC: Quad-port Broadcom BCM Base-T adapters HBA: One QLogic ISP2532 with dual 8 Gb ports 2 Memory: 48 GB of RAM CPU: Two Intel Xeon E GHz quadcore processors Internal storage: Two 73 GB internal SAS disks External storage: VNX5500 (FC) NIC: Quad-port Broadcom BCM Base-T adapters HBA: One QLogic ISP2532 with dual 8 Gb ports 2 WS-6509-E switch WS-x gigabit line cards WS-SUP720-3B supervisor Optional; for user data Optional; for infrastructure storage Virtual desktop vsphere clusters one and two Optional; vsphere cluster to host infrastructure virtual machines 1-gigabit host connections distributed over two line cards Cisco Nexus Six 10-gigabit ports Eighteen 8-gigabit FC ports Redundant FC and LAN A/B configuration 21

22 Chapter 2: Introduction Software resources Table 3 lists the software used to validate the solution. Table 3. VMware View Solution software Software VNX5500 (shared storage, file systems) Configuration VNX OE for File Release VNX OE for Block Release 31 ( ) VSI for VMware vsphere: Unified Storage Management Version 5.3 VSI for VMware vsphere: Storage Viewer Version 5.3 EMC PowerPath EMC PowerPath Viewer 1.0.SP2.b019 EMC PowerPath Virtual Edition Cisco Nexus Cisco Nexus 5020 Version 5.1(5) VMware vsphere vsphere Server 5.0 Update 1 vcenter Server 5.0 Update 1 Operating system for vcenter Server Microsoft SQL Server Windows Server 2008 R2 Standard Edition Version 2008 R2 Standard Edition VMware View Desktop Virtualization VMware View Manager Server Version 5.1 Premier VMware View Composer 3.0 Operating system for VMware View Manager Microsoft SQL Server Windows Server 2008 R2 Standard Edition Version 2008 R2 Standard Edition Virtual desktops Note: Aside from the base OS, this software was used for solution validation and is not required OS VMware tools Microsoft Office MS Windows 7 Enterprise SP1 (32-bit) build Office Enterprise 2007 (Version ) Internet Explorer Adobe Reader X (10.1.3) 22

23 Chapter 2: Introduction McAfee Virus Scan 8.7 Enterprise Adobe Flash Player 11 Bullzip PDF Printer FreeMind Login VSI (EUC workload generator) 3.5 Professional Edition 23


25 Chapter 3: VMware View Infrastructure 3 VMware View Infrastructure VMware View 5.1 This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections: VMware View 5.1 vsphere 5.0 Infrastructure Windows infrastructure Introduction VMware View delivers rich and personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop, including the operating system, applications, and user data. With VMware View 5.1, administrators can virtualize the operating system, applications, and user data, and deliver modern desktops to end users. VMware View 5.1: Provides centralized, automated management of these components with increased control and cost savings. Improves business agility while providing a flexible high-performance desktop experience for end users across a variety of network conditions. VMware View 5.1 integrates effectively with vsphere 5.0 to provide: Performance optimization and tiered storage support View Composer 3.0 optimizes storage utilization and performance by reducing the footprint of virtual desktops. It also supports the use of different tiers of storage to maximize performance and reduce cost. Thin provisioning support Enables efficient allocation of storage resources when virtual desktops are provisioned. This results in better utilization of storage infrastructure and reduced capital expenditure (CAPEX)/operating expenditure (OPEX). Deploying VMware View components This solution uses two VMware View Manager Server instances, each capable of scaling up to 2,000 virtual desktops. The core elements of this VMware View 5.1 implementation are: View Manager Server View Composer

26 Chapter 3: VMware View Infrastructure View Storage Accelerator View Persona Management View Composer Linked Clones Additionally, the following components are required to provide the infrastructure for a VMware View 5.1 deployment: Microsoft Active Directory Microsoft SQL Server DNS server Dynamic Host Configuration Protocol (DHCP) server View Manager Server The View Manager Server is the central management location for virtual desktops and has the following key roles: Broker connections between the users and the virtual desktops Control the creation and retirement of virtual desktop images Assign users to desktops Control the state of the virtual desktops Control access to the virtual desktops View Composer 3.0 View Composer 3.0 works directly with vcenter Server to deploy, customize, and maintain the state of the virtual desktops when using linked clones. Desktops provisioned as linked clones share a common base image within a desktop pool and have a minimal storage footprint. The base image is shared among a large number of desktops. It is typically accessed with sufficient frequency to naturally leverage EMC VNX FAST Cache, where frequently accessed data is promoted to flash drives to provide optimal I/O response time with fewer physical disks. View Composer 3.0 also enables the following capabilities: Tiered storage support to enable the use of dedicated storage resources for the placement of both the read-only replica and linked clone disk images. An optional stand-alone View Composer server to minimize the impact of virtual desktop provisioning and maintenance operations on the vcenter server. This solution uses View Composer 3.0 to deploy 2,000 dedicated virtual desktops running Windows 7 as linked clones. View Storage Accelerator View Storage Accelerator reduces the load on the virtual desktop storage infrastructure by caching the common blocks of Virtual Machine Disk (VMDK) files into memory on the vsphere Hypervisors. The View Storage Accelerator feature uses a VMware vsphere 5.0 feature called Content Based Read Cache (CBRC) that is implemented within the vsphere hypervisor. View Storage Accelerator is enabled on a per-desktop pool basis; when enabled the host hypervisor scans the storage disk 26

27 Chapter 3: VMware View Infrastructure blocks of the virtual desktop VMDK files to generate digests of the block contents. These blocks are cached in the vsphere hypervisor CBRC based on disk access patterns, and subsequent reads of blocks with the same digest are served from the in-memory cache directly. This improves the performance of the virtual desktops, particularly during boot storms, user logon storms, or antivirus scanning storms when a large number of blocks with identical contents are read. View Persona Management View Persona Management preserves user profiles and dynamically synchronizes them with a remote profile repository. This element does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage View user profiles. View Persona Management provides the following benefits over traditional Windows roaming profiles: With View Persona management, a user s remote profile is dynamically downloaded when the user logs in to a View desktop. View downloads persona information only when the user needs it. During login, View downloads only the files that Windows requires, such as user registry files. Other files are copied to the local desktop when the user or an application opens them from the local profile folder. View copies recent changes in the local profile to the remote repository at a configurable interval. During logoff, only the files that were updated since the last replication are copied to the remote repository. View Persona Management can be configured to store user profiles in a secure, centralized repository. 27

View Composer linked clones VMware View with View Composer uses the concept of linked clones to quickly provision virtual desktops. This solution uses the tiered storage feature of View Composer to build linked clones and place their replica images on separate datastores, as shown in Figure 2. Figure 2. VMware View Linked clones The operating system reads all the common data from the read-only replica, while the unique data that is created by the operating system or the user is stored on the linked clone. A logical representation of this relationship is shown in Figure 3.
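To put the storage impact of this model in perspective, the sketch below compares the space provisioned for a pool of linked clones against full clones of the same desktops. The replica size, average clone growth, and full-clone size are illustrative assumptions, not values measured in this solution; the storage layout actually validated is described in Chapter 4.

# Illustrative sizing sketch for one linked-clone desktop pool.
# All inputs are assumptions for demonstration, not figures validated in this guide.
desktops = 1000            # desktops sharing one replica (View supports up to 1,000 per replica)
replica_gb = 20            # assumed size of the read-only replica image
avg_clone_growth_gb = 3.0  # assumed average growth of each linked-clone delta disk
full_clone_gb = 20         # assumed size of an equivalent full-clone desktop

linked_clone_total_gb = replica_gb + desktops * avg_clone_growth_gb
full_clone_total_gb = desktops * full_clone_gb

print(f"Linked clones: ~{linked_clone_total_gb / 1024:.1f} TB provisioned")
print(f"Full clones:   ~{full_clone_total_gb / 1024:.1f} TB provisioned")

Under these assumptions, the linked-clone pool consumes roughly 3 TB instead of almost 20 TB for full clones, which is the effect the replica/linked-clone split in Figure 2 is designed to achieve.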

Figure 3. VMware View Logical representation of linked clone and replica disk VMware vSphere 5.0 Infrastructure VMware vSphere 5.0 overview VMware vSphere 5.0 is the market-leading virtualization hypervisor used across thousands of IT environments around the world. VMware vSphere 5.0 virtualizes computer hardware resources, including CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines, each of which runs its own operating system and applications just like a physical computer. The high-availability features in VMware vSphere 5.0, along with VMware Distributed Resource Scheduler (DRS) and VMware vSphere Storage vMotion, enable seamless migration of virtual desktops from one vSphere server to another with minimal or no disruption to users. Desktop vSphere clusters This solution deploys two vSphere clusters to host virtual desktops. These server types were chosen based on availability. Similar results are achievable with a variety of server configurations as long as the ratios of server RAM per desktop and desktops per CPU core are upheld. Both clusters consist of 7 dual deca-core vSphere 5.0 servers to support 1,000 desktops each, resulting in around 142 to 143 virtual machines per vSphere server (see the sizing sketch at the end of this chapter). Each cluster has access to 9 FC datastores: 8 for storing desktop linked clones and 1 for storing a desktop replica image. Infrastructure vSphere cluster One vSphere cluster is deployed in this solution for hosting the infrastructure servers. This cluster is not required if the resources needed to host the infrastructure servers are already present within the host environment. The infrastructure vSphere 5.0 cluster consists of two dual quad-core vSphere 5.0 servers. The cluster has access to a single datastore used for storing the infrastructure server virtual machines. The infrastructure cluster hosts the following virtual machines: Two Windows 2008 R2 SP1 domain controllers Provide DNS, Active Directory, and DHCP services. One VMware vCenter 5 Server running on Windows 2008 R2 SP1 Provides management services for the VMware clusters and View Composer. This server also runs vSphere 5 Update Manager. Two VMware View Manager 5.1 Servers, each running on Windows 2008 R2 SP1 Provide services to manage the virtual desktops.

30 Chapter 3: VMware View Infrastructure SQL Server 2008 SP2 on Windows 2008 R2 SP1 Hosts databases for the VMware Virtual Center Server, VMware View Composer, and VMware View Manager server event log. Windows 7 Key Management Service (KMS) Provides a method to activate Windows 7 desktops. Windows infrastructure Introduction Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components: Microsoft Active Directory Microsoft SQL Server DNS server DHCP server Microsoft Active Directory The Windows domain controllers run the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions: Manages the identities of users and their information Applies group policy objects Deploys software and updates Microsoft SQL Server DNS server Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 is used to provide the required databases to vcenter Server and View Composer. DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controllers. DHCP server The DHCP server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops. In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range large enough to support 2,000 virtual desktops. 30
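As a quick check on the cluster sizing described in this chapter, the following sketch reproduces the per-host desktop density for the desktop clusters, both in steady state and with one host offline for failures or maintenance. The cluster figures come from this solution; the script itself is only an illustration.

import math

desktops_per_cluster = 1000   # each desktop cluster hosts one pool of 1,000 desktops
hosts_per_cluster = 7         # dual deca-core vSphere 5.0 servers per cluster

steady_state = desktops_per_cluster / hosts_per_cluster
one_host_offline = math.ceil(desktops_per_cluster / (hosts_per_cluster - 1))

print(f"Steady state: ~{steady_state:.0f} desktops per host")           # ~143
print(f"One host offline: up to {one_host_offline} desktops per host")  # 167

The 167-desktop worst case also drives the vSwitch virtual port sizing discussed in Chapter 5.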

31 Chapter 4: Storage Design 4 Storage Design This chapter describes the storage design that applies to the specific components of this solution. EMC VNX series storage architecture Introduction The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package. The VNX series delivers a single-box block and file solution that offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for NFS and CIFS protocols by enabling Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File and VNX for Block for high-bandwidth or for latency-sensitive applications. This solution uses file-based storage to leverage the benefits that each of the following provides: Block-based storage over the FC protocol is used to store the VMDK files for all virtual desktops. This has the following benefit: Unified Storage Management plug-in provides seamless integration with VMware vsphere to simplify the provisioning of datastores or virtual machines. File-based storage over the CIFS protocol is used to store user data and the VMware View Persona Management repository. This has the following benefits: Redirection of user data and VMware View Persona Management data to a central location for easy backup and administration. Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency. This section explains the configuration of the storage provisioned over FC for the vsphere cluster to store the VMDK images and the storage provisioned over CIFS to redirect user data and provide storage for the VMware View Persona Management repository. 31

32 Chapter 4: Storage Design Storage layout Figure 4 shows the physical storage layout of the disks in the core reference architecture; this configuration accommodates only the virtual desktops. The Storage layout overview section provides more details about the physical storage configuration. The disks are distributed among two VNX5500 storage buses to maximize array performance. Figure 4. VNX5500 Core reference architecture physical storage layout 32

33 Chapter 4: Storage Design Figure 5 shows the physical storage layout of the disks in the full reference architecture. The disks shaded grey are part of the core storage layout and are required. The disks are distributed among two VNX5500 storage buses to maximize array performance. Figure 5. VNX5500 Full reference architecture physical storage layout Storage layout overview The following configurations are used in the core reference architecture as shown in Figure 4: Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE. Disks 0_0_6, 0_0_7, and 1_0_2 are hot spares. These disks are marked as hot spare in the storage layout diagram. Thirty SAS disks (0_0_10 to 0_0_14, 1_0_5 to 1_0_14, and 0_1_0 to 0_1_14) in the RAID 5 Storage Pool 0 are used to store virtual desktops. FAST Cache is enabled for the entire pool. o Sixteen LUNs of 365 GB each and two LUNs of 50 GB each are carved out of the pool and presented to the vsphere servers for use as VMFS datastores. Four Flash drives (0_0_4 to 0_0_5 and 1_0_0 to 1_0_1) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives. 33

34 Chapter 4: Storage Design Disks 0_0_8 to 0_0_9 and 1_0_3 to 1_0_4 are unused. They were not used for testing this solution. The following configurations are used in the full reference architecture as shown in Figure 5: Disks 0_0_9 and 1_0_4 are hot spares. These disks are marked as hot spare in the storage layout diagram. Five SAS disks (1_2_0 to 1_2_4) in the RAID 5 Storage Pool 2 are used to store the infrastructure virtual machines. o One LUN of 1 TB in size is carved out of the pool and presented to the vsphere servers for use as a VMFS datastore. Thirty-two NL-SAS disks (0_0_8, 1_0_3, 1_1_0 to 1_1_14, and 0_2_0 to 0_2_14) in the RAID 6 Storage Pool 1 are used to store user data and roaming profiles. FAST Cache is enabled for the entire pool. o Thirty LUNs of 1 TB each are carved out of the pool to provide the storage required to create four CIFS file systems. Disks 1_2_5 to 1_2_14 are unbound. They were not used for testing this solution. File system layout Figure 6 shows the layout of the optional CIFS file systems. Figure 6. VNX5500 CIFS file system layout Thirty LUNs of 1 TB each are provisioned out of a RAID 6 storage pool configured with 32 NL-SAS-drives. Thirty-two drives are used because the block-based storage pool internally creates 6+2 RAID 6 groups. Therefore, the number of NL-SAS drives used is 34

35 Chapter 4: Storage Design a multiple of eight. Likewise, 30 LUNs are used because AVM stripes across 5 dvols, so the number of dvols is a multiple of 5. The LUNs are presented to VNX File as dvols that belong to a system-defined pool. The CIFS file systems are provisioned from an AVM system pool to store user home directories and the VMware View Persona Management repository. The four file systems are grouped in the same storage pool because their I/O profiles are sequential. FAST Cache is enabled on both storage pools that are used to store the FC and CIFS file systems used by the virtual desktops. Two shared file systems are used for every 1,000 virtual desktops one for the VMware View Persona Management repository, and the other to redirect user storage that resides in home directories. In general, redirecting users data out of the base image to VNX for File enables centralized administration, backup and recovery, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. Starting from VNX for File version , AVM is enhanced to intelligently stripe across dvols that belong to the same block-based storage pool. There is no need to manually create striped volumes and add them to user-defined file-based pools. EMC VNX FAST Cache VNX Fully Automated Storage Tiering (FAST) Cache, a part of the VNX FAST Suite, uses Flash drives as an expanded cache layer for the array. VNX5500 is configured with four 100 GB Flash drives in a RAID 1 configuration for a 183 GB read/write-capable cache. This is the minimum amount of FAST Cache. Larger configurations are supported for scaling beyond 2,000 desktops. FAST Cache is an array-wide feature available for both file and block storage. It works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array. Frequently accessed data is copied to the FAST Cache and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. The use of Flash drives dramatically improves the response times for very active data and reduces data hot spots that can occur within the LUN. FAST Cache is also an extended read/write cache that enables VMware View to deliver consistent performance at Flash-drive speeds by absorbing read-heavy activities (such as boot storms and antivirus scans), and write-heavy workloads (such as operating systems patches and application updates). This extended read/write cache is an ideal caching mechanism for View Composer because the base desktop image and other active user data are so frequently accessed that the data is serviced directly from the Flash drives without accessing the slower drives at the lower storage tier. VSI for VMware vsphere EMC Virtual Storage Integrator (VSI) for VMware vsphere is a plug-in to the vsphere client that provides a single management interface for managing EMC storage within the vsphere environment. Managed by using the VSI Feature Manager, features can be added and removed from VSI independently. This provides flexibility for customizing VSI user environments. VSI provides a unified user experience that 35

36 Chapter 4: Storage Design allows new features to be introduced rapidly in response to changing customer requirements. The following VSI features were used during the validation testing: Storage Viewer (SV) Extends the vsphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vsphere hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vsphere client views. Unified Storage Management Simplifies storage administration of the EMC VNX platforms. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes seamlessly within vsphere client. The EMC VSI for VMware vsphere product guides available on the EMC online support website provide more information. View Linked Clone Storage Layout The following data storage configuration was listed for storing view linked clones: FS1 and FS2 Each of the 50 GB datastores store a replica that is responsible for 1,000 linked clone desktops. The input/output to these LUNs is strictly read-only except during operations that require copying a new replica into the datastore. FS3 to FS18 Each of these 365 GB datastores accommodate 125 virtual desktops. This allows each desktop to grow to a maximum average size of approximately 2.9 GB. Each pool of desktops provisioned in View Manager is balanced across 8 distinct datastores. VNX shared file systems Virtual desktops use two VNX shared file systems, one for VMware View Persona Management data and the other to redirect user storage. Each file system is exported to the environment through a CIFS share. Table 4 lists the file systems used for user profiles and redirected user storage. Table 4. VNX5500 File systems File system Use Size profiles_fs VMware View Persona Management repository 1 2 TB home_fs User data 1 4 TB profiles2_fs VMware View Persona Management repository 2 2 TB home2_fs User data 2 4 TB VMware View Persona Management and folder redirection Local user profiles are not recommended in an EUC environment. One reason for this is that a performance penalty is incurred when a new local profile is created when a user logs in to a new desktop image. Solutions such as VMware View Persona Management and folder redirection enable user data to be stored centrally on a 36

37 Chapter 4: Storage Design network location that resides on a CIFS share hosted by the EMC VNX array. This reduces the performance impact during user logon, while allowing user data to roam with the profiles. EMC VNX for File Home Directory feature The EMC VNX for File Home Directory feature uses the home1_fs and home2_fs file systems to automatically map the H: drive of each virtual desktop to the users own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. This share is created by the File Home Directory feature, and does not need to be created manually. The Home Directory feature automatically maps this share for each user. The Documents folder for each user is also redirected to this share. This allows users to recover the data in the Documents folder by using the VNX Snapshots for File. The file system is set at an initial size of 1 TB, and extends itself automatically when more space is required. Capacity The file systems leverage EMC Virtual Provisioning and compression to provide flexibility and increased storage efficiency. If single instancing and compression are enabled, unstructured data, such as user documents, typically leads to a 50 percent reduction in consumed storage. The VNX file systems for the VMware View Persona Management repository and user documents are configured as follows: profiles_fs and profiles2_fs are each configured to consume 2 TB of space. With 50 percent space saving, each profile can grow up to 4 GB in size. The file system extends if more space is required. home_fs and home2_fs are each configured to consume 4 TB of space. With 50 percent space saving, each user is able to store 8 GB of data. The file system extends if more space is required. 37
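The per-user figures above follow directly from the configured file system sizes and the assumed 50 percent space savings. A minimal sketch of that arithmetic, assuming 1,000 users per profiles/home file system pair as described earlier in this chapter:

users_per_file_system = 1000      # each profiles/home file system pair serves 1,000 desktops
assumed_space_saving = 0.50       # assumed savings from single instancing and compression

def effective_gb_per_user(configured_tb):
    """Effective per-user capacity after the assumed space savings."""
    effective_gb = configured_tb * 1024 / (1 - assumed_space_saving)
    return effective_gb / users_per_file_system

print(f"Persona repository (2 TB configured): ~{effective_gb_per_user(2):.0f} GB per user")
print(f"Home directories (4 TB configured):   ~{effective_gb_per_user(4):.0f} GB per user")

If the space savings turn out to be lower in a given environment, the file systems simply extend automatically as described above, so the per-user allowance is a planning figure rather than a hard limit.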


39 Chapter 5: Network Design 5 Network Design This chapter describes the network design used in this solution and contains the following sections: Considerations VNX for File network configuration VNX for Block storage processor configuration vsphere network configuration Cisco Catalyst 6509 configuration Cisco Nexus 5020 Ethernet configuration Cisco Nexus 5020 Fibre Channel configuration Considerations Network layout overview Figure 7 shows the 10-gigabit Ethernet and 8-gigabit Fibre Channel connectivity between the Cisco Nexus 5020 switches and the EMC VNX storage. Uplink Ethernet ports coming off the Nexus switches can be used to connect to a 10-gigabit or a 1- gigabit external LAN. In this solution, the 1-gigabit LAN through Cisco Catalyst 6509 switches is used to extend Ethernet connectivity to the desktop clients, VMware View components, and Windows Server infrastructure. 39

40 Chapter 5: Network Design Figure 7. VMware View Network layout overview Logical design considerations This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop. Link aggregation VNX platforms provide network high availability or redundancy by using link aggregation. This is one of the methods used to address the problem of link or switch failure. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNX5500, combining two 10 GbE ports into a single virtual device. If a link is lost in the Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. 40

41 Chapter 5: Network Design VNX for File network configuration Data Mover ports The EMC VNX5500 in this solution includes two Data Movers. The Data Movers are configured in an active/active or an active/standby configuration. In the active/standby configuration, the standby Data Mover serves as a failover device for any of the active Data Movers. In this solution, the Data Movers operate in the active/standby mode. Note: The VNX for File option, which includes the Data Movers, is not required if the infrastructure required to provide CIFS shares for user data and View Persona Management repositories exists within the existing infrastructure. The VNX5500 Data Movers are configured for two 10-gigabit interfaces on a single I/O module. LACP is used to configure ports fxg-1-0 and fxg-1-1 to support virtual machine traffic, home folder access, and external access for the VMware View Persona Management repository. Figure 8 shows the rear view of two VNX5500 Data Movers that include two 10-gigabit fibre Ethernet (fxg) ports each in I/O expansion slot 1. Figure 8. VNX5500 Ports of the two Data Movers LACP configuration on the Data Mover To configure the link aggregation that uses fxg-1-0 and fxg-1-1 on Data Mover 2, run the following command: $ server_sysconfig server_2 -virtual -name <Device Name> -create trk option "device=fxg-1-0,fxg-1-1 protocol=lacp" To verify if the ports are channeled correctly, run the following command: $ server_sysconfig server_2 -virtual -info lacp1 server_2: *** Trunk lacp1: Link is Up *** *** Trunk lacp1: Timeout is Short *** *** Trunk lacp1: Statistical Load Balancing is IP *** Device Local Grp Remote Grp Link LACP Duplex Speed fxg Up Up Full Mbs fxg Up Up Full Mbs The remote group number must match for both ports and the LACP status must be Up. Verify if appropriate speed and duplex are established as expected. 41

42 Chapter 5: Network Design Data Mover interfaces The IP address used for the VNX5000 CIFS server should be assigned to the LACP interface created in the section LACP configuration on the Data Mover. The following command shows an example of assigning an IP address to the virtual interface named lacp1: $ server_ifconfig server_2 -all server_2: lacp1-1 protocol=ip device=lacp1 inet= netmask= broadcast= UP, Ethernet, mtu=1500, vlan=276, macaddr=0:60:48:1b:76:92 42

43 Chapter 5: Network Design VNX for Block Storage Processor configuration Storage Processor interfaces Figure 9 shows the back of the Storage Processor Enclosure (SPE) for a VNX5500. The SPE contains two Storage Processors (SPs), each with identical port configurations. Ports A-0 and B-0 are connected to one Fibre Channel enabled switch, while ports A-1 and B-1 are connected to a separate Fibre Channel enabled switch. Figure 9. VNX5500 Storage Processors 43

44 Chapter 5: Network Design vsphere network configuration NIC teaming All network interfaces on the vsphere servers in this solution use 1 GbE connections. All virtual desktops are assigned an IP address by using a DHCP server. The Intelbased servers use two onboard Broadcom GbE Controllers for all the network connections. Figure 10 shows the vswitch configuration in vcenter Server. Figure 10. vsphere vswitch configuration Virtual switch vswitch0 uses two physical network interface cards (NICs). Table 5 lists the configured port groups in vswitch0 and vswitch1. Table 5. Virtual switch vsphere Port groups in vswitch0 and vswitch1 Configured port groups Used for vswitch0 Service console VMkernel port used for vsphere host management vswitch0 VLAN277 Network connection for virtual desktops and LAN traffic The NIC teaming load balancing policy for the vswitches should be set to Route based on IP hash as shown in Figure 11. Figure 11. vsphere Load balancing policy 44

45 Chapter 5: Network Design Increase the number of vswitch virtual ports By default, a vswitch is configured with 120 virtual ports, which may not be sufficient in an EUC environment. On the vsphere servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports based on the number of virtual desktops that will run on each vsphere server as shown in Figure 12. Note: Reboot the vsphere server for the changes to take effect. Figure 12. vsphere vswitch virtual ports If a vsphere server fails or needs to be placed in the maintenance mode, other vsphere servers within the cluster must accommodate the additional virtual desktops that are migrated from the vsphere server that goes offline. Consider the worst-case scenario when the maximum number of virtual ports per vswitch is determined. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server. 45
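One way to choose the port count is to size for the worst-case failover scenario described above. The sketch below assumes the desktop clusters used in this solution (1,000 desktops across seven hosts) and adds a small, arbitrary buffer for management and VMkernel ports; the buffer value is an assumption, not a VMware requirement.

import math

desktops_per_cluster = 1000
hosts_per_cluster = 7
tolerated_host_failures = 1
extra_ports = 8   # assumed headroom for management/VMkernel ports and spares

worst_case_desktops = math.ceil(
    desktops_per_cluster / (hosts_per_cluster - tolerated_host_failures))
suggested_ports = worst_case_desktops + extra_ports

print(f"Worst-case desktops on one host: {worst_case_desktops}")      # 167
print(f"Size each vSwitch for at least:  {suggested_ports} ports")    # 175

In practice, round the result up to the next port count offered in the vSphere client when editing the vSwitch properties.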

Cisco Catalyst 6509 configuration

Overview
The 9-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments as well as data center deployments.

Cabling
In this solution, the vSphere server cabling is evenly spread across two WS-x Gb line cards to provide redundancy and load balancing of the network traffic.

Server uplinks
The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vswitches are configured to load balance the network traffic based on IP hash. The following is an example of the configuration for one of the server ports:

description 8/ rtpserver189-1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 276,
switchport mode trunk
no ip address
spanning-tree portfast trunk
channel-group 23 mode on

Cisco Nexus 5020 Ethernet configuration

Overview
The two Cisco Nexus 5020 switches provide redundant high-performance, low-latency 10 GbE and 8-gigabit Fibre Channel (FC) networking. The Ethernet connections are delivered by a cut-through switching architecture for 10 GbE server access in next-generation data centers.

Cabling
In this solution, the VNX Data Mover cabling is spread across two Nexus 5020 switches to provide redundancy and load balancing of the network traffic.

Enable jumbo frames on Nexus switch
The following excerpt of the switch configuration shows the commands that are required to enable jumbo frames at the switch level, because per-interface MTU is not supported:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

vPC for Data Mover ports
Because the Data Mover connections for the two 10-gigabit network ports are spread across two Nexus switches and LACP is configured for the two Data Mover ports, virtual Port Channel (vPC) must be configured on both switches. The following excerpt is an example of the switch configuration pertaining to the vPC setup for one of the Data Mover ports. The configuration on the peer Nexus switch is mirrored for the second Data Mover port:

n5k-1# show running-config
feature vpc
vpc domain 2
  peer-keepalive destination <peer-nexus-ip>
interface port-channel3
  description channel uplink to n5k-2
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network
interface port-channel4
  switchport mode trunk
  vpc 4
  switchport trunk allowed vlan
interface Ethernet1/4
  description 1/4 vnx dm2 fxg-1-0
  switchport mode trunk
  switchport trunk allowed vlan
  channel-group 4 mode active
interface Ethernet1/5
  description 1/5 uplink to n5k-2 1/5
  switchport mode trunk
  channel-group 3 mode active
interface Ethernet1/6
  description 1/6 uplink to n5k-2 1/6
  switchport mode trunk
  channel-group 3 mode active

To verify if the vPC is configured correctly, run the following command on both switches. The output should look like this:

n5k-1# show vpc
Legend: (*) - local vpc is down, forwarding via vpc peer-link

vpc domain id                    : 2
Peer status                      : peer adjacency formed ok
vpc keep-alive status            : peer is alive
Configuration consistency status : success
vpc role                         : secondary
Number of vpcs configured        : 1
Peer Gateway                     : Disabled
Dual-active excluded VLANs       : -

vpc Peer-link status
id   Port   Status   Active vlans
     Po3    up       1,

vpc status
id   Port   Status   Consistency   Reason    Active vlans
     Po4    up       success       success

48 Chapter 5: Network Design Cisco Nexus 5020 Fibre Channel configuration Overview Cabling Fibre Channel uplinks The two Cisco Nexus 5020 switches provide redundant high-performance, low-latency 10 GbE and 8-gigabit Fibre Channel (FC) networking. The Ethernet connections are delivered by a cut-through switching architecture for 10 GbE server access in nextgeneration data centers. In this solution, the Fibre Channel and Data Mover cabling is evenly distributed across two Nexus 5020 switches to provide redundancy and load balancing of the Fibre Channel and network traffic. The Fibre Channel uplinks are configured using single initiator zoning to provide optimal security and minimize interference. Single initiator zoning requires four Fibre Channel zones per each vsphere host; each vsphere host Fibre Channel port is zoned individually to each of the two VNX5500 Storage Processor Fibre Channel ports. Figure 13 provides a visual representation of single initiator zoning. Figure 13. Example Single initiator zoning 48

The following is an example of the configuration required to create the necessary Fibre Channel zones for one vSphere host on one of the two Nexus 5020 switches. In this example, one of the two vSphere host Fibre Channel ports is zoned to each of the two VNX5500 Storage Processor ports. The second Nexus switch has a similar configuration, but zones the second vSphere host Fibre Channel port to each of the VNX5500 Storage Processors.

vsan database
  vsan 100

interface fc2/1
  no shutdown
interface fc2/2
  no shutdown
interface fc2/3
  no shutdown

fcalias name rtpsan27-spa vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX
fcalias name rtpsan27-spb vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX
fcalias name rtpserver37-port1 vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX

zone name rtpserver37-port1_rtpsan27-spa vsan 100
  member fcalias rtpserver37-port1
  member fcalias rtpsan27-spa
zone name rtpserver37-port1_rtpsan27-spb vsan 100
  member fcalias rtpserver37-port1
  member fcalias rtpsan27-spb

zoneset name rtplab-1 vsan 100
  member rtpserver37-port1_rtpsan27-spa
  member rtpserver37-port1_rtpsan27-spb

zoneset activate name rtplab-1 vsan 100
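Because single-initiator zoning multiplies quickly as hosts are added (four zones per vSphere host in this design), it can be helpful to enumerate the expected zones with a short script before configuring the switches. The host and alias names below are placeholders patterned after the example above, not the actual lab configuration.

# Enumerate single-initiator zones: on each fabric, every vSphere HBA port is zoned
# to each of the two Storage Processor ports, giving four zones per host in total.
# All alias names below are illustrative placeholders, not the validated lab aliases.
hosts = ["esx01", "esx02", "esx03"]

fabrics = {
    "fabric-A": {"hba": "port1", "sp_ports": ["vnx-spa-0", "vnx-spb-0"]},
    "fabric-B": {"hba": "port2", "sp_ports": ["vnx-spa-1", "vnx-spb-1"]},
}

for host in hosts:
    for fabric, cfg in fabrics.items():
        for sp_port in cfg["sp_ports"]:
            print(f"{fabric}: zone name {host}-{cfg['hba']}_{sp_port}")

Generating the list this way also makes it easy to confirm that each fabric carries exactly two zones per host before the zoneset is activated.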

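Once zoning is complete and the host initiators are registered with the VNX5500, FC connectivity can also be confirmed from each vSphere host. The command below is a generic ESXi 5.x sketch; the naa device identifier is a placeholder. With the single-initiator zoning described above, each VNX LUN should typically be visible through four paths, two per host HBA port:

esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx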

6 Installation and Configuration

Installation overview
This chapter describes how to install and configure this solution and includes the following sections:

Installation overview
VMware View components
Storage components

This section provides an overview of the configuration of the following components:

Desktop pools
Storage pools
FAST Cache
VNX Home Directory

The installation and configuration steps for the following components are available on the VMware website:

VMware View Manager Server 5.1
VMware View Composer 3.0
VMware View Storage Accelerator
VMware View Persona Management
VMware vSphere 5.0

The installation and configuration steps for the following components are not covered:

Microsoft System Center Configuration Manager (SCCM) 2007 R3
Microsoft Active Directory, Group Policies, DNS, and DHCP
Microsoft SQL Server 2008 SP2

VMware View components

VMware View installation overview
The VMware View Installation document available on the VMware website has detailed procedures on how to install View Manager Server and View Composer 3.0. No special configuration instructions are required for this solution.

The vSphere Installation and Setup Guide available on the VMware website contains detailed procedures that describe how to install and configure vCenter Server and vSphere. As a result, these subjects are not covered in further detail in this paper. No special configuration instructions are required for this solution.

VMware View setup
Before deploying the desktop pools, ensure that the following steps from the VMware View Installation document have been completed:

Prepare Active Directory
Install View Composer 3.0 on the vCenter Server
Install the View Manager Server
Add the vCenter Server instance to View Manager and enable host caching for View

VMware View desktop pool configuration
VMware supports a maximum of 1,000 desktops per replica image, which requires creating a unique pool for every 1,000 desktops. In this solution, two persistent automated desktop pools were used.

To create one of the persistent automated desktop pools as configured for this solution, complete the following steps:

1. Log in to the VMware View Administration page, which is located at https://<server>/admin, where server is the IP address or DNS name of the View Manager server.
2. Click Pools in the left pane.
3. Click Add under the Pools banner. The Add Pool page appears.
4. Under Pool Definition, click Type. The Type page appears in the right pane.
5. Select Automated Pool as shown in Figure 14.

Figure 14. VMware View Select Automated Pool

6. Click Next. The User Assignment page appears.
7. Select Dedicated and leave the Enable automatic assignment checkbox checked.
8. Click Next. The vCenter Server page appears.
9. Select View Composer linked clones and select a vCenter Server that supports View Composer as shown in Figure 15.

Figure 15. VMware View Select View Composer linked clones

10. Click Next. The Pool Identification page appears.
11. Type the required information.
12. Click Next. The Pool Settings page appears.
13. Make any required changes.
14. Click Next. The Provisioning Settings page appears.
15. Complete the following steps as shown in Figure 16:
   a. Select Use a naming pattern.
   b. In the Naming Pattern field, type the naming pattern.
   c. In the Max number of desktops field, type the number of desktops to provision.

Figure 16. VMware View Select Provision Settings

16. Click Next. The View Composer Disks page appears.
17. Make any required changes.
18. Click Next. The Storage Optimization page appears.
19. Select the Select separate datastores for replica and OS disk checkbox.
20. Click Next. The vCenter Settings page appears.
21. Complete the following steps as shown in Figure 17:
   a. Click Browse to select a default image (Parent VM), the snapshot to use for the default image (Snapshot), a folder for the virtual machines (VM folder location), the cluster hosting the virtual desktops (Host or cluster), and the resource pool to store the desktops (Resource pool).

Figure 17. VMware View vCenter Settings

   b. In the configuration line item for Linked clone datastores, click Browse. The Select Linked Clone Datastores page appears. Select the checkboxes for the eight LUNs that were provisioned for linked clone storage as shown in Figure 18 and click OK.

Figure 18. VMware View Select Linked Clone Datastores

   c. In the configuration line item for Replica disk datastores, click Browse. The Select Replica Disk Datastores page appears. Select the LUN that was provisioned for replica disk storage as shown in Figure 19 and click OK.

Figure 19. VMware View Select Replica Disk Datastore

22. Click OK. The Advanced Storage Options page appears as shown in Figure 20.

Figure 20. VMware View Advanced Storage Options

23. Verify that the Use host caching checkbox is checked and enable Blackout times for host cache regeneration.

   Note: Host cache regeneration may temporarily impact desktop performance. It is recommended to set a blackout time to prevent the host cache regeneration from taking place during periods of heavy desktop usage.

24. Click Next. The Guest Customization page appears.
25. Complete the following steps as shown in Figure 21:
   a. In the Domain list box, select the domain.
   b. In the AD container field, click Browse, and then select the AD container.
   c. Select Use QuickPrep.

Figure 21. VMware View Guest Customization

26. Click Next. The Ready to Complete page appears.
27. Verify the settings for the pool.
28. Click Finish. The deployment of the virtual desktops starts.
29. Repeat this process as needed to provision additional desktop pools.

VMware View Persona Management configuration
The profiles_fs and profiles2_fs CIFS file systems are used for the VMware View Persona Management repositories. VMware View Persona Management is enabled using a Windows group policy template. The group policy template is located on the View 5 Manager Server in the Install Drive\Program Files\VMware\VMware View\Server\extras\GroupPolicyFiles directory. The group policy template titled ViewPM.adm is needed to configure VMware View Persona Management.

VMware View Persona Management is enabled by using computer group policies that are applied to the organizational unit containing the virtual desktop computer objects. Figure 22 shows a summary of the policies configured to enable VMware View Persona Management in the reference architecture environment.

Figure 22. VMware View Persona Management Initial configuration

When deploying VMware View Persona Management in a production environment, it is recommended to redirect the folders that users commonly use to store documents or other files. Figure 23 shows the VMware View Persona Management group policy settings required to redirect the user Desktop, Downloads, My Documents, and My Pictures folders.

Figure 23. VMware View Persona Management Folder Redirection policies

Two sets of group policies are required to support the use of multiple View Persona Management repositories and user home directories.

Storage components

Storage pools
Storage pools in the EMC VNX OE support heterogeneous drive pools. Three storage pools were configured in this solution as shown in Figure 24:

A RAID 5 storage pool (Pool 0) was configured from 30 SAS drives. Sixteen LUNs of 365 GB each and two LUNs of 50 GB each are carved out of the pool and presented to the vSphere servers for use as VMFS datastores. FAST Cache was enabled for the pool.

A RAID 6 storage pool (Pool 1) was configured from 32 NL-SAS drives. Thirty 1 TB thick LUNs were created from this storage pool. This pool was used to store the user home directory and VMware View Persona Management repository CIFS file shares. FAST Cache was enabled for the pool.

A RAID 5 storage pool (Pool 2) was configured from 5 SAS drives. One LUN of 1 TB in size is carved out of the pool and presented to the vSphere servers for use as a VMFS datastore.

Figure 24. VNX5500 Storage pools

Enable FAST Cache
FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere. Click the FAST Cache tab, then click Create and select the Flash drives to create the FAST Cache. There are no user-configurable parameters for FAST Cache. Figure 25 shows the FAST Cache settings for the VNX5500 array used in this solution.
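For administrators who prefer to script the storage setup, the storage pool, pool LUN, and FAST Cache configuration described above can also be driven with the VNX Block CLI (naviseccli). The excerpt below is an illustrative sketch only: the SP address, disk IDs, pool name, and LUN name are placeholders, and the option syntax is quoted from memory, so verify it against the VNX Command Line Interface Reference for the installed VNX OE for Block release before use.

# Create a RAID 5 storage pool from SAS drives (disk IDs are placeholders)
naviseccli -h <sp_ip> storagepool -create -name "Pool 0" -rtype r_5 -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8

# Carve a 365 GB thick LUN out of the pool for linked clone storage
naviseccli -h <sp_ip> lun -create -type nonThin -capacity 365 -sq gb -poolName "Pool 0" -name "LinkedClone_DS1"

# Create FAST Cache from the Flash drives, then enable it on the pool
naviseccli -h <sp_ip> cache -fast -create -disks 0_1_0 0_1_1 0_1_2 0_1_3 -mode rw -rtype r_1
naviseccli -h <sp_ip> storagepool -modify -name "Pool 0" -fastcache on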

Figure 25. VNX5500 FAST Cache tab

To enable FAST Cache for the LUNs in a pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache as shown in Figure 26.

Figure 26. VNX5500 Enable FAST Cache

VNX Home Directory feature
The VNX Home Directory installer is available on the NAS Tools and Applications CD for each VNX OE for File release, and can be downloaded from the EMC Online Support website.

After the VNX Home Directory feature is installed, use the Microsoft Management Console (MMC) snap-in to configure the feature. A sample configuration is shown in Figure 27 and Figure 28.

Figure 27. VNX5500 Home Directory MMC snap-in

For any user account that ends with a suffix between 1 and 2,000, the sample configuration shown in Figure 28 automatically creates a user home directory on the \home_fs file system, in the format \home_fs\<user>, and maps the H: drive to that path. Each user has exclusive rights to the folder.

Figure 28. VNX5500 Sample Home Directory User folder properties


7 Testing and Validation

This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing is to characterize the performance of the solution and its component subsystems during the following scenarios:

Boot storm of all desktops
McAfee antivirus full scan on all desktops
Security patch install with Microsoft SCCM 2007 R3 on all desktops
User workload testing using Login VSI on all desktops
View recompose
View refresh

Validated environment profile

Profile characteristics
Table 6 provides the validated environment profile.

Table 6. VMware View environment profile

Profile characteristic                                      Value
Number of virtual desktops                                  2,000
Virtual desktop OS                                          Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop                                     1 vCPU
Number of virtual desktops per CPU core                     7.14
RAM per virtual desktop                                     1 GB
Average storage available for each virtual desktop          2.92 GB
Average IOPS per virtual desktop in steady state            9.8
Average peak IOPS per virtual desktop during boot storm     22.1
Number of datastores used to store linked clones            16
Number of datastores used to store replicas                 2
Number of virtual desktops per datastore                    125
Disk and RAID type for datastores                           RAID 5, 300 GB, 15k rpm, 3.5-in. SAS disks

Disk and RAID type for CIFS shares to host the VMware View
Persona Management repository and home directories          RAID 6, 1 TB, 7,200 rpm, 3.5-in. NL-SAS disks
Number of VMware clusters for virtual desktops              2
Number of vSphere servers in each cluster                   7
Number of virtual desktops in each cluster                  1,000

Use cases
Six common use cases were executed to validate whether the solution performed as expected under heavy-load situations. The following use cases were tested:

Simultaneous boot of all desktops
Full antivirus scan of all desktops
Installation of a monthly release of security updates using SCCM 2007 R3 on all desktops
Login and steady-state user load simulated using the Login VSI medium workload on all desktops
Recompose of all desktops
Refresh of all desktops

In each use case, a number of key metrics are presented that show the overall performance of the solution.

Disclaimer
The test results listed in the following sections reflect a single test run. If the tests are repeated, there might be some variation in the data obtained.

Login VSI
Login Virtual Session Indexer (Login VSI) version 3.5 was used to run a user load on the desktops. VSI provided the guidance to gauge the maximum number of users a desktop environment can support. The Login VSI workload is categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). A medium workload was selected for this testing; it had the following characteristics:

The workload emulated a medium knowledge worker who used Microsoft Office Suite, Internet Explorer, Adobe Acrobat Reader, Bullzip PDF Printer, and 7-zip.
After a session started, the medium workload repeated every 12 minutes.
The response time was measured every 2 minutes during each loop.
The medium workload opened up to five applications simultaneously.
The type rate was 160 ms for each character.
Approximately 2 minutes of idle time was included to simulate real-world users.

Each loop of the medium workload used the following applications:

Microsoft Outlook 2007 - Browsed 10 messages.
Microsoft Internet Explorer (IE) - One instance of IE opened the BBC.co.uk website. Another instance browsed Wired.com and Lonelyplanet.com. Finally, another instance opened a flash-based 480p video file.
Microsoft Word 2007 - One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.
Bullzip PDF Printer and Adobe Acrobat Reader - The Word document was printed to PDF and reviewed.
Microsoft Excel 2007 - A very large Excel worksheet was opened and random operations were performed.
Microsoft PowerPoint 2007 - A presentation was reviewed and edited.
7-zip - Using the command line version, the output of the session was zipped.

Login VSI launcher
A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers: master and slave. There is only one master in a given test bed, but there can be several slave launchers as required.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned. In such a case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, 2,000 desktop sessions were launched from 62 launchers, with approximately 32 sessions per launcher. Each launcher was allocated two vCPUs and 4 GB of RAM. No bottlenecks were observed on the launchers during the Login VSI tests.

FAST Cache configuration
For all tests, FAST Cache was enabled for the storage pools holding the replica and linked clone datastores, and the user home directories and VMware View Persona Management repository.

Boot storm results

Test methodology
This test was conducted by selecting all the desktops in vCenter Server, and then selecting Power On. Overlays are added to the graphs to show when the last power-on task was completed and when the IOPS to the pool LUNs achieved a steady state.

For the boot storm test, all 2,000 desktops were powered on within 8 minutes and achieved a steady state 2 minutes later. All desktops were available for login within 10 minutes. This section describes the boot storm results for each of the three use cases when powering on the desktop pools.

66 Chapter 7: Testing and Validation Pool individual disk load Figure 29 shows the disk IOPS and response time for a single SAS drive in the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 29. Boot storm Disk IOPS for a single SAS drive During peak load, the disk serviced a maximum of IOPS and experienced a response time of 11.8 ms. The Data Mover cache, FAST Cache, and View Storage Accelerator helped reduce the disk load associated with the boot storm. Pool LUN load Figure 30 shows the replica LUN IOPS and the response time of one of the linked clone LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. 66

67 Chapter 7: Testing and Validation Figure 30. Boot storm Pool LUN IOPS and response time During peak load, the LUN serviced a maximum of 2,786.1 IOPS and experienced a response time of 10.1 ms. Storage processor IOPS Figure 31 shows the total IOPS serviced by the storage processors during the test. Figure 31. Boot storm Storage processor total IOPS During peak load, the storage processors serviced 44,376.7 IOPS. Storage processor utilization Figure 32 shows the storage processor utilization during the test. The pool-based LUNs were split across both the storage processors to balance the load equally. 67

68 Chapter 7: Testing and Validation Figure 32. Boot storm Storage processor utilization The virtual desktops generated high levels of I/O during the peak load of the boot storm test. The peak storage processor utilization was 50 percent. The EMC VNX5500 had sufficient scalability headroom for this workload. FAST Cache IOPS Figure 33 shows the IOPS serviced from FAST Cache during the boot storm test. Figure 33. Boot storm FAST Cache IOPS During peak load, FAST Cache serviced 31,902.2 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 21,185.8 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) 68

for 15k rpm SAS drives suggests that approximately 118 SAS drives are required to achieve the same level of performance.

vSphere CPU load
Figure 34 shows the CPU load from the vSphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph.

Figure 34. Boot storm vSphere CPU load

The vSphere server achieved a peak CPU utilization of 44.3 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time
Figure 35 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph.
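For readers reproducing these measurements, the GAVG counter described above can be observed interactively with esxtop on each host. The following is a generic how-to sketch rather than a capture from this environment:

# From the ESXi Shell (or resxtop on a remote management system), start esxtop
esxtop
# Press u to switch to the disk device view; the GAVG/cmd column reports the
# average guest response time per command for each device (datastore LUN).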

70 Chapter 7: Testing and Validation Figure 35. Boot storm Average Guest Millisecond/Command counter Antivirus results The peak GAVG of the LUN hosting the replica image was 7.2 ms, and the linked clone LUN was 2.5 ms. The 2,000 desktops attained steady state in less than 9 minutes after the initial power on. Test methodology Pool individual disk load This test was conducted by scheduling a full scan of all desktops using a custom script to initiate an on-demand scan using McAfee VirusScan 8.7i. The full scans were started on all the desktops. The difference between start time and finish time was 48 minutes. Figure 36 shows the disk I/O for a single SAS drive in the storage pool that stores the virtual desktops. Each disk had similar results, therefore only the results from a single disk are shown in the graph. 70

71 Chapter 7: Testing and Validation Figure 36. Antivirus Disk I/O for a single SAS drive During peak load, the disk serviced IOPS and experienced a response time of 9.7 ms. The FAST Cache, Data Mover cache, and View Storage Accelerator helped reduce the load on the disks. Pool LUN load Figure 37 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. Figure 37. Antivirus Pool LUN IOPS and response time 71

72 Chapter 7: Testing and Validation During peak load, the LUN serviced IOPS and experienced a response time of 12.8 ms. The majority of the read I/O was served by the FAST Cache, Data Mover cache, and View Storage Accelerator. Storage processor IOPS Figure 38 shows the total IOPS serviced by the storage processor during the test. Figure 38. Antivirus Storage processor IOPS During peak load, the storage processors serviced 18,847.0 IOPS. Storage processor utilization Figure 39 shows the storage processor utilization during the antivirus scan test. Figure 39. Antivirus Storage processor utilization 72

73 Chapter 7: Testing and Validation During peak load, the antivirus scan operations caused moderate CPU utilization of 30.8 percent. The load was shared between both the storage processors during the antivirus scan. EMC VNX5500 had sufficient scalability headroom for this workload. FAST Cache IOPS Figure 40 shows the IOPS serviced from FAST Cache during the test. Figure 40. Antivirus FAST Cache IOPS During peak load, FAST Cache serviced 14,627.3 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 18,104.0 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 101 SAS drives are required to achieve the same level of performance. vsphere CPU load Figure 41 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. 73

Figure 41. Antivirus vSphere CPU load

The vSphere server achieved a peak CPU utilization of 52.8 percent. Hyper-threading was enabled to double the number of logical CPUs.

vSphere disk response time
Figure 42 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph.

Figure 42. Antivirus Average Guest Millisecond/Command counter

The peak GAVG of the LUN hosting the replica image was 5.3 ms, and the linked clone LUN was 15.6 ms.

75 Chapter 7: Testing and Validation Patch install results Test methodology Pool individual disk load This test was performed by pushing a monthly release of Microsoft security updates to all desktops using Microsoft System Center Configuration Manager (SCCM) 2007 R3. One thousand desktops were placed in single collection within SCCM. The collection was configured to install updates in a 1-minute staggered schedule that occurred 30 minutes after the patches were available for download. All patches were installed within five minutes. Figure 43 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 43. Patch install Disk IOPS for a single SAS drive During peak load, the disk serviced IOPS and experienced a response time of 13.1 ms. Pool LUN load Figure 44 shows the replica LUN IOPS and response time of one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. 75

76 Chapter 7: Testing and Validation Figure 44. Patch install Pool LUN IOPS and response time During peak load, the LUN serviced a maximum of 1,366.7 IOPS and experienced a response time of 2.4 ms. Storage processor IOPS Figure 45 shows the total IOPS serviced by the storage processor during the test. Figure 45. Patch install Storage processor IOPS During peak load, the storage processors serviced 22,495.8 IOPS. The load was shared between both storage processors during the patch install operation on each pool of virtual desktops. 76

77 Chapter 7: Testing and Validation Storage processor utilization Figure 46 shows the storage processor utilization during the test. Figure 46. Patch install Storage processor utilization The patch install operations caused moderate CPU utilization during peak load, reaching a maximum of 39.9 percent utilization. The EMC VNX5500 had sufficient scalability headroom for this workload. FAST Cache IOPS Figure 47 shows the IOPS serviced from FAST Cache during the test. Figure 47. Patch install FAST Cache IOPS 77

78 Chapter 7: Testing and Validation During patch installation, FAST Cache serviced 12,931.5 IOPS from datastores. The FAST Cache hits include IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 7,520.0 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 42 SAS drives are required to achieve the same level of performance. vsphere CPU load Figure 48 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 48. Patch install vsphere CPU load The vsphere server CPU load was well within the acceptable limits during the test, reaching a maximum of 29.7 percent utilization. Hyper-threading was enabled to double the number of logical CPUs. vsphere disk response time Figure 49 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. 78

79 Chapter 7: Testing and Validation Figure 49. Patch install Average Guest Millisecond/Command counter Login VSI results The peak replica LUN GAVG value was 5.9 ms while the peak linked clone LUN GAVG was 6.0 ms. Test methodology Desktop logon time This test was conducted by scheduling 2,000 users to connect through remote desktop in a 90-minute window and starting the Login VSI-medium with Flash workload. The workload ran for one hour in a steady state to observe the load on the system. Figure 50 shows the time required for the desktops to complete the user login process. 79

80 Chapter 7: Testing and Validation Figure 50. Login VSI Desktop login time The time required to complete the login process reached a maximum of 7.2 seconds during peak load of the 2,000 desktop login storm. Pool individual disk load Figure 51 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 51. Login VSI Disk IOPS for a single SAS drive During peak load, the SAS disk serviced IOPS and experienced a response time of 8.2 ms. 80

81 Chapter 7: Testing and Validation Pool LUN load Figure 52 shows the Replica LUN IOPS and response time from one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. Figure 52. Login VSI Pool LUN IOPS and response time During peak load, the LUN serviced IOPS and experienced a response time of 4.9 ms. Storage processor IOPS Figure 53 shows the total IOPS serviced by the storage processor during the test. Figure 53. Login VSI Storage processor IOPS 81

82 Chapter 7: Testing and Validation During peak load, the storage processors serviced a maximum of 21,815.8 IOPS. Storage processor utilization Figure 54 shows the storage processor utilization during the test. Figure 54. Login VSI Storage processor utilization The storage processor peak utilization was 41.2 percent during the login storm. The load was shared between both the storage processors during the VSI load test. The EMC VNX5500 had sufficient scalability headroom for this workload. FAST Cache IOPS Figure 55 shows the IOPS serviced from FAST Cache during the test. Figure 55. Login VSI FAST Cache IOPS 82

83 Chapter 7: Testing and Validation During peak load, FAST Cache serviced 8,348.0 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 8,065.2 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 45 SAS drives are required to achieve the same level of performance. Data Mover CPU utilization Figure 56 shows the Data Mover CPU utilization during the Login VSI test. Note: The Data Mover was only used to provide CIFS shares which stored the sample user data and profiles used during the Login VSI test. Figure 56. Login VSI Data Mover CPU utilization The Data Mover achieved a peak CPU utilization of 43 percent during this test. vsphere CPU load Figure 57 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. 83

84 Chapter 7: Testing and Validation Figure 57. Login VSI vsphere CPU load The CPU load on the vsphere server reached a maximum of 50.8 percent utilization during peak load. Hyper-threading was enabled to double the number of logical CPUs. vsphere disk response time Figure 58 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 58. Login VSI Average Guest Millisecond/Command counter 84

85 Recompose results Chapter 7: Testing and Validation The peak GAVG of the LUN hosting the replica image was 7.8 ms, and the linked clone LUN was 2.5 ms. Test methodology This test was conducted by performing a VMware View desktop recompose operation of all desktop pools. A new virtual machine snapshot of the master virtual desktop image was taken to serve as the target for the recompose operation. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state. A recompose operation deletes the existing virtual desktops and creates new ones. To enhance the readability of the graphs and to show the array behavior during high I/O periods, only those tasks involved in creating new desktops were performed and shown in the graphs. All desktop recompose operations were initiated simultaneously and it took 390 minutes to complete the entire process. Pool individual disk load Figure 59 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 59. Recompose Disk IOPS for a single SAS drive During peak load, the SAS disk serviced a maximum of 66.5 IOPS and experienced a response time of 15.0 ms. Pool LUN load Figure 60 shows the replica LUN IOPS and response time from one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. 85

86 Chapter 7: Testing and Validation Figure 60. Recompose Pool LUN IOPS and response time Copying the new replica images caused heavy sequential-write workloads on the LUN during the copy process. During peak load, the LUN serviced 1,303.5 IOPS and experienced a response time of 2.3 ms. Storage processor IOPS Figure 61 shows the total IOPS serviced by the storage processor during the test. Figure 61. Recompose Storage processor IOPS During peak load, the storage processors serviced 8,212.3 IOPS. Storage processor utilization Figure 62 shows the storage processor utilization during the test. 86

Figure 62. Recompose Storage processor utilization

The storage processor utilization reached 17.9 percent during the recompose operation. The load was shared between the two storage processors during peak load. The EMC VNX5500 had sufficient scalability headroom for this workload.

FAST Cache IOPS
Figure 63 shows the IOPS serviced from FAST Cache during the test.

Figure 63. Recompose FAST Cache IOPS

During peak load, FAST Cache serviced 6,954.2 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 3,756.4 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 21 SAS drives are required to achieve the same level of performance.
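The drive-count estimates quoted throughout this chapter all follow the same arithmetic, shown here for the recompose case using the figures above: 3,756.4 IOPS served by the Flash drives, divided by the 180 IOPS rule-of-thumb estimate for a 15k rpm SAS drive, gives approximately 20.9, which rounds up to the 21 drives stated.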

88 Chapter 7: Testing and Validation vsphere CPU load Figure 64 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 64. Recompose vsphere CPU load The vsphere server reached a peak CPU load of 15.2 percent. Hyper-threading was enabled to double the number of logical CPUs. vsphere disk response time Figure 65 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. 88

89 Chapter 7: Testing and Validation Figure 65. Recompose Average Guest Millisecond/Command counter Refresh results The peak GAVG of the LUN hosting the replica image was 2.4 ms, and the linked clone LUN was 1.8 ms. Test methodology Pool individual disk load This test was conducted by selecting a refresh operation for all desktop pools from the View Manager administration console. The refresh operations for all pools were initiated at the same time by scheduling the refresh operation within the View administration console. No users were logged in during the test. Overlays are added to the graphs to show when the last power-on task completed and when the IOPS to the pool LUNs achieved a steady state. The refresh operation took 130 minutes to complete. Figure 66 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. 89

90 Chapter 7: Testing and Validation Figure 66. Refresh Disk IOPS for a single SAS drive During peak load, the SAS disk serviced a maximum of IOPS and experienced a response time of 9.9 ms. Pool LUN load Figure 67 shows the Replica LUN IOPS and response time from one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. Figure 67. Refresh Pool LUN IOPS and response time During peak load, the LUN serviced 1,249.0 IOPS and experienced a response time of 2.0 ms. 90

91 Chapter 7: Testing and Validation Storage processor IOPS Figure 68 shows the total IOPS serviced by the storage processor during the test. Figure 68. Refresh Storage processor IOPS During peak load, the storage processors serviced 9,180.8 IOPS. Storage processor utilization Figure 69 shows the storage processor utilization during the test. Figure 69. Refresh Storage processor utilization The storage processor peak utilization was 21.6 percent during the refresh test. The load was shared between both the storage processors during the test. The EMC VNX5500 had sufficient scalability headroom for this workload. 91

92 Chapter 7: Testing and Validation FAST Cache IOPS Figure 70 shows the IOPS serviced from FAST Cache during the test. Figure 70. Refresh FAST Cache IOPS During peak load, FAST Cache serviced 6,805.8 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 4,666.7 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that approximately 26 SAS drives are required to achieve the same level of performance. vsphere CPU load Figure 71 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. 92

93 Chapter 7: Testing and Validation Figure 71. Refresh vsphere CPU load The vsphere server reached a peak CPU load of 16.9 percent. Hyper-threading was enabled to double the number of logical CPUs. vsphere disk response time Figure 72 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the replica storage is shown as Replica LUN GAVG and the average of all the datastores hosting the linked clone storage is shown as Linked Clone LUN GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 72. Refresh Average Guest Millisecond/Command counter 93


More information

Install & Run the EMC VNX Virtual Storage Appliance (VSA)

Install & Run the EMC VNX Virtual Storage Appliance (VSA) Install & Run the EMC VNX Virtual Storage Appliance (VSA) Simon Seagrave EMC vspecialist Technical Enablement Team & Blogger Kiwi_Si simon.seagrave@emc.com 1 Goal of this session In this session I want

More information

DELL EMC UNITY: HIGH AVAILABILITY

DELL EMC UNITY: HIGH AVAILABILITY DELL EMC UNITY: HIGH AVAILABILITY A Detailed Review ABSTRACT This white paper discusses the high availability features on Dell EMC Unity purposebuilt solution. October, 2017 1 WHITE PAPER The information

More information

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath

More information

EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2

EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2 Reference Architecture EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2 Enabled by the EMC XtremIO All-Flash Array, VMware vsphere 5.1, VMware Horizon View 5.2, and VMware Horizon View Composer 5.2 Simplify

More information

Reference Architecture

Reference Architecture EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 in VMware ESX Server EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com

More information

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 A performance study of 14 th generation Dell EMC PowerEdge servers for Microsoft SQL Server Dell EMC Engineering September

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy multiple Microsoft SQL Server

More information

EMC Infrastructure for Virtual Desktops

EMC Infrastructure for Virtual Desktops EMC Infrastructure for Virtual Desktops Enabled by EMC Unified Storage (FC), Microsoft Windows Server 2008 R2 Hyper-V, and Citrix XenDesktop 4 Proven Solution Guide EMC for Enabled

More information

VMware vsphere with ESX 6 and vcenter 6

VMware vsphere with ESX 6 and vcenter 6 VMware vsphere with ESX 6 and vcenter 6 Course VM-06 5 Days Instructor-led, Hands-on Course Description This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 500 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next- EMC VSPEX Abstract This document describes

More information

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS EMC VNX FAMILY Next-generation unified storage, optimized for virtualized applications ESSENTIALS Unified storage for multi-protocol file, block, and object storage Powerful new multi-core Intel CPUs with

More information

EMC XTREMCACHE ACCELERATES ORACLE

EMC XTREMCACHE ACCELERATES ORACLE White Paper EMC XTREMCACHE ACCELERATES ORACLE EMC XtremSF, EMC XtremCache, EMC VNX, EMC FAST Suite, Oracle Database 11g XtremCache extends flash to the server FAST Suite automates storage placement in

More information

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint Copyright 2011 EMC Corporation. All rights reserved. Published March, 2011 EMC believes the information in this

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

Virtual Desktop Infrastructure with Dell Fluid Cache for SAN

Virtual Desktop Infrastructure with Dell Fluid Cache for SAN Virtual Desktop Infrastructure with Dell Fluid Cache for SAN This Dell technical white paper describes the tasks to deploy a high IOPS (heavy user), 800-user, virtual desktop environment in a VMware Horizon

More information

2014 VMware Inc. All rights reserved.

2014 VMware Inc. All rights reserved. 2014 VMware Inc. All rights reserved. Agenda Virtual SAN 1 Why VSAN Software Defined Storage 2 Introducing Virtual SAN 3 Hardware Requirements 4 DEMO 5 Questions 2 The Software-Defined Data Center Expand

More information

White Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft

White Paper. A System for  Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS

More information

EMC VNX2 Deduplication and Compression

EMC VNX2 Deduplication and Compression White Paper VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, & VNX8000 Maximizing effective capacity utilization Abstract This white paper discusses the capacity optimization technologies delivered in the

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange 2007 Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange Server Enabled by MirrorView/S and Replication Manager Reference Architecture EMC

More information

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved. VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange Server

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING VSPEX Proven Infrastructure EMC VSPEX END-USER COMPUTING Citrix XenDesktop 5.6 with VMware vsphere 5.1 for up to 250 Virtual Desktops Enabled by EMC VNXe and EMC Next-Generation Backup EMC VSPEX Abstract

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Reference Architecture Guide By Roger Clark August 15, 2012 Feedback Hitachi Data Systems welcomes your feedback. Please share your

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes how to

More information

EMC END-USER COMPUTING

EMC END-USER COMPUTING EMC END-USER COMPUTING Citrix XenDesktop 7.9 and VMware vsphere 6.0 with VxRail Appliance Scalable, proven virtual desktop solution from EMC and Citrix Simplified deployment and management Hyper-converged

More information

INTRODUCING VNX SERIES February 2011

INTRODUCING VNX SERIES February 2011 INTRODUCING VNX SERIES Next Generation Unified Storage Optimized for today s virtualized IT Unisphere The #1 Storage Infrastructure for Virtualisation Matthew Livermore Technical Sales Specialist (Unified

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Version 4.0 Configuring Hosts to Access VMware Datastores P/N 302-002-569 REV 01 Copyright 2016 EMC Corporation. All rights reserved.

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD Louaye Rachidi Technology Consultant 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support

More information

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture

EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution. Enabled by EMC Celerra and Linux using FCP and NFS. Reference Architecture EMC Unified Storage for Oracle Database 11g/10g Virtualized Solution Enabled by EMC Celerra and Linux using FCP and NFS Reference Architecture Copyright 2009 EMC Corporation. All rights reserved. Published

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 EMC VSPEX Abstract This describes how to design virtualized Microsoft SQL Server resources on the appropriate EMC VSPEX Proven Infrastructure

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP Enabled by EMC VNXe and EMC Data Protection VMware vsphere 5.5 Red Hat Enterprise Linux 6.4 EMC VSPEX Abstract This describes how to design

More information

Surveillance Dell EMC Storage with Synectics Digital Recording System

Surveillance Dell EMC Storage with Synectics Digital Recording System Surveillance Dell EMC Storage with Synectics Digital Recording System Configuration Guide H15108 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell

More information