EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.6


Proven Solutions Guide

EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.6
EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1

Simplify management and decrease TCO
User-installed application capability on pooled desktops
Minimize the risk of virtual desktop deployment

EMC Solutions Group

Abstract

This Proven Solutions Guide provides a detailed summary of tests performed to validate an EMC infrastructure for Citrix XenDesktop 5.6 built on EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1. This document focuses on sizing and scalability, and highlights new features introduced in EMC VNX, VMware vsphere, and Citrix XenDesktop 5.6.

August 2012

2 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. VMware, ESXi, VMware vcenter, and VMware vsphere are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners. Part Number: H11007 EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 2

3 Table of contents Table of contents 1 Executive Summary Introduction to the EMC VNX series Introduction Software suites available Software packages available Business case Solution overview Key results and recommendations Introduction Document overview Use case definition Purpose Scope Not in scope Audience Prerequisites Terminology Reference Architecture Corresponding reference architecture Reference architecture diagram Configuration Hardware resources Software resources Citrix XenDesktop Infrastructure Citrix XenDesktop Introduction Deploying Citrix XenDesktop components Citrix XenDesktop Controller Citrix personal vdisk Citrix Profile Management vsphere 5.0 Infrastructure vsphere 5.0 overview Desktop vsphere clusters Infrastructure vsphere cluster Windows infrastructure EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 3

4 Table of contents Introduction Microsoft Active Directory Microsoft SQL Server DNS server DHCP server Storage Design EMC VNX series storage architecture Introduction Storage layout Storage layout overview File system layout EMC VNX FAST Cache VSI for VMware vsphere vcenter Server storage layout VNX shared file systems Citrix Profile Manager and folder redirection EMC VNX for File Home Directory feature Capacity Network Design Considerations Network layout overview Logical design considerations Link aggregation VNX for File network configuration Data Mover ports LACP configuration on the Data Mover Data Mover interfaces Enable jumbo frames on Data Mover interface vsphere network configuration NIC teaming Increase the number of vswitch virtual ports Enable jumbo frames for the VMkernel port used for NFS Cisco Nexus 5020 configuration Overview Cabling Enable jumbo frames on Nexus switch vpc for Data Mover ports Cisco Catalyst 6509 configuration EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 4

5 Table of contents Overview Cabling Server uplinks Installation and Configuration Installation overview Citrix XenDesktop components Citrix XenDesktop installation overview Virtual Desktop Agent Installation Citrix XenDesktop machine catalog configuration Citrix Profile Manager Configuration Storage components Storage pools NFS active threads per Data Mover NFS performance fix Enable FAST Cache Testing and Validation Validated environment profile Profile characteristics Use cases Login VSI Login VSI launcher FAST Cache configuration Boot storm results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS and CPU utilization FAST Cache IOPS Data Mover CPU utilization Data Mover NFS load vsphere CPU load vsphere disk response time Antivirus results Test methodology Pool individual disk load Pool LUN load Storage processor IOPS and CPU utilization FAST Cache IOPS EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 5

6 Table of contents Data Mover CPU utilization Data Mover NFS load vsphere CPU load vsphere disk response time Login VSI results Test methodology Desktop logon time Pool individual disk load Pool LUN load Storage processor IOPS and CPU utilization FAST Cache IOPS Data Mover CPU utilization Data Mover NFS load vsphere CPU load vsphere disk response time Personal vdisk Implementation considerations Personal vdisk Implementation considerations Storage Layout Login Time vsphere CPU Utilization Conclusion Summary References Supporting documents Citrix documents EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 6

7 List of Tables List of Tables Table 1. Terminology Table 2. Citrix XenDesktop 5.6 Solution hardware Table 3. Citrix XenDesktop Solution software Table 4. VNX5300 File systems Table 5. vsphere Port groups in vswitch Table 6. Citrix XenDesktop environment profile EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 7

8 List of Figures List of Figures Figure 1. Citrix XenDesktop Reference architecture Figure 2. VNX5300 Core reference architecture physical storage layout Figure 3. VNX5300 user data physical storage layout Figure 4. VNX5300 Virtual desktop NFS file system layout Figure 5. VNX5300 Personal vdisk NFS file system layout Figure 6. VNX5300 CIFS file system layout Figure 7. Citrix XenDesktop Network layout overview Figure 8. VNX5300 Ports of the two Data Movers Figure 9. vsphere vswitch configuration Figure 10. vsphere Load balancing policy Figure 11. vsphere vswitch virtual ports and MTU settings Figure 12. vsphere VMkernel port MTU setting Figure 13. Select the virtual desktop agent Figure 14. Select Personal vdisk Configuration Figure 15. Virtual Desktop Configuration Figure 16. Select the machine type Figure 17. Select the cluster host and Master image Figure 18. Virtual Machine specifications Figure 19. Active directory location and naming scheme Figure 20. Summary and Catalog name Figure 21. Citrix Profile Manager Master Image Profile Redirection Figure 22. Enable Citrix Profile Manager Figure 23. Citrix Profile Manager path to user store Figure 24. VNX5300 nthreads properties Figure 25. VNX5300 File System Mount Properties Figure 26. VNX5300 FAST Cache tab Figure 27. VNX5300 Enable FAST Cache Figure 28. Personal Disk Boot storm IOPS for a single Desktop pool SAS drive Figure 29. Personal vdisk Boot storm IOPS for a single personal vdisk pool SAS drive Figure 30. Personal vdisk Boot storm Desktop Pool LUN IOPS and response time Figure 31. Personal vdisk Boot storm personal vdisk Pool LUN IOPS and response time Figure 32. Personal vdisk Boot storm Storage processor total IOPS and CPU Utilization Figure 33. Personal vdisk Boot storm personal vdisk FAST Cache IOPS Figure 34. Personal vdisk Boot storm Data Mover CPU utilization Figure 35. Personal vdisk Boot storm Data Mover NFS load Figure 36. Personal vdisk Boot storm ESXi CPU load Figure 37. Personal vdisk Boot storm Average Guest Millisecond/Command counter Figure 38. Personal vdisk Antivirus Desktop Disk I/O for a single SAS drive Figure 39. Personal vdisk Antivirus personal vdisk Disk I/O for a single SAS drive Figure 40. Personal vdisk Antivirus Desktop Pool LUN IOPS and response time Figure 41. Personal vdisk Antivirus Personal vdisk Pool LUN IOPS and response time Figure 42. Personal vdisk Antivirus Storage processor IOPS Figure 43. Personal vdisk Antivirus FAST Cache IOPS Figure 44. Personal vdisk Antivirus Data Mover CPU utilization Figure 45. Personal vdisk Antivirus Data Mover NFS load Figure 46. Personal vdisk Antivirus vsphere CPU load Figure 47. Personal vdisk Antivirus Average Guest Millisecond/Command counter Figure 48. Login VSI Desktop login time- Personal vdisk vs. Nonpersonal vdisk Figure 49. Personal vdisk Login VSI Desktop Disk IOPS for a single SAS drive EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 8

9 List of Figures Figure 50. Personal vdisk Login VSI PvDisk Disk IOPS for a single SAS drive Figure 51. Personal vdisk Login VSI Desktop Pool LUN IOPS and response time Figure 52. Personal vdisk Login VSI personal vdisk Pool LUN IOPS and response time Figure 53. Personal vdisk Login VSI Storage processor IOPS Figure 54. Personal vdisk Login VSI FAST Cache IOPS Figure 55. Personal vdisk Login VSI Data Mover CPU utilization Figure 56. Personal vdisk Login VSI Data Mover NFS load Figure 57. Personal vdisk Login VSI vsphere CPU load Figure 58. Personal vdisk Login VSI Average Guest Millisecond/Command counter Figure 59. Login VSI LUN response time (ms) Figure 60. Login VSI Login Time Figure 61. vsphere CPU Utilization EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 9

1 Executive Summary

This chapter summarizes the proven solution described in this document and includes the following sections:

Introduction to the EMC VNX series
Business case
Solution overview
Key results and recommendations

Introduction to the EMC VNX series

Introduction

The EMC VNX series delivers uncompromising scalability and flexibility for the midtier user while providing market-leading simplicity and efficiency to minimize total cost of ownership. Customers can benefit from VNX features such as:

Next-generation unified storage, optimized for virtualized applications.
Extended cache by using Flash drives with Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache that can be optimized for the highest system performance and lowest storage cost simultaneously on both block and file.
Multiprotocol support for file, block, and object with object access through EMC Atmos Virtual Edition (Atmos VE).
Simplified management with EMC Unisphere for a single management framework for all NAS, SAN, and replication needs.
Up to three times improvement in performance with the latest Intel Xeon multicore processor technology, optimized for Flash.
6 Gb/s SAS back end with the latest drive technologies supported: 3.5-in. 100 GB and 200 GB Flash, 3.5-in. 300 GB and 600 GB 15k or 10k rpm SAS, and 3.5-in. 1 TB, 2 TB, and 3 TB 7.2k rpm NL-SAS; 2.5-in. 100 GB and 200 GB Flash, 300 GB, 600 GB, and 900 GB 10k rpm SAS.
Expanded EMC UltraFlex I/O connectivity: Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five software suites and three software packs that make it easier and simpler to attain the maximum overall benefits.

Software suites available

VNX FAST Suite: Automatically optimizes for the highest system performance and the lowest storage cost simultaneously (FAST VP is not part of the FAST Suite for the VNX5100).
VNX Local Protection Suite: Practices safe data protection and repurposing.
VNX Remote Protection Suite: Protects data against localized failures, outages, and disasters.
VNX Application Protection Suite: Automates application copies and proves compliance.
VNX Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packages available

VNX Total Efficiency Pack: Includes all five software suites (not available for the VNX5100).
VNX Total Protection Pack: Includes the local, remote, and application protection suites.
VNX Total Value Pack: Includes all three protection software suites and the Security and Compliance Suite (the VNX5100 exclusively supports this package).

Business case

Customers require a scalable, tiered, and highly available infrastructure to deploy their virtual desktop environment. There are several new technologies available to assist them in architecting a virtual desktop solution, but the customers need to know how best to use these technologies to maximize their investment, support service-level agreements, and reduce their desktop total cost of ownership.

The purpose of this solution is to build a replica of a common customer end-user computing (EUC) environment, and validate the environment for performance, scalability, and functionality. Customers will achieve:

Increased control and security of their global, mobile desktop environment, typically their most at-risk environment.
Better end-user productivity with a more consistent environment.
Simplified management with the environment contained in the data center.
Better support of service-level agreements and compliance initiatives.
Lower operational and maintenance costs.

Solution overview

This solution demonstrates how to use an EMC VNX platform to provide storage resources for a robust Citrix XenDesktop 5.6 environment and Windows 7 virtual desktops.

Planning and designing the storage infrastructure for Citrix XenDesktop is a critical step because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur throughout the course of a day. These large I/O bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can often adapt to slow performance, but unpredictable performance will quickly frustrate them.

To provide predictable performance for an EUC environment, the storage must be able to handle the peak I/O load from clients without resulting in high response times. Designing for this workload typically involves deploying many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required, and thus minimizes the cost.

Key results and recommendations

EMC VNX FAST Cache provides measurable benefits in a desktop virtualization environment. It not only reduces the response time for both read and write workloads, but also effectively supports more virtual desktops on fewer drives, providing greater IOPS density with a lower drive requirement.

Separating personal vdisk storage from desktop storage improves the user experience. The personal vdisk storage workload is more sequential even though its IOPS is higher than that of the desktop storage. Segregating these two workloads by creating two different storage pools improves the LUN response time of the personal vdisk storage and thus improves the user response time.

The logon process in a personal vdisk XenDesktop environment takes longer than in a nonpersonal vdisk XenDesktop environment. Our testing shows that a personal vdisk logon took 17 seconds, whereas a nonpersonal vdisk logon took only 5 seconds.

A personal vdisk desktop performs additional processing on I/Os so that they can be sent to the appropriate storage device. This increases the CPU utilization on the ESXi server that hosts the virtual desktops. Our testing shows that during steady-state Login VSI testing, the average ESXi CPU utilization in the personal vdisk environment (32 percent) is about 15 percent higher than in the nonpersonal vdisk environment (27 percent). During a virus scan, the average ESXi CPU utilization in the personal vdisk environment (20 percent) is twice as high as in the nonpersonal vdisk environment (10 percent). Chapter 7: Testing and Validation provides more details.

13 Chapter 2: Introduction 2 Introduction Document overview EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect realworld deployments in which TCE use cases are developed and executed. These use cases provide EMC with an insight into the challenges currently faced by its customers. This Proven Solutions Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for Citrix XenDesktop 5.6 solution by using the following products: EMC VNX series Citrix XenDesktop 5.6 Citrix Profile Manager 4.1 VMware vsphere 5.0 This chapter includes the following sections: Document overview Reference architecture Prerequisites and supporting documentation Terminology Use case definition The following use cases are examined in this solution: Boot storm Antivirus scan Login storm User workload simulated with Login Consultants Login VSI 3.6 tool Chapter 7: Testing and Validation contains the test definitions and results for each use case. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 13

14 Chapter 2: Introduction Purpose The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by Citrix XenDesktop 5.6, VMware vsphere 5.0, Citrix Profile Manager 4.1, EMC VNX series (NFS), VNX FAST Cache, and VNX storage pools. This solution includes all the components required to run this environment such as the infrastructure hardware, software platforms including Microsoft Active Directory, and the required Citrix XenDesktop configuration. Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. Scope This Proven Solutions Guide contains the results observed from testing the EMC Infrastructure for Citrix XenDesktop 5.6 solution. The objectives of this testing are to establish: A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution. The best practices for the storage configuration to provide optimal performance, scalability, and protection in the context of the midtier enterprise market. Not in scope Audience Implementation instructions are beyond the scope of this document. Information on how to install and configure Citrix XenDesktop 5.6 components, vsphere 5.0, and the required EMC products is outside the scope of this document. References to supporting documentation for these products are provided where applicable. The intended audience for this Proven Solutions Guide is: Internal EMC personnel EMC partners Customers Prerequisites It is assumed that the reader has a general knowledge of the following products: VMware vsphere 5.0 Citrix XenDesktop 5.6 EMC VNX series Cisco Nexus and Catalyst switches EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 14

Terminology

Table 1 lists the terms that are frequently used in this document.

Table 1. Terminology

Term: EMC VNX FAST Cache
Definition: A feature that enables the use of Flash drives as an expanded cache layer for the array.

Term: Login VSI
Definition: A third-party benchmarking tool developed by Login Consultants that simulates real-world EUC workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.

Term: Citrix Profile Manager
Definition: Preserves user profiles and dynamically synchronizes them with a remote profile repository.

Reference Architecture

Corresponding reference architecture

This Proven Solutions Guide has a corresponding Reference Architecture document that is available on the EMC online support website and EMC.com. EMC INFRASTRUCTURE FOR CITRIX XENDESKTOP 5.6 EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Reference Architecture provides more details. If you do not have access to these documents, contact your EMC representative.

The reference architecture and the results in this Proven Solutions Guide are valid for 1,000 Windows 7 virtual desktops conforming to the workload described in the Validated environment profile section.

Reference architecture diagram

Figure 1 shows the reference architecture of the midsize solution.

Figure 1. Citrix XenDesktop Reference architecture

Configuration

Hardware resources

Table 2 lists the hardware used to validate the solution.

Table 2. Citrix XenDesktop 5.6 Solution hardware

Hardware: EMC VNX5300
Configuration: Two Data Movers (1 active and 1 passive); two disk-array enclosures (DAEs) configured with twenty-nine 300 GB, 15k rpm, 3.5-in. SAS disks and three 100 GB, 3.5-in. Flash drives
Notes: VNX shared storage for the core solution

Configuration: Two additional disk-array enclosures (DAEs) with twenty-five 2 TB, 7,200 rpm, 3.5-in. NL-SAS disks
Notes: Optional; for user data

Configuration: Five additional 600 GB, 15k rpm, 3.5-in. SAS disks
Notes: Optional; for infrastructure storage

Hardware: Intel-based servers
Quantity: 22
Configuration: Memory: 72 GB RAM; CPU: two Intel Xeon E GHz quad-core processors; internal storage: two 146 GB internal SAS disks; external storage: VNX5300 (NFS); dual 1 GbE ports
Notes: Two ESX clusters to host 1,000 virtual desktops

Hardware: Cisco Catalyst 6509
Configuration: WS-6509-E switch; WS-x gigabit line cards; WS-SUP720-3B supervisor
Notes: 1-gigabit host connections distributed over two line cards

Hardware: Cisco Nexus 5020
Quantity: 2
Configuration: Forty 10-gigabit ports
Notes: Redundant LAN A/B configuration

Software resources

Table 3 lists the software used to validate the solution.

Table 3. Citrix XenDesktop Solution software

VNX5300 (shared storage, file systems):
VNX OE for File Release
VNX OE for Block Release 31 ( )
VSI for VMware vsphere: Unified Storage Management, Version 5.2
VSI for VMware vsphere: Storage Viewer, Version 5.2

Cisco Nexus:
Cisco Nexus 5020, Version 5.1(5)

ESXi servers:
ESXi 5.0 (515841)

Servers (OS: Windows 2008 R2 SP1):
VMware vcenter Server 5.0
Citrix XenDesktop 5.6

Virtual desktops (Note: This software is used to generate the test load.):
OS: MS Windows 7 Enterprise SP1 (32-bit)
VMware tools
Microsoft Office Enterprise 2007 SP3
Internet Explorer
Adobe Reader
McAfee Virus Scan 8.7 Enterprise
Adobe Flash Player 11
Bullzip PDF Printer
Login VSI (EUC workload generator) 3.6 Professional Edition

3 Citrix XenDesktop Infrastructure

This chapter describes the general design and layout instructions that apply to the specific components used during the development of this solution. This chapter includes the following sections:

Citrix XenDesktop 5.6
vsphere 5.0 Infrastructure
Windows infrastructure

Citrix XenDesktop 5.6

Introduction

Citrix XenDesktop transforms Windows desktops into an on-demand service available to any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or any type of Windows, web, or SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, and does so with a high-definition HDX user experience. FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

Deploying Citrix XenDesktop components

This solution used two Citrix XenDesktop Controllers, each capable of scaling up to 1,000 virtual desktops. The core elements of this Citrix XenDesktop 5.6 implementation are:

Citrix XenDesktop Controllers
Citrix Profile Manager
Citrix Personal vdisk

Additionally, the following components are required to provide the infrastructure for a Citrix XenDesktop 5.6 deployment:

Microsoft Active Directory
Microsoft SQL Server
DNS server
Dynamic Host Configuration Protocol (DHCP) server

Citrix XenDesktop Controller

The Citrix XenDesktop Controller is the central management location for virtual desktops and has the following key roles:

Broker connections between the users and the virtual desktops
Control the creation and retirement of virtual desktop images
Assign users to desktops
Control the state of the virtual desktops
Control access to the virtual desktops

Citrix personal vdisk

The Citrix personal vdisk feature is introduced in Citrix XenDesktop 5.6. With personal vdisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled VM to a separate disk called the personal vdisk. During runtime, the content of the personal vdisk is blended with the content from the base VM to provide a unified experience to the end user. The personal vdisk data is preserved during reboot and refresh operations.

Citrix Profile Management

Citrix Profile Manager 4.1 preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Manager does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage Citrix user profiles. Citrix Profile Manager provides the following benefits over traditional Windows roaming profiles:

With Citrix Profile Manager, a user's remote profile is dynamically downloaded when the user logs in to a XenDesktop desktop. XenDesktop downloads persona information only when the user needs it.
The combination of Citrix Profile Manager and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.

vsphere 5.0 Infrastructure

vsphere 5.0 overview

VMware vsphere 5.0 is the market-leading virtualization hypervisor that is used across thousands of IT environments around the world. VMware vsphere 5.0 can transform or virtualize computer hardware resources, including the CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines, each of which runs its own operating system and applications just like a physical computer.

The high-availability features in VMware vsphere 5.0, along with VMware Distributed Resource Scheduler (DRS) and Storage vmotion, enable seamless migration of virtual desktops from one vsphere server to another with minimal or no disruption to the customers.

Desktop vsphere clusters

This solution deploys two vsphere clusters to host virtual desktops. These server types were chosen due to availability; similar results should be achievable with a variety of server configurations, assuming that the ratios of server RAM per desktop and number of desktops per CPU core are upheld.

Both clusters consist of eleven dual eight-core vsphere 5 servers to support 500 desktops each, resulting in around 45 to 46 virtual machines per vsphere server. Each cluster has access to five NFS datastores: four for storing virtual desktop machines and one for storing personal vdisks.

Infrastructure vsphere cluster

One vsphere cluster is deployed in this solution for hosting the infrastructure servers. This cluster is not required if the necessary resources to host the infrastructure servers are already present within the host environment. The infrastructure vsphere 5.0 cluster consists of two dual quad-core vsphere 5 servers. The cluster has access to a single datastore that is used for storing the infrastructure server virtual machines. The infrastructure cluster hosts the following virtual machines:

One Windows 2008 R2 SP1 domain controller: Provides DNS, Active Directory, and DHCP services.
One VMware vcenter 5 Server running on Windows 2008 R2 SP1: Provides management services for the VMware clusters.
Two Citrix XenDesktop controllers, each running on Windows 2008 R2 SP1: Provide services to manage the virtual desktops.
SQL Server 2008 SP2 on Windows 2008 R2 SP1: Hosts databases for the VMware Virtual Center Server and Citrix XenDesktop Controllers.
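As a rough illustration of the desktop-cluster ratios mentioned above (derived only from the figures already stated, not an additional test result), the configuration implies approximately:

    500 desktops / 11 hosts                ≈ 45.5 desktops per vsphere server
    500 desktops / (11 hosts x 16 cores)   ≈ 2.8 desktops per physical CPU core
    (11 hosts x 72 GB RAM) / 500 desktops  ≈ 1.6 GB of server RAM per desktop

Alternative server configurations should preserve ratios of this order if similar results are expected.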

22 Chapter 3: Citrix XenDesktop Infrastructure Windows infrastructure Introduction Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components: Microsoft Active Directory Microsoft SQL Server DNS server DHCP server Microsoft Active Directory The Windows domain controller runs the Active Directory service that provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions: Manages the identities of users and their information Applies group policy objects Deploys software and updates Microsoft SQL Server DNS server DHCP server Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2008 SP2 is used to provide the required databases to vcenter Server and XenDesktop Controller. DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controllers. The DHCP server provides the IP address, DNS server name, gateway address, and other information to the virtual desktops. In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range that is large enough to support 1,000 virtual desktops. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 22
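For reference, a scope of this size can also be created from the command line on the Windows Server 2008 R2 DHCP server by using netsh. The addressing below is purely illustrative (a /22 network provides roughly 1,000 usable leases); substitute the subnet, address range, gateway, and DNS values used in your environment:

rem Create a scope large enough for 1,000 virtual desktops (a /22 network)
netsh dhcp server add scope 10.0.0.0 255.255.252.0 "XenDesktop desktops"
rem Add the address range that is handed out to the desktops
netsh dhcp server scope 10.0.0.0 add iprange 10.0.0.10 10.0.3.250
rem Set the default gateway (option 003) and DNS server (option 006) for the scope
netsh dhcp server scope 10.0.0.0 set optionvalue 003 IPADDRESS 10.0.0.1
netsh dhcp server scope 10.0.0.0 set optionvalue 006 IPADDRESS 10.0.0.2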

23 Chapter 4: Storage Design 4 Storage Design This chapter describes the storage design that applies to the specific components of this solution. EMC VNX series storage architecture Introduction The EMC VNX series is a dedicated network server optimized for file and block access that delivers high-end features in a scalable and easy-to-use package. The VNX series delivers a single-box block and file solution that offers a centralized point of management for distributed environments. This makes it possible to dynamically grow, share, and cost-effectively manage multiprotocol file systems and provide multiprotocol block access. Administrators can take advantage of simultaneous support for NFS and CIFS protocols by enabling Windows and Linux/UNIX clients to share files by using the sophisticated file-locking mechanisms of VNX for File and VNX for Block for high-bandwidth or for latency-sensitive applications. This solution uses file-based storage to leverage the benefits that each of the following provides: File-based storage over the NFS protocol is used to store the VMDK files for all virtual desktops, which has the following benefit: Unified Storage Management plug-in provides seamless integration with VMware vsphere to simplify the provisioning of datastores or virtual machines. EMC vsphere Storage APIs for Array Integration (VAAI) plug-in for vsphere 5 supports the vsphere 5 VAAI primitives for NFS on the EMC VNX platform. File-based storage over the CIFS protocol is used to store user data and the Citrix Profile Manager repository which has the following benefits: Redirection of user data and Citrix Profile Manager data to a central location for easy backup and administration. Single instancing and compression of unstructured user data to provide the highest storage utilization and efficiency. This section explains the configuration of the storage provisioned over NFS for the vsphere cluster to store the Desktop images and the storage provisioned over CIFS to redirect user data and provide storage for the Citrix Profile Manager repository. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 23

Storage layout

Figure 2 shows the physical storage layout of the disks in the core reference architecture; this configuration accommodates only the virtual desktops. The Storage layout overview section provides more details about the physical storage configuration.

Figure 2. VNX5300 Core reference architecture physical storage layout (Bus 0 Enclosure 0 and Bus 1 Enclosure 0: VNX OE RAID 5 (3+1) disks, Virtual Desktops storage pool 1, Personal vdisk storage pool 2, FAST Cache SSDs, hot spares, and unbound disks)

Figure 3 shows the physical storage layout of the disks allocated for user data and the Citrix Profile Manager repository. This storage is in addition to the core storage shown in Figure 2.

Figure 3. VNX5300 user data physical storage layout (Bus 0 Enclosure 1 and Bus 1 Enclosure 1: User Profiles, Home Directories, and XenApp Profiles storage pool 3 (RAID 6); Infrastructure VMs, SQL Database and Logs storage pool 4; and a hot spare)

Storage layout overview

The following configurations are used in the core reference architecture, as shown in Figure 2:

Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE.
Disk 0_0_4 is the hot spare for SAS disks. Disk 1_0_14 is the hot spare for SSD drives. These disks are marked as hot spare in the storage layout diagram.
Ten SAS disks (1_0_0 to 1_0_9) in the RAID 5 storage pool 1 are used to store virtual desktops. FAST Cache is enabled on this pool. For NAS, ten LUNs of 203 GB each are carved out of the pool to provide the storage required to create eight NFS file systems. The file systems are presented to the ESXi servers as eight NFS datastores.
Ten SAS disks (0_0_5 to 0_0_14) in the RAID 5 storage pool 2 are used to store personal vdisks. FAST Cache is enabled on this pool.

For NAS, ten LUNs of 203 GB each are carved out of the pool to provide the storage required to create two NFS file systems. The file systems are presented to the ESXi servers as two NFS datastores.
Two Flash drives (1_0_12 and 1_0_13) are used for EMC VNX FAST Cache.
Disks 1_0_10 and 1_0_11 are unbound. They are not used for testing this solution.

The following configurations are used for the user data and Citrix Profile Manager repository storage shown in Figure 3:

Disk 1_1_9 is the hot spare for the NL-SAS disks. This disk is marked as hot spare in the storage layout diagram.
Five SAS disks (1_1_10 to 1_1_14) in the RAID 5 storage pool 4 are used to store the infrastructure virtual machines and the SQL database and logs. For NAS, one LUN of 1 TB is carved out of the pool to provide the storage required to create one NFS file system. The file system is presented to the ESXi servers as one NFS datastore.
Twenty-four NL-SAS disks (0_1_0 to 0_1_14 and 1_1_0 to 1_1_8) in the RAID 6 storage pool 3 are used to store user data and Citrix Profile Manager user profiles. FAST Cache is enabled for the entire pool. For NAS, twenty-five LUNs of 1 TB each are carved out of the pool to provide the storage required to create two CIFS file systems. The file systems are exported to the desktop environment as two CIFS shares.

26 Chapter 4: Storage Design File system layout Figure 4 shows the layout of the NFS file systems to store virtual desktops. Figure 4. VNX5300 Virtual desktop NFS file system layout Ten LUNs of 203 GB each are provisioned out of a RAID 5 storage pool configured with 10 SAS drives. Ten drives are used because the block-based storage pool internally creates 4+1 RAID 5 groups. Therefore, the number of SAS drives used is a multiple of five. Likewise, ten LUNs are used because AVM stripes across five dvols, so the number of dvols is a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool. Eight file systems are then provisioned out of the Automatic Volume Management (AVM) system pool and are presented to the vsphere servers as datastores. A total of 1,000 desktops are evenly distributed among the eight NFS datastores. Starting from VNX for File version , AVM is enhanced to intelligently stripe across dvols that belong to the same block-based storage pool. There is no need to manually create striped volumes and add them to user-defined file-based pools. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 26
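The file systems in this solution are provisioned with the Unified Storage Management plug-in, but the equivalent operations can also be performed from the VNX Control Station. The following is a minimal sketch of how one of the desktop file systems might be created and exported over NFS; the file system name, mount point, pool name, and subnet are placeholders rather than the values used in the tested configuration:

$ nas_pool -list
$ nas_fs -name desktop_fs1 -create size=250G pool=<mapped_pool_name>
$ server_mount server_2 desktop_fs1 /desktop_fs1
$ server_export server_2 -Protocol nfs -option root=10.1.1.0/24,access=10.1.1.0/24 /desktop_fs1

The nas_pool -list command shows the mapped (AVM) pool that is backed by the block-based storage pool; the new file system is then mounted on the active Data Mover and exported with root and access rights for the subnet that carries the ESXi VMkernel NFS traffic.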

27 Figure 5 shows the layout of NFS file systems to store personal vdisk. Chapter 4: Storage Design Figure 5. VNX5300 Personal vdisk NFS file system layout Ten LUNs of 203 GB each are provisioned out of a RAID 5 storage pool configured with 10 SAS drives. The LUNs are presented to VNX File as dvols that belong to a systemdefined pool. Two file systems are then provisioned out of the Automatic Volume Management (AVM) system pool and are presented to the vsphere servers as datastores. Personal vdisk data is stored on these NFS datastores. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 27

Figure 6 shows the layout of the CIFS file systems.

Figure 6. VNX5300 CIFS file system layout

Twenty-five LUNs of 1 TB each are provisioned out of a RAID 6 storage pool configured with 24 NL-SAS drives. Twenty-four drives are used because the block-based storage pool internally creates 6+2 RAID 6 groups. Therefore, the number of NL-SAS drives used is a multiple of eight. Likewise, twenty-five LUNs are used because AVM stripes across five dvols, so the number of dvols is a multiple of five. The LUNs are presented to VNX File as dvols that belong to a system-defined pool.

Like the NFS file systems, the CIFS file systems are provisioned from the AVM system pool to store user home directories and the Citrix Profile Manager repository. The two file systems are grouped in the same storage pool because their I/O profiles are sequential.

FAST Cache is enabled on all the storage pools that are used to store the NFS and CIFS file systems.

EMC VNX FAST Cache

VNX Fully Automated Storage Tiering (FAST) Cache, a part of the VNX FAST Suite, uses Flash drives as an expanded cache layer for the array. The VNX5300 is configured with two 100 GB Flash drives in a RAID 1 configuration for a 91 GB read/write-capable cache. This is the minimum amount of FAST Cache. Larger configurations are supported for scaling beyond 1,000 desktops.

FAST Cache is an array-wide feature available for both file and block storage. FAST Cache works by examining 64 KB chunks of data in FAST Cache-enabled objects on the array.

Frequently accessed data is copied to the FAST Cache, and subsequent accesses to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to the Flash drives. The use of Flash drives dramatically improves the response times for very active data and reduces data hot spots that can occur within the LUN.

FAST Cache is an extended read/write cache that enables Citrix XenDesktop to deliver consistent performance at Flash-drive speeds by absorbing read-heavy activities (such as boot storms and antivirus scans). This extended read/write cache is an ideal caching mechanism for XenDesktop because the base desktop image and other active user data are so frequently accessed that the data is serviced directly from the Flash drives without accessing the slower drives at the lower storage tier.

VSI for VMware vsphere

EMC Virtual Storage Integrator (VSI) for VMware vsphere is a plug-in to the vsphere client that provides a single management interface for managing EMC storage within the vsphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience that allows new features to be introduced rapidly in response to changing customer requirements.

The following VSI features were used during the validation testing:

Storage Viewer (SV): Extends the vsphere client to facilitate the discovery and identification of EMC VNX storage devices that are allocated to VMware vsphere hosts and virtual machines. SV presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vsphere client views.
Unified Storage Management: Simplifies storage administration of the EMC VNX platforms. It enables VMware administrators to provision new NFS and VMFS datastores, and RDM volumes, seamlessly within the vsphere client.

The EMC VSI for VMware vsphere product guides available on the EMC online support website provide more information.

vcenter Server storage layout

The desktop storage pool is configured with 10 x 300 GB SAS drives to provide the storage required for the virtual desktops. Eight file systems are carved out of the pool and presented to the ESX clusters as eight datastores. Each of these 250 GB datastores accommodates 125 virtual machines, allowing each desktop to grow to a maximum average size of 2 GB. The pool of desktops created in XenDesktop is balanced across the eight datastores.

The personal vdisk storage pool is configured with 10 x 300 GB SAS drives to provide the storage required for the virtual desktops' personal vdisks. Two file systems are carved out of the pool and presented to the ESX clusters as two datastores. Each of these 500 GB datastores accommodates the personal vdisks of 500 virtual machines.

VNX shared file systems

Virtual desktops use two VNX shared file systems, one for the Citrix Profile Manager repository and the other to redirect user storage. Each file system is exported to the environment through a CIFS share. Table 4 lists the file systems used for user profiles and redirected user storage.

Table 4. VNX5300 File systems

File system: profiles_fs; Use: Citrix Profile Manager repository; Maximum size: 15 TB
File system: home_fs; Use: User data; Maximum size: 15 TB

Citrix Profile Manager and folder redirection

Local user profiles are not recommended in an EUC environment. One reason for this is that a performance penalty is incurred when a new local profile is created when a user logs in to a new desktop image. Solutions such as Citrix Profile Manager and folder redirection enable user data to be stored centrally in a network location that resides on a CIFS share hosted by the EMC VNX array. This reduces the performance impact during user logon, while enabling user data to roam with the profiles.

EMC VNX for File Home Directory feature

The EMC VNX for File Home Directory feature uses the home_fs file system to automatically map the H: drive of each virtual desktop to the user's own dedicated subfolder on the share. This ensures that each user has exclusive rights to a dedicated home drive share. This share is created by the File Home Directory feature and does not need to be created manually. The Home Directory feature automatically maps this share for each user. The Documents folder for each user is also redirected to this share. Users can recover the data in the Documents folder by using VNX Snapshots for File. The file system is set at an initial size of 100 GB, and extends itself automatically when more space is required.

Capacity

The file systems leverage EMC Virtual Provisioning and compression to provide flexibility and increased storage efficiency. If single instancing and compression are enabled, unstructured data such as user documents typically leads to a 50 percent reduction in consumed storage. The VNX file systems for the Citrix Profile Manager repository and user documents are configured as follows:

profiles_fs can be configured to consume up to 15 TB of space. With 50 percent space saving, each profile can grow up to 30 GB in size.
home_fs can be configured to consume up to 15 TB of space. With 50 percent space saving, each user is able to store 30 GB of data.
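The 30 GB per-user figure is consistent with the stated capacity and space savings for the 1,000-desktop environment:

    15 TB configured / 1,000 users  = 15 GB of consumed storage per user
    15 GB / (1 - 0.5)               = 30 GB of logical profile or user data per user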

31 Chapter 5: Network Design 5 Network Design Considerations This chapter describes the network design in this solution and contains the following sections: Considerations VNX for File network configuration vsphere network configuration Cisco Nexus 5020 configuration Cisco Catalyst 6509 configuration Network layout overview Figure 7 shows the 10-gigabit Ethernet (GbE) connectivity between the two Cisco Nexus 5020 switches and the EMC VNX storage. The uplink Ethernet ports from the Nexus switches can be used to connect to a 10 Gb or 1 Gb external LAN. In this solution, a 1 Gb LAN through Cisco Catalyst 6509 switches is used to extend Ethernet connectivity to the desktop clients, Citrix XenDesktop components, and the Windows Server infrastructure. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 31

32 Chapter 5: Network Design Figure 7. Citrix XenDesktop Network layout overview Logical design considerations This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop. Link aggregation VNX platforms provide network high availability or redundancy by using link aggregation. This feature is one of the methods to address the problem of link or switch failure. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on VNX, combining two 10 GbE ports into a single virtual device. If a link is lost on an Ethernet port, the link fails over to another port. All network traffic is distributed across the active links. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 32

VNX for File network configuration

Data Mover ports

The EMC VNX5300 in this solution includes two Data Movers. The Data Movers can be configured in an active/active or an active/standby configuration. In the active/standby configuration, the standby Data Mover serves as a failover device for any of the active Data Movers. In this solution, the Data Movers operate in the active/standby mode.

The VNX5300 Data Movers are configured with two 10-gigabit interfaces on a single I/O module. Link Aggregation Control Protocol (LACP) is used to configure ports fxg-1-0 and fxg-1-1 to support virtual machine traffic, home folder access, and external access for the Citrix Profile Manager repository.

Figure 8 shows the rear view of the two VNX5300 Data Movers, each with two 10-gigabit fiber Ethernet ports (fxg-1-0 and fxg-1-1) in I/O expansion slot 1.

Figure 8. VNX5300 Ports of the two Data Movers

LACP configuration on the Data Mover

To configure the link aggregation that uses fxg-1-0 and fxg-1-1 on Data Mover 2, run the following command:

$ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=fxg-1-0,fxg-1-1 protocol=lacp"

To verify if the ports are channeled correctly, run the following command:

$ server_sysconfig server_2 -virtual -info lacp1
server_2:
*** Trunk lacp1: Link is Up ***
*** Trunk lacp1: Timeout is Short ***
*** Trunk lacp1: Statistical Load Balancing is IP ***
Device Local Grp Remote Grp Link LACP Duplex Speed
fxg-1-0 Up Up Full Mbs
fxg-1-1 Up Up Full Mbs

The remote group number must match for both ports and the LACP status must be Up. Verify that the appropriate speed and duplex are established as expected.

34 Chapter 5: Network Design Data Mover interfaces It is recommended to create two Data Mover interfaces and IP addresses on the same subnet with the VMkernel port on the vsphere servers. Half of the NFS datastores are accessed by using one IP address and the other half by using the second IP. This enables the VMkernel traffic to be load balanced among the vsphere NIC teaming members. The following command shows an example of assigning two IP addresses to the same virtual interface named lacp1. $ server_ifconfig server_2 -all server_2: lacp1-1 protocol=ip device=lacp1 inet= netmask= broadcast= UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92 lacp1-2 protocol=ip device=lacp1 inet= netmask= broadcast= UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:93 Enable jumbo frames on Data Mover interface To enable jumbo frames for the link aggregation interface, run the following command to increase the MTU size. $ server_ifconfig server_2 lacp1-1 mtu=9000 To verify if the MTU size is set correctly, run the following command: $ server_ifconfig server_2 lacp1-1 server_2: lacp1 protocol=ip device=lacp1 inet= netmask= broadcast= UP, Ethernet, mtu=9000, vlan=276, macaddr=0:60:48:1b:76:92 EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 34
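The two Data Mover interfaces described above can be created on the link aggregation device with server_ifconfig. The following is a minimal sketch; the interface names, IP addresses, netmask, and broadcast address are placeholders for the values used in your environment (VLAN 276 matches the interface listing shown above):

$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-1 -protocol IP 10.1.1.101 255.255.255.0 10.1.1.255
$ server_ifconfig server_2 -create -Device lacp1 -name lacp1-2 -protocol IP 10.1.1.102 255.255.255.0 10.1.1.255
$ server_ifconfig server_2 lacp1-1 vlan=276
$ server_ifconfig server_2 lacp1-2 vlan=276

Both interfaces are bound to the same LACP virtual device and to the same subnet as the vsphere VMkernel ports, which allows the NFS datastores to be split across the two IP addresses as described above.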

vsphere network configuration

NIC teaming

All network interfaces on the vsphere servers in this solution use 1 GbE connections. All virtual desktops are assigned an IP address by using a DHCP server. The Intel-based servers use two onboard Broadcom GbE controllers for all the network connections. Figure 9 shows the vswitch configuration in vcenter Server.

Figure 9. vsphere vswitch configuration

Virtual switch vswitch0 uses two physical network interface cards (NICs). Table 5 lists the configured port groups in vswitch0.

Table 5. vsphere Port groups in vswitch0

Virtual switch: vswitch0; Configured port group: Management Network; Used for: VMkernel port used for vsphere host management
Virtual switch: vswitch0; Configured port group: VM Network; Used for: Network connection for virtual desktops and LAN traffic
Virtual switch: vswitch0; Configured port group: VMKernelStorage; Used for: NFS datastore traffic

The NIC teaming load balancing policy for the vswitches must be set to Route based on IP hash, as shown in Figure 10.

Figure 10. vsphere Load balancing policy
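The Route based on IP hash policy can also be applied from the command line on each ESXi 5.0 host. This is a sketch of the equivalent esxcli call; it assumes the standard vSwitch is named vSwitch0, as in the configuration described above:

$ esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash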

36 Chapter 5: Network Design Increase the number of vswitch virtual ports By default, a vswitch is configured with 120 virtual ports, which may not be sufficient in an EUC environment. On the vsphere servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports according to the number of virtual desktops that will run on each vsphere server, as shown in Figure 11. Note: Reboot the vsphere server for the changes to take effect. Figure 11. vsphere vswitch virtual ports and MTU settings If a vsphere server fails or needs to be placed in the maintenance mode, other vsphere servers within the cluster must accommodate the additional virtual desktops that are migrated from the vsphere server that goes offline. Consider the worst-case scenario when determining the maximum number of virtual ports per vswitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server. Enable jumbo frames for the VMkernel port used for NFS For a VMkernel port to access the NFS datastores by using jumbo frames, the MTU size for the vswitch to which the VMkernel port belongs and the VMkernel port itself must be set accordingly. The MTU size is set on the properties windows of the vswitch and the VMkernel ports. Figure 11 and Figure 12 show how a vswitch and a VMkernel port are configured to support jumbo frames. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 36
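In addition to the vSphere Client settings shown in Figure 11 and Figure 12, the MTU values can be applied from the ESXi 5.0 command line. The vSwitch name and the VMkernel interface name (vmk1) below are assumptions that should be adjusted to match the host configuration:

$ esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
$ esxcli network ip interface set --interface-name=vmk1 --mtu=9000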

37 Chapter 5: Network Design Figure 12. vsphere VMkernel port MTU setting The MTU values of the vswitch and the VMkernel ports must be set to 9,000 to enable jumbo frame support for NFS traffic between the vsphere hosts and the NFS datastores. Cisco Nexus 5020 configuration Overview Cabling Enable jumbo frames on Nexus switch The two 40-port Cisco Nexus 5020 switches provide redundant high-performance, low-latency 10 GbE, and are delivered by a cut-through switching architecture for 10 GbE server access in the next-generation data centers. In this solution, the VNX Data Mover cabling is spread across two Nexus 5020 switches to provide redundancy and load balancing of the network traffic. The following excerpt of the switch configuration shows the required commands to enable jumbo frames at the switch level because per-interface MTU is not supported. policy-map type network-qos jumbo class type network-qos class-default mtu 9216 system qos service-policy type network-qos jumbo EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 37

38 Chapter 5: Network Design vpc for Data Mover ports Because the Data Mover connections for the two 10-gigabit network ports are spread across two Nexus switches, and LACP is configured for the two Data Mover ports, virtual Port Channel (vpc) must be configured on both switches. The following excerpt is an example of the switch configuration pertaining to the vpc setup for one of the Data Mover ports. The configuration on the peer Nexus switch is mirrored for the second Data Mover port. n5k-1# show running-config feature vpc vpc domain 2 peer-keepalive destination <peer-nexus-ip> interface port-channel3 description channel uplink to n5k-2 switchport mode trunk vpc peer-link spanning-tree port type network interface port-channel4 switchport mode trunk vpc 4 switchport trunk allowed vlan interface Ethernet1/4 description 1/4 vnx dm2 fxg-1-0 switchport mode trunk switchport trunk allowed vlan channel-group 4 mode active interface Ethernet1/5 description 1/5 uplink to n5k-2 1/5 switchport mode trunk channel-group 3 mode active interface Ethernet1/6 description 1/6 uplink to n5k-2 1/6 switchport mode trunk channel-group 3 mode active To verify if the vpc is configured correctly, run the following command on both the switches. The output should look like the following: n5k-1# show vpc Legend: (*) - local vpc is down, forwarding via vpc peer-link vpc domain id : 2 Peer status : peer adjacency formed ok vpc keep-alive status : peer is alive Configuration consistency status: success vpc role : secondary Number of vpcs configured : 1 Peer Gateway : Disabled Dual-active excluded VLANs : - EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 38

39 Chapter 5: Network Design vpc Peer-link status id Port Status Active vlans Po3 up 1, vpc status id Port Status Consistency Reason Active vlans Po4 up success success Cisco Catalyst 6509 configuration Overview Cabling Server uplinks The 9-slot Cisco Catalyst 6509-E switch provides high port densities that are ideal for many wiring closet, distribution, and core network deployments as well as data center deployments. In this solution, the vsphere server cabling is evenly spread across two WS-x Gb line cards to provide redundancy and load balancing of the network traffic. The server uplinks to the switch are configured in a port channel group to increase the utilization of server network resources and to provide redundancy. The vswitches are configured to balance the network traffic according to the IP hash. The following is a configuration example for one of the server ports. description 8/ rtpsol189-1 switchport switchport trunk encapsulation dot1q switchport trunk allowed vlan 276, switchport mode trunk mtu 9216 no ip address spanning-tree portfast trunk channel-group 23 mode on EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 39
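After the server uplinks are cabled and configured, the state of each port channel can be confirmed from the Catalyst CLI. Channel-group 23 matches the example above; the exact group and interface numbers depend on the environment:
show etherchannel summary
show interfaces port-channel 23 trunk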

40 Chapter 6: Installation and Configuration 6 Installation and Configuration This chapter describes how to install and configure this solution and includes the following sections:
Installation overview
Citrix XenDesktop components
Storage components
Installation overview This section provides an overview of the configuration of the following components:
Virtual Desktop Agent on the master image
Desktop pools
Storage pools
FAST Cache
VNX Home Directory
The installation and configuration steps for the following components are available on the Citrix and VMware websites:
Citrix XenDesktop 5.6
Citrix Profile Manager 4.1
VMware vsphere 5.0
The installation and configuration steps for the following components are not covered:
Microsoft Active Directory, Group Policies, DNS, and DHCP
Microsoft SQL Server 2008 SP2
EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 40

41 Chapter 6: Installation and Configuration Citrix XenDesktop components Citrix XenDesktop installation overview Virtual Desktop Agent Installation The Citrix XenDesktop Installation document available on the Citrix website has detailed procedures on how to install Citrix XenDesktop 5.6. No special configuration instructions are required for this solution. In this solution, virtual desktops are configured with personal vdisk. Complete the following steps while installing the XenDesktop Virtual Desktop Agent on the master image. 1. On the master image, insert the XenDesktop 5.6 DVD and double-click AutoSelect. The XenDesktop Install Virtual Desktop Agent window appears. 2. Click Install Virtual Desktop Agent. The Installation option window appears. 3. Select Advanced Install. The License Agreement window appears. 4. Select I accept the terms and conditions, and then click Next. 5. In the Select the Virtual Desktop agent window, select the appropriate agent, and then click Next. Figure 13. Select the virtual desktop agent 6. On the Select components to install page, select the appropriate components and installation location. Click Next. 7. In the Personal vdisk configuration window, select Yes, enable personal vdisk. Click Next. The Controller Location window appears. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 41

42 Chapter 6: Installation and Configuration Figure 14. Select Personal vdisk Configuration 8. Select the appropriate radio button to locate the controller. Click Next. 9. On the Virtual Desktop Configuration page, select the appropriate virtual desktop configurations check boxes. Click Next. The Summary page is displayed. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 42

43 Chapter 6: Installation and Configuration Figure 15. Virtual Desktop Configuration 10. Verify the settings, and click Install to start the installation. Citrix XenDesktop machine catalog configuration In this solution, pooled desktops with personal vdisk are created using Machine Creation Services (MCS) so that users can maintain their desktop customizations. Complete the following steps to create a pooled with personal vdisk catalog. 1. On the XenDesktop Controller, select Start > All Programs > Citrix Desktop Studio. The Citrix Desktop Studio window appears. 2. Right-click Machines in the left pane and select Create catalog. The Create Catalog window appears. 3. On the Machine type page, select pooled with personal vdisk from the Machine Type list box. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 43

44 Chapter 6: Installation and Configuration Figure 16. Select the machine type 4. Click Next. The Master Image and Hosts page is displayed. 5. In the Hosts list, select the cluster host from which the virtual desktops are to be deployed. 6. Click the button to select a virtual machine or VM snapshot as the master image. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 44

45 Chapter 6: Installation and Configuration Figure 17. Select the cluster host and Master image 7. Click Next. The Number of VMs page is displayed. 8. In the Number of virtual machines to create box, enter or select the number of virtual machines to be created. Set the virtual machine specifications, including the personal vdisk, in the Master Image area, and select the appropriate Active Directory computer accounts option. In this example, the default options are selected. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 45

46 Chapter 6: Installation and Configuration Figure 18. Virtual Machine specifications 9. Click Next. The create accounts page is displayed. Complete the following steps: a. In the Domain area, select an Active Directory container to store the computer accounts. b. In the Account naming scheme field, enter your account naming scheme. An example of XD#### will create computer account names XD0001 through XD0500. These names are used when the virtual machines are created. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 46

47 Chapter 6: Installation and Configuration Figure 19. Active directory location and naming scheme 10. Click Next. The Administrators page is displayed. 11. In the Administrators page, make any changes and click Next. The Summary page is displayed. 12. In the Summary page, verify the settings for the catalog and specify the name under Catalog name. Figure 20. Summary and Catalog name EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 47

48 Chapter 6: Installation and Configuration 13. Click Finish to start the deployment of the virtual machines. Citrix Profile Manager Configuration The profiles_fs CIFS share is used for the Citrix Profile Manager repository. Citrix Profile Manager is enabled by using a Windows group policy template. The group policy template is located in the GPO-Templates folder of the Profile Manager installation software. The group policy template ctxprofile4.1.0.adm is required to configure the Citrix Profile Manager. Complete the following steps to configure the Citrix Profile Manager. 1. On the master image, install Citrix Profile Manager. 2. On the master image, open the registry editor. Set the value of EnableUserProfileRedirection to 0. The value is located in HKLM\Software\Citrix\personal vdisk\configuration. Figure 21. Citrix Profile Manager Master Image Profile Redirection 3. On the Active Directory server, open Group Policy Management. Right-click the appropriate OU containing the computers on which Profile Manager is installed and create a new GPO. 4. Right-click the new GPO and select Edit. 5. Navigate to Computer Configuration > Policies and right-click Administrative Templates. 6. Select Add/Remove Templates and click Add. 7. Browse the folder and locate ctxprofile4.1.0.adm. Click Open to import the Profile Manager template. 8. On the profile GPO created in Step 3, navigate to Computer Configuration > Policies > Administrative Templates > Classic Administrative Templates > Citrix > Profile Management. Configure the following settings: a. Double-click Enable Profile Management. The Enable Profile Management window appears. Select Enabled. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 48

49 Chapter 6: Installation and Configuration Click OK to close the window. Figure 22. Enable Citrix Profile Manager b. Double-click Path to user store. The Path to user store window appears. Select Enabled and set the UNC path of the profiles_fs CIFS share. Click OK to close the window. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 49
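The registry change described in step 2 of the Citrix Profile Manager configuration can also be scripted on the master image. This is a sketch only and assumes the value is of type REG_DWORD; the key path is the one documented above:
reg add "HKLM\Software\Citrix\personal vdisk\configuration" /v EnableUserProfileRedirection /t REG_DWORD /d 0 /f
For the Path to user store setting, a value of the form \\<CIFS-server>\profiles_fs\#SAMAccountName# is typical, where the server name is a placeholder for the VNX CIFS server and #SAMAccountName# is expanded by Citrix Profile Management into a per-user folder.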

50 Chapter 6: Installation and Configuration Figure 23. Citrix Profile Manager path to user store Storage components Storage pools Storage pools in the EMC VNX OE support heterogeneous drive pools. Four storage pools were configured in this solution, as described below.
A RAID 5 storage pool (XD5_10Disk_SAS_Desktop) is configured from 10 SAS drives. Ten 203 GB thick LUNs were created from this storage pool. This pool is used to store the NFS datastores containing the virtual desktops. FAST Cache is enabled for the pool.
A RAID 5 storage pool (XD5_SAS_vDisk) is configured from 10 SAS drives. Ten 203 GB thick LUNs were created from this storage pool. This pool is used to store the NFS datastores containing the virtual desktops' personal vdisks. FAST Cache is enabled for the pool.
A RAID 5 storage pool (XD5_SAS_NFS_Infrastructure_OS) is configured from 5 SAS drives. Five 203 GB thick LUNs were created from this storage pool. This pool is used to store the NFS file systems containing the infrastructure virtual servers. FAST Cache is disabled for the pool.
A RAID 6 storage pool (XD5_NLSAS_userdata) is configured from 24 NL-SAS drives. Twenty-five 1 TB thick LUNs were created from this storage pool. This pool is used to store the user home directory and Citrix Profile Manager repository CIFS shares. FAST Cache is enabled for the pool.
EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 50
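For reference, pools and LUNs of this kind can also be created with the VNX Block CLI from a management host. The following is a sketch only: the naviseccli syntax, storage processor address, disk IDs, and LUN numbers are assumptions for illustration and should be verified against the CLI reference for the installed VNX OE release.
naviseccli -h <SP_A_address> storagepool -create -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 0_0_9 0_0_10 0_0_11 0_0_12 0_0_13 -rtype r_5 -name XD5_10Disk_SAS_Desktop
naviseccli -h <SP_A_address> lun -create -capacity 203 -sq gb -poolName XD5_10Disk_SAS_Desktop -name Desktop_LUN_1 -l 101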

51 Chapter 6: Installation and Configuration NFS active threads per Data Mover The default number of threads dedicated to serve NFS requests is 384 per Data Mover on the VNX. Some use cases, such as the scanning of desktops, might require more NFS active threads. It is recommended to increase the number of active NFS threads to the maximum of 2048 on each Data Mover. The nthreads parameter can be set by using the following command:
# server_param server_2 -facility nfs -modify nthreads -value 2048
Reboot the Data Mover for the change to take effect. Type the following command to confirm the value of the parameter:
# server_param server_2 -facility nfs -info nthreads
server_2 :
name = nthreads
facility_name = nfs
default_value = 384
current_value = 2048
configured_value = 2048
user_action = reboot DataMover
change_effective = reboot DataMover
range = (32,2048)
description = Number of threads dedicated to serve nfs requests This param represents number of threads dedicated to serve nfs requests. Any changes made to this param will be applicable after reboot only
The NFS active threads value can also be configured by editing the properties of the nthreads Data Mover parameter in the Settings > Data Mover Parameters menu in Unisphere, as shown in Figure 24. Highlight the nthreads value you want to edit and select Properties to open the nthreads properties window. Update the Value field with the new value and click OK, as shown in Figure 24. Repeat this procedure for each of the nthreads Data Mover parameters listed in the menu. Reboot the Data Movers for the changes to take effect. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 51

52 Chapter 6: Installation and Configuration Figure 24. VNX5300 nthreads properties NFS performance fix VNX file software contains a performance fix that significantly reduces NFS write latency. The minimum software patch required for the fix is In addition to the patch upgrade, the performance fix only takes effect when the NFS file system is mounted by using the uncached option. # server_mount server_2 -option uncached fs1 /fs1 The uncached option can be verified by using the following command: # server_mount server_2 server_2 : root_fs_2 on / uxfs,perm,rw root_fs_common on /.etc_common uxfs,perm,ro fs1 on /fs1 uxfs,perm,rw,uncached fs2 on /fs2 uxfs,perm,rw,uncached fs3 on /fs3 uxfs,perm,rw,uncached fs4 on /fs4 uxfs,perm,rw,uncached fs5 on /fs5 uxfs,perm,rw,uncached fs6 on /fs6 uxfs,perm,rw,uncached fs7 on /fs7 uxfs,perm,rw,uncached fs8 on /fs8 uxfs,perm,rw,uncached EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 52
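For a file system that is already mounted without the uncached option, the mount must be removed and re-created for the option to take effect. A sketch of the approach, assuming the server_umount syntax of this VNX for File release:
# server_umount server_2 -perm /fs1
# server_mount server_2 -option uncached fs1 /fs1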

53 Chapter 6: Installation and Configuration The uncached option can also be configured by editing the properties of the file system mount in the Storage > Storage Configuration > File Systems > Mounts menu in Unisphere. Highlight the file system mount you want to edit and select Properties to open the Mount Properties window, as shown in Figure 25. Select Set Advanced Options to display the advanced menu options, and then select Direct Writes Enabled and click OK. The uncached option is now enabled for the selected file system. Figure 25. VNX5300 File System Mount Properties EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 53

54 Chapter 6: Installation and Configuration Enable FAST Cache FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere. Click the FAST Cache tab, then click Create and select the Flash drives to create the FAST Cache. There are no user-configurable parameters for FAST Cache. Figure 26 shows the FAST Cache settings for the VNX5300 array in this solution. Figure 26. VNX5300 FAST Cache tab EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 54

55 Chapter 6: Installation and Configuration To enable FAST Cache for any LUN in a pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache, as shown in Figure 27. Figure 27. VNX5300 Enable FAST Cache EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 55
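Both FAST Cache operations can also be performed with the VNX Block CLI. The following is a sketch only: the cache -fast -create and storagepool -modify syntax shown here is an assumption that should be checked against the CLI reference for the installed release, and the Flash drive IDs are placeholders.
naviseccli -h <SP_A_address> cache -fast -create -disks 0_0_0 0_0_1 -mode rw -rtype r_1
naviseccli -h <SP_A_address> storagepool -modify -name XD5_10Disk_SAS_Desktop -fastcache on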

56 Chapter 7: Testing and Validation 7 Testing and Validation This chapter provides a summary and characterization of the tests performed to validate the solution. The goal of the testing is to characterize the performance of the solution and its component subsystems during the following scenarios:
Boot storm of all desktops
McAfee antivirus full scan on all desktops
User workload testing using Login VSI on all desktops
Validated environment profile Profile characteristics Table 6 provides the validated environment profile.
Table 6. Citrix XenDesktop environment profile
Number of virtual desktops: 1,000
Virtual desktop OS: Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop: 1 vcpu
Number of virtual desktops per CPU core: 5.7
RAM per virtual desktop: 1 GB
Average desktop storage available for each virtual desktop: 2 GB
Average personal vdisk storage available for each virtual desktop: 2 GB
Average IOPS per virtual desktop in steady state: 11
Average peak IOPS per virtual desktop during boot storm: 34
Number of datastores used to store virtual desktops: 8
Number of datastores used to store personal vdisk: 2
Number of virtual desktops per datastore: 125
Disk and RAID type for datastores: RAID 5, 300 GB, 15k rpm, 3.5-in. SAS disks
EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 56

57 Chapter 7: Testing and Validation
Disk and RAID type for CIFS shares to host the Citrix Profile Manager repository and home directories: RAID 6, 2 TB, 7,200 rpm, 3.5-in. NL-SAS disks
Number of VMware clusters for virtual desktops: 2
Number of vsphere servers in each cluster: 11
Number of virtual desktops in each cluster: 500
Use cases Three common use cases are executed to validate whether the solution performs as expected under heavy-load situations. The following use cases are tested:
Simultaneous boot of all desktops
Full antivirus scan of all desktops
Login and steady-state user load simulated using the Login VSI medium workload on all desktops
Each use case is executed with both the personal vdisk and nonpersonal vdisk configurations. A number of key metrics are compared between these two configurations to show the overall performance difference of the solution. Login VSI Virtual Session Index (VSI) version 3.6 is used to run a user load on the desktops. VSI provides guidance to gauge the maximum number of users that a desktop environment can support. The Login VSI workload is categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). The medium workload selected for this test had the following characteristics:
The workload emulated a medium knowledge user who used the Microsoft Office Suite, Internet Explorer, Adobe Acrobat Reader, Bullzip PDF Printer, and 7-zip.
After a session started, the medium workload repeated every 12 minutes.
The response time was measured every 2 minutes during each loop.
The medium workload opened up to five applications simultaneously.
The typing rate was 160 ms per character.
Approximately 2 minutes of idle time was included to simulate real-world users.
Each loop of the medium workload used the following applications:
Microsoft Outlook 2007: Browsed 10 messages.
Microsoft Internet Explorer (IE): One instance of IE browsed the BBC.co.uk website, another instance browsed Wired.com and Lonelyplanet.com, and another instance opened a flash-based 480p video file.
EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 57

58 Chapter 7: Testing and Validation
Microsoft Word 2007: One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.
Bullzip PDF Printer and Adobe Acrobat Reader: The Word document was printed to PDF and reviewed.
Microsoft Excel 2007: A very large Excel worksheet was opened and random operations were performed.
Microsoft PowerPoint 2007: A presentation was reviewed and edited.
7-zip: The command line version was used to zip the output of the session.
Login VSI launcher A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. There are two types of launchers: master and slave. Only one master is in a given test bed, while there can be several slave launchers as required. The number of desktop sessions that a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned. In such a case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vcpus) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine. In this validated testing, 1,000 desktop sessions were launched from 32 launchers, with approximately 32 sessions per launcher. Each launcher was allocated two vcpus and 4 GB of RAM. No bottlenecks were observed on the launchers during the Login VSI tests. FAST Cache configuration For all tests, FAST Cache is enabled for the storage pools holding the virtual desktop datastores, personal vdisks, user home directories, and the Citrix Profile Manager repository. Boot storm results Test methodology This test is conducted by selecting all the desktops in vcenter Server, and then selecting Power On. Overlays are added to the graphs to show when the last power-on task is completed and when the IOPS to the pool LUNs achieves a steady state. For the boot storm test, all 1,000 desktops are powered on within 3 minutes. The steady state is achieved in another 10 minutes for the personal vdisk environment. This section describes the boot storm results for Desktop pools with and without the personal vdisk configuration. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 58

59 Chapter 7: Testing and Validation Pool individual disk load Figure 28 shows the disk IOPS and response time for a single SAS drive in the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 28. Personal vdisk Boot storm IOPS for a single Desktop pool SAS drive During peak load, the desktop pool disk serviced a maximum of 350 IOPS and experienced a response time of 32ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 59

60 Chapter 7: Testing and Validation Figure 29. Personal vdisk Boot storm IOPS for a single personal vdisk pool SAS drive During peak load, the personal vdisk pool disk serviced a maximum of 290 IOPS and experienced a response time of 12ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 60

61 Chapter 7: Testing and Validation Pool LUN load Figure 30 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. Each LUN in a particular pool had similar results, therefore only the results from a single LUN are shown in the graph. Figure 30. Personal vdisk Boot storm Desktop Pool LUN IOPS and response time During peak load, the Desktop pool LUN serviced a maximum of 2,900 IOPS and experienced a peak response time of 3.6ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 61

62 Chapter 7: Testing and Validation Figure 31. Personal vdisk Boot storm personal vdisk Pool LUN IOPS and response time During peak load, the personal vdisk pool LUN serviced a maximum of 590 IOPS and experienced a peak response time of 9ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 62

63 Chapter 7: Testing and Validation Storage processor IOPS and CPU utilization Figure 32 shows the total IOPS serviced by the storage processors during the test. Figure 32. Personal vdisk Boot storm Storage processor total IOPS and CPU Utilization During peak load, the storage processors serviced 34,000 IOPS. The storage processor utilization remained below 45 percent. The EMC VNX5300 had sufficient scalability headroom for this workload. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 63

64 Chapter 7: Testing and Validation FAST Cache IOPS Figure 33 shows the IOPS serviced from FAST Cache during the boot storm test. Figure 33. Personal vdisk Boot storm personal vdisk FAST Cache IOPS During peak load, FAST Cache serviced 29,000 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and SP memory cache. If memory cache hits are excluded, the two Flash drives alone serviced 15,500 IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would require approximately 87 SAS drives to achieve the same level of performance. The Desktop storage pool data has more FAST cache hits than the personal vdisk storage pool data. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 64

65 Chapter 7: Testing and Validation Data Mover CPU utilization Figure 34 shows the Data Mover CPU utilization during the boot storm test. Figure 34. Personal vdisk Boot storm Data Mover CPU utilization The Data Mover achieved a peak CPU utilization of 70 percent during this test. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 65

66 Chapter 7: Testing and Validation Data Mover NFS load Figure 35 shows the NFS operations per second from the Data Mover during the boot storm test. Figure 35. Personal vdisk Boot storm Data Mover NFS load During peak load, there were 61,000 total NFS operations per second. The Data Mover cache helped reduce the load on the disks. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 66

67 Chapter 7: Testing and Validation vsphere CPU load Figure 36 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 36. Personal vdisk Boot storm ESXi CPU load The vsphere server achieved a peak CPU utilization of 49 percent. Hyper-threading was enabled to double the number of logical CPUs. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 67

68 Chapter 7: Testing and Validation vsphere disk response time Figure 37 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the personal vdisk is shown as PvDisk FS GAVG and the average of all the datastores hosting the desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 37. Personal vdisk Boot storm Average Guest Millisecond/Command counter The peak GAVG of the file system hosting the desktop was 118ms, and that of the personal vdisk file system was 61ms. The overall impact of this brief spike in GAVG values was minimal as all 1,000 desktops attained steady state in less than 13 minutes after the initial power-on. Antivirus results Test methodology This test is conducted by scheduling a full scan of all desktops using a custom script to initiate an on-demand scan with McAfee VirusScan 8.7i. The full scans are started on all the desktops. The difference between the start time and the finish time is 60 minutes. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 68

69 Chapter 7: Testing and Validation Pool individual disk load Figure 38 shows the disk I/O for a single SAS drive in the storage pool that stores the virtual desktops. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 38. Personal vdisk Antivirus Desktop Disk I/O for a single SAS drive During peak load, the disk serviced a peak of 90 IOPS and experienced a maximum response time of 14ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 69

70 Chapter 7: Testing and Validation Figure 39. Personal vdisk Antivirus personal vdisk Disk I/O for a single SAS drive During peak load, the personal vdisk disk serviced a peak of 290 IOPS and experienced a maximum response time of 16ms. The majority of the virus scan I/Os were directed toward the personal vdisk disks. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 70

71 Chapter 7: Testing and Validation Pool LUN load Figure 40 shows the replica LUN IOPS and the response time of one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. Figure 40. Personal vdisk Antivirus Desktop Pool LUN IOPS and response time During peak load, the LUN serviced a peak of 52 IOPS and experienced a peak response time of 4ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 71

72 Chapter 7: Testing and Validation Figure 41. Personal vdisk Antivirus Personal vdisk Pool LUN IOPS and response time During peak load, the personal vdisk LUN serviced 1,200 IOPS and experienced a response time of 7ms. The majority of the virus scan I/Os were directed toward the personal vdisk LUNs. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 72

73 Chapter 7: Testing and Validation Storage processor IOPS and CPU utilization Figure 42 shows the total IOPS serviced by the storage processor during the test. Figure 42. Personal vdisk Antivirus Storage processor IOPS During peak load, the storage processors serviced 14,000 IOPS. The antivirus scan operations caused a peak CPU utilization of 60 percent. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 73

74 Chapter 7: Testing and Validation FAST Cache IOPS Figure 43 shows the IOPS serviced from FAST Cache during the test. Figure 43. Personal vdisk Antivirus FAST Cache IOPS During peak load, FAST Cache serviced 11,900 IOPS from the datastores. The FAST Cache hits include IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the two Flash drives alone serviced almost all the IOPS during peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would require approximately 67 SAS drives to achieve the same level of performance. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 74

75 Chapter 7: Testing and Validation Data Mover CPU utilization Figure 44 shows the Data Mover CPU utilization during the antivirus scan test. Figure 44. Personal vdisk Antivirus Data Mover CPU utilization The Data Mover achieved a peak CPU utilization of 65 percent during this test. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 75

76 Chapter 7: Testing and Validation Data Mover NFS load Figure 45 shows the NFS operations per second from the Data Mover during the antivirus scans. Figure 45. Personal vdisk Antivirus Data Mover NFS load During peak load, there were 34,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 76

77 Chapter 7: Testing and Validation vsphere CPU load Figure 46 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 46. Personal vdisk Antivirus vsphere CPU load The vsphere server achieved a peak CPU utilization of 35 percent. Hyper-threading was enabled to double the number of logical CPUs. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 77

78 Chapter 7: Testing and Validation vsphere disk response time Figure 47 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the personal vdisk storage is shown as PvDisk FS GAVG and the average of all the datastores hosting the Desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 47. Personal vdisk Antivirus Average Guest Millisecond/Command counter The peak GAVG of the file system hosting the desktop was 105ms, and that of the personal vdisk file system was 55ms. Login VSI results Test methodology This test is conducted by scheduling 1,000 users to connect through remote desktop in a 60-minute window, and starting the Login VSI medium with Flash workload. The workload was run for thirty minutes in a steady state to observe the load on the system. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 78

79 Chapter 7: Testing and Validation Desktop logon time Figure 48 shows the required time for the desktops to complete the user login process. Figure 48. Login VSI Desktop login time - Personal vdisk vs. Nonpersonal vdisk The required time to complete the login process reached a maximum of 17 seconds during peak load of the 1,000-desktop login storm on the personal vdisk configuration. Citrix Profile Manager is enabled in the personal vdisk environment, which helped to reduce the login time. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 79

80 Chapter 7: Testing and Validation Pool individual disk load Figure 49 shows the disk IOPS for a single SAS drive that is part of the storage pool. Each disk had similar results, therefore only the results from a single disk are shown in the graph. Figure 49. Personal vdisk Login VSI Desktop Disk IOPS for a single SAS drive During peak load, the SAS disk serviced 280 IOPS and experienced a response time of 7ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 80

81 Chapter 7: Testing and Validation Figure 50. Personal vdisk Login VSI PvDisk Disk IOPS for a single SAS drive During peak load, the SAS disk serviced 180 IOPS and experienced a response time of 12ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 81

82 Chapter 7: Testing and Validation Pool LUN load Figure 51 shows the Replica LUN IOPS and response time from one of the storage pool LUNs. Each LUN had similar results, therefore only the results from a single LUN are shown in the graph. Figure 51. Personal vdisk Login VSI Desktop Pool LUN IOPS and response time During peak load, the LUN serviced 800 IOPS and experienced a response time of 6.5ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 82

83 Chapter 7: Testing and Validation Figure 52. Personal vdisk Login VSI personal vdisk Pool LUN IOPS and response time During peak load, the LUN serviced 550 IOPS and experienced a response time of 1.8ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 83

84 Chapter 7: Testing and Validation Storage processor IOPS and CPU utilization Figure 53 shows the total IOPS serviced by the storage processor during the test. Figure 53. Personal vdisk Login VSI Storage processor IOPS During peak load, the storage processors serviced a maximum of 16,000 IOPS. The storage processor peak utilization was 36 percent. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 84

85 Chapter 7: Testing and Validation FAST Cache IOPS Figure 54 shows the IOPS serviced from FAST Cache during the test. Figure 54. Personal vdisk Login VSI FAST Cache IOPS During peak load, FAST Cache serviced 12,000 IOPS from the datastores. The FAST Cache hits included IOPS serviced by Flash drives and storage processor memory cache. If memory cache hits are excluded, the four Flash drives alone serviced 9500 IOPS at peak load. A sizing exercise using EMC's standard performance estimate (180 IOPS) for 15k rpm SAS drives suggests that it would require approximately 53 SAS drives to achieve the same level of performance. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 85

86 Chapter 7: Testing and Validation Data Mover CPU utilization Figure 55 shows the Data Mover CPU utilization during the Login VSI test. Figure 55. Personal vdisk Login VSI Data Mover CPU utilization The Data Mover achieved a peak CPU utilization of 40 percent during this test. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 86

87 Chapter 7: Testing and Validation Data Mover NFS load Figure 56 shows the NFS operations per second from the Data Mover during the Login VSI test. Figure 56. Personal vdisk Login VSI Data Mover NFS load During peak load, there were 14,000 NFS operations per second. The Data Mover cache helped reduce the load on the disks. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 87

88 Chapter 7: Testing and Validation vsphere CPU load Figure 57 shows the CPU load from the vsphere servers in the VMware clusters. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 57. Personal vdisk Login VSI vsphere CPU load The CPU load on the vsphere server reached a maximum of 35 percent utilization during peak load. Hyper-threading was enabled to double the number of logical CPUs. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 88

89 Chapter 7: Testing and Validation vsphere disk response time Figure 58 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. The datastore hosting the PvDisk storage is shown as PvDisk FS GAVG and the average of all the datastores hosting the desktop storage is shown as Desktop FS GAVG in the graph. Each server had similar results, therefore only the results from a single server are shown in the graph. Figure 58. Personal vdisk Login VSI Average Guest Millisecond/Command counter The peak GAVG of the file system hosting the PvDisk was 7ms, and that of the desktop file system was 6ms. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 89

90 Chapter 8: Personal vdisk Implementation considerations 8 Personal vdisk Implementation considerations This chapter provides a few important guidelines for implementing the personal vdisk feature in a Citrix XenDesktop environment. Personal vdisk Implementation considerations Storage Layout Citrix XenDesktop has the option to place the personal vdisk storage on the same storage repository as the Desktop storage or on a dedicated storage repository. Tests show that separating personal vdisk storage from Desktop storage provides a better user experience. Figure 59 compares the LUN response times of the Desktop storage pool and personal vdisk pool LUNs. Figure 59. Login VSI LUN response time (ms) If one pool is used, the Login VSI test fails because of higher application access times. Having separate pools for personal vdisk storage reduces the application access time and enables the Login VSI test to pass. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 90

91 Chapter 8: Personal vdisk Implementation considerations Login Time The login time of a personal vdisk-enabled virtual desktop is compared with that of a nonpersonal vdisk pooled virtual desktop. The pooled virtual desktop is deployed with the same number of spindles using one pool (20 SAS drives for desktops and 2 SSDs for FAST Cache). Figure 60 shows the login time for the virtual desktops. Figure 60. Login VSI Login Time Tests show that the personal vdisk login time is 9 seconds longer than the nonpersonal vdisk login time. EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 91

92 Chapter 8: Personal vdisk Implementation considerations vsphere CPU Utilization The vsphere server CPU utilization of the personal vdisk-enabled virtual desktops is compared with that of the nonpersonal vdisk pooled virtual desktops. The pooled virtual desktops are deployed with the same number of spindles using one pool (20 SAS drives for desktops and 2 SSDs for FAST Cache). Figure 61 shows the CPU utilization of the vsphere server. Figure 61. vsphere CPU Utilization Tests show that during the steady-state Login VSI testing, the average ESXi CPU utilization in the personal vdisk environment (32%) is about 15% higher than in the nonpersonal vdisk environment (27%). During the virus scan, the average ESXi CPU utilization in the personal vdisk environment (20%) is twice as high as in the nonpersonal vdisk environment (10%). EMC VNX Series (NFS), VMware vsphere 5.0, Citrix XenDesktop 5.6, and Citrix Profile Manager 4.1 Proven Solutions Guide 92


More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon 6.0 with View and VMware vsphere for up to 2,000 Virtual Desktops Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 EMC VSPEX Abstract This describes how to design virtualized Microsoft SQL Server resources on the appropriate EMC VSPEX Private Cloud for

More information

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved.

VMware Virtual SAN. Technical Walkthrough. Massimiliano Moschini Brand Specialist VCI - vexpert VMware Inc. All rights reserved. VMware Virtual SAN Technical Walkthrough Massimiliano Moschini Brand Specialist VCI - vexpert 2014 VMware Inc. All rights reserved. VMware Storage Innovations VI 3.x VMFS Snapshots Storage vmotion NAS

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE

EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE EMC Performance Optimization for VMware Enabled by EMC PowerPath/VE Applied Technology Abstract This white paper is an overview of the tested features and performance enhancing technologies of EMC PowerPath

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING EMC VSPEX END-USER COMPUTING Citrix XenDesktop EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user computing solution for Citrix XenDesktop using EMC ScaleIO and VMware vsphere to provide

More information

Dell EMC SAN Storage with Video Management Systems

Dell EMC SAN Storage with Video Management Systems Dell EMC SAN Storage with Video Management Systems Surveillance October 2018 H14824.3 Configuration Best Practices Guide Abstract The purpose of this guide is to provide configuration instructions for

More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

Jake Howering. Director, Product Management

Jake Howering. Director, Product Management Jake Howering Director, Product Management Solution and Technical Leadership Keys The Market, Position and Message Extreme Integration and Technology 2 Market Opportunity for Converged Infrastructure The

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

Reference Architecture

Reference Architecture EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 in VMware ESX Server EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com

More information

INTRODUCING VNX SERIES February 2011

INTRODUCING VNX SERIES February 2011 INTRODUCING VNX SERIES Next Generation Unified Storage Optimized for today s virtualized IT Unisphere The #1 Storage Infrastructure for Virtualisation Matthew Livermore Technical Sales Specialist (Unified

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.5 and VMware vsphere Enabled by EMC VNX and EMC Data Protection EMC VSPEX Abstract This describes how to design an EMC VSPEX End-User Computing

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 5.3 and VMware vsphere for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Powered Backup EMC VSPEX Abstract This describes

More information

Virtual Exchange 2007 within a VMware ESX datastore VMDK file replicated

Virtual Exchange 2007 within a VMware ESX datastore VMDK file replicated EMC Solutions for Microsoft Exchange 2007 Virtual Exchange 2007 in a VMware ESX Datastore with a VMDK File Replicated Virtual Exchange 2007 within a VMware ESX datastore VMDK file replicated EMC Commercial

More information

EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1

EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1 White Paper EMC BUSINESS CONTINUITY FOR VMWARE VIEW 5.1 EMC VNX Replicator, VMware vcenter Site Recovery Manager, and VMware View Composer Automating failover of virtual desktop instances Preserving user

More information

EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2

EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2 Reference Architecture EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2 Enabled by the EMC XtremIO All-Flash Array, VMware vsphere 5.1, VMware Horizon View 5.2, and VMware Horizon View Composer 5.2 Simplify

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

VMware vsphere with ESX 6 and vcenter 6

VMware vsphere with ESX 6 and vcenter 6 VMware vsphere with ESX 6 and vcenter 6 Course VM-06 5 Days Instructor-led, Hands-on Course Description This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2 Deploying EMC CLARiiON CX4-240 FC with View Contents Introduction... 1 Hardware and Software Requirements... 2 Hardware Resources... 2 Software Resources...2 Solution Configuration... 3 Network Architecture...

More information

EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING

EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING VSPEX Proven Infrastructure EMC VSPEX with Brocade Networking Solutions for END-USER COMPUTING VMware View 5.1 and VMware vsphere 5.1 for up to 2000 Virtual Desktops Enabled by Brocade Network Fabrics,

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING EMC VSPEX END-USER COMPUTING VMware Horizon View 5.2 and VMware vsphere 5.1 for up to 250 Virtual Desktops Enabled by EMC VNXe and EMC Next-Generation Backup EMC VSPEX Abstract This guide describes the

More information

EMC CLARiiON CX3 Series FCP

EMC CLARiiON CX3 Series FCP EMC Solutions for Microsoft SQL Server 2005 on Windows 2008 EMC CLARiiON CX3 Series FCP EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2008

More information

Cisco and EMC Release Joint Validated Design for Desktop Virtualization with Citrix XenDesktop. Daniel Chiu Business Development Manager, EMC

Cisco and EMC Release Joint Validated Design for Desktop Virtualization with Citrix XenDesktop. Daniel Chiu Business Development Manager, EMC Cisco and EMC Release Joint Validated Design for Desktop Virtualization with Citrix XenDesktop Daniel Chiu Business Development Manager, EMC 1 Cisco, Citrix, and EMC Partnership Collaborative solutions

More information

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide

Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint. Proven Solution Guide Externalizing Large SharePoint 2010 Objects with EMC VNX Series and Metalogix StoragePoint Copyright 2011 EMC Corporation. All rights reserved. Published March, 2011 EMC believes the information in this

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

2014 VMware Inc. All rights reserved.

2014 VMware Inc. All rights reserved. 2014 VMware Inc. All rights reserved. Agenda Virtual SAN 1 Why VSAN Software Defined Storage 2 Introducing Virtual SAN 3 Hardware Requirements 4 DEMO 5 Questions 2 The Software-Defined Data Center Expand

More information

EMC VSPEX PRIVATE CLOUD

EMC VSPEX PRIVATE CLOUD Proven Infrastructure EMC VSPEX PRIVATE CLOUD VMware vsphere 5.1 for up to 500 Virtual Machines Enabled by Microsoft Windows Server 2012, EMC VNX, and EMC Next- EMC VSPEX Abstract This document describes

More information

XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018

XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018 XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018 1 Table of Contents 1. Executive Summary 1.1.Business Case 1.2.Key Results 2. Introduction 2.1.Scope 2.2.Audience 3. Technology Overview

More information

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS EMC VNX FAMILY Next-generation unified storage, optimized for virtualized applications ESSENTIALS Unified storage for multi-protocol file, block, and object storage Powerful new multi-core Intel CPUs with

More information

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE

DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE DELL EMC READY BUNDLE FOR VIRTUALIZATION WITH VMWARE AND FIBRE CHANNEL INFRASTRUCTURE Design Guide APRIL 0 The information in this publication is provided as is. Dell Inc. makes no representations or warranties

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

Oracle RAC 10g Celerra NS Series NFS

Oracle RAC 10g Celerra NS Series NFS Oracle RAC 10g Celerra NS Series NFS Reference Architecture Guide Revision 1.0 EMC Solutions Practice/EMC NAS Solutions Engineering. EMC Corporation RTP Headquarters RTP, NC 27709 www.emc.com Oracle RAC

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy multiple Microsoft SQL Server

More information

Stellar performance for a virtualized world

Stellar performance for a virtualized world IBM Systems and Technology IBM System Storage Stellar performance for a virtualized world IBM storage systems leverage VMware technology 2 Stellar performance for a virtualized world Highlights Leverages

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP Enabled by EMC VNXe and EMC Data Protection VMware vsphere 5.5 Red Hat Enterprise Linux 6.4 EMC VSPEX Abstract This describes how to design

More information

Surveillance Dell EMC Storage with Synectics Digital Recording System

Surveillance Dell EMC Storage with Synectics Digital Recording System Surveillance Dell EMC Storage with Synectics Digital Recording System Configuration Guide H15108 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell

More information

"Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary

Charting the Course... VMware vsphere 6.7 Boot Camp. Course Summary Description Course Summary This powerful 5-day, 10 hour per day extended hours class is an intensive introduction to VMware vsphere including VMware ESXi 6.7 and vcenter 6.7. This course has been completely

More information

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740

Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 Accelerating Microsoft SQL Server 2016 Performance With Dell EMC PowerEdge R740 A performance study of 14 th generation Dell EMC PowerEdge servers for Microsoft SQL Server Dell EMC Engineering September

More information