EMC Infrastructure for Virtual Desktops


EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4

Proven Solution Guide

Copyright 2010 EMC Corporation. All rights reserved. Published September 2010.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H8059

Table of Contents

Chapter 1: About this Document
  Overview
  Audience and purpose
  Scope
  Technology solutions
  Virtual Desktop Infrastructure
  Reference architecture
  Validated environment profile
  Prerequisites and supporting documentation
  Terminology

Chapter 2: Virtual Desktop Infrastructure
  Overview
  XenDesktop VDI
  VMware infrastructure
  Windows infrastructure
  Conclusion

Chapter 3: Storage Design
  Overview
  Concepts
  Storage design layout
  File system layout
  Capacity planning
  Best practices

Chapter 4: Network Design
  Overview
  Considerations
  Network layout
  Virtual LANs
  High availability network

Chapter 5: Installation and Configuration
  Overview
  Task 1: Set up and configure the NFS datastore
  Task 2: Install and configure Desktop Delivery Controller
  Task 3: Install and configure Provisioning Server
  Task 4: Configure and provision the master virtual machine template
  Task 5: Deploy virtual desktops

Chapter 6: Testing and Validation
  Overview
  Testing overview
  Testing tools
  Test results
  Result analysis of Desktop Delivery Controller
  Result analysis of Provisioning Server
  Result analysis of the vCenter Server
  Result analysis of SQL Server
  Result analysis of ESX servers
  Result analysis of Celerra unified storage
  Login storm scenario
  Test summary

Chapter 1: About this Document

Overview

Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases give EMC insight into the challenges currently facing its customers.

This document summarizes a series of best practices that were discovered, validated, or otherwise encountered during the validation of the EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 solution.

Use case definition

A use case reflects a defined set of tests that validates the reference architecture for a customer environment. This validated architecture can then be used as a reference point for a Proven Solution.

Contents

This chapter includes the following topics: Overview, Audience and purpose, Scope, Technology solutions, Virtual Desktop Infrastructure, Reference architecture, Validated environment profile, Prerequisites and supporting documentation, and Terminology.

Audience and purpose

Audience

The intended audience for the Proven Solution Guide is:
- Internal EMC personnel
- EMC partners
- Customers

Purpose

The purpose of this solution is to:
- Develop a suggested Citrix XenDesktop 4 VDI for 1,000 users in the context of the EMC Celerra unified storage and VMware vSphere virtualization platforms.
- Test and document the user response time and the performance of the associated servers.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations (for example, the technical services or sales organization) as the basis for producing documentation for technical services or a sales kit.

Scope

This document describes the architecture of an EMC solution built at EMC's Global Solutions Labs. This solution is engineered to enable customers to:
- Implement a Citrix XenDesktop 4 VDI solution in their environment after considering the storage configuration, design, sizing, and software.
- Reduce operating costs with VDI compared to existing desktop solutions.
- Deliver the highest service level agreement (SLA) compliance at the lowest cost per application workload.
- Provide VDI with the flexibility of a solution that scales up to meet the requirements of large enterprises and still offers a simple footprint for midsize organizations.

This solution provides information to:
- Create a well-performing storage design for a Citrix XenDesktop 4 VDI on a VMware vSphere virtualization platform for 1,000 desktop users on an EMC Celerra NS-120 unified storage system.
- Document the performance in the validated environment and suggest methods to improve the performance of the Citrix XenDesktop 4 solution.

Not in scope

Testing XenDesktop 4 VDI with a workload other than a typical office user workload was outside the scope of this testing.

Technology solutions

Business challenges for midsize enterprises

With limited resources and increasing demands, today's businesses must address the following challenges:
- Consolidate desktops across the enterprise
- Ensure information access, availability, and continuity
- Maximize server and storage utilization and deliver high desktop performance
- Manage upgrades and migrations quickly and easily
- Reduce the demands on limited IT resources and budgets
- Reduce the complexity of selecting the right technology

Solution for midsize enterprises

The EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 solution establishes a configuration of validated hardware and software that permits easy and repeatable deployment of virtual desktops using the storage provided by the Celerra NS-120. This Proven Solution Guide describes the deployment and validation of a Citrix XenDesktop 4 VDI on the Celerra NS-120 in a manner that provides performance, recoverability, and protection.

Virtual Desktop Infrastructure

Introduction

A VDI is used to run desktop operating systems and applications inside virtual machines that reside on servers running a virtualization hypervisor. The desktop operating systems inside the virtual machines are referred to as virtual desktops. Users access the virtual desktops and applications from a desktop PC client or a thin client by using a remote display protocol. The applications and storage are centrally managed.

Citrix XenDesktop

Citrix XenDesktop is one of the leaders in desktop virtualization. It enables fully personalized desktops for each user with all the security and simplicity of centralized management. XenDesktop simplifies desktop management: with centralized management, adding, updating, and removing applications are simple tasks.
Users have instant access to applications through HDX technology, a set of capabilities that delivers a high-definition user experience over any network, including low-bandwidth and high-latency wide area network (WAN) connections. XenDesktop can instantly deliver every type of virtual desktop, each specifically tailored to meet the performance and flexibility requirements of individual users.

Components of Citrix XenDesktop VDI

This solution validated a XenDesktop 4 VDI deployment for high availability and simulated the workload of 1,000 real-world users. The VDI was built using the following components:
- Citrix Desktop Delivery Controller (DDC) to broker and manage virtual desktops.
- Citrix Provisioning Services to provision the desktop operating system (OS).
- EMC unified storage to store the virtual desktops.
- VMware ESX and vCenter Server as the server virtualization infrastructure.
- A Windows infrastructure to support services such as Active Directory, Dynamic Host Configuration Protocol (DHCP), Domain Name System (DNS), and SQL Server.

Reference architecture

Corresponding reference architecture

This use case has a corresponding reference architecture document that is available on EMC Powerlink and EMC.com. Refer to EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 Reference Architecture for details.

Reference architecture diagram

The following diagram shows the overall physical architecture of the solution.

Validated environment profile

Environment profile and test results

The solution was validated with the following environment profile:
- Number of virtual desktops: 1,000
- Size of each virtual desktop: 3 GB (thin provisioned)
- Number of building blocks: 10
- Number of virtual desktops per building block: 100
- Number of NFS datastores per building block: 1
- Number of XenDesktop Provisioning Services Servers: 2
- Number of XenDesktop Desktop Delivery Controllers: 2
- NFS datastore RAID type, physical drive size, and speed: RAID 10, 450 GB, 15k rpm, FC disks
- Storage to host the golden images, TFTP boot area, and ISO images (RAID type, physical drive size, and speed): RAID 5, 450 GB, 15k rpm, FC disks

Chapter 6: Testing and Validation provides more information on the performance results.

Hardware resources

The following hardware was used to validate the solution:
- EMC Celerra NS-120 (quantity 1): two Data Movers (active/standby) and two disk-array enclosures (DAEs) with 15 FC 450 GB 15k 2/4 Gb disks each. Provides NFS datastore storage and the Trivial File Transfer Protocol (TFTP) server.
- HP ProLiant DL380 G5 (quantity 3): 20 GB RAM, two 3.0 GHz quad-core processors, one 67 GB disk, and two Broadcom NetXtreme II BCM 1000BaseT adapters. ESX servers to host virtual machines for vCenter Server, Active Directory, DHCP, DNS, DDC, PVS, and SQL Server.
- Dell PowerEdge R-series servers: 32 GB RAM, two 2.6 GHz quad-core processors, one 67 GB disk, and four Broadcom NetXtreme II BCM 1000BaseT adapters. ESX servers to host the 1,000 virtual desktops.

Software resources

The following software was used to validate the solution:
- Celerra NS-120 (Celerra shared storage, file systems): NAS or Data Access in Real Time (DART) release, with CLARiiON FLARE Release 28 and the Celerra plug-in for VMware
- XenDesktop desktop virtualization: Citrix XenDesktop Version 4 Platinum Edition, Citrix Desktop Delivery Controller Server Version 4.0, and Citrix Provisioning Services Server
- Database: Microsoft SQL Server Version 2005 Enterprise Edition (64-bit)
- VMware vSphere: ESX server and vCenter Server
- OS for vCenter Server: Microsoft Windows Server 2003 R2 Enterprise Edition
- Virtual desktop OS (one vCPU and 512 MB RAM per virtual machine): Microsoft Windows XP Professional Edition
- Desktop applications: Microsoft Office 2007 (Version 12), Internet Explorer, Adobe Reader 9.1, Adobe Flash Player 10, and Bullzip PDF Printer

Prerequisites and supporting documentation

Technology

It is assumed that the reader has a general knowledge of the following products:
- EMC Celerra unified storage
- Citrix XenDesktop
- VMware vSphere

Supporting documents

The following documents, located on Powerlink, provide additional relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.
- EMC Infrastructure for Virtual Desktops Enabled by EMC Celerra Unified Storage (NFS), VMware vSphere 4, and Citrix XenDesktop 4 Reference Architecture
- Configuring Citrix XenDesktop 3.0 with Provisioning Server using EMC Celerra Build Document
- Celerra Plug-in for VMware Solution Guide

Third-party documents

Product documentation is available on the Citrix and VMware websites:
- Citrix Product Documentation Library for XenDesktop
- VMware vSphere 4.0 Documentation

Terminology

Introduction

This section defines the terms used in this document.

Desktop Delivery Controller (DDC): As a part of the Citrix XenDesktop virtual desktop delivery system, this controller authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops.

Citrix Provisioning Services Server (PVS): As a part of the Citrix XenDesktop virtual desktop delivery system, this service creates and de-provisions virtual desktops from a single desktop image on demand, optimizes storage utilization, and provides a pristine virtual desktop to each user every time they log on.

PVS vdisk: A vdisk exists as a disk image file on a Provisioning Server or on a shared storage device. vdisk images are configured to be in Private, Standard, or Difference Disk mode. Private mode gives exclusive read-write access to a single desktop, while a vdisk in Standard or Difference Disk mode is shared with read-only permission among multiple desktops.

PVS write cache: Any writes made to the desktop operating system are redirected to a temporary area called the write cache. The write cache can exist as a temporary file on a Provisioning Server, in the virtual desktop's memory, or on the virtual desktop's hard drive.

EMC Celerra plug-in for VMware: A VMware vCenter plug-in designed to simplify the storage administration of the EMC Celerra network-attached storage (NAS) platform. The plug-in enables VMware administrators to provision new NFS datastores directly from the vCenter Server. When provisioning storage on a cluster, folder, or data center, the plug-in automatically provisions the storage for all ESX hosts within the selected object.

LoginVSI: A third-party benchmarking tool, developed by Login Consultants, that simulates a real-world VDI workload using an AutoIT script and determines maximum system capacity based on the user's response time.

Chapter 2: Virtual Desktop Infrastructure

Overview

Introduction

The VDI design layout instructions described in this chapter apply to the specific components used during the development of this solution.

Contents

This chapter contains the following topics: Overview, XenDesktop VDI, VMware infrastructure, Windows infrastructure, and Conclusion.

XenDesktop VDI

Introduction

Citrix XenDesktop 4 is a desktop virtualization system that centralizes and delivers Microsoft Windows XP, 7, or Vista virtual desktops to users located anywhere without any performance impact. XenDesktop 4 simplifies desktop management by using a single image to deliver personalized desktops to users, and enables administrators to manage service levels with built-in desktop performance monitoring. The open architecture of XenDesktop 4 offers choice and flexibility in virtualization platform and user device.

Deploying a XenDesktop farm

This VDI solution is deployed using a dual-server model in a XenDesktop 4 farm with high availability, which provides a working deployment on a minimal number of computers. As the farm grows, additional controllers and components can be added to the farm seamlessly. The essential elements of a XenDesktop 4 farm are:
- Desktop Delivery Controller
- Citrix Licensing
- Provisioning Server

Apart from these Citrix elements, the following components are required for a XenDesktop 4 farm:
- Microsoft SQL Server to hold the configuration information and administrator account information
- Active Directory
- DNS Server
- PXE boot and TFTP servers

Desktop Delivery Controller

The Desktop Delivery Controller (DDC) authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops. It controls the state of the desktops; starting and stopping desktops are processes driven by demand and administrative configuration. DDC also includes the User Profile Manager to manage user personalization settings in virtualized or physical Windows environments. The Citrix licensing service is also installed on the Desktop Delivery Controller.

Provisioning Server

The Provisioning Server creates and de-provisions virtual desktops from a single desktop image on demand, optimizes storage utilization, and provides a pristine virtual desktop to each user every time they log on. Desktop provisioning also simplifies desktop images, provides the best flexibility, and offers fewer points of desktop management for both applications and desktops.

High availability of XenDesktop components

In this solution, two DDCs and two Provisioning Servers were used to provide high availability as well as load balancing. For this solution with 1,000 virtual desktops, 500 virtual desktops are managed by each of the Desktop Delivery Controllers. Similarly, each Provisioning Server manages 500 virtual desktops. If a DDC or Provisioning Server goes offline, the surviving DDC or Provisioning Server takes over the virtual desktops of the offline server and manages all 1,000 virtual desktops.

VMware infrastructure

Introduction

This Citrix XenDesktop 4 VDI solution is implemented on a VMware vSphere 4 virtual infrastructure. This enables organizations to leverage their existing investment in a VMware implementation.
VMware vSphere

VMware vSphere 4 is the industry's first cloud operating system, transforming IT infrastructures into a private cloud, a collection of internal clouds federated on demand to external clouds, and delivering IT infrastructure as a service. vSphere 4 supports a 64-bit VMkernel and service console. The new service console version is derived from a recent release of a leading enterprise Linux vendor. The following elements of VMware vSphere were used in this solution:
- VMware ESX 4 server
- VMware vCenter Server
- VMware NFS datastore

VMware ESX server

The VMware ESX server is the main building block of the VMware infrastructure. It provides a platform for multiple virtual machines that share the same hardware resources (including processor, memory, storage, and networking resources), each able to perform all the functions of a physical machine. This maximizes hardware utilization and minimizes installation capital and operating costs. In this solution, all XenDesktop components reside as virtual machines on the VMware ESX 4 servers.

VMware vCenter Server

VMware vCenter Server provides a scalable and extensible platform that forms the foundation for virtualization management. VMware vCenter Server centrally manages VMware vSphere environments.

VMware NFS datastore

The ESX server can access a designated NFS volume located on a Celerra unified storage platform, mount the volume, and use it for the storage needs of this solution.

Windows infrastructure

Introduction

A Microsoft Windows infrastructure is used in this solution to provide the following services to the virtual desktops and XenDesktop elements:
- Active Directory service
- DNS
- DHCP service
- SQL Server

Domain controller

The Windows domain controller runs the Active Directory service, which provides the means to manage the identities and relationships of virtual desktops and other components in this VDI environment. Active Directory is also used by DDC to enable XenDesktop components to communicate securely.

DNS Server

DNS is the backbone of Active Directory and the primary name resolution mechanism of Windows servers. Domain controllers dynamically register information about themselves and about Active Directory in the DNS Server. In this solution, the DNS Server is installed on the domain controller.
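Mounting an exported Celerra file system as an NFS datastore can also be done from the ESX 4 service console with esxcfg-nas. This is a minimal sketch; the Data Mover address, export path, and datastore name are hypothetical, and the Celerra plug-in or the vCenter GUI are the routes used in this solution.

```shell
# Add a NAS datastore backed by a hypothetical Celerra NFS export
# (-o = NFS server address, -s = exported path, last arg = datastore label).
esxcfg-nas -a -o 192.168.1.50 -s /fs_vd1 vd_datastore1

# List configured NAS datastores to confirm the mount.
esxcfg-nas -l
```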

DHCP server

The DHCP server provides the IP address, boot server name, and boot file name for the virtual desktops. The DHCP range is configured to allocate IP addresses for 1,000 virtual desktop machines. Because the virtual desktop virtual machines PXE boot from a bootstrap image before loading the master desktop image supplied by Citrix Provisioning Server, DHCP options 66 and 67 are configured to redirect the virtual desktops to retrieve the bootstrap image from a TFTP server that is hosted on EMC Celerra.

SQL Server

Microsoft SQL Server is a relational database management system (RDBMS) from Microsoft. In this solution, Microsoft SQL Server 2005 satisfies the database requirements of Citrix Provisioning Server and DDC. It can also be used to satisfy the database requirements of a VMware vCenter Server. Microsoft SQL Server 2005 Enterprise Edition (64-bit) is used in this solution. Although Microsoft SQL Server 2005 Express Edition is free and lightweight and can satisfy the database requirements of a very small virtual desktop farm, it is not recommended for a production environment because it is limited to one CPU, 1 GB of addressable RAM, and a maximum database size of 4 GB.

Conclusion

This XenDesktop 4 VDI implementation for 1,000 virtual desktops is configured as a desktop farm that contains two Desktop Delivery Controllers and two Provisioning Servers for high availability, using the existing VMware virtual infrastructure and Windows servers that provide networking and database services.

Chapter 3: Storage Design

Overview

Introduction

The storage design layout instructions described in this chapter apply to the specific components of this solution.

Contents

This chapter contains the following topics: Overview, Concepts, Storage design layout, File system layout, Capacity planning, and Best practices.

Concepts

Introduction

The Celerra unified storage system is used for most of the storage needs of this solution. The Celerra unified storage system is a multiprotocol system that provides access to data through a variety of file access protocols, including the NFS protocol. NFS is a client/server distributed file service that provides file sharing in network environments. When a Celerra is configured as an NFS server, the file systems are mounted on a Data Mover and a path to each file system is exported. Exported file systems are then available across the network and are mounted as NFS datastores on the ESX servers that host the virtual desktops.

Storage design layout

Building block approach

This VDI solution is validated using a building block approach, which allows administrators to methodically provision additional blocks of storage as the number of desktop users continues to scale up. A building block is defined as two spindles in a 1+1 Celerra RAID 10 group. Each of these building blocks is designed to accommodate up to 100 virtual desktop users. The validation test uses up to 10 building blocks to support 1,000 virtual desktops.
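The building block arithmetic above can be expressed as a small sizing helper. The 100-users-per-block and two-spindles-per-block figures come from this solution; the function itself is an illustrative sketch, not an EMC tool.

```python
import math

# One building block = a 1+1 RAID 10 group (two spindles) serving up to
# 100 virtual desktops, per this solution's validated design.
USERS_PER_BLOCK = 100
SPINDLES_PER_BLOCK = 2

def size_building_blocks(desktop_users: int) -> dict:
    """Return the building blocks and spindles needed for a user count."""
    blocks = math.ceil(desktop_users / USERS_PER_BLOCK)
    return {"blocks": blocks, "spindles": blocks * SPINDLES_PER_BLOCK}

# The validated environment: 1,000 desktops on 10 blocks (20 spindles).
print(size_building_blocks(1000))  # {'blocks': 10, 'spindles': 20}
```

Scaling to, say, 250 users would round up to three building blocks, which is the point of the approach: capacity grows in predictable, validated increments.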

Disk layout for 10 building blocks

The following figure shows the disk layout for 10 RAID 10 building blocks on two shelves using user-defined storage pools. The NS-120 can be fully populated with up to eight disk shelves of 15 disk drives each. This validated solution uses two shelves of 450 GB 15k FC drives. Ten RAID 10 groups are used to store the virtual desktops. A single 4+1 RAID 5 group (RG 0) is used to store the golden image of the virtual desktops, the TFTP boot image, and other support files.

File system layout

File system and NFS export

According to the standard NAS template, two LUNs are created per RAID group and each LUN is owned by a different storage processor (SP) for load balancing. These LUNs are represented as disk volumes (dvols) in the Celerra, as shown in the earlier figure. For each building block, a file system is created over a metavolume that concatenates the two dvols from the same RAID group. This file system is exported as an NFS share to the VMware ESX 4 server and used as an NFS datastore. The following table shows the dvol selection for each of the file systems created. To ensure SP load balancing, the order of dvol numbers alternates between the file systems.

File system                dvols
Golden image               d17
TFTP boot                  d29
Virtual desktop group 1    d18,d30 (concatenated)
Virtual desktop group 2    d31,d19 (concatenated)
Virtual desktop group 3    d20,d32 (concatenated)
Virtual desktop group 4    d33,d21 (concatenated)
Virtual desktop group 5    d22,d34 (concatenated)
Virtual desktop group 6    d35,d23 (concatenated)
Virtual desktop group 7    d24,d36 (concatenated)
Virtual desktop group 8    d37,d25 (concatenated)
Virtual desktop group 9    d26,d38 (concatenated)
Virtual desktop group 10   d39,d27 (concatenated)

The EMC Celerra plug-in for NFS, a tool integrated with the vCenter GUI, streamlines the creation of a file system, NFS export, and datastore. The Celerra Plug-in for VMware Solution Guide on Powerlink provides more details on this plug-in.

NFS datastore usage

The Celerra unified storage platform is used to store the following:
- Virtual desktop virtual machines
- Citrix Provisioning Services vdisk

Virtual desktop virtual machines

The virtual desktops are deployed as virtual machines that are hosted on ESX 4 servers. Each desktop virtual machine has its own folder that contains the .vmdk, .vmx, .vswp, and other files that are stored in the NFS datastore. In this proven solution, each building block is configured with one NFS datastore that accommodates up to 100 virtual desktops. There are a total of 10 NFS datastores that support up to 1,000 desktops.

Citrix Provisioning Services vdisk

The master desktop image is stored in a Citrix Provisioning Services vdisk, which corresponds to a virtual hard disk (VHD) file that resides on a local drive (NTFS formatted) of the Provisioning Servers. Because the Provisioning Servers are virtualized as virtual machines, the local drive that holds the master image is in fact a VMDK file that resides on an NFS datastore, whose file system is created from a 4+1 RAID 5 group (RG 0, as shown in Disk layout for 10 building blocks) on the Celerra. The following figure shows the storage layers of the vdisk, from top to bottom: desktop OS image, vdisk VHD file, read-only NTFS, VMDK, NFS datastore, Celerra NFS file system.

The NTFS file system is made read-only when the master image is finalized and ready to be sealed. The read-only file system enables concurrent access by multiple Provisioning Servers without the need for a clustering file system to handle any locking issues that may arise.
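The alternating dvol concatenation order used for SP load balancing follows a simple pattern that can be generated programmatically. This is an illustrative sketch based on the dvol numbering in this solution; the function name and structure are my own.

```python
def desktop_group_dvols(first_low=18, first_high=30, groups=10):
    """Return the (dvol, dvol) concatenation order per building block.

    Even-indexed file systems lead with the low-numbered dvol and
    odd-indexed ones with the high-numbered dvol, so the leading dvol
    alternates between the two storage processors.
    """
    pairs = []
    for i in range(groups):
        low, high = first_low + i, first_high + i
        pairs.append((low, high) if i % 2 == 0 else (high, low))
    return pairs

for n, (a, b) in enumerate(desktop_group_dvols(), start=1):
    print(f"Virtual desktop group {n}: d{a},d{b} (concatenated)")
```

Running this reproduces the alternating sequence d18,d30 then d31,d19 and so on through d39,d27.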

TFTP server

All virtual desktops PXE boot from a bootstrap image when they are powered up. This bootstrap image is stored on a file system that is also created from RAID group 0 on the Celerra. The image is then made available through the Celerra TFTP server.

Capacity planning

Building block of 100 virtual desktops

As described in Storage design layout, this validated solution uses a building block approach. Each building block consists of two spindles in a 1+1 RAID 10 group that is designed to accommodate up to 100 virtual desktop users. The Celerra NS-120 unified storage uses 450 GB FC 15k rpm spindles. As mentioned in Disk layout for 10 building blocks, each RAID group produces two dvols, and the file system formed by concatenating them provides a storage space of about 402 GB. This 402 GB of storage space, exported as NFS storage to an ESX server, is adequate for 100 virtual desktops of 3 GB each, where the 3 GB is thin provisioned and used as Provisioning Services write cache storage, a temporary area to save changes made to the virtual desktops. Virtual desktops typically consume several hundred megabytes of write cache. Care should be taken not to overflow the write cache area that each desktop is allocated; otherwise, users may experience disk errors when performing write operations.

In addition to virtual disk storage, each virtual desktop requires virtual swap space (a .vswp file) at the ESX level. Because each virtual machine is allocated 512 MB of memory without an ESX memory reservation, 100 virtual desktops require 50 GB (100 x 512 MB) out of the 402 GB.

Thin provisioning

The virtual hard disk provided to each virtual desktop, carved out of the NFS datastore, is thin provisioned. This enables users to control storage costs, achieve higher utilization, and eliminate storage waste and the need for dedicated capacity.
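The capacity arithmetic above can be checked with a short sketch. The per-desktop figures come from this solution; the function itself is illustrative.

```python
# Per-building-block capacity check for one NFS datastore (~402 GB usable).
FS_GB = 402          # usable file system size per 1+1 RAID 10 building block
DESKTOPS = 100       # virtual desktops per building block
WRITE_CACHE_GB = 3   # thin-provisioned PVS write cache per desktop
VSWP_GB = 0.5        # .vswp per desktop (512 MB RAM, no memory reservation)

def committed_gb(desktops=DESKTOPS):
    """Worst-case commitment: write cache plus virtual swap, fully used."""
    return desktops * (WRITE_CACHE_GB + VSWP_GB)

print(committed_gb())            # 350.0 GB committed at full utilization
print(FS_GB - committed_gb())    # 52.0 GB of headroom per building block
```

Even if every desktop filled its entire 3 GB write cache, the datastore would retain roughly 52 GB of headroom, which is why the 402 GB file system is sized as adequate.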
Best practices

Celerra Data Mover parameter setup

EMC recommends turning off file system read prefetching and enabling the uncached option for a random I/O workload such as a virtual desktop workload. To set the noprefetch and uncached options for a file system, type:

server_mount <movername> -option <options>,uncached,noprefetch <fs_name> <mount_point>

For example:

server_mount server_2 -option rw,uncached,noprefetch ufs1 /ufs1

Disk drives

The general recommendations for disk drives are:
- Drives with higher revolutions per minute (rpm) provide higher overall random-access throughput and shorter response times than drives with slower rpm. For optimum performance, higher-rpm drives are recommended for the file systems that store the virtual desktops.
- FC drives are recommended over Serial Advanced Technology Attachment (SATA) drives because FC drives provide better performance.
- Enterprise Flash Drives (EFDs) could be considered for their performance, efficiency, power, space, and cooling advantages; however, they increase the cost drastically. As technology costs decline over time, EFDs may become practical for solutions such as this one.

RAID 10 compared to RAID 5

The I/O load generated by virtual desktops is characterized as small, random, and write-intensive. A workload is considered write-intensive when it consists of more than 30 percent random writes. For such a random workload, RAID 10 offers better performance than RAID 5 because of the write penalty that RAID 5 incurs when parity is calculated for every write operation. Because RAID 10 does not calculate parity, it does not suffer a similar penalty when writing data.

Roaming profiles and folder redirection

Local user profiles are not recommended in a VDI environment because a performance penalty is incurred when a new local profile is created each time a user logs in to a new desktop image. Roaming profiles and folder redirection, on the other hand, allow user data to be stored centrally on a network location that can reside on a Celerra CIFS share. This reduces the performance hit during user logon while allowing user data to roam with the profiles. Profile management tools such as Citrix User Profile Manager and third-party tools such as AppSense Environment Manager provide more advanced and granular features to manage various user profile scenarios. Refer to User Profiles for XenApp and XenDesktop on the Citrix website for further details.
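The RAID write penalty discussed above can be illustrated with standard back-end IOPS arithmetic, using the conventional penalty factors of 2 for RAID 10 mirrored writes and 4 for RAID 5 small random writes. This is a generic sizing sketch, not a calculation taken from this solution's test results.

```python
def backend_iops(host_iops, write_ratio, write_penalty):
    """Back-end disk IOPS for a small random workload.

    Each host read costs one disk I/O; each host write costs
    `write_penalty` disk I/Os (2 for a RAID 10 mirror write,
    4 for a RAID 5 read-modify-write parity update).
    """
    reads = host_iops * (1 - write_ratio)
    writes = host_iops * write_ratio
    return reads + writes * write_penalty

# A write-intensive desktop workload: 1,000 host IOPS, 70% writes.
print(backend_iops(1000, 0.7, 2))  # RAID 10 -> 1700.0 disk IOPS
print(backend_iops(1000, 0.7, 4))  # RAID 5  -> 3100.0 disk IOPS
```

For the same host load, RAID 5 nearly doubles the back-end disk work, which is why RAID 10 is chosen for the write-intensive desktop building blocks while RAID 5 is reserved for the mostly-read golden image storage.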

Chapter 4: Network Design

Overview

Introduction

This chapter describes the network design of Citrix XenDesktop 4 in the VDI solution.

Contents

This chapter contains the following topics: Overview, Considerations, Network layout, Virtual LANs, and High availability network.

Considerations

Physical design considerations

EMC recommends that the switches support gigabit Ethernet (GbE) connections and Link Aggregation Control Protocol (LACP), and that the ports on the switches support copper-based media.

Logical design considerations

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security. The IP scheme for the virtual desktop network must be designed so that there are enough IP addresses available in one or more subnets for the DHCP server to assign one to each virtual desktop.

Link aggregation

The Celerra unified storage provides network high availability or redundancy by using link aggregation. This is one method of handling link or switch failure. Link aggregation is a high availability feature that enables multiple active Ethernet connections to appear as a single link with a single MAC address and potentially multiple IP addresses. In this solution, link aggregation on the Celerra combines two GbE ports into a single virtual device. If a link is lost on one Ethernet port, the traffic fails over to another port. All traffic is distributed across the active links.
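As a quick check of the subnet-sizing consideration above, a single /22 network comfortably covers 1,000 desktops. The address range used here is hypothetical, chosen only for illustration.

```python
import ipaddress

# A hypothetical /22 subnet for the virtual desktop production VLAN.
vdi_subnet = ipaddress.ip_network("10.10.0.0/22")

# Usable host addresses: total minus the network and broadcast addresses.
usable = vdi_subnet.num_addresses - 2
print(usable)           # 1022 usable addresses
print(usable >= 1000)   # True: enough for 1,000 virtual desktops
```

The DHCP scope (plus static reservations for infrastructure servers) must fit within whatever range is chosen, so sizing should also leave room for growth beyond the initial 1,000 desktops.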

Network layout

Network layout for the validated scenario

The network layout implements the following physical connections:

- GbE with TCP/IP provides network connectivity.
- NFS provides file system semantics for the NFS datastores.
- Virtual desktop machines run on VMware ESX servers that are connected to the production network.
- ESX VMkernel ports reside on the storage network to access the Data Mover network ports when mounting NFS datastores.
- Dedicated network switches and VLANs segregate the production and storage networks.

Virtual LANs

Production VLAN

The production VLAN is used by end users to access virtual desktops, Citrix XenDesktop components, and associated infrastructure servers such as DNS, Active Directory, and DHCP. Virtual desktops also use this VLAN to access the Celerra TFTP server for the PXE boot image.

Storage VLAN

The storage VLAN provides connectivity between the ESX servers and storage. It is used for NFS communication between the VMkernel ports and the Celerra Data Mover network ports. NIC teaming on ESX, along with link aggregation on the Data Mover, provides load balancing and failover capabilities.

Other considerations

In addition to VLANs, separate redundant network switches can be used for storage. These switches should support GbE connections, jumbo frames, and port channeling.

High availability network

Link aggregation on the Data Mover

LACP is enabled on two GbE ports available on the Data Mover. To configure link aggregation over two Ethernet ports on server_2, type:

    $ server_sysconfig server_2 -virtual -name <Device Name> -create trk -option "device=cge0,cge2 protocol=lacp"

To verify that the ports are channeled correctly, type:

    $ server_sysconfig server_2 -virtual -info lacp0
    server_2 :
    *** Trunk lacp0: Link is Up ***
    *** Trunk lacp0: Timeout is Short ***
    *** Trunk lacp0: Statistical Load C is IP ***
    Device  Local Grp  Remote Grp  Link  LACP  Duplex  Speed
    cge0    ...        ...         Up    Up    Full    1000 Mbs
    cge2    ...        ...         Up    Up    Full    1000 Mbs

The remote group number for both cge ports must match, and the LACP status must be Up. Confirm that the expected speed and duplex are established.

NIC teaming on the ESX server

NIC teaming is configured to provide highly available network connectivity to the ESX server. To add a second NIC adapter to the vSwitch, complete the following steps:

1. Log in to vCenter Server.
2. Edit the vSwitch properties from the ESX server's Configuration page.
3. Select the Network Adapters tab.
4. Click Add to add the available NIC adapter to the vSwitch.

5. Select the NIC Teaming tab and, for the vSwitch, select Route based on ip hash from the Load Balancing list box.

Increase the number of vSwitch virtual ports

By default, a vSwitch is configured with 24 virtual ports, which may not be sufficient in a VDI environment. On the ESX servers that host the virtual desktops, each port is consumed by a virtual desktop, so set the number of ports based on the number of virtual desktops that will run on each ESX server.

Note: Reboot the ESX server for the change to take effect.

If an ESX server goes down or must be placed in maintenance mode, the other ESX servers in the cluster must accommodate the additional virtual desktops migrated from the offline server. Take this worst-case scenario into account when determining the maximum number of virtual ports per vSwitch. If there are not enough virtual ports, the virtual desktops will not be able to obtain an IP address from the DHCP server.
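The worst-case sizing rule above reduces to a quick calculation. A hedged sketch, assuming a hypothetical eight-host cluster serving the 1,000 desktops, a single-host failure, and a small reserve of ports for VMkernel and management traffic (the HOSTS and HEADROOM values are illustrative assumptions, not figures from the validated environment):

```shell
# Worst case: one of HOSTS fails and its desktops migrate to the survivors.
HOSTS=8; DESKTOPS=1000; HEADROOM=8        # HEADROOM: non-desktop ports (assumption)
PER_HOST=$(( (DESKTOPS + HOSTS - 2) / (HOSTS - 1) ))   # ceil(DESKTOPS / (HOSTS - 1))
TOTAL_PORTS=$(( PER_HOST + HEADROOM ))
echo "Configure at least $TOTAL_PORTS virtual ports per vSwitch"
```

With these numbers, each surviving host may run up to 143 desktops, so a vSwitch sized at the default 24 ports would leave most desktops without network connectivity.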

Chapter 5: Installation and Configuration

Overview

Introduction

This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenarios. It is not intended to be a comprehensive step-by-step installation guide; it highlights only the configurations that pertain to the validated solution.

Scope

The installation and configuration instructions presented in this chapter apply to the specific revision levels of the components used during the development of this solution. Before implementing any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components planned for the solution.

Contents

This chapter contains the following topics:

- Overview
- Task 1: Set up and configure the NFS datastore
- Task 2: Install and configure Desktop Delivery Controller
- Task 3: Install and configure Provisioning Server
- Task 4: Configure and provision the master virtual machine template
- Task 5: Deploy virtual desktops

Task 1: Set up and configure the NFS datastore

ESX advanced parameter to support the maximum number of NFS exports

By default, the ESX server can mount up to eight NFS datastores. Because this VDI solution uses 10 datastores, the ESX advanced parameter must be adjusted as shown in the following figure:

EMC Celerra plug-in for VMware

The EMC Celerra plug-in for VMware is a vCenter plug-in designed to simplify storage administration of the EMC Celerra NAS platform. The plug-in enables VMware administrators to provision new NFS datastores directly from the vCenter Server. One advantage of the Celerra plug-in is that if storage is provisioned on a cluster, folder, or data center, all ESX hosts within the selected object mount the newly created Celerra NFS export. To provision the storage, complete the following steps:

1. Download the EMC Celerra plug-in from Powerlink and install it on the machine that is used to run the vSphere Client.
2. Launch the vSphere Client and connect to the vCenter Server.
3. In the left navigation pane, right-click an ESX server in the cluster and select EMC Celerra > Provision Storage.

The Provision Storage dialog box appears. The Celerra Plug-in for VMware Solution Guide on Powerlink provides more details on this plug-in.
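The advanced parameter adjustment at the start of this task can also be made from the ESX service console. A hedged sketch; it assumes the parameter in the figure referenced above is NFS.MaxVolumes (the standard ESX 4 setting governing the NFS mount limit) and uses 32 as an illustrative value comfortably above the 10 datastores required:

```shell
# Raise the NFS datastore limit (default 8) so all 10 Celerra exports mount.
# Run on each ESX host. NFS.MaxVolumes is our assumption for the advanced
# parameter shown in the figure referenced above.
esxcfg-advcfg -s 32 /NFS/MaxVolumes
esxcfg-advcfg -g /NFS/MaxVolumes    # verify the new value
```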

Task 2: Install and configure Desktop Delivery Controller

Database server

Microsoft SQL Server 2005 Enterprise Edition is installed on a dedicated Windows Server 2003 virtual machine to host the databases that store the configurations of three components: Desktop Delivery Controller, Provisioning Server, and vCenter Server. Consider the following options when configuring SQL Server:

- Configure Windows Authentication Mode as the SQL Server authentication mode.
- Provide a custom SQL Server instance name or use the default instance name. Provide this SQL Server name and instance name as the Database Server options when installing the Provisioning Server.
- If SQL Server is used as the database server for vCenter Server, run the scripts provided by VMware to create the local and remote databases. An ODBC connection must also be configured between the vCenter Server and SQL Server. The vCenter Server Installation Guide on the VMware website provides more information on configuring a SQL database for vCenter Server.

Note: The Provisioning Server installation CD includes Microsoft SQL Server 2005 Express Edition by default. However, databases on Express Edition may not offer the scalability required for Provisioning Server, Desktop Delivery Controller, and VMware vCenter Server.

Install Desktop Delivery Controller

On the virtual machine designated as the first Desktop Delivery Controller, install the following components from the Citrix DDC installation CD (or ISO):

- Citrix Desktop Delivery Controller
- Citrix Management Console
- Citrix License Server

Select Create new farm when prompted by the Create or Join a Farm dialog box during the installation. Select Use an existing Database Server and specify the Microsoft SQL Server 2005 server and instance name in the Optional Server Configuration dialog box of the installation wizard.
Configure additional Desktop Delivery Controllers

To install additional Desktop Delivery Controllers, select the Citrix Desktop Delivery Controller component from the installation CD (or ISO). Select Join existing Farm and type the name of the first DDC in the Type the name of the first controller in the farm box.

Throttle commands to VMware vCenter

By default, the DDC Pool Management service attempts to start 10 percent of a desktop pool's size at a time. It may be necessary to throttle the number of concurrent requests sent to the vCenter Server so as not to overwhelm the VMware infrastructure. To modify the number of concurrent requests, edit the following configuration on each DDC:

1. Open the C:\Program Files (x86)\Citrix\VMManagement\CdsPoolMgr.exe.config file using a text editor such as Notepad.
2. Add a line with the MaximumTransitionRate parameter and set the value to the required number of concurrent requests. A value of 20 is used in this solution:

       <?xml version="1.0" encoding="utf-8"?>
       <configuration>
         <appSettings>
           <add key="LogToCdf" value="1"/>
           <add key="MaximumTransitionRate" value="20"/>
         </appSettings>
       </configuration>

3. After saving the file, restart either the DDC or the Citrix Pool Management Service for the change to take effect.

Virtual desktop idle pool settings

DDC manages the number of idle virtual desktops based on time of day and automatically optimizes the idle pool settings of a desktop group based on the number of virtual desktops in the group. These default idle pool settings should be adjusted to customer requirements so that virtual machines are powered on in advance, avoiding a boot storm. During validation testing, the idle desktop count was set to match the number of desktops in the group to ensure that all desktops are powered on in a steady state and ready for client connections immediately. To change the idle pool settings after a desktop group is created:

1. Navigate to Start > All Programs > Citrix > Management Consoles > Delivery Services Console on the DDC.
2. In the left pane, navigate to Citrix Resources > Desktop Delivery Controller > [XenDesktopFarmName] > Desktop Groups.
3. Right-click the desktop group name and select Properties.
4. Select Idle Pool Settings in the left pane under the Advanced option.
5. In the Idle Desktop Count section in the right pane, modify the number of desktops to be powered on during Business hours, Peak time, and Out of hours. Optionally, redefine the business days and hours per your business requirements.
6. Click OK to save the settings and close the window.


Task 3: Install and configure Provisioning Server

Install Provisioning Server

Unlike Citrix Desktop Delivery Controller, the installation of Provisioning Server is identical for the first Provisioning Server in the desktop farm and for any additional Provisioning Servers installed in the farm. The Provisioning Services Configuration Wizard runs after the Provisioning Services software is installed; its options differ between the first and secondary (or additional) Provisioning Servers. The following steps highlight the configuration wizard options customized for this solution.

Provisioning Server DHCP services

Because the DHCP services run on a dedicated DHCP server, select The service that runs on another computer for DHCP services when configuring the DHCP services in the configuration wizard.

Provisioning Server PXE services

The Provisioning Server is not used as a PXE server because DHCP services are hosted elsewhere. Select The service that runs on another computer for PXE services when configuring the PXE services in the configuration wizard.

Provisioning Server Farm configuration

On the Farm Configuration page of the Configuration Wizard, select Create farm to configure the first Provisioning Server or Join existing farm to configure additional Provisioning Servers. With either option, the wizard prompts for a SQL Server and its instance name. The first Provisioning Server uses these inputs to create a database that stores the configuration details of the farm; additional Provisioning Servers use them to retrieve information about the existing farm from the database.

Provisioning Server User account

Because the master desktop vdisk is stored on a local drive of each Provisioning Server, select Local system account (Use with SAN) as the user account that runs the stream and SOAP services on the Provisioning Servers.

Provisioning Server Stream services

Ensure that the appropriate network card is selected for the stream services while configuring the Provisioning Servers. Leave the management services communications and SOAP server ports unchanged.

Provisioning Server TFTP

Because the TFTP server is hosted on the Celerra, clear Use the Provisioning Services TFTP service when configuring the TFTP option and bootstrap location settings in the configuration wizard.

Inbound communication

Each Provisioning Server maintains a range of User Datagram Protocol (UDP) ports to manage all inbound communication from the virtual desktops. The default port range of 21 ports, with 8 threads per port, may not support the large number of virtual desktops in this validated solution. The total number of threads supported by a Provisioning Server is calculated as:

    Total threads = Number of UDP ports * Threads per port * Number of network adapters

Ideally, one thread should be dedicated to each desktop session. The number of UDP ports is increased to 64 (a port range of 6910 to 6973) and the threads per port are increased to 10 on each Provisioning Server (64 * 10 * 1 NIC = 640 threads per server) to accommodate up to 1,000 desktops. The number of UDP ports can be modified on the Network tab of the Server Properties dialog box (shown in the following figure), which appears when you double-click a Provisioning Server in the Provisioning Services Console. The threads per port parameter can be modified by using the Advanced option.
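The sizing formula above can be checked with quick arithmetic. A sketch using this solution's values; the two-server figure comes from the load-balanced PVS pair used elsewhere in this guide:

```shell
# PVS inbound-thread sizing: Total threads = ports * threads/port * NICs.
UDP_PORTS=64
THREADS_PER_PORT=10
NICS=1
PVS_SERVERS=2                     # two load-balanced Provisioning Servers
PER_SERVER=$(( UDP_PORTS * THREADS_PER_PORT * NICS ))
FARM_TOTAL=$(( PER_SERVER * PVS_SERVERS ))
echo "Threads per PVS server: $PER_SERVER"
echo "Farm-wide threads: $FARM_TOTAL"
```

With 640 threads per server, the two-server farm provides 1,280 threads, enough for one thread per session across the 1,000 desktops.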

By default, the Citrix PVS two-stage boot service uses port 6969. Because this solution does not require the service, the two-stage boot service is disabled; this avoids a conflict and allows the UDP port range to extend to 6973. It is a best practice to maintain the same server properties among PVS servers; in particular, all servers must have the same port range configured.

Sharing the SCSI bus

Normally, the VMware ESX server enforces file locking and does not allow two virtual machines to access the same virtual disk (VMDK) at the same time. In this validated solution, however, the PVS virtual machines share the virtual disk containing the master vdisk. Because the PVS virtual machines run on separate ESX servers, SCSI Bus Sharing is set to Physical to enable access to the same virtual disk.

Thick provisioning of the virtual disk on Provisioning Server

Because SCSI bus sharing is incompatible with VMware thin-provisioned virtual disks, all virtual disks attached to the PVS virtual machines must be thick; otherwise, the ESX servers will not allow the virtual machines to power on. If a virtual disk attached to a PVS virtual machine was previously thin provisioned, use the vmkfstools command to convert it to thick. The following command inflates a thin-provisioned virtual hard disk named vdesktop1.vmdk:

    vmkfstools -j vdesktop1.vmdk

Disk align the virtual disk

For better performance, it is recommended to align the virtual disks of the Provisioning Server and the other virtual machines. For Windows 2003 virtual machines, disk alignment is done by using the diskpart.exe tool. Select the appropriate disk at the DISKPART prompt and type the following command to align the partition with a 1024 KB offset:

    DISKPART> create partition primary align=1024

Citrix XenConvert is used to clone the golden image of the master virtual machine to the master vdisk. Provisioning Services 5.1 ships with XenConvert 2.0.x, which fixes the partition offset at 252 KB and therefore causes disk misalignment. XenConvert 2.1 and later versions provide an option to specify the desired offset and align the disk correctly; to specify the offset manually, upgrade XenConvert to the latest version. Locate the XenConvert.ini file in the same location as the XenConvert executable. To set the offset to 1024 KB, add the following section and value to the file (the offset is specified in bytes: 1024 KB = 1,048,576 bytes):

    [parameters]
    PartitionOffsetBase=1048576

vdisk access mode

After the golden image of the master virtual machine is cloned to the master vdisk, the Access Mode must be changed from Private Image to Standard Image to enable the virtual desktops to share the common vdisk. Thereafter, the vdisk becomes read-only and virtual desktop changes are redirected to a write cache area. In this solution testing, the write cache type is set to Cache on device's HD to ensure that each virtual desktop uses its own VMDK to store the write cache.

Read-only NTFS volume with vdisk

Changing the master vdisk access mode to Standard Image makes the underlying VHD file write-protected because the golden image is sealed. As a result, the NTFS volume that hosts the vdisk can be made read-only so that it can be shared across Provisioning Servers without a cluster file system to handle file locking. Read-only access to the NTFS volume is set by using the diskpart command. Run diskpart from the command prompt, select the target volume, and type:

    DISKPART> attributes volume set readonly

After the read-only attribute is set successfully, the NTFS volume must be remounted for the flag to take effect. Because PVS runs as a virtual machine, this can be done by removing and re-adding the virtual disk from the Virtual Machine Properties screen. These add/remove operations can be performed while the virtual machine is powered on in vSphere 4 only.

Configure a bootstrap file

The bootstrap file required for the virtual desktops to PXE boot is updated using the Configure Bootstrap option, available in the Provisioning Services Console (Farm > Sites > Site-name > Servers). The Configure Bootstrap dialog box is shown in the following figure. After a new PVS is added to the server farm, the bootstrap image must be updated to reflect the IP addresses of all PVS servers that provide streaming services in a round-robin fashion. The list of PVS servers can be obtained either by clicking Read Servers from Database or by adding the server information manually by clicking Add.

After modifying the configuration, click OK to update the ARDBP32.BIN bootstrap file, which is located in C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot. Navigate to the folder and examine the timestamp of the bootstrap file to ensure that it was updated on the intended Provisioning Server.

Copy the bootstrap file to the TFTP server on Celerra

In addition to serving as an NFS server, the EMC Celerra unified storage is used as a TFTP server that provides the bootstrap image when virtual desktops PXE boot. To configure the Celerra TFTP server, complete the following steps:

1. Enable the TFTP service:

       server_tftp <movername> -service start

2. Set the TFTP working directory and enable read/write access for file transfer. It is assumed that the path name references a file system created in RAID group 0, as shown in Disk layout for 10 building blocks on page 17:

       server_tftp <movername> -set -path <pathname> -readaccess all -writeaccess all

3. Use a TFTP client of your choice to upload the ARDBP32.BIN bootstrap file from C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot on the Provisioning Server to the Celerra TFTP server.

4. Set the TFTP working directory access to read-only to prevent accidental modification of the bootstrap file:

       server_tftp <movername> -set -path <pathname> -writeaccess none
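The upload in step 3 can be done with the stock Windows TFTP client from the Provisioning Server. A hedged sketch; the 192.168.2.10 address is a hypothetical stand-in for the Data Mover's storage-network IP, and the -i flag selects binary (image) transfer mode:

```shell
# Hypothetical upload of the bootstrap image from the PVS host (Windows).
# 192.168.2.10 stands in for the Data Mover's TFTP address.
tftp -i 192.168.2.10 PUT "C:\Documents and Settings\All Users\Application Data\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN" ARDBP32.BIN
```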

Configure boot options 66 and 67 on the DHCP server

For the virtual desktops to PXE boot successfully from the bootstrap image supplied by the Provisioning Servers, the DHCP server must have boot options 66 and 67 configured. To configure boot options 66 and 67, complete the following steps:

1. On the Microsoft DHCP server, select Scope Options.
2. Select 066 Boot Server Host Name.
3. Enter the IP address of the Data Mover configured as the TFTP server in the String value box.
4. Similarly, enable 067 Bootfile Name and enter ARDBP32.BIN in the String value box.

The ARDBP32.BIN bootstrap image is loaded on a virtual desktop before the vdisk image is streamed from the Provisioning Server.
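The same two scope options can be set from the DHCP server's command line with netsh. A sketch; the 192.168.1.0 scope and 192.168.2.10 Data Mover address are assumptions for illustration:

```shell
# Hypothetical netsh equivalent of steps 2-4 (Windows Server DHCP).
# Scope and TFTP server address are placeholder values.
netsh dhcp server scope 192.168.1.0 set optionvalue 066 STRING "192.168.2.10"
netsh dhcp server scope 192.168.1.0 set optionvalue 067 STRING "ARDBP32.BIN"
```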

Task 4: Configure and provision the master virtual machine template

Create a virtual machine template for virtual desktops

To create a virtual machine template for the virtual desktops, create a virtual machine using the Create New Virtual Machine wizard, edit its settings, and convert it into a template. This validated solution uses Microsoft Windows XP (32-bit edition) as the virtual desktop guest operating system. Ensure that the virtual machine is allocated one vCPU, 512 MB of RAM, a 3 GB thin virtual hard disk, and a network adapter that uses the vmxnet2 driver. The virtual hard disk on the new virtual machine is neither aligned nor formatted. To align and format the virtual hard disk, complete the following steps:

1. Edit the virtual machine settings.
2. Remove the hard disk by clicking Remove.
3. Attach the hard disk to another Windows machine.
4. Align the hard disk using the diskpar or diskpart utility.
5. Quick format the hard disk as an NTFS volume.
6. Remove the hard disk from the proxy Windows machine.
7. Attach the hard disk back to the new virtual machine by clicking Add, as shown in the figure.

Once these modifications are made, convert the virtual machine into a virtual machine template.

Note: The virtual desktop virtual machines are created in the same datastore where the virtual hard disk of the template machine resides. If the virtual desktops are to be distributed among multiple datastores, it is easiest to clone the virtual machine template so that there is a template in each datastore.

New hardware found message

When the virtual hard disk is attached to the master vdisk as a write-cache drive for the first time, Windows detects the drive as new hardware and prompts for a reboot as soon as a virtual desktop session begins. To avoid this reboot, attach the virtual hard disk to the master virtual machine before its image is cloned to the vdisk, so that the vdisk image contains the disk signature that will be recognized when the virtual desktops start.

Task 5: Deploy virtual desktops

Appropriate access to the vCenter SDK

DDC and the XenDesktop Setup Wizard require appropriate access to communicate with the SDK of the VMware vCenter Server. This is achieved by one of the following methods, depending on the security requirements.

HTTPS access to the vCenter SDK

1. On the VMware vCenter Server, replace the default SSL certificate. The Replacing vCenter Server Certificates paper on the VMware website provides more details on how to replace the default SSL certificate.
2. Open an MMC console and the Certificates snap-in on the Desktop Delivery Controllers and Provisioning Servers.
3. Select Certificates > Trusted Root Certification Authorities > Certificates and import the trusted root certificate for the SSL certificate created in step 1.

HTTP access to the vCenter SDK

1. Log in to the vCenter Server and open the C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\proxy.xml file.
2. Navigate to the tag where serverNamespace is /sdk. Do not modify the /sdkTunnel properties:

       <e id="5">
         <_type>vim.ProxyService.LocalServiceSpec</_type>
         <accessMode>httpAndHttps</accessMode>
         <port>8085</port>
         <serverNamespace>/sdk</serverNamespace>
       </e>

3. Change accessMode to httpAndHttps. Alternatively, set accessMode to httpOnly to disable HTTPS.
4. Save the file and restart the vmware-hostd process using the following command. You may have to reboot the vCenter Server if the SDK is inaccessible after restarting the process:

       service mgmt-vmware restart

XenDesktop Setup Wizard

The XenDesktop Setup Wizard, installed on the Provisioning Server, simplifies virtual desktop deployment and can rapidly provision a large number of desktops. To run the wizard, complete the following steps:

1. Select Start > All Programs > Citrix > Administration Tools > XenDesktop Setup Wizard on the Provisioning Server. The Welcome to XenDesktop Setup Wizard page appears.
2. Click Next. The Desktop Farm page appears.
3. Select the relevant farm name from the Desktop farm list.
4. Click Next. Before proceeding to the Hosting Infrastructure page, complete the steps described in Appropriate access to the vCenter SDK on page 46.
5. On the Hosting Infrastructure page, select VMware virtualization as the hosting infrastructure. Type the URL of the vCenter Server SDK and click Next.

Note: You will be prompted to specify the user credentials for the VMware vCenter Server.

6. On the Virtual Machine Template page, select the virtual machine template to use for the virtual desktops. The virtual machine templates are retrieved from the vCenter Server.
7. Click Next. The Virtual Disk (vDisk) page appears.
8. Select the vdisk from which the virtual desktops will be created. Only vdisks in standard mode appear. As shown in the following figure, the list of existing device collections contains only the device collections that belong to the same site as the vdisk.

9. Click Next. The Virtual Desktops page appears.
10. Enter the following, and click Next:
    - The number of desktops to create.
    - The common name to use for all the desktops.
    - The starting number for enumerating the newly created desktops. This number sequence is appended to the common name to form the virtual desktop names.
    The Organizational Unit Location page appears.
11. Select the OU to which the desktops will be added and click Next.

The Desktop Group page appears.

12. Specify the Desktop Delivery Services group to which to add the desktops and click Next. The Desktop Creation page appears.
13. Ensure that the details are correct, and then click Next to create the desktops.

The Summary page appears.

Note: Clicking Next starts an irreversible process of creating the desktops, which also creates computer objects in Active Directory.

Chapter 6: Testing and Validation

Overview

Introduction

This solution for Citrix XenDesktop 4 on EMC Celerra explores several configurations that can be used to implement a 1,000-user environment on EMC Celerra.

Contents

This chapter contains the following topics:

- Overview
- Testing overview
- Testing tools
- Test results
- Result analysis of Desktop Delivery Controller
- Result analysis of Provisioning Server
- Result analysis of the vCenter Server
- Result analysis of SQL Server
- Result analysis of ESX servers
- Result analysis of Celerra unified storage
- Login storm scenario
- Test summary

Testing overview

Introduction

This chapter summarizes and characterizes the tests performed to validate the solution. The goal of the testing was to characterize the end-to-end solution and component subsystem response under a reasonable load for Citrix XenDesktop 4 with a Celerra NS-120 over NFS.

Testing tools

Introduction

To apply a reasonable real-world user workload, LoginVSI, a third-party benchmarking tool from Login Consultants, was used. LoginVSI simulates a VDI workload by using an AutoIT script within each desktop session to automate the execution of generic applications such as Microsoft Office 2007, Internet Explorer, Acrobat Reader, Notepad, and other third-party software.

LoginVSI test methodology

The Virtual Session Index (VSI) provides guidance for gauging the maximum number of users a desktop environment can support. LoginVSI workloads are categorized as light, medium, heavy, and custom. Medium is the only workload available in both the VSI Express (free) and Pro editions. The VSI Pro edition and the medium workload were chosen for testing; they have the following characteristics:

- Emulates a medium knowledge worker using Office, Internet Explorer, and PDF.
- Once a session is started, the medium workload repeats every 12 minutes.
- The response time is measured every 2 minutes during each loop.
- The medium workload opens up to five applications simultaneously.
- The type rate is 160 ms per character.
- The medium workload in VSI 2.0 is approximately 35 percent more resource-intensive than in VSI 1.0.
- Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop of the medium workload opens and uses:

- Outlook 2007: Browses 10 messages.
- Internet Explorer: One instance is left open (BBC.co.uk); one instance browses Wired.com, Lonelyplanet.com, and the heavy Flash application gettheglass.com (not used with the MediumNoFlash workload).
- Word 2007: One instance to measure response time and one instance to review and edit a document.
- Bullzip PDF Printer and Acrobat Reader: The Word document is printed and the PDF is reviewed.
- Excel 2007: A very large randomized sheet is opened.
- PowerPoint 2007: A presentation is reviewed and edited.
- 7-zip: Using the command line version, the output of the session is zipped.

The LoginVSI version used for testing has a gating metric called VSImax that measures the response time of five operations:

1. Maximizing Microsoft Word.
2. Starting the File Open dialog box.
3. Starting the Search and Replace dialog box.
4. Starting the Print dialog box.
5. Starting Notepad.
The LoginVSI workload is increased gradually by starting desktop sessions one after another at a specified interval. Although the interval can be customized, the default interval of 1 second was used during the testing. The desktop infrastructure is considered saturated when the average response time of three consecutive users crosses the 2,000 ms threshold. The LoginVSI administrator guide provides more information on the tool.

LoginVSI launcher

A LoginVSI launcher is a Windows system that launches desktop sessions on the target virtual desktop machines. There are two types of launchers, master and slave; there is only one master in a given test bed, and there can be as many slave launchers as required. Launchers coordinate the start of the sessions using a common CIFS share. In this validated testing, the share is created on a Celerra file system that resides in the 4+1 RAID 5 group, as shown in Disk layout for 10 building blocks on page 17.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. Login Consultants recommends a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM when the GDI limit has not been tuned (the default). With the GDI limit tuned, this extends to 60 sessions per two-core machine. In this validated testing, 1,000 desktop sessions were launched from 24 launcher virtual machines, resulting in 41 or 42 sessions per launcher. Each launcher virtual machine was allocated two vCPUs and 4 GB of RAM, and no system bottlenecks were encountered.
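The 2,000 ms saturation rule described earlier can be expressed as a small script. A sketch; the per-session response times below are fabricated sample data standing in for a real LoginVSI log:

```shell
# Detect saturation: flag the first point where the average response time
# of three consecutive sessions exceeds 2,000 ms.
# Sample data (session number, response time in ms) is made up for illustration.
printf '%s\n' "1 900" "2 1100" "3 1200" "4 2100" "5 2300" "6 2400" > responses.txt
awk '{ r[NR] = $2
       if (NR >= 3 && (r[NR] + r[NR-1] + r[NR-2]) / 3 > 2000) {
           print "Saturated at session " $1; exit } }' responses.txt
# prints: Saturated at session 6
```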

Test results

Result summary

The following graph shows the response time compared to the number of active desktop sessions, as generated by the LoginVSI launchers. It shows that the average response time increases only marginally as the user count increases. Throughout the test run, the average response time stays below 300 ms, leaving plenty of headroom under the 2,000 ms gating metric. The maximum response time increases nominally as the user count increases, with some spikes, but never exceeds 3,000 ms.

Result analysis of Desktop Delivery Controller

Introduction

Because the two DDCs are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, only the statistics for the first DDC are reported in the following sections.

CPU utilization

The average percentage processor time is recorded at 8.42 percent, with occasional spikes that reach as high as 65 percent. The percentage processor time is reported as the average across the two vCPUs.

Memory utilization

Each DDC virtual machine was configured with 4 GB of RAM. The memory utilization fluctuates between 1 GB and 2.2 GB. The average utilization is around 1.5 GB, consuming less than half of the available memory.

Disk throughput

The Windows operating system and the XenDesktop software were installed on a local drive for each DDC. As seen in the following graph, despite a couple of spikes occurring at the end of the test run, the average disk throughput is about 28 KB/s.

Network throughput

Each DDC virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to manage the virtual desktops. An average transfer rate of 443 KB/s translates to 3.5 Mb/s. A surge of 758 KB/s (or 6 Mb/s) was measured at the end of the test run as concurrent users logged off.
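The throughput figures in this section mix kilobytes per second and megabits per second. A small helper makes the conversion explicit, using decimal units (1 KB = 1,000 bytes) as the text does:

```python
# Convert throughput from kilobytes/second to megabits/second
# (decimal units: 1 KB = 1,000 bytes, 1 Mb = 1,000,000 bits).

def kbps_to_mbit(kb_per_s):
    return kb_per_s * 8 / 1000

print(round(kbps_to_mbit(443), 1))  # 3.5 Mb/s average DDC transfer rate
print(round(kbps_to_mbit(758), 1))  # 6.1 Mb/s logoff surge (rounded to 6 in the text)
```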

Result analysis of Provisioning Server

Introduction

Because the two PVS servers are load balanced to host 1,000 desktops, their performance counters are comparable. As a result, this section covers only the statistics for the first PVS.

CPU utilization

Four vCPUs were configured for each PVS server in anticipation of intense network activity while communicating with 1,000 desktops. As the following graph shows, this is more capacity than required; a two-vCPU virtual machine would suffice.

Memory utilization

Each PVS virtual machine was configured with 4 GB of RAM. The memory utilization remains steady in the range of 1.3 GB to 1.8 GB.

Disk throughput

The following graph shows the disk throughput measured for the physical disk that stores the master vDisk. Because the PVS servers cache the vDisk data blocks in memory, an initial read activity of 4 MB/s is observed, with negligible disk activity thereafter.

Network throughput

Each PVS virtual machine was configured with a gigabit adapter that uses the vmxnet2 driver to stream the vDisk image to the virtual desktops. The average network throughput is recorded at 4 MB/s (or 32 Mb/s). Despite a burst of activity caused by concurrent user logoff towards the end of the run, the maximum network throughput stays below 30 MB/s (or 240 Mb/s).

Result analysis of the vCenter Server

Introduction

The vCenter Server maintains two clusters of ESX servers. Each cluster contains 500 desktop virtual machines hosted on eight ESX servers.

CPU utilization

The vCenter Server virtual machine is configured with two vCPUs. The average CPU utilization is less than 4 percent throughout the test. Periodic surges peak at 81 percent.

Memory utilization

6 GB of RAM was allocated to the vCenter Server virtual machine. Committed bytes never exceeded 2.55 GB, so the allocated memory could have been scaled down to 4 GB.

Disk throughput

The Windows operating system and the vCenter Server software were installed on a local drive. There is minimal disk I/O activity, as seen in the following graph.

Network throughput

The vCenter Server was configured with a gigabit adapter that uses the vmxnet2 driver. The majority of the network activity comes from the DDCs that manipulate and detect the state of each virtual desktop. The average network throughput is measured at 17.6 KB/s (or 141 Kbps). Logoff activity towards the end of the run triggers a spike of 782 KB/s (or 6.3 Mbps).

Result analysis of SQL Server

Introduction

Three databases were created on the SQL Server, which is the central repository for the DDC, PVS, and vCenter Server configurations. The database for the vCenter Server grows to 5.3 GB, the largest of the three; the DDC and PVS databases require merely 10 MB and 5 MB, respectively.

CPU utilization

The SQL Server virtual machine was configured with two vCPUs. The average CPU utilization is less than 2 percent throughout the test. Periodic surges peak at 65 percent.

Memory utilization

6 GB of RAM was allocated to the SQL Server virtual machine. Committed bytes never exceeded 3.5 GB, so the allocated memory could have been scaled down by 1 GB.

Disk throughput

The Windows operating system and the SQL Server software were installed on a local drive. The average disk throughput is below 392 KB/s, while the maximum throughput is recorded at around 45 MB/s.

Network throughput

The SQL Server was configured with a gigabit adapter that uses the vmxnet2 driver. The average network throughput is measured at 14.5 KB/s (or 116 Kbps). The maximum throughput is recorded at 458 KB/s (or 3.7 Mbps).

Result analysis of ESX servers

Introduction

One thousand desktop virtual machines are spread among 16 ESX servers. Prior to testing, each ESX server hosts 61 to 62 virtual machines, distributed evenly using VMware Distributed Resource Scheduler (DRS) automation. The DRS automation level is set to manual during the test run to avoid unpredictable workload overhead caused by virtual machine migration. Because each ESX server hosts almost the same number of virtual machines, the esxtop performance counters are sampled from one of the 16 servers.

CPU utilization

Each of the 16 ESX servers has eight Intel Nehalem 2.6 GHz CPU cores. Each ESX server hosts up to 62 desktop virtual machines, yielding a VMs/core ratio of 7.75. As the workload gradually increases when more desktops become active during the test, CPU utilization grows linearly and reaches a maximum of 100 percent towards the end of the test run, when sessions begin to log off simultaneously and trigger a surge in CPU consumption.

Memory utilization

Each of the 16 ESX servers has 32 GB of memory installed. The 62 virtual machines with 512 MB of RAM each add up to a theoretical total of 31 GB. The memory utilization barely exceeds 29 GB (32 GB − 3 GB of free memory) because the ESX memory deduplication technology (transparent page sharing) is used.

Disk throughput

Each of the 16 ESX servers is configured with one internal hard disk. There is only nominal disk I/O targeted at the internal drive, as the majority of I/Os are redirected to the NFS datastores.
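The host memory math above can be checked in a few lines. This is a simple arithmetic sketch of the configured-versus-installed memory comparison, not a model of ESX memory management:

```python
# Configured guest memory versus installed host memory for one ESX server:
# 62 desktops at 512 MB each against a 32 GB host.

VM_COUNT = 62
VM_RAM_MB = 512
HOST_RAM_GB = 32

configured_gb = VM_COUNT * VM_RAM_MB / 1024
print(configured_gb)                # 31.0 GB configured in total
print(HOST_RAM_GB - configured_gb)  # 1.0 GB nominal headroom before page sharing
```

Page sharing is what keeps actual utilization below the configured total, providing headroom beyond the nominal 1 GB.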

Network throughput

Each of the 16 ESX servers is configured with NIC teaming across two gigabit adapters to provide high availability. The following graph shows that network utilization continues to increase as desktop sessions ramp up. Despite the steady increase, the maximum throughput of 50 Mb/s is far below the physical limit of the aggregated gigabit network bandwidth.

Result analysis of Celerra unified storage

Celerra Data Mover stats

The Celerra server_stats command with the following syntax was used to collect performance data from the Data Mover every 30 seconds:

$ /nas/bin/server_stats <server_name> -summary basic,caches -table net,dvol,fsvol -interval 30 -format csv -titles once -terminationsummary yes

The following table provides some of the significant Data Mover statistics that were collected:

Measurement parameter     Average value
Network input             20,694 KB/s (20.2 MB/s)
Network output            1,589 KB/s (1.6 MB/s)
Dvol read                 575 KB/s (0.6 MB/s)
Dvol write                22,339 KB/s (21.8 MB/s)
Buffer cache hit rate     98%
CPU utilization           12%

Data Mover CPU utilization

The CPU utilization of the Data Mover increases gradually with the test workload but remains below 30 percent until the end of the test run, when the logoff storm invokes a spike of 55 percent.
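Because server_stats is invoked with -format csv, its output can be summarized with any CSV tooling. The sketch below is illustrative only: the two-column sample is an assumed layout for demonstration, not verbatim Celerra output, whose columns depend on the statistics requested.

```python
import csv
import io

# Illustrative post-processing of server_stats CSV output.
# The column names below are assumptions for this example.
sample = io.StringIO(
    "Timestamp,CPU Utilization %\n"
    "10:00:00,11\n"
    "10:00:30,13\n"
)

rows = list(csv.DictReader(sample))
cpu = [float(r["CPU Utilization %"]) for r in rows]
print(sum(cpu) / len(cpu))  # average CPU utilization across samples -> 12.0
```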

Data Mover disk throughput

The following graph shows the trend of the disk throughput measured on the Data Mover. Its pattern mimics the CPU utilization trend: the disk throughput gradually increases and reaches a maximum of 67 MB/s.

Storage array CPU utilization

The CLARiiON Analyzer GUI was used to collect performance data for the storage array at 60-second intervals. The following figure shows the CPU utilization at the SP level. The storage processors (SPs) balance the LUN ownership for the 10 building blocks that store the virtual desktops. However, because SP A also owns the LUNs that store the golden vDisk image and the CIFS file system containing the roaming user profiles and the LoginVSI results, additional CPU cycles are incurred on SP A, causing its utilization to reach nearly 60 percent, while SP B reaches a maximum of only 48 percent.

Storage array total bandwidth

The storage array easily handles the I/O bandwidth that the test workload generates; less than 30 MB/s of I/O bandwidth is observed on each SP.

Storage array total throughput

The maximum aggregated throughput at the SP level is recorded at 6,452 IOPS towards the end of the test run. This includes all I/O activity for the storage array. The throughput measured for the virtual desktops alone is reported at the LUN level below.

Storage array response time

The SP response time throughout the test run is less than 1 millisecond, an acceptable response time that suggests the storage processors are not a bottleneck.

Most active LUN utilization

The following four graphs show the performance statistics for the busiest LUN, measured within the 10 building blocks used to store the virtual desktops. As shown in the following figure, the utilization of the most active LUN never exceeds 50 percent.

Most active LUN bandwidth

The maximum bandwidth measured for the most active LUN during the test is 5 MB/s. The storage array can easily handle this bandwidth requirement.

Most active LUN throughput

The maximum throughput measured for the most active LUN is slightly above 500 IOPS. Because the storage array write cache absorbs some of the front-end IOPS before writing to the physical disks, the LUN throughput can exceed the theoretical limit of what the two 15k rpm drives in a building block can yield.
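A rough check of the write-cache observation above, assuming a rule-of-thumb figure of about 180 IOPS per 15k rpm drive (an assumption for illustration, not a value measured in this test):

```python
# Back-of-envelope spindle math for one building block:
# two 15k rpm drives at ~180 IOPS each (rule-of-thumb assumption)
# versus the 500+ front-end IOPS observed on the busiest LUN.

DRIVES = 2
IOPS_PER_15K_DRIVE = 180  # assumed rule of thumb, not a measurement
front_end_iops = 500

back_end_capacity = DRIVES * IOPS_PER_15K_DRIVE
print(back_end_capacity)                   # 360 IOPS from the spindles alone
print(front_end_iops > back_end_capacity)  # True: the write cache absorbs the difference
```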

Most active LUN response time

The response time of the most active LUN is around 1 millisecond throughout the test run, which suggests that there is no bottleneck at the LUN level.

Login storm scenario

Introduction

One of the areas of greatest concern in a VDI implementation is the what-if scenario of login and boot storms. Because the DDC has an option to adjust the idle desktop count, it is recommended to tune this parameter to power up enough virtual desktops ahead of business opening or peak hours, alleviating a boot storm. The impact of a login storm, on the other hand, may be minimized by keeping desktop users logged in as long as possible; however, this is beyond the control of the desktop administrators. The following section prepares for the worst-case scenario, in which logins occur in rapid succession.

Login timing

To simulate a login storm, 500 desktops are initially powered up into a steady state by setting the idle desktop count to 500. The login time of each session is then measured by starting a LoginVSI test that establishes the sessions with a custom interval of five seconds. The 500 sessions are logged in within 42 minutes (500 x 5 / 60 = 41.6), a period that models a burst of login activity taking place in the opening hour of a production environment.
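The login-storm timing above is simple interval arithmetic; the sketch below reproduces it:

```python
# Login storm duration: 500 sessions started at a fixed 5-second interval.

SESSIONS = 500
INTERVAL_S = 5

total_minutes = SESSIONS * INTERVAL_S / 60
print(round(total_minutes, 1))  # ~42 minutes for all sessions to start
```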


More information

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager Reference Architecture Copyright 2010 EMC Corporation. All rights reserved.

More information

Preparing Virtual Machines for Cisco APIC-EM

Preparing Virtual Machines for Cisco APIC-EM Preparing a VMware System for Cisco APIC-EM Deployment, on page 1 Virtual Machine Configuration Recommendations, on page 1 Configuring Resource Pools Using vsphere Web Client, on page 4 Configuring a Virtual

More information

VMware vsphere with ESX 6 and vcenter 6

VMware vsphere with ESX 6 and vcenter 6 VMware vsphere with ESX 6 and vcenter 6 Course VM-06 5 Days Instructor-led, Hands-on Course Description This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and Microsoft Hyper-V Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user

More information

Introduction to Using EMC Celerra with VMware vsphere 4

Introduction to Using EMC Celerra with VMware vsphere 4 Introduction to Using EMC Celerra with VMware vsphere 4 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com www.emc.com Copyright 2009 EMC Corporation.

More information

Citrix XenDesktop 7.6, Provisioning Services 7.6 and the XenDesktop Setup Wizard with Write Cache and Personal vdisk Drives

Citrix XenDesktop 7.6, Provisioning Services 7.6 and the XenDesktop Setup Wizard with Write Cache and Personal vdisk Drives Citrix XenDesktop 7.6, Provisioning Services 7.6 and the XenDesktop Setup Wizard with Write Cache and Personal vdisk Drives Using Personal vdisks and Write Cache drives with XenDesktop 7.6 Prepared by

More information

Configuring and Managing Virtual Storage

Configuring and Managing Virtual Storage Configuring and Managing Virtual Storage Module 6 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

Citrix XenDesktop. Evaluation Guide. Citrix XenDesktop 2.1 with Microsoft Hyper-V and System Center Virtual Machine Manager 2008.

Citrix XenDesktop. Evaluation Guide. Citrix XenDesktop 2.1 with Microsoft Hyper-V and System Center Virtual Machine Manager 2008. Citrix XenDesktop Evaluation Guide Citrix XenDesktop 2.1 with Microsoft Hyper-V and System Center Virtual Machine Manager 2008 Evaluation Guide XenDesktop with Hyper-V Evaluation Guide 2 Copyright and

More information

Vendor: Citrix. Exam Code: 1Y Exam Name: Managing Citrix XenDesktop 7.6 Solutions. Version: Demo

Vendor: Citrix. Exam Code: 1Y Exam Name: Managing Citrix XenDesktop 7.6 Solutions. Version: Demo Vendor: Citrix Exam Code: 1Y0-201 Exam Name: Managing Citrix XenDesktop 7.6 Solutions Version: Demo DEMO QUESTION 1 Scenario: A Citrix Administrator updates all of the machines within a Delivery Group.

More information

Citrix XenServer 6 Administration

Citrix XenServer 6 Administration Citrix XenServer 6 Administration Duration: 5 Days Course Code: CXS-203 Overview: In the Citrix XenServer 6.0 classroom training course, students are provided the foundation necessary to effectively install,

More information

Citrix Connector 7.5 for Configuration Manager. Using Provisioning Services with Citrix Connector 7.5 for Configuration Manager

Citrix Connector 7.5 for Configuration Manager. Using Provisioning Services with Citrix Connector 7.5 for Configuration Manager Citrix Connector 7.5 for Configuration Manager Using Provisioning Services with Citrix Connector 7.5 for Configuration Manager Prepared by: Subbareddy Dega and Kathy Paxton Commissioning Editor: Kathy

More information

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5

vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 vsphere Installation and Setup Update 2 Modified on 10 JULY 2018 VMware vsphere 6.5 VMware ESXi 6.5 vcenter Server 6.5 You can find the most up-to-date technical documentation on the VMware website at:

More information

EMC CLARiiON Backup Storage Solutions

EMC CLARiiON Backup Storage Solutions Engineering White Paper Backup-to-Disk Guide with Computer Associates BrightStor ARCserve Backup Abstract This white paper describes how to configure EMC CLARiiON CX series storage systems with Computer

More information

XenDesktop Planning Guide: Image Delivery

XenDesktop Planning Guide: Image Delivery Consulting Solutions WHITE PAPER Citrix XenDesktop XenDesktop Planning Guide: Image Delivery ( / Machine Creation ) www.citrix.com Overview With previous versions of XenDesktop (version 4 and prior), the

More information

Provisioning Services 6.0

Provisioning Services 6.0 Provisioning Services 6.0 2011 Citrix Systems, Inc. All rights reserved. Terms of Use Trademarks Privacy Statement Contents Provisioning Services 6.0 9 Provisioning Services Product Overview 10 Provisioning

More information

Vendor: Citrix. Exam Code: 1Y Exam Name: Designing Citrix XenDesktop 7.6 Solutions. Version: Demo

Vendor: Citrix. Exam Code: 1Y Exam Name: Designing Citrix XenDesktop 7.6 Solutions. Version: Demo Vendor: Citrix Exam Code: 1Y0-401 Exam Name: Designing Citrix XenDesktop 7.6 Solutions Version: Demo DEMO QUESTION 1 Which option requires the fewest components to implement a fault-tolerant, load-balanced

More information

Dell EMC Ready Architectures for VDI

Dell EMC Ready Architectures for VDI Dell EMC Ready Architectures for VDI Designs for Citrix Virtual Apps and Desktops on VxRail and vsan Ready Nodes October 2018 H17344.1 Validation Guide Abstract This validation guide describes the architecture

More information

Installing VMware vsphere 5.1 Components

Installing VMware vsphere 5.1 Components Installing VMware vsphere 5.1 Components Module 14 You Are Here Course Introduction Introduction to Virtualization Creating Virtual Machines VMware vcenter Server Configuring and Managing Virtual Networks

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes how to

More information

EMC Celerra Manager Makes Customizing Storage Pool Layouts Easy. Applied Technology

EMC Celerra Manager Makes Customizing Storage Pool Layouts Easy. Applied Technology EMC Celerra Manager Makes Customizing Storage Pool Layouts Easy Applied Technology Abstract This white paper highlights a new EMC Celerra feature that simplifies the process of creating specific custom

More information

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2 Deploying EMC CLARiiON CX4-240 FC with View Contents Introduction... 1 Hardware and Software Requirements... 2 Hardware Resources... 2 Software Resources...2 Solution Configuration... 3 Network Architecture...

More information

COURSE OUTLINE IT TRAINING

COURSE OUTLINE IT TRAINING CMB-207-1I Citrix XenApp and XenDesktop Fast Track Duration: 5 days Overview: This fast-paced course covers select content from training courses CXA-206 and CXD- 202 and provides the foundation necessary

More information

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4

Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Reference Architecture for Dell VIS Self-Service Creator and VMware vsphere 4 Solutions for Small & Medium Environments Virtualization Solutions Engineering Ryan Weldon and Tom Harrington THIS WHITE PAPER

More information

Goliath Performance Monitor v11.7 POC Install Guide

Goliath Performance Monitor v11.7 POC Install Guide Goliath Performance Monitor v11.7 POC Install Guide Goliath Performance Monitor Proof of Concept Limitations Goliath Performance Monitor Proof of Concepts (POC) will be limited to monitoring 5 Hypervisor

More information

EMC Backup and Recovery for Oracle Database 11g Enabled by EMC Celerra NS-120 using DNFS

EMC Backup and Recovery for Oracle Database 11g Enabled by EMC Celerra NS-120 using DNFS EMC Backup and Recovery for Oracle Database 11g Enabled by EMC Celerra NS-120 using DNFS Abstract This white paper examines the performance considerations of placing Oracle Databases on Enterprise Flash

More information

Basic Configuration Installation Guide

Basic Configuration Installation Guide RecoverPoint for VMs 5.1 Basic Configuration Installation Guide P/N 302-003-975 REV 1 July 4, 2017 This document contains information on these topics: Revision History... 2 Overview... 3 Reference architecture...

More information

Surveillance Dell EMC Storage with FLIR Latitude

Surveillance Dell EMC Storage with FLIR Latitude Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information

More information

[TITLE] Virtualization 360: Microsoft Virtualization Strategy, Products, and Solutions for the New Economy

[TITLE] Virtualization 360: Microsoft Virtualization Strategy, Products, and Solutions for the New Economy [TITLE] Virtualization 360: Microsoft Virtualization Strategy, Products, and Solutions for the New Economy Mounir Chaaban & Riaz Salim Account Technology Strategist Microsoft Corporation Microsoft s Vision

More information

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Reference Architecture Guide By Roger Clark August 15, 2012 Feedback Hitachi Data Systems welcomes your feedback. Please share your

More information

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA

EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA EMC Unity Family EMC Unity All Flash, EMC Unity Hybrid, EMC UnityVSA Version 4.0 Configuring Hosts to Access VMware Datastores P/N 302-002-569 REV 01 Copyright 2016 EMC Corporation. All rights reserved.

More information

EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi

EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi EMC Solutions for Microsoft Exchange 2007 CLARiiON CX3 Series iscsi Best Practices Planning Abstract This white paper presents the best practices for optimizing performance for a Microsoft Exchange 2007

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING DESIGN GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop EMC VSPEX Abstract This describes how to design an EMC VSPEX end-user computing solution for Citrix XenDesktop using EMC ScaleIO and VMware vsphere

More information