EMC VSPEX END-USER COMPUTING


VSPEX Proven Infrastructure

EMC VSPEX END-USER COMPUTING
Citrix XenDesktop 5.6 with VMware vSphere 5.1 for up to 250 Virtual Desktops
Enabled by EMC VNXe and EMC Next-Generation Backup

EMC VSPEX

Abstract

This document describes the EMC VSPEX End-User Computing solution with Citrix XenDesktop 5.6 and VMware vSphere 5.1 for up to 250 virtual desktops.

December 2012

Copyright 2013 EMC Corporation. All rights reserved. Published in the USA. Published December 2012.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the EMC online support website.

Part Number H

Contents

Chapter 1 Executive Summary
- Introduction
- Target audience
- Document purpose
- Business needs

Chapter 2 Solution Overview
- Overview
- Desktop broker
- Virtualization
- Storage
- Network
- Compute

Chapter 3 Solution Technology Overview
- The technology solution
- Summary of key components
  - Introduction
- Desktop broker
  - Overview
  - Citrix XenDesktop
  - Machine Creation Services
  - Citrix Personal vDisk
  - Citrix Profile Manager
- Virtualization
  - Overview
  - VMware vSphere
  - VMware vCenter
  - VMware vSphere High Availability
  - EMC Virtual Storage Integrator for VMware

- VNX VMware vStorage API for Array Integration support
- Compute
- Network
- Storage
  - Overview
  - EMC VNXe series
- Backup and recovery
- Security
  - RSA SecurID two-factor authentication
  - SecurID authentication in VSPEX End-User Computing for Citrix XenDesktop environment
  - Required components
  - Compute, memory, and storage resources

Chapter 4 Solution Stack Architectural Overview
- Solution overview
- Solution architecture
  - Architecture for up to 250 virtual desktops
  - Key components
  - Hardware resources
  - Software resources
  - Sizing for validated configuration
- Server configuration guidelines
  - Overview
  - VMware vSphere memory virtualization for VSPEX
  - Memory configuration guidelines
- Network configuration guidelines
  - Overview
  - VLAN
  - Enable jumbo frames
  - Link aggregation
- Storage configuration guidelines
  - Overview
  - VMware vSphere storage virtualization for VSPEX
  - Storage layout for 250 virtual desktops
- High availability and failover
  - Introduction
  - Virtualization layer
  - Compute layer
  - Network layer

  - Storage layer
- Validation test profile
  - Profile characteristics
- Backup environment configuration guidelines
  - Backup characteristics
  - Backup layout
- Sizing guidelines
- Reference workload
  - Defining the reference workload
- Applying the reference workload
  - Concurrency
  - Heavier desktop workloads
- Implementing the solution architecture
  - Resource types
  - CPU resources
  - Memory resources
  - Network resources
  - Storage resources
  - Backup resources
  - Implementation summary
- Quick assessment
  - CPU requirements
  - Memory requirements
  - Storage performance requirements
  - Storage capacity requirements
  - Determining Equivalent Reference Virtual Desktops
  - Fine tuning hardware resources

Chapter 5 VSPEX Configuration Guidelines
- Overview
- Pre-deployment tasks
  - Overview
  - Deployment prerequisites
  - Customer configuration data
- Prepare switches, connect network, and configure switches
  - Overview
  - Prepare network switches
  - Configure infrastructure network
  - Configure VLANs

  - Complete network cabling
- Prepare and configure storage array
  - VNXe configuration
  - Provision core data storage
  - Provision optional storage for user data
  - Provision optional storage for infrastructure virtual machines
- Install and configure VMware vSphere hosts
  - Overview
  - Install ESXi
  - Configure ESXi networking
  - Jumbo frames
  - Connect VMware datastores
- Install and configure SQL Server database
  - Overview
  - Create a virtual machine for Microsoft SQL Server
  - Install Microsoft Windows on the virtual machine
  - Install SQL Server
  - Configure database for VMware vCenter
  - Configure database for VMware Update Manager
- Install and configure VMware vCenter Server
  - Overview
  - Create the vCenter host virtual machine
  - Install vCenter guest OS
  - Create vCenter ODBC connections
  - Install vCenter Server
  - Apply vSphere license keys
  - Deploy the VNX VAAI for NFS plug-in (NFS variant)
  - Install the EMC VSI plug-in
- Install and configure XenDesktop controller
  - Overview
  - Install server-side components of XenDesktop
  - Install Desktop Studio
  - Configure a site
  - Add a second controller
  - Prepare master virtual machine
  - Provision virtual desktops
- Summary

Chapter 6 Validating the Solution
- Overview

- Post-install checklist
- Deploy and test a single virtual desktop
- Verify the redundancy of the solution components

Appendix A Bills of Materials
- Bill of material for 250 virtual desktops

Appendix B Customer Configuration Data Sheet
- Customer configuration data sheets

Appendix C References
- References
  - EMC documentation
  - Other documentation

Appendix D About VSPEX
- About VSPEX


Figures

Figure 1. Solution components
Figure 2. Compute layer flexibility
Figure 3. Example of highly available network design
Figure 4. Authentication control flow for XenDesktop access requests originating on an external network
Figure 5. Authentication control flow for XenDesktop requests originating on local network
Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA
Figure 7. Logical architecture for 250 virtual desktops
Figure 8. Network diagram
Figure 9. Hypervisor memory consumption
Figure 10. Required networks
Figure 11. VMware virtual disk types
Figure 12. Core storage layout
Figure 13. Optional storage layout
Figure 14. High availability at the virtualization layer
Figure 15. Redundant power supplies
Figure 16. Network layer high availability
Figure 17. VNXe series high availability
Figure 18. Sample Ethernet network architecture
Figure 19. Virtual machine memory settings


Tables

Table 1. VNXe customer benefits
Table 2. Minimum hardware resources to support SecurID
Table 3. Solution hardware
Table 4. Solution software
Table 5. Server hardware
Table 6. Storage hardware
Table 7. Validated environment profile
Table 8. Backup profile characteristics
Table 9. Virtual desktop characteristics
Table 10. Blank worksheet row
Table 11. Reference virtual desktop resources
Table 12. Example worksheet row
Table 13. Example applications
Table 14. Server resource component totals
Table 15. Blank customer worksheet
Table 16. Deployment process overview
Table 17. Tasks for pre-deployment
Table 18. Deployment prerequisites checklist
Table 19. Tasks for switch and network configuration
Table 20. Tasks for storage configuration
Table 21. Tasks for server installation
Table 22. Tasks for SQL Server database setup
Table 23. Tasks for vCenter configuration
Table 24. Tasks for XenDesktop controller setup
Table 25. Tasks for testing the installation
Table 26. List of components used in the VSPEX solution for 250 virtual desktops
Table 27. Common server information
Table 28. ESXi server information
Table 29. Array information
Table 30. Network infrastructure information
Table 31. VLAN information
Table 32. Service accounts


Chapter 1 Executive Summary

This chapter presents the following topics:
- Introduction
- Target audience
- Document purpose
- Business needs

Introduction

VSPEX validated and modular architectures are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens. When embarking on server virtualization, virtual desktop deployment, or IT consolidation, VSPEX accelerates your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

This document is intended to be a comprehensive guide to the technical aspects of this solution. Server capacity is provided in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select the server and networking hardware that meet or exceed the stated minimums.

Target audience

The reader of this document is expected to have the necessary training and background to install and configure an End-User Computing solution based on Citrix XenDesktop with VMware vSphere as a hypervisor, EMC VNXe series storage systems, and the associated infrastructure required by this implementation. External references are provided where applicable, and it is recommended that the reader be familiar with these documents. Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation.

Individuals focused on selling and sizing a VSPEX End-User Computing for Citrix XenDesktop solution should pay particular attention to the first four chapters of this document. After purchase, implementers of the solution will want to focus on the configuration guidelines in Chapter 5, the solution validation in Chapter 6, and the appropriate references and appendices.
Document purpose

This document provides an initial introduction to the VSPEX End-User Computing architecture, an explanation of how to modify the architecture for specific engagements, and instructions on how to effectively deploy the system.

The VSPEX End-User Computing architecture provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution runs on VMware's vSphere virtualization layer, backed by the highly available VNX storage family, with Citrix's XenDesktop as the desktop broker. The compute and network components, while customer-definable, are laid out to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual machine environment.

The 250 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same requirements, this document contains methods and guidance to adjust your system to be cost effective when

deployed. For larger environments, solutions for up to 2,000 virtual desktops based on the EMC VNX series are described in EMC VSPEX End-User Computing: Citrix XenDesktop 5.6 with VMware vSphere 5.1 for up to 2000 Virtual Desktops.

An end-user computing or virtual desktop architecture is a complex system offering. This document facilitates its setup by providing up-front software and hardware material lists, systematic sizing guidance and worksheets, and verified deployment steps. When the last component has been installed, there are validation tests to ensure that your system is up and running properly. Following the guidance in this document will ensure an efficient and painless desktop deployment.

Business needs

VSPEX solutions are built with proven best-of-breed technologies to create complete virtualization solutions that enable you to make an informed decision in the hypervisor, server, and networking layers. VSPEX solutions accelerate your IT transformation by enabling faster deployments, more choice, greater efficiency, and lower risk.

Business applications are moving into the consolidated compute, network, and storage environment. EMC VSPEX End-User Computing using Citrix reduces the complexity of configuring every component of a traditional deployment model. The complexity of integration management is reduced while maintaining the application design and implementation options. Administration is unified, while process separation can be adequately controlled and monitored.

The following are the business needs for the VSPEX End-User Computing for Citrix architectures:
- Provide an end-to-end virtualization solution that utilizes the capabilities of the unified infrastructure components.
- Provide a VSPEX for Citrix End-User Computing solution for efficiently virtualizing 250 virtual desktops for varied customer use cases.
- Provide a reliable, flexible, and scalable reference design.


Chapter 2 Solution Overview

This chapter presents the following topics:
- Overview
- Desktop broker
- Virtualization
- Storage
- Network
- Compute

Overview

The EMC VSPEX End-User Computing for Citrix XenDesktop on VMware vSphere 5.1 solution provides a complete system architecture capable of supporting up to 250 virtual desktops with a redundant server/network topology and highly available storage. The core components that make up this particular solution are the desktop broker, virtualization, storage, compute, and networking.

Desktop broker

XenDesktop is the virtual desktop solution from Citrix that allows virtual desktops to run on the VMware vSphere virtualization environment. It enables centralized desktop management and provides increased control for IT organizations. XenDesktop allows end users to connect to their desktops from multiple devices across a network connection.

Virtualization

VMware vSphere is the leading virtualization platform in the industry. For years it has provided flexibility and cost savings to end users by enabling the consolidation of large, inefficient server farms into nimble, reliable cloud infrastructures. The core VMware vSphere components are the VMware vSphere Hypervisor and the VMware vCenter Server for system management.

The VMware hypervisor runs on a dedicated server and allows multiple operating systems to run on the system at one time as virtual machines. These hypervisor systems can be connected to operate in a clustered configuration. Clustered configurations are then managed as a larger resource pool through the vCenter product, which allows dynamic allocation of CPU, memory, and storage across the cluster. Features like vMotion, which allows a virtual machine to move between different servers with no disruption to the operating system, and Distributed Resource Scheduler (DRS), which performs vMotion migrations automatically to balance load, make vSphere a solid business choice. With the release of vSphere 5.1, a VMware virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual RAM.
Storage

The EMC VNX storage family is the number one shared storage platform in the industry. Its ability to provide both file and block access with a broad feature set makes it an ideal choice for any End-User Computing implementation. The VNXe storage components, sized for the stated reference architecture workload, include the following:
- Host adapter ports: Provide host connectivity via fabric into the array.

- Storage processors: The compute component of the storage array, responsible for all aspects of data moving into, out of, and between arrays, and for protocol support.
- Disk drives: The actual spindles that contain the host/application data, and their enclosures.

The 250 virtual desktop solution discussed in this document is based on the VNXe3300 storage array. The VNXe3300 can host up to 150 drives.

The EMC VNXe series supports a wide range of business-class features ideal for the End-User Computing environment, including:
- Thin provisioning
- Replication
- Snapshots
- File deduplication and compression
- Quota management, and many more

Network

VSPEX allows the flexibility of designing and implementing your choice of network components. The infrastructure must conform to the following attributes:
- Redundant network links for the hosts, switches, and storage
- Support for link aggregation
- Traffic isolation based on industry-accepted best practices

Compute

VSPEX allows the flexibility of designing and implementing the vendor's choice of server components. The infrastructure must conform to the following attributes:
- Sufficient processor cores and RAM to support the required number and types of virtual machines
- Sufficient network connections to enable redundant connectivity to the system switches
- Excess capacity to withstand a server failure and allow failover in the environment


Chapter 3 Solution Technology Overview

This chapter presents the following topics:
- The technology solution
- Summary of key components
- Desktop broker
- Virtualization
- Compute
- Network
- Storage
- Backup and recovery
- Security

The technology solution

This solution uses EMC VNXe3300 and VMware vSphere 5.1 to provide the storage and compute resources for a Citrix XenDesktop 5.6 environment of Microsoft Windows 7 virtual desktops provisioned by Machine Creation Services (MCS).

Figure 1. Solution components

Planning and designing the storage infrastructure for a Citrix XenDesktop environment is a critical step, because the shared storage must be able to absorb the large bursts of input/output (I/O) that occur over the course of a workday. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users may adapt to slow performance, but unpredictable performance will frustrate them and reduce efficiency. To provide predictable performance for a virtual desktop infrastructure, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum.

EMC Next-Generation Backup enables protection of user data and end-user recoverability. This is accomplished by leveraging EMC Avamar and its desktop client within the desktop image.

Summary of key components

Introduction

This section describes the key components of this solution.

Desktop broker: The desktop virtualization broker manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, to allow maintenance of the image without impacting user productivity, and to prevent the environment from growing in an unconstrained way.

Virtualization: The virtualization layer allows the physical implementation of resources to be decoupled from the applications that use them. In other words, the application's view of the resources available to it is no longer directly tied to the hardware. This enables many key features in the End-User Computing concept.

Compute: The compute layer provides memory and processing resources for the virtualization layer software as well as for the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required, but allows the customer to implement the requirements using any compute hardware that meets them.

Network: The network layer connects the users of the environment to the resources they need, and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets them.

Storage: The storage layer is a critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of activity as they occur without unduly impacting the user experience.
Backup and recovery: The optional backup and recovery components of the solution provide data protection in the event that the data in the primary system is deleted, damaged, or otherwise unusable.

Security: The optional security component of the solution, from RSA, provides consumers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.

Desktop broker

Solution architecture provides details on all the components that make up the reference architecture.

Overview

Desktop virtualization encapsulates and delivers users' desktops to remote client devices, which can include thin clients, zero clients, smartphones, and tablets. It allows subscribers in different locations to access virtual desktops hosted on centralized computing resources at remote data centers. In this solution, Citrix XenDesktop is used to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 5.6

Citrix XenDesktop transforms Windows desktops into an on-demand service for any user, on any device, anywhere. XenDesktop quickly and securely delivers any type of virtual desktop, or any type of Windows, web, or SaaS application, to all the latest PCs, Macs, tablets, smartphones, laptops, and thin clients, and does so with a high-definition user experience (HDX). FlexCast delivery technology enables IT to optimize the performance, security, and cost of virtual desktops for any type of user, including task workers, mobile workers, power users, and contractors. XenDesktop helps IT rapidly adapt to business initiatives by simplifying desktop delivery and enabling user self-service. The open, scalable, and proven architecture simplifies management, support, and integration.

Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism introduced in XenDesktop 5.0. It is integrated with the XenDesktop management interface, Desktop Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management. MCS allows several types of machines to be managed within a catalog in Desktop Studio, including dedicated and pooled machines. Desktop customization is persistent for dedicated machines, while a pooled machine is required if a nonpersistent desktop is appropriate.
In this solution, 250 persistent virtual desktops running Windows 7 were provisioned by using MCS. The desktops were deployed from two dedicated machine catalogs.

Citrix Personal vDisk

The Citrix Personal vDisk feature was introduced in Citrix XenDesktop 5.6. With Personal vDisk, users can preserve customization settings and user-installed applications in a pooled desktop. This capability is accomplished by redirecting the changes from the user's pooled VM to a separate disk called the Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content of the base VM to provide a unified experience to the end user. The Personal vDisk data is preserved across reboot and refresh operations.

Citrix Profile Manager 4.1

Citrix Profile Manager 4.1 preserves user profiles and dynamically synchronizes them with a remote profile repository. Citrix Profile Manager ensures that the user's

personal settings are applied to desktops and applications regardless of login location or client device. The combination of Citrix Profile Manager and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization. With Citrix Profile Manager, a user's remote profile is dynamically downloaded when the user logs in to a Citrix XenDesktop. Profile Manager downloads user profile information only when the user needs it.

Virtualization

Overview

The virtualization layer is a key component of any End-User Computing solution. It allows the application resource requirements to be decoupled from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and even allows the physical capability of the system to change without impacting the hosted applications.

VMware vSphere 5.1

VMware vSphere 5.1 transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications just like physical computers. High-availability features of VMware vSphere 5.1 such as vMotion and Storage vMotion enable seamless migration of virtual machines and stored files from one vSphere server to another with minimal or no performance impact. Coupled with vSphere DRS and Storage DRS, virtual machines have access to the appropriate resources at any point in time through load balancing of compute and storage resources.

In this solution, VMware vSphere 5.1 is used to build the virtualization layer.

VMware vCenter

VMware vCenter is a centralized management platform for the VMware virtual infrastructure.
It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure, and can be accessed from multiple devices. VMware vCenter also manages some of the more advanced features of the VMware virtual infrastructure, such as VMware vSphere High Availability and Distributed Resource Scheduler (DRS), along with vMotion and Update Manager.

VMware vSphere High Availability

The VMware vSphere High Availability feature allows the virtualization layer to automatically restart virtual machines in various failure conditions. If the virtual machine operating system has an error, the virtual machine can be automatically restarted on the same hardware.

Note: If the physical hardware has an error, the impacted virtual machines can be automatically restarted on other servers in the cluster. To restart virtual machines on different hardware, those servers need to have resources available. There are specific recommendations in the Compute section below to enable this functionality.

VMware vSphere High Availability allows you to configure policies to determine which machines are restarted automatically, and under what conditions these operations should be attempted.

EMC Virtual Storage Integrator for VMware

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere Client that provides a single management interface for managing EMC storage within the vSphere environment. Features can be added to and removed from VSI independently, which provides flexibility for customizing VSI user environments. Features are managed by using the VSI Feature Manager. VSI provides a unified user experience, which allows new features to be introduced rapidly in response to changing customer requirements.

The following features are used during the validation testing:
- Storage Viewer (SV): Extends the vSphere Client to facilitate the discovery and identification of EMC VNXe storage devices that are allocated to VMware vSphere hosts and virtual machines. SV presents the underlying storage details to the virtual datacenter administrator, merging the data of several different storage mapping tools into a few seamless vSphere Client views.
- Unified Storage Management: Simplifies storage administration of the EMC VNX unified storage platform. It enables VMware administrators to provision new Network File System (NFS) and Virtual Machine File System (VMFS) datastores and RDM volumes seamlessly within the vSphere Client.

Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.
VNX VMware vStorage API for Array Integration support

Hardware acceleration with VMware vStorage API for Array Integration (VAAI) is a storage enhancement in vSphere 5.1 that enables vSphere to offload specific storage operations to compatible storage hardware such as the VNXe series platforms. With storage hardware assistance, vSphere performs these operations faster and consumes less CPU, memory, and storage fabric bandwidth.

Compute

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a given number of servers with a specific set of requirements, VSPEX documents a number of processor cores and an amount of RAM that must be achieved. This can be implemented with 2 servers or 20 and still be considered the same VSPEX solution.

For example, assume that the compute layer requirement for a given implementation is 25 processor cores and 200 GB of RAM. One customer might implement these using white-box servers containing 16 processor cores and 64 GB of RAM each, while a second customer might select a higher-end server with 20 processor cores and 144 GB of RAM. The first customer needs four of the chosen servers, while the second customer needs two, as shown in Figure 2.
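The server-count arithmetic in the example above can be sketched as a small helper. This is an illustrative calculation only, not an EMC sizing tool; the function name and server specifications are taken from the hypothetical example in the text.

```python
import math

def servers_needed(req_cores, req_ram_gb, cores_per_server, ram_per_server_gb):
    """Return how many identical servers satisfy both the core and RAM minimums."""
    by_cores = math.ceil(req_cores / cores_per_server)
    by_ram = math.ceil(req_ram_gb / ram_per_server_gb)
    # The binding constraint is whichever resource runs out first.
    return max(by_cores, by_ram)

# Example from the text: 25 processor cores and 200 GB of RAM required.
print(servers_needed(25, 200, 16, 64))   # white-box servers -> 4
print(servers_needed(25, 200, 20, 144))  # higher-end servers -> 2
```

Note that the RAM requirement, not the core count, drives the white-box result: two such servers would cover the cores, but four are needed to reach 200 GB of RAM.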

Figure 2. Compute layer flexibility

Note: To enable high availability at the compute layer, each customer needs one additional server with sufficient capacity to provide a failover platform in the event of a hardware outage.

The following best practices should be observed in the compute layer:
- Use a number of identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies, which may require similar instruction sets on the underlying physical hardware. Implementing VSPEX on identical server units minimizes compatibility problems in this area.
- If you are implementing hypervisor-layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
- Implement the high-availability features available in the virtualization layer, and ensure that the compute layer has sufficient

resources to accommodate at least single-server failures. This allows you to implement minimal-downtime upgrades and tolerate single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be very flexible to meet your specific needs. The key constraint is the provision of sufficient processor cores and RAM per core to meet the needs of the target environment.

Network

The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. It is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution. An example of this kind of highly available network topology is depicted in Figure 3.

Figure 3. Example of highly available network design

This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

EMC unified storage platforms provide network high availability, or redundancy, by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address, and potentially multiple IP addresses. In this solution, Link Aggregation Control Protocol (LACP) is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If one of the aggregated links is lost, traffic fails over to a surviving port, and all network traffic is distributed across the active links.
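The behavior described above, where flows are spread across active links and move to a surviving port when a link drops, can be modeled with a toy hash-based selector. This is a conceptual sketch of how LACP-style aggregation distributes traffic, not VNXe or switch code; the port names and flow tuple are hypothetical.

```python
def select_link(flow, active_links):
    """Pick a link for a flow by hashing its (src, dst) tuple.
    A given flow always lands on the same link while membership is stable."""
    if not active_links:
        raise RuntimeError("all links in the aggregate are down")
    return active_links[hash(flow) % len(active_links)]

links = ["eth2", "eth3"]           # ports bundled into one logical LACP device
flow = ("10.0.0.5", "10.0.1.9")    # hypothetical client/server address pair

primary = select_link(flow, links)
links.remove(primary)              # simulate losing the link this flow was using
failover = select_link(flow, links)
assert failover != primary         # traffic moves to a surviving port
```

Real implementations hash on MAC/IP/port tuples chosen by the configured load-balancing policy, but the effect is the same: per-flow ordering is preserved while aggregate bandwidth and redundancy scale with the number of active links.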

Storage

Overview

The storage layer is a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. Consolidating storage in this layer increases storage efficiency and management flexibility, and reduces total cost of ownership. In this VSPEX solution, EMC VNXe series arrays provide virtualization at the storage layer.

EMC VNXe series

The EMC VNX family is optimized for virtual applications, delivering industry-leading innovation and enterprise capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's enterprises.

The VNXe series is powered by Intel Xeon processors, for intelligent storage that automatically and efficiently scales in performance, while ensuring data integrity and security. The VNXe series is purpose-built for the IT manager in smaller environments, and is designed to meet the high-performance, high-scalability requirements of midsize and large enterprises. Table 1 lists the VNXe customer benefits.

Table 1. VNXe customer benefits

- Next-generation unified storage, optimized for virtualized applications
- Capacity optimization features including compression, deduplication, thin provisioning, and application-centric copies
- High availability, designed to deliver five 9s availability
- Multiprotocol support for file and block
- Simplified management with EMC Unisphere, a single management interface for all NAS, SAN, and replication needs

Software suites available:

- Local Protection Suite: Increases productivity with snapshots of production data.
- Remote Protection Suite: Protects data against localized failures, outages, and disasters.
- Application Protection Suite: Automates application copies and proves compliance.
- Security and Compliance Suite: Keeps data safe from changes, deletions, and malicious activity.

Software packs available:

- VNXe3300 Total Protection Pack: Includes the Local Protection, Remote Protection, and Application Protection Suites.

Backup and recovery

EMC Avamar data deduplication technology integrates seamlessly into virtual environments, providing rapid backup and restore capabilities. Avamar's deduplication results in far less data traversing the network, and greatly reduces the amount of data being backed up and stored, translating into storage, bandwidth, and operational savings.

The following are two of the most common recovery requests made to backup administrators:

- File-level recovery: Object-level recoveries account for the vast majority of user support requests. Common actions requiring file-level recovery are individual users deleting files, applications requiring recoveries, and batch process-related erasures.
- System recovery: Although complete system recovery requests are less frequent than file-level recovery requests, this bare-metal restore capability is vital to the enterprise. Common root causes for full system recovery requests are viral infestation, registry corruption, and unidentifiable, unrecoverable issues.

Avamar's functionality, in conjunction with VMware, adds new capabilities for backup and recovery in both of these scenarios. Key VMware capabilities such as vStorage API integration and Changed Block Tracking (CBT) enable the Avamar software to protect the virtual environment more efficiently. Leveraging CBT for both backup and recovery, together with pools of virtual proxy servers, minimizes management overhead. Coupled with Data Domain as the storage platform for image data, this solution enables highly efficient integration with two industry-leading next-generation backup appliances.
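The bandwidth and storage savings from deduplication come from storing each unique block of data only once, however many desktops contain it. The sketch below illustrates the idea with fixed-size chunks and SHA-1 fingerprints; Avamar itself uses variable-length segments and its own hashing, so treat this purely as a conceptual model:

```python
import hashlib

CHUNK = 4096  # fixed chunk size for illustration; Avamar uses variable-length segments

def backup(image: bytes, store: dict) -> list:
    """Chunk an image, store each unique chunk once, and return its recipe."""
    recipe = []
    for i in range(0, len(image), CHUNK):
        chunk = image[i:i + CHUNK]
        digest = hashlib.sha1(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks cost no extra storage
        recipe.append(digest)             # the recipe alone can rebuild the image
    return recipe

store = {}
desktop_a = b"\x00" * 8192 + b"user-data-A"   # two identical "OS" chunks + a unique tail
desktop_b = b"\x00" * 8192 + b"user-data-B"
recipe_a = backup(desktop_a, store)
recipe_b = backup(desktop_b, store)
restored = b"".join(store[d] for d in recipe_a)
print(len(recipe_a) + len(recipe_b), "chunks written,", len(store), "stored")  # 6 chunks written, 3 stored
```

Because the two desktop images share their zero-filled chunks, six logical chunks collapse to three stored chunks, which is exactly the effect that reduces backup traffic and capacity for largely identical virtual desktops.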
Security

RSA SecurID two-factor authentication

RSA SecurID two-factor authentication can provide enhanced security for the VSPEX end-user computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase, consisting of:

- Something the user knows: a PIN, which is used like any other PIN or password.
- Something the user has: a token code, provided by a physical or software token, which changes every 60 seconds.

The typical use case deploys SecurID to authenticate users accessing protected resources from an external or public network. Access requests originating from within a secure network are authenticated by traditional mechanisms involving Active Directory or LDAP. A configuration description for implementing SecurID is available for the VSPEX end-user computing infrastructures.

SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, and high availability. The Citrix NetScaler network appliance and Citrix Storefront enable streamlined integration of SecurID into the XenDesktop environment (as well as XenApp and other Citrix virtualization products).

SecurID authentication in the VSPEX End-User Computing for Citrix XenDesktop environment

For external access requests into the VSPEX End-User Computing with Citrix XenDesktop environment, the user is challenged for a user ID, SecurID passphrase, and Active Directory password in a single dialog. Upon successful authentication, the user is logged in directly to his or her virtual desktop. Internal request authentication is carried out against Active Directory only. Figure 4 describes the authentication flow for an external access request to the XenDesktop environment.

Figure 4. Authentication control flow for XenDesktop access requests originating on an external network

Note: Authentication policies set on NetScaler's Access Gateway Enterprise Edition (AGEE) control authentication against SecurID and Active Directory.

The internal access authentication flow is shown in Figure 5 on page 34. Active Directory authentication is initiated from within Citrix Storefront.

Figure 5. Authentication control flow for XenDesktop requests originating on the local network

Note: Users are authenticated against Active Directory only.

Required components

Enablement of SecurID for this VSPEX solution is described in Securing VSPEX Citrix XenDesktop 5.6 End-User Computing Solutions with RSA Design Guide. The following components are required:

- RSA SecurID Authentication Manager (version 7.1 SP4): Used to configure and manage the SecurID environment and assign tokens to users. Authentication Manager 7.1 SP4 is available as an appliance or as an installable feature on a Windows Server 2008 R2 instance. Future versions of Authentication Manager will be available as a physical or virtual appliance only.
- SecurID tokens for all users: SecurID requires something the user knows (a PIN) combined with a constantly changing code from a token the user possesses. SecurID tokens may be physical, displaying a new code every 60 seconds which the user must enter with a PIN, or software-based, wherein the user supplies a PIN and the token code is supplied programmatically. Hardware and software tokens are registered with Authentication Manager through token records supplied on a CD or other media.
- Citrix NetScaler network appliance (version 10 or higher): NetScaler's Access Gateway functionality manages RSA SecurID (primary) and Active Directory (secondary) authentication of access requests originating on public or external networks. NetScaler also provides load-balancing capability supporting high availability of Authentication Manager and Citrix Storefront servers.
- Citrix Storefront (version 1.2 or higher): Storefront, also known as CloudGateway Express, provides authentication and other services, and presents users' desktops to browser-based or mobile Citrix clients.
- Citrix Receiver: Receiver provides a user interface through which the user interacts with the virtual desktop or other Citrix virtual environments such as XenApp or XenServer. In the context of this solution, the user client is considered a generic user endpoint, so versions of the Receiver client and options and optimizations for it are not addressed.

Compute, memory, and storage resources

Figure 6 depicts the VSPEX End-User Computing for Citrix XenDesktop environment with added infrastructure to support SecurID. All necessary components can run in a redundant, high-availability configuration on two or more VMware vSphere hosts with a minimum total of twelve CPU cores (sixteen recommended) and sixteen gigabytes of RAM. Table 2 on page 36 summarizes these requirements.

Figure 6. Logical architecture: VSPEX End-User Computing for Citrix XenDesktop with RSA

Table 2. Minimum hardware resources to support SecurID (CPU cores, memory in GB, storage in GB, and SQL database requirements per component)

- RSA Authentication Manager: SQL database n/a. Reference: RSA Authentication Manager 7.1 Performance and Scalability Guide.
- Citrix NetScaler VPX: SQL database n/a. Reference: Citrix NetScaler VPX Getting Started Guide.
- Citrix Storefront: SQL database sized in MB per 100 users.

* This capacity can probably be drawn from pre-existing SQL Servers defined in the VSPEX reference architectures.
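The token code that changes every 60 seconds is the "something the user has" factor of the passphrase. RSA SecurID's token algorithm is proprietary, so the sketch below only illustrates the general shape of a time-stepped code using a standard HMAC construction over a shared seed; the seed value, PIN, and function names are all hypothetical:

```python
import hashlib
import hmac
import struct

STEP = 60  # seconds per token interval, as described in the text

def token_code(seed: bytes, now: int, digits: int = 6) -> str:
    """Derive a short numeric code from a shared seed and the current time step.

    Illustrative only: RSA SecurID uses its own proprietary algorithm,
    not this HMAC-SHA1 construction.
    """
    counter = struct.pack(">Q", now // STEP)
    mac = hmac.new(seed, counter, hashlib.sha1).digest()
    return str(int.from_bytes(mac[:4], "big") % 10 ** digits).zfill(digits)

seed = b"per-token-secret"        # hypothetical seed provisioned per token
code = token_code(seed, 999_960)  # stable anywhere inside one 60-second step
passphrase = "1234" + code        # PIN ("know") combined with code ("have")
```

The key property shown here is that the code is a pure function of the seed and the 60-second time step, so the server, holding the same seed, can compute the expected code independently and compare it to what the user submits.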

Chapter 4 Solution Stack Architectural Overview

This chapter presents the following topics:

- Solution overview
- Solution architecture
- Server configuration guidelines
- Network configuration guidelines
- Storage configuration guidelines
- High availability and failover
- Validation test profile
- Backup environment configuration guidelines
- Sizing guidelines
- Reference workload
- Applying the reference workload
- Implementing the solution architecture
- Quick assessment

Solution overview

VSPEX Proven Infrastructure solutions are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor, compute, and networking layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC. VSPEX accelerates your IT transformation to cloud-based computing by enabling faster deployment, more choice, higher efficiency, and lower risk.

Solution architecture

This section includes a comprehensive guide to the major aspects of this solution. Server capacity is specified in generic terms for required minimums of CPU, memory, and network interfaces; the customer is free to select any server and networking hardware that meets or exceeds the stated minimums. The specified storage architecture, along with a system meeting the server and network requirements outlined, has been validated by EMC to provide high levels of performance while delivering a highly available architecture for your end-user computing deployment.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual desktops, as validated by EMC. In practice, each virtual desktop type has its own set of requirements that rarely fit a predefined idea of what a virtual desktop should be. In any discussion about virtual infrastructures, it is important to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

The VSPEX End-User Computing solution with EMC VNXe is validated with up to 250 virtual machines. These defined configurations form the basis of creating a custom solution, and these points of scale are defined in terms of the reference workload.

Note: VSPEX uses the concept of a reference workload to describe and define a virtual machine. Therefore, one physical or virtual desktop in an existing environment may not be equal to one virtual desktop in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. Details of that process are described in Applying the reference workload.

Architecture for up to 250 virtual desktops

The architecture diagram in Figure 7 on page 39 shows the layout of the major components comprising the solution.

Figure 7. Logical architecture for 250 virtual desktops

Note: The networking components of the solution can be implemented using 1 Gb or 10 Gb IP networks, provided that sufficient bandwidth and redundancy are provided to meet the listed requirements.

Key components

- Citrix XenDesktop 5.6 controller: Two Citrix XenDesktop controllers provide redundant virtual desktop delivery, authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. In this reference architecture, the controllers are installed on Windows Server 2008 R2 and hosted as virtual machines on VMware vSphere 5.1 servers.
- Virtual desktops: The 250 persistent virtual desktops running Windows 7 are provisioned using Machine Creation Services (MCS), a provisioning mechanism introduced in XenDesktop 5.0.
- VMware vSphere 5.1: Provides a common virtualization layer to host a server environment containing the virtual machines. The specifics of the validated environment are listed in Table 9 on page 56. vSphere 5.1 provides a highly available infrastructure through features such as:
  - vMotion: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption.
  - Storage vMotion: Provides live migration of virtual machine disk files within and across storage arrays, with no virtual machine downtime or service disruption.
  - vSphere High Availability (HA): Detects failures and provides rapid recovery for failed virtual machines in a cluster.

  - Distributed Resource Scheduler (DRS): Provides load balancing of computing capacity in a cluster.
  - Storage Distributed Resource Scheduler (SDRS): Provides load balancing across multiple datastores, based on space use and I/O latency.
- VMware vCenter Server 5.1: Provides a scalable and extensible platform that forms the foundation for virtualization management of the VMware vSphere 5.1 cluster. All vSphere hosts and their virtual machines are managed through vCenter.
- VSI for VMware vSphere: EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides storage management for EMC arrays directly from the client. VSI is highly customizable and helps provide a unified management interface.
- Active Directory server: Active Directory services are required for the various solution components to function properly. The Microsoft Active Directory Domain Services running on a Windows Server 2012 server are used for this purpose.
- DHCP server: Centrally manages the IP address scheme for the virtual desktops. This service is hosted on the same virtual machine as the domain controller and DNS server. The Microsoft DHCP Service running on a Windows Server 2012 server is used for this purpose.
- DNS server: DNS services are required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose.
- SQL Server: The Citrix XenDesktop controllers and VMware vCenter Server require a database service to store configuration details. A Microsoft SQL Server 2008 instance is used for this purpose, hosted as a virtual machine on a VMware vSphere 5.1 server.
- Gigabit Ethernet (GbE) IP network: The Ethernet network infrastructure provides 1 GbE connectivity between virtual desktops, vSphere clusters, and VNXe storage. It also allows desktop users to redirect their roaming profiles and home directories to the centrally maintained CIFS shares on the VNXe. The desktop clients, XenDesktop management components, and Windows server infrastructure can also reside on the 1 GbE network, but on a different pair of network interfaces.
- EMC VNXe3300 series: Provides storage over IP (NFS) connections for virtual desktops, and for infrastructure virtual machines such as Citrix XenDesktop controllers, VMware vCenter Servers, Microsoft SQL Server databases, and other supporting services. Optionally, user profiles and home directories are redirected to CIFS network shares on the VNXe3300.
- IP/storage networks: All network traffic is carried by a standard Ethernet network with redundant cabling and switching. User and management traffic is carried over a shared network, while NFS storage traffic is carried over a private, non-routable subnet.
- EMC Avamar Virtual Edition: Provides the platform for protection of virtual machines. This protection strategy leverages persistent virtual desktops, and supports both image-level protection and end-user recoveries.

VNXe series storage arrays include the following components:

- Storage processors (SPs): Support block and file data with UltraFlex I/O technology supporting the iSCSI and NFS protocols. The SPs provide access for all external hosts and for the file side of the VNXe array.
- Battery backup units: Battery units within each storage processor that provide enough power to ensure that any data in flight is de-staged to the vault area in the event of a power failure, so that no writes are lost. Upon restart of the array, the pending writes are reconciled and persisted.
- Disk-array enclosures (DAEs): House the drives used in the array.

Hardware resources

Table 3 lists the hardware used in this solution.

Table 3. Solution hardware

Servers for virtual desktops (total server capacity required to host 250 virtual desktops):
- Memory: 2 GB RAM per desktop (500 GB RAM across all servers)
- CPU: 1 vCPU per desktop (eight desktops per core; 32 cores across all servers)
- Network: six 1 GbE NICs per server

NFS network infrastructure (redundant LAN configuration), minimum switching capability:
- Six 1 GbE ports per vSphere server
- Four 1 GbE ports per storage processor

EMC VNXe3300:
- Two storage processors, with four 1 GbE interfaces per storage processor
- Twenty-two 300 GB, 15k rpm 3.5-inch SAS disks (three RAID-5 performance packs): VNXe shared storage for virtual desktops
- Thirteen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks: optional, for user data
- Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack): optional, for infrastructure storage

Servers for customer infrastructure (these servers and the roles they fulfill may already exist in the customer environment), minimum number required: two physical servers, each with:
- 20 GB RAM
- Four processor cores
- Two 1 GbE ports

Software resources

Table 4 lists the software used in this solution.

Table 4. Solution software

VNXe3300 (shared storage, file systems)

XenDesktop desktop virtualization:
- Citrix XenDesktop Controller: version 5.6 Platinum Edition
- Operating system for XenDesktop Controller: Windows Server 2008 R2 Standard Edition
- Microsoft SQL Server: version 2008 R2 Standard Edition

Next-generation backup:
- Avamar Virtual Edition (2 TB): 6.1 SP1

VMware vSphere:
- vSphere server: 5.1
- vCenter Server: 5.1
- Operating system for vCenter Server: Windows Server 2008 R2 Standard Edition

Virtual desktops (note: beyond the base operating system, this software was used for solution validation and is not required):
- Base operating system: Microsoft Windows 7 Standard (32-bit) SP1
- Microsoft Office: Office Enterprise 2007 SP3
- Internet Explorer
- Adobe Reader: 9.1
- McAfee VirusScan: 8.7.0i Enterprise
- Adobe Flash Player: 10
- Bullzip PDF Printer
- FreeMind

Sizing for validated configuration

When selecting servers for this solution, the processor cores should meet or exceed the performance of the Intel Nehalem family at 2.66 GHz. As servers with greater processor speeds, performance, and higher core density become available, servers may be consolidated, as long as the required total core and memory counts are met and a sufficient number of servers are incorporated to support the necessary level of high availability.

As with servers, the speed and quantity of network interface cards (NICs) may also be consolidated, as long as the overall bandwidth requirements for this solution and the redundancy necessary to support high availability are maintained. A configuration of four servers, each with two four-core sockets, 128 GB of RAM, and six 1 GbE NICs, supports this solution with a total of 32 cores and 512 GB of RAM. As shown in Table 3 on page 41, a minimum of one core is required to support eight virtual desktops, with a minimum of 2 GB of RAM for each desktop. The correct balance of memory and cores for the expected number of virtual desktops supported by a server must also be taken into account. For example, a server expected to support 24 virtual desktops requires a minimum of three cores and a minimum of 48 GB of RAM.

Figure 8. Network diagram

IP network switches used to implement this reference architecture must have a minimum backplane capacity of 48 Gb/s non-blocking and support the following features:

- IEEE 802.1x
- Ethernet flow control
- 802.1q VLAN tagging
- Ethernet link aggregation using the IEEE 802.1ax (802.3ad) Link Aggregation Control Protocol
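The per-server rule above (eight desktops per core, 2 GB of RAM per desktop) reduces to a short calculation. The helper below is only an illustration of that arithmetic, not part of any VSPEX tooling:

```python
import math

DESKTOPS_PER_CORE = 8   # from the validated configuration
RAM_PER_DESKTOP_GB = 2

def server_minimums(desktops: int) -> tuple:
    """Return the minimum (cores, RAM in GB) needed for a desktop count."""
    cores = math.ceil(desktops / DESKTOPS_PER_CORE)
    ram_gb = desktops * RAM_PER_DESKTOP_GB
    return cores, ram_gb

print(server_minimums(24))   # (3, 48): the worked example in the text
print(server_minimums(250))  # (32, 500): the full 250-desktop solution
```

Note that the core count rounds up, which is why 250 desktops require 32 cores rather than the fractional 31.25.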

- SNMP management capability
- Jumbo frames

Choose switches that support high availability, and choose a network vendor based on the availability of parts, service, and support contracts. In addition to the above features, the network configuration should include:

- A minimum of two switches to support redundancy
- Redundant power supplies
- A minimum of forty 1 GbE ports (distributed for high availability)
- Appropriate uplink ports for customer connectivity

The use of 10 GbE ports should align with the ports on the server and storage, while keeping in mind the overall network requirements for this solution and the level of redundancy needed to support high availability. Additional server NICs and storage connections should also be considered based on customer or implementation-specific requirements.

The management infrastructure (Active Directory, DNS, DHCP, and SQL Server) can be supported on two servers similar to those previously defined, but requires a minimum of only 20 GB of RAM instead of 128 GB. The disk storage layout is explained in Storage configuration guidelines.

Server configuration guidelines

Overview

When designing and ordering the compute/server layer of the VSPEX solution described below, several factors may alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement. If the virtual machine pool does not have a high level of peak or concurrent usage, the number of vCPUs can be reduced. Conversely, if the applications being deployed are highly computational in nature, the number of CPUs and the amount of memory purchased may need to be increased.

Table 5. Server hardware

Servers for virtual desktops (total server capacity required to host 250 virtual desktops):
- Memory: 2 GB RAM per desktop (500 GB RAM across all servers)
- CPU: 1 vCPU per desktop (eight desktops per core; 32 cores across all servers)
- Network: six 1 GbE NICs per server

VMware vSphere memory virtualization for VSPEX

VMware vSphere 5.1 has a number of advanced features that help maximize performance and overall resource utilization. The most important of these are in the area of memory management. This section describes some of these features and the items to consider when using them in the environment.

In general, virtual machines on a single hypervisor consume memory as a pool of resources. Figure 9 shows an example of memory consumption at the hypervisor level.

Figure 9. Hypervisor memory consumption

This basic concept is enhanced by understanding the technologies presented in this section.

Memory overcommitment

Memory overcommitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vSphere is able to handle memory overcommitment without performance degradation. However, if more memory than is physically present on the server is actively used, vSphere might resort to swapping out portions of a virtual machine's memory.

Non-Uniform Memory Access (NUMA)

vSphere uses a NUMA load balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best possible performance. Even applications that do not directly support NUMA benefit from this feature.

Transparent page sharing

Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and keep only one, which reduces total host memory consumption. If most of your application virtual machines run the same operating system and application binaries, total memory usage can be reduced, increasing consolidation ratios.

Memory ballooning

By using a balloon driver loaded in the guest operating system, the hypervisor can reclaim host physical memory if memory resources are under contention, with little to no impact on the performance of the application.

Memory configuration guidelines

This section provides guidelines for allocating memory to virtual machines. The guidelines take into account vSphere memory overhead and the virtual machine memory settings.

vSphere memory overhead

Some overhead is associated with the virtualization of memory resources. This memory space overhead has two components:

- The fixed system overhead for the VMkernel
- Additional overhead for each virtual machine

Overhead memory depends on the number of virtual CPUs and the amount of memory configured for the guest operating system.

Allocating memory to virtual machines

The proper sizing of memory for a virtual machine in VSPEX architectures is based on many factors. With the number of application services and use cases available, determining a suitable configuration for an environment requires creating a baseline configuration, testing, and making adjustments, as discussed later in this paper. Table 9 on page 56 outlines the resources used by a single virtual machine.
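The savings from transparent page sharing described above can be modeled as deduplication of identical pages across the virtual machines on a host. This sketch is a conceptual model only; ESXi actually matches candidate pages by content comparison and breaks a shared page copy-on-write when a guest modifies it:

```python
import hashlib

def machine_pages_needed(vm_pages: dict) -> int:
    """Count host pages when identical guest pages are backed by one copy."""
    unique = {hashlib.sha256(page).digest()
              for pages in vm_pages.values() for page in pages}
    return len(unique)

os_page = b"common-guest-os-page" * 200    # identical across similar desktops
vms = {
    "desktop1": [os_page, b"app-data-1"],
    "desktop2": [os_page, b"app-data-2"],  # shares its OS page with desktop1
}
guest_total = sum(len(pages) for pages in vms.values())
print(guest_total, "guest pages ->", machine_pages_needed(vms), "host pages")  # 4 -> 3
```

With hundreds of desktops running the same Windows 7 image, the shared fraction is far larger than in this toy example, which is why page sharing raises consolidation ratios.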

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account jumbo frames, VLANs, and Link Aggregation Control Protocol (LACP) on EMC unified storage. For detailed network resource requirements, refer to Table 3 on page 41.

VLAN

It is a best practice to isolate network traffic so that the traffic between hosts and storage, the traffic between hosts and clients, and management traffic all move over isolated networks. In some cases physical isolation may be required for regulatory or policy compliance reasons, but in many cases logical isolation using VLANs is sufficient. This solution calls for a minimum of three VLANs:

- Client access
- Storage
- Management

These VLANs are illustrated in Figure 10.

Figure 10. Required networks

Note: The diagram demonstrates the network connectivity requirements for a VNXe3300 using 1 GbE network connections. A similar topology should be created when using the VNXe3150 array, or 10 GbE network connections.

The client access network is for users of the system (clients) to communicate with the infrastructure. The storage network is used for communication between the compute layer and the storage layer. The management network gives administrators a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices call for additional network isolation for cluster traffic, virtualization-layer communication, and other features. These additional networks may be implemented if desired, but they are not required.

Enable jumbo frames

This solution for EMC VSPEX End-User Computing recommends an MTU of 9,000 bytes (jumbo frames) for efficient storage and migration traffic.

Link aggregation

A link aggregation resembles an Ethernet channel, but uses the Link Aggregation Control Protocol (LACP) IEEE 802.3ad standard, which supports link aggregations of two or more ports. All ports in the aggregation must have the same speed and be full duplex. In this solution, LACP is configured on the VNXe, combining multiple Ethernet ports into a single virtual device. If a link is lost on an Ethernet port, traffic fails over to another port, and all network traffic is distributed across the active links.

Storage configuration guidelines

Overview

vSphere allows more than one method of utilizing storage when hosting virtual machines. The solutions described below were tested using NFS, and the storage layout described adheres to all current best practices. An educated customer or architect can make modifications based on their understanding of the system's usage and load, if required.

Table 6. Storage hardware

EMC VNXe3300:
- Two storage processors, with four 1 GbE interfaces per storage processor
- Twenty-two 300 GB, 15k rpm 3.5-inch SAS disks (three RAID-5 performance packs): VNXe shared storage for virtual desktops
- Thirteen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks: optional, for user data
- Seven 300 GB, 15k rpm 3.5-inch SAS disks (one RAID-5 performance pack): optional, for infrastructure storage

VMware vSphere storage virtualization for VSPEX

VMware ESXi provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machine. A virtual machine stores its operating system and all other files related to its activities in a virtual disk. The virtual disk itself is one or more files. VMware uses a virtual SCSI controller to present a virtual disk to the guest operating system running inside the virtual machine.

The virtual disk resides in a datastore. Depending on the type used, it can be either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore.

Figure 11. VMware virtual disk types

- VMFS: A cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage.
- Raw Device Mapping: VMware also provides a mechanism named Raw Device Mapping (RDM). RDM allows a virtual machine to directly access a volume on the physical storage, and can only be used with Fibre Channel or iSCSI.
- NFS: VMware also supports the use of NFS file systems from external NAS storage systems or devices as virtual machine datastores. In this VSPEX solution, NFS is used for hosting virtual desktops.

Storage layout for 250 virtual desktops

Core storage layout

Figure 12 illustrates the layout of the disks required to store 250 desktop virtual machines. This layout does not include space for user profile data.

Figure 12. Core storage layout

Core storage layout overview

The following core configuration is used in the reference architecture. Note that the VNXe provisioning wizards perform disk allocation and do not allow user selection:

- Twenty-one SAS disks are allocated in 6+1 RAID-5 groups to contain the virtual desktop datastores. Note that seven of these disks (one 6+1 RAID-5 group) may contain VNXe system storage, reducing user storage.
- One SAS disk is a hot spare, contained in the VNXe hot spare pool.

If more capacity is required, larger drives may be substituted. To satisfy the load recommendations, the drives must all be 15k rpm and the same size. If differing sizes are used, storage layout algorithms may give suboptimal results.

Optional user data storage layout

In solution validation testing, storage space for user data is allocated on the VNXe array as shown in Figure 13. This storage is in addition to the core storage shown in Figure 12. If storage for user data exists elsewhere in the production environment, this storage is not required.

Figure 13. Optional storage layout
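Each 6+1 RAID-5 group above dedicates one disk's worth of capacity to parity. A quick raw-capacity check of the desktop datastore layout (illustrative arithmetic only; actual usable VNXe capacity is lower once system storage, hot spares, and formatting overhead are subtracted):

```python
def raid5_raw_usable_gb(disks_per_group: int, groups: int, disk_gb: int) -> int:
    """Raw usable capacity of RAID-5 groups: one disk per group holds parity."""
    return groups * (disks_per_group - 1) * disk_gb

# Three 6+1 groups of 300 GB SAS disks backing the desktop datastores
print(raid5_raw_usable_gb(7, 3, 300))  # 5400 (GB, before VNXe system reserves)
```

Spread over 250 desktops, that raw figure works out to roughly 21 GB per desktop before overheads, which is why substituting larger drives is the suggested route when more capacity is needed.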

Optional storage layout overview
The virtual desktops use two shared file systems: one for user profiles, and the other to redirect user storage that resides in home directories. In general, redirecting user data out of the base image to VNXe file storage enables centralized administration, backup, and recovery, and makes the desktops stateless. Each file system is exported to the environment through a CIFS share.

The following optional configuration is used in the reference architecture. The actual disk selection is done by the VNXe provisioning wizards and may not match the reference diagram exactly.

- Twelve NL-SAS disks are allocated in 4+2 RAID 6 groups to store user data and roaming profiles.
- One NL-SAS disk is a hot spare. This disk is marked as hot spare in the storage layout diagram.
- Seven SAS disks configured as a 6+1 RAID 5 group are used to store the infrastructure virtual machines.
- Remaining disks are unbound, or drive bays may be empty, as no additional drives are used for testing this solution.

High availability and failover

Introduction
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal or no impact to business operations.

Virtualization layer
As indicated earlier, it is recommended to configure high availability in the virtualization layer and allow the hypervisor to automatically restart virtual machines that fail. Figure 14 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 14. High Availability at the virtualization layer

Implementing high availability at the virtualization layer ensures that, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.
Compute layer
While the choice of servers to implement in the compute layer is flexible, it is recommended to use enterprise-class servers designed for the data center. This type of server has redundant power supplies, which should be connected to separate

power distribution units (PDUs) in accordance with your server vendor's best practices.

Figure 15. Redundant Power Supplies

It is also recommended to configure high availability in the virtualization layer. This means that the compute layer must be configured with enough resources that the total available resources meet the needs of the environment even with a server failure, as demonstrated in Figure 14.

Network layer
The advanced networking features of the VNX family provide protection against network connection failures at the array. Each vSphere host has multiple connections to the user and storage Ethernet networks to guard against link failures. These connections should be spread across multiple Ethernet switches to guard against component failure in the network.

Figure 16. Network layer High Availability

By ensuring that there are no single points of failure in the network layer, you ensure that the compute layer can access storage and communicate with users even if a component fails.

Storage layer
The VNX family is designed for five-nines availability through redundant components throughout the array. All of the array components are capable of continued operation in case of hardware failure. The RAID disk configuration on the array protects against data loss due to individual disk failures, and the available hot spare drives can be dynamically allocated to replace a failing disk.

Figure 17. VNXe series High Availability

EMC storage arrays are designed to be highly available by default. When configured according to the directions in their installation guides, single-unit failures do not result in data loss or unavailability.

Validation test profile

Profile characteristics
The VSPEX solution is validated with the environment profile in Table 7.

Table 7. Validated environment profile

- Number of virtual desktops: 250
- Virtual desktop OS: Windows 7 Standard (32-bit) SP1
- CPU per virtual desktop: 1 vCPU
- Number of virtual desktops per CPU core: 8
- RAM per virtual desktop: 2 GB
- Desktop provisioning method: MCS
- Average storage available for each virtual desktop: 18 GB (vmdk and vswap)
- Average IOPS per virtual desktop at steady state: 8 IOPS
- Average peak IOPS per virtual desktop during boot storm: 57 IOPS
- Number of datastores to store virtual desktops: 2
- Number of virtual desktops per datastore: 125
- Disk and RAID type for datastores: RAID 5, 300 GB, 15k rpm, 3.5-inch SAS disks
- Disk and RAID type for CIFS shares to host roaming user profiles and home directories (optional for user data): RAID 6, 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks

Backup environment configuration guidelines

Overview
This section provides guidelines for setting up the backup and recovery environment for this VSPEX solution. It defines how the backup is characterized and laid out.

Backup characteristics
The backup environment of this VSPEX solution is sized with the following application environment profile:

Table 8. Backup profile characteristics

- Number of virtual machines: 250
- User data: 2.5 TB (10.0 GB per desktop)
- Daily change rate for user data: 2%
- Retention per data type: 30 daily, 4 weekly, 1 monthly

Backup layout
Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with two 2 TB Avamar Virtual Edition machines. This enables the unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. The solution also enables customers to unify their backup process with industry-leading deduplication backup software, and to achieve high levels of performance and efficiency.

Sizing guidelines
The following sections define the reference workload used to size and implement the VSPEX architectures discussed in this document. They provide guidance on how to correlate those reference workloads to actual customer workloads, and on how doing so may change the end delivery from the server and network perspective.

The storage definition can be modified by adding drives for greater capacity and performance. The disk layouts provide support for the appropriate number of virtual desktops at the defined performance level. Decreasing the number of recommended drives, or stepping down to a slower array type, can result in lower IOPS per desktop and a reduced user experience due to higher response times.
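As a back-of-envelope check on the backup profile in Table 8 above, the retained data can be bounded before deduplication. The sketch below is an assumption-laden estimate, not Avamar's sizing method: it treats each retained restore point as roughly one day's change on top of one full copy, and ignores deduplication entirely (which in practice reduces the stored footprint dramatically).

```python
# Rough pre-deduplication backup capacity estimate for the Table 8 profile.
# Avamar deduplicates aggressively, so actual on-disk usage is far lower;
# this naive sum only bounds the logical data protected by the retention policy.

def logical_backup_tb(user_data_tb, daily_change, dailies, weeklies, monthlies):
    """Initial full copy plus retained incrementals, in TB (logical)."""
    full = user_data_tb
    # Assumption: each retained point adds roughly one day's worth of change.
    incrementals = user_data_tb * daily_change * (dailies + weeklies + monthlies)
    return full + incrementals

total = logical_backup_tb(2.5, 0.02, 30, 4, 1)
print(round(total, 2))  # 2.5 TB full + 35 x 2% change = 4.25 TB logical
```

Actual Avamar sizing should use EMC's tools, which account for commonality factors across desktops.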

Reference workload
Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines that have been validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a pre-defined idea of what a virtual machine should be.

Defining the reference workload
In any discussion about end-user computing, it is important to first define a reference workload. Not all desktop users perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics. To simplify the discussion, we have defined a representative customer reference workload. By comparing your actual customer usage to this reference workload, you can extrapolate which reference architecture to choose.

For this VSPEX end-user computing solution, the reference workload is defined as a single virtual desktop with the following characteristics:

Table 9. Virtual desktop characteristics

- Virtual desktop operating system: Microsoft Windows 7 Enterprise Edition (32-bit) SP1
- Virtual processors per virtual desktop: 1
- RAM per virtual desktop: 2 GB
- Available storage capacity per virtual desktop: 18 GB (vmdk and vswap)
- Average IOPS per virtual desktop at steady state: 8 IOPS
- Average peak IOPS per virtual desktop during boot storm: 57 IOPS

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently, with a steady load generated by the constant use of office-based applications like browsers, office productivity software, and other standard task worker utilities.

Applying the reference workload
In addition to the supported desktop numbers, there may be other factors to consider when deciding which end-user computing solution to deploy.
Concurrency
The workloads used to validate VSPEX solutions assume that all desktop users will be active at all times. In other words, the 250-desktop architecture is tested with 250 desktops, all generating workload in parallel, all booted at the same time, and so on. If the customer expects to have 300 users, but only 50 percent of them will be logged on at any given time due to time zone differences or alternate shifts, the 150 active users out of the total 300 users can be supported by the 250-desktop architecture.

Heavier desktop workloads
The workload defined in Table 9 and used to test this VSPEX end-user computing configuration is considered a typical office worker load. However, some customers may find that their users have a more active profile. Suppose that a company has 200 users and, due to custom corporate applications, each user generates 15 IOPS, as compared to the 8 IOPS used in the VSPEX workload. This customer needs 3,000 IOPS (200 users x 15 IOPS per desktop). The 250-desktop configuration would be underpowered in this case because it is rated at 2,000 IOPS (250 desktops x 8 IOPS per desktop). This customer should refer to the EMC VSPEX End-User Computing Solution with Citrix XenDesktop 5.6 and VMware vSphere 5.1 for up to 2000 Virtual Desktops document and consider moving up to the 500-desktop solution.

Implementing the solution architecture
This reference architecture requires a set of hardware to be available for the CPU, memory, network, and storage needs of the system. These are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.

Resource types
This reference architecture defines the hardware requirements for the solution in terms of five basic types of resources:

- CPU resources
- Memory resources
- Network resources
- Storage resources
- Backup resources

This section describes the resource types, how they are used in the reference architecture, and key considerations for implementing them in a customer environment.

CPU resources
The architectures define the number of CPU cores that are required, but not a specific type or configuration.
New deployments should use recent revisions of common processor technologies; it is assumed that these will perform as well as, or better than, the systems used to validate the solution. In any running system, it is important to monitor the utilization of resources and adapt as needed.

The reference virtual desktop and required hardware resources in the reference architectures assume that there will be no more than eight virtual CPUs for each physical processor core (an 8:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops; however, this ratio may not be appropriate in all use cases. EMC recommends monitoring CPU utilization at the hypervisor layer to determine whether more resources are required.

Memory resources
Each virtual desktop in the reference architecture is defined to have 2 GB of memory. In a virtual environment, it is common, due to budget constraints, to provision virtual desktops with more memory than the hypervisor physically has. The memory over-commitment technique takes advantage of the fact that each virtual desktop does not fully use the amount of memory allocated to it, so it makes business sense to oversubscribe memory usage to some degree. The administrator is responsible for proactively monitoring the oversubscription rate so that the bottleneck does not shift away from the server and become a burden to the storage subsystem.

If VMware vSphere runs out of memory for the guest operating systems, paging takes place, resulting in extra I/O activity going to the vswap files. If the storage subsystem is sized correctly, occasional spikes due to vswap activity may not cause performance issues, because transient bursts of load can be absorbed. However, if the memory oversubscription rate is so high that the storage subsystem is severely impacted by a continuing overload of vswap activity, more disks must be added, not because of capacity requirements, but because of the demand for increased performance. It is then up to the administrator to decide whether it is more cost-effective to add more physical memory to the server or to increase the amount of storage. With memory modules being a commodity, it is likely less expensive to choose the former option.

This solution is validated with statically assigned memory and no over-commitment of memory resources.
If memory over-commit is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.

When using the Avamar backup solution for VSPEX, do not schedule all backups at once; stagger them across the backup window. Scheduling all resources to back up at the same time could consume all available host CPUs.

Network resources
The reference architecture outlines the minimum needs of the system. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server. The storage arrays have a number of included network ports, and have the option to add ports using EMC FLEX I/O modules.

In the validated environment, EMC assumes that each virtual desktop generates 8 I/Os per second with an average size of 4 KB. This means that each virtual desktop generates at least 32 KB/s of traffic on the storage network. For an environment rated for 250 virtual desktops, this comes to a minimum of approximately 8 MB/s, which is well within the bounds of modern networks. However, this does not account for other operations. For example, additional bandwidth is needed for:

- User network traffic
- Virtual desktop migration

- Administrative and management operations

The requirements for each of these vary depending on how the environment is being used, so it is not practical to provide concrete numbers in this context. However, the network described in the reference architecture for each solution should be sufficient to handle average workloads for the above use cases.

Regardless of the network traffic requirements, EMC recommends always having at least two physical network connections shared for a logical network, so that a single link failure does not impact the availability of the system. The network should be designed so that the aggregate bandwidth in the event of a failure is sufficient to accommodate the full workload.

Storage resources
The reference architectures contain layouts for the disks used in the validation of the system. Each layout balances the available storage capacity with the performance capability of the drives. There are a few layers to consider when examining storage sizing: the array has a collection of disks assigned to a storage pool, and from that storage pool you can provision datastores to the VMware vSphere cluster. Each layer has a specific configuration that is defined for the solution and documented in Chapter 5.

It is generally acceptable to replace drive types with a type that has more capacity and the same performance characteristics, or with ones that have higher performance characteristics and the same capacity. Similarly, it is acceptable to change the placement of drives in the drive shelves in order to comply with updated or new drive shelf arrangements. In other cases, where there is a need to deviate from the proposed number and type of drives specified, or from the specified pool and datastore layouts, ensure that the target layout delivers the same or greater resources to the system.
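Returning to the storage-network estimate above (8 I/Os per second x 4 KB per desktop), the minimum steady-state bandwidth can be reproduced as follows. This deliberately excludes user traffic, desktop migration, and management overhead:

```python
# Back-of-envelope storage-network bandwidth for the validated profile:
# per-desktop IOPS x I/O size, scaled to the number of desktops.
# Matches the "approximately 8 MB/s" minimum cited in the text.

def storage_bandwidth_mb_s(desktops, iops=8, io_size_kb=4):
    """Minimum steady-state storage traffic in MB/s (KB/s divided by 1024)."""
    return desktops * iops * io_size_kb / 1024

print(storage_bandwidth_mb_s(250))  # 7.8125
```

At 250 desktops this is about 7.8 MB/s, so the storage network itself is rarely the constraint; redundancy and the additional traffic classes listed above drive the port counts.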
Backup resources
The reference architecture outlines the backup storage (initial and growth) and retention needs of the system. Additional information can be gathered to further size Avamar, including tape-out needs, RPO and RTO specifics, and multi-site environment replication needs.

Implementation summary
The requirements stated in the reference architecture are what EMC considers the minimum set of resources to handle the workloads required, based on the stated definition of a reference virtual desktop. In any customer implementation, the load of a system varies over time as users interact with the system. However, if the customer virtual desktops differ significantly from the reference definition and vary in the same resource group, you may need to add more of that resource to the system.

Quick assessment
An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the user types planned for migration into the VSPEX End-User Computing environment. For each group, determine the number of virtual CPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of reference virtual desktops required from the resource pool. Applying the reference workload provides examples of this process. Fill out a row in the worksheet for each user type, as shown in Table 10.

Table 10. Blank worksheet row

The worksheet has columns for: User Type, CPU (virtual CPUs), Memory (GB), IOPS, Equivalent Reference Virtual Desktops, Number of Users, and Total Reference Desktops. Each user type gets a Resource Requirements row and an Equivalent Reference Desktops row.

Fill out the resource requirements for the user type. The row requires inputs on three different resources: CPU, memory, and IOPS.

CPU requirements
Most desktop applications are optimized for a single CPU, which is what the reference virtual desktop assumes. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, consider that your pool needs to provide 120 virtual desktops of capability.

Memory requirements
Memory plays a key role in ensuring application functionality and performance. Each group of desktops will have different targets for the acceptable amount of available memory. As with the CPU calculation, if a group of users requires additional memory resources, adjust the number of planned desktops to accommodate the additional resource requirements. For example, if you have 100 desktops to virtualize, but each one needs 4 GB of memory instead of the 2 GB provided in the reference virtual desktop, plan for 200 reference virtual desktops.
Storage performance requirements
The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications, which should be representative of the majority of virtual desktop implementations.

Storage capacity requirements
The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops presented in this solution rely on additional shared storage for user profile data and user documents. This requirement is covered as an optional component that can be

met with the addition of specific storage hardware defined in the reference architecture, or with existing file shares in the environment.

Determining equivalent reference virtual desktops
With all of the resources defined, determine an appropriate value for the Equivalent Reference Virtual Desktops line by using the relationships in Table 11. Round all values up to the closest whole number.

Table 11. Reference virtual desktop resources

- CPU: value for reference virtual desktop = 1; equivalent reference virtual desktops = resource requirements
- Memory: value for reference virtual desktop = 2; equivalent reference virtual desktops = (resource requirements)/2
- IOPS: value for reference virtual desktop = 10; equivalent reference virtual desktops = (resource requirements)/10

For example, consider a group of 50 users who need the two virtual CPUs and 12 IOPS per desktop described earlier, along with 8 GB of memory. On the resource requirements line, describe them as needing two reference desktops of CPU, four reference desktops of memory, and two reference desktops of IOPS, based on the virtual desktop characteristics in Table 9. These figures go in the Equivalent Reference Desktops row, as shown in Table 12. Use the maximum value in the row, four, to fill in the Equivalent Reference Virtual Desktops column. Multiply the number of equivalent reference virtual desktops by the number of users to arrive at the total resource needs for that type of user: 4 x 50 = 200 total reference desktops.

Table 12. Example worksheet row

- Heavy Users, resource requirements: CPU 2, Memory 8 GB, IOPS 12
- Heavy Users, equivalent reference desktops: CPU 2, Memory 4, IOPS 2; Equivalent Reference Virtual Desktops 4; Number of Users 50; Total Reference Desktops 200
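The Table 11 conversion and the heavy-user example can be expressed as a short calculation. The sketch below assumes the reference values stated in Table 11 (1 vCPU, 2 GB, 10 IOPS) and rounds each resource up before taking the maximum:

```python
import math

# Table 11 conversion: divide each resource requirement by the reference
# desktop's value, round up, and take the maximum across resources as the
# per-user equivalent-reference-desktop count.

REFERENCE = {"cpu": 1, "memory_gb": 2, "iops": 10}

def equivalent_reference_desktops(cpu, memory_gb, iops):
    per_resource = {
        "cpu": math.ceil(cpu / REFERENCE["cpu"]),
        "memory_gb": math.ceil(memory_gb / REFERENCE["memory_gb"]),
        "iops": math.ceil(iops / REFERENCE["iops"]),
    }
    return max(per_resource.values()), per_resource

# Heavy-user example from the text: 2 vCPUs, 8 GB, 12 IOPS -> max(2, 4, 2) = 4
equiv, detail = equivalent_reference_desktops(2, 8, 12)
print(equiv, equiv * 50)  # 4 200
```

Multiplying the result by the user count (4 x 50 users) gives the 200 total reference desktops used in the example worksheet row.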

Once the worksheet is filled out for each user type that the customer wants to migrate into the virtual infrastructure, compute the total number of reference virtual desktops required in the pool by summing the Total Reference Desktops column on the right side of the worksheet, as shown in Table 13.

Table 13. Example applications

The example worksheet contains rows for Heavy Users, Moderate Users, and Typical Users, each with resource requirements and equivalent reference virtual desktops, for a combined total of 240 reference desktops.

The VSPEX End-User Computing Solutions define discrete resource pool sizes. For this solution set, the pool contains 250 desktops. In the case of Table 13, the customer requires 240 virtual desktops of capability from the pool, so the 250-virtual-desktop resource pool provides sufficient resources for the current needs.

Fine-tuning hardware resources
In most cases, the recommended hardware for servers and storage will be sized appropriately based on the process described. However, in some cases there is a desire to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this document, but additional customization can be done at this point.

Storage resources
In some applications, there is a need to separate some storage workloads from other workloads. The storage layouts in the VSPEX architectures put all of the virtual desktops in a single resource pool. To achieve workload separation, purchase additional disk drives for each group that needs workload isolation, and add them to a dedicated pool.

It is not appropriate to reduce the size of the main storage resource pool to support isolation, or to reduce the capability of the pool, without additional guidance beyond this paper. The storage layouts presented in the reference architectures are designed to balance many different factors in terms of high availability, performance, and data protection. Changing the components of the pool can have significant and unpredictable impacts on other areas of the system.

Server resources
For the server resources in the VSPEX end-user computing solution, it is possible to customize the hardware resources more effectively. To do this, first total the resource requirements for the server components, as shown in Table 14. Note the addition of the Total CPU Resources and Total Memory Resources columns at the right of the table.

Table 14. Server resource component totals

The example worksheet totals the CPU and memory requirements across the Heavy, Moderate, and Typical user groups.

In this example, the target architecture requires 210 virtual CPUs and 480 GB of memory. With the stated assumptions of 8 desktops per physical processor core, and no memory over-provisioning, this translates to 27 physical processor cores and 480 GB of memory. In contrast, the 250-virtual-desktop resource pool documented in the reference architecture calls for 500 GB of memory and at least 32 physical processor cores. In this environment, the solution can be effectively implemented with fewer server resources.

Note: Keep high availability requirements in mind when customizing the resource pool hardware.
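The server-resource totaling above can be sketched as follows, assuming the stated 8:1 vCPU-to-core ratio and no memory over-provisioning. The inputs are this example's totals, not a general sizing rule:

```python
import math

# Translate aggregate vCPU and memory demand into physical server resources,
# assuming the document's 8:1 vCPU-to-core ratio and statically assigned memory.

def server_totals(total_vcpus, total_memory_gb, vcpus_per_core=8):
    """Return (physical cores needed, memory in GB), rounding cores up."""
    cores = math.ceil(total_vcpus / vcpus_per_core)
    return cores, total_memory_gb

print(server_totals(210, 480))  # (27, 480)
```

210 vCPUs at 8:1 yields 26.25, rounded up to 27 physical cores, against the 32 cores and 500 GB the reference architecture provisions.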

Table 15 contains a blank worksheet.

Table 15. Blank customer worksheet

The blank worksheet provides columns for User Type, CPU (virtual CPUs), Memory (GB), IOPS, Equivalent Reference Virtual Desktops, Number of Users, and Total Reference Desktops, with paired Resource Requirements and Equivalent Reference Virtual Desktops rows for each user type, plus a Total row.

Chapter 5 VSPEX Configuration Guidelines

This chapter presents the following topics:

- Overview
- Pre-deployment tasks
- Customer configuration data
- Prepare switches, connect network, and configure switches
- Prepare and configure storage array
- Install and configure VMware vSphere hosts
- Install and configure SQL Server database
- Install and configure VMware vCenter Server
- Install and configure XenDesktop controller
- Summary

Overview
The deployment process is divided into the stages shown in Table 16. Upon completion of the deployment, the VSPEX infrastructure will be ready for integration with the existing customer network and server infrastructure. Table 16 lists the main stages in the solution deployment process, with references to the sections where the relevant procedures are provided.

Table 16. Deployment process overview

1. Verify prerequisites — Pre-deployment tasks
2. Obtain the deployment tools — Pre-deployment tasks
3. Gather customer configuration data — Pre-deployment tasks
4. Rack and cable the components — Refer to vendor documentation
5. Configure the switches and networks, and connect to the customer network — Prepare switches, connect network, and configure switches
6. Install and configure the VNXe — Prepare and configure storage array
7. Configure virtual machine datastores — Prepare and configure storage array
8. Install and configure the servers — Install and configure VMware vSphere hosts
9. Set up SQL Server (used by VMware vCenter and XenDesktop) — Install and configure SQL Server database
10. Install and configure vCenter and virtual machine networking — Install and configure VMware vCenter Server
11. Set up the XenDesktop controller — Install and configure XenDesktop controller
12. Test the installation — Validating the solution

Pre-deployment tasks

Overview
Pre-deployment tasks include procedures that do not directly relate to environment installation and configuration, but whose results are needed at the time of installation. Examples of pre-deployment tasks are collection of hostnames, IP addresses, VLAN IDs, license keys, and installation media. These tasks should be performed before the customer visit to decrease the time required onsite.

Table 17. Tasks for pre-deployment

- Gather documents: Gather the related documents listed in Appendix C. These are used throughout this document to provide detail on setup procedures and deployment best practices for the various components of the solution. (Reference: Appendix C)
- Gather tools: Gather the required and optional tools for the deployment. Use Table 18 to confirm that all equipment, software, and appropriate licenses are available before the deployment process. (Reference: Table 18, Deployment prerequisites checklist)
- Gather data: Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the worksheet in Appendix B for reference during the deployment process. (Reference: Appendix B)

Deployment prerequisites
Complete the VNXe Series Configuration Worksheet, available on the EMC online support website, to provide the most comprehensive array-specific information.

Table 18 itemizes the hardware, software, and license requirements to configure the solution.

Table 18.
Deployment prerequisites checklist

Hardware:
- Physical servers to host virtual desktops: sufficient physical server capacity to host 250 desktops
- VMware vSphere 5.1 servers to host virtual infrastructure servers (Note: this requirement may be covered by existing infrastructure)

Reference: EMC VSPEX End-User Computing Solution Citrix XenDesktop 5.6, VMware vSphere 5.1 for 250 Virtual Desktops Enabled by Citrix XenDesktop 5.6, VMware vSphere 5.1, and EMC VNXe3300 Reference Architecture

- Networking: switch port capacity and capabilities as required by the virtual desktop infrastructure
- EMC VNXe3300: multiprotocol storage array with the required disk layout

Software:
- VMware ESXi 5.1 installation media
- VMware vCenter Server 5.1 installation media
- Citrix XenDesktop 5.6 installation media
- EMC VSI for VMware vSphere: Unified Storage Management (Reference: EMC Online Support)
- EMC VSI for VMware vSphere: Storage Viewer
- Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter and the Citrix Desktop Controller)
- Microsoft Windows 7 SP1 installation media
- Microsoft SQL Server 2008 or newer installation media
- Microsoft Windows Server 2012 installation media (AD/DHCP/DNS) (Note: this requirement may be covered by existing infrastructure)
- EMC vStorage API for Array Integration plug-in (Reference: EMC Online Support)

Licenses:
- VMware vCenter 5.1 license key
- VMware vSphere 5.1 Desktop license keys
- Citrix XenDesktop 5.6 license files

- Microsoft Windows Server 2008 R2 Standard (or higher) license keys
- Microsoft Windows Server 2012 Standard (or higher) license keys (Note: this requirement may be covered by an existing Microsoft Key Management Server (KMS))
- Microsoft Windows 7 license keys (Note: this requirement may be covered by an existing Microsoft Key Management Server (KMS))
- Microsoft SQL Server license key (Note: this requirement may be covered by existing infrastructure)

Customer configuration data
To reduce the onsite time, information such as IP addresses and hostnames should be assembled as part of the planning process. Appendix B provides a table for maintaining a record of relevant information. This form can be expanded or contracted as required, and information may be added, modified, and recorded as deployment progresses.

Additionally, complete the Customer Configuration Data Sheets, available on the EMC Online Support website, to provide the most comprehensive array-specific information.

Prepare switches, connect network, and configure switches

Overview

This section describes the network infrastructure required to support this architecture. Table 19 summarizes the tasks to be completed and provides references for further information.

Table 19. Tasks for switch and network configuration

- Configure the infrastructure network: Configure the storage array and ESXi host infrastructure networking as specified in Solution architecture.
- Configure the VLANs: Configure private and public VLANs as required. (Reference: your vendor's switch configuration guide)
- Complete the network cabling: Connect the switch interconnect ports, the VNXe ports, and the ESXi server ports.

Prepare network switches

For validated levels of performance and high availability, this solution requires the switching capacity listed in Table 3 on page 41. If the existing infrastructure meets these requirements, no new hardware installation is needed.

Configure infrastructure network

The infrastructure network requires redundant network links for each ESXi host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth, and it is required regardless of whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 18 shows a sample redundant Ethernet infrastructure for this solution. The diagram illustrates the use of redundant switches and links to ensure that no single point of failure exists in network connectivity.

Figure 18. Sample Ethernet network architecture

Configure VLANs

Ensure that there are adequate switch ports for the storage array, and configure the ESXi hosts with a minimum of three VLANs:

- Virtual machine networking, ESXi management, and CIFS traffic (customer-facing networks, which may be separated if desired)
- NFS networking (private network)
- vMotion (private network)

Complete network cabling

Ensure that all solution servers, storage arrays, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure a complete connection to the existing customer network.

Note: At this point, the new equipment is being connected to the existing customer network. Ensure that unforeseen interactions do not cause service issues on the customer network.
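As a planning aid, the minimum VLAN layout above can be captured and sanity-checked with a short script. This is only a sketch: the VLAN IDs and network names are illustrative placeholders, not values mandated by this solution.

```python
# Minimum VLAN plan for this solution; the IDs are illustrative examples.
vlan_plan = {
    "vm_mgmt_cifs": {"vlan_id": 100, "scope": "customer-facing"},
    "nfs":          {"vlan_id": 200, "scope": "private"},
    "vmotion":      {"vlan_id": 300, "scope": "private"},
}

def validate_vlan_plan(plan):
    """Check that the three required networks exist, that VLAN IDs are
    unique, and that NFS and vMotion traffic stay on private VLANs."""
    required = {"vm_mgmt_cifs", "nfs", "vmotion"}
    missing = required - plan.keys()
    if missing:
        raise ValueError(f"missing networks: {sorted(missing)}")
    ids = [net["vlan_id"] for net in plan.values()]
    if len(ids) != len(set(ids)):
        raise ValueError("VLAN IDs must be unique")
    for name in ("nfs", "vmotion"):
        if plan[name]["scope"] != "private":
            raise ValueError(f"{name} must be a private VLAN")
    return True

validate_vlan_plan(vlan_plan)
```

Recording the plan in this form also gives you a single place to keep the values that Appendix B asks you to collect.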

Prepare and configure storage array

Overview

This section describes how to configure the VNXe storage array. In this solution, the VNXe series provides Network File System (NFS) data storage for VMware hosts.

Table 20. Tasks for storage configuration

- Set up the initial VNXe configuration: Configure the IP address information and other key parameters on the VNXe. (References: VNXe3300 System Installation Guide, VNXe Series Configuration Worksheet)
- Set up VNXe networking: Configure LACP on the VNXe and the network switches. (Reference: your vendor's switch configuration guide)
- Provision storage for NFS datastores: Create the NFS file systems that are presented to the ESXi servers as NFS datastores hosting the virtual desktops.
- Provision optional storage for user data: Create the CIFS file systems used to store roaming user profiles and home directories.
- Provision optional storage for infrastructure virtual machines: Create optional NFS datastores to host the SQL Server, domain controller, vCenter Server, and/or XenDesktop controller virtual machines.

Prepare VNXe

The VNXe3300 System Installation Guide provides instructions on assembly, racking, cabling, and powering the VNXe. There are no solution-specific setup steps.

Set up the initial VNXe configuration

After completing the initial VNXe setup, configure key information about the existing environment so that the storage array can communicate with it. Configure the following items in accordance with your IT datacenter policies and existing infrastructure information:

- DNS
- NTP
- Storage network interfaces
- Storage network IP address
- CIFS services and Active Directory domain membership

The reference documents listed in Table 20 provide more information on how to configure the VNXe platform. Storage configuration guidelines on page 48 provides more information on the disk layout.

Provision core data storage

Complete the following steps in Unisphere to configure the NFS file systems on the VNXe that are used to store virtual desktops:

1. Create a pool with the appropriate number of disks. In the System > Storage Pools area of Unisphere, select Configure Disks and manually create a new pool by disk type for SAS drives. The validated configuration uses a single pool with 21 drives. In other scenarios, creating separate pools may be advisable.

   Note: Create your hot spare disks at this point. Consult the VNXe3300 System Installation Guide for additional information.

   Figure 12 on page 50 depicts the target core storage layout for the solution.

2. Create an NFS shared folder server. Access the wizard in Unisphere from Settings > Shared Folder Server Settings > Add Shared Folder Server. Detailed instructions are found in the VNXe3300 System Installation Guide.

3. Create a VMware storage resource. In Unisphere, navigate to Storage > VMware > Create, and create an NFS datastore on the pool and shared folder server created above. The size of each datastore is determined by the number of virtual machines it will contain; the validated configuration uses 1 TB datastores.

   Note: Do not enable Thin Provisioning.

4. Finally, add your ESXi hosts to the list of hosts that are allowed to access the new datastore.

Provision optional storage for user data

If the storage required for user data (that is, roaming user profiles and home directories) does not already exist in the production environment and the optional user data disk pack has been purchased, complete the following steps in Unisphere to configure two CIFS file systems on the VNXe:

1. Create a RAID 6 storage pool that consists of twelve 2 TB NL-SAS drives. Figure 13 on page 50 depicts the target optional user data storage layout.

2. Carve two file systems out of the storage pool and export them as CIFS shares on a CIFS server.

Provision optional storage for infrastructure virtual machines

If the storage required for infrastructure virtual machines (that is, SQL Server, domain controller, vCenter Server, and/or XenDesktop controllers) does not already exist in the production environment and the optional disk pack has been purchased, configure an NFS file system on the VNXe to be used as an NFS datastore in which the infrastructure virtual machines reside. Repeat the configuration steps shown in Provision core data storage to provision the optional storage, taking into account the smaller number of drives.
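As a rough sanity check on the optional user-data pool described above, the following sketch estimates usable capacity for a single RAID 6 group. This is a simplification under stated assumptions (two drives' worth of parity, decimal terabytes, no hot spare or file-system overhead), not an EMC sizing formula.

```python
def raid6_usable_tb(drives: int, drive_tb: float) -> float:
    """Approximate usable capacity of one RAID 6 group: the equivalent
    of two drives is consumed by the dual parity."""
    if drives < 4:
        raise ValueError("RAID 6 requires at least 4 drives")
    return (drives - 2) * drive_tb

# Optional user-data pool: twelve 2 TB NL-SAS drives in RAID 6.
usable = raid6_usable_tb(12, 2.0)
print(usable)  # 20.0 TB before hot spares and file-system overhead
```

Actual usable capacity will be lower once hot spares, metadata, and binary-vs-decimal capacity reporting are taken into account.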

Install and configure VMware vSphere hosts

Overview

This section provides information about the installation and configuration of the ESXi hosts and infrastructure servers required to support the architecture. Table 21 describes the tasks that must be completed.

Table 21. Tasks for server installation

- Install ESXi: Install the ESXi 5.1 hypervisor on the physical servers deployed for the solution. (Reference: vSphere Installation and Setup Guide)
- Configure ESXi networking: Configure ESXi networking, including NIC trunking, VMkernel ports, virtual machine port groups, and jumbo frames. (Reference: vSphere Networking)
- Connect VMware datastores: Connect the VMware datastores to the ESXi hosts deployed for the solution. (Reference: vSphere Storage Guide)

Install ESXi

Upon initial power-up of the servers being used for ESXi, confirm or enable the hardware-assisted CPU virtualization and hardware-assisted MMU virtualization settings in each server's BIOS. If the servers are equipped with a RAID controller, configure mirroring on the local disks. Boot the ESXi 5.x installation media and install the hypervisor on each of the servers. ESXi hostnames, IP addresses, and a root password are required for installation; Appendix B provides a place to record the appropriate values.

Configure ESXi networking

During the installation of VMware ESXi, a standard virtual switch (vSwitch) is created. By default, ESXi chooses only one physical NIC as a virtual switch uplink. To maintain redundancy and meet bandwidth requirements, add an additional NIC, either by using the ESXi console or by connecting to the ESXi host from the vSphere Client. Each VMware ESXi server should have multiple interface cards for each virtual network to ensure redundancy and to provide for the use of network load balancing, link aggregation, and network adapter failover. ESXi networking configuration, including load balancing, link aggregation, and failover options, is described in vSphere Networking.
Choose the appropriate load-balancing option based on what the network infrastructure supports. Create VMkernel ports as required, based on the infrastructure configuration:

- VMkernel port for NFS traffic
- VMkernel port for VMware vMotion
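The redundancy requirement above can be expressed as a simple pre-flight check over a planned vSwitch-to-uplink mapping. The vSwitch and vmnic names below are illustrative assumptions, not names prescribed by this solution.

```python
# Planned uplink mapping for one ESXi host; names are illustrative.
vswitch_uplinks = {
    "vSwitch0": ["vmnic0", "vmnic1"],   # management / VM / CIFS networks
    "vSwitch1": ["vmnic2", "vmnic3"],   # NFS and vMotion VMkernel ports
}

def check_redundancy(uplinks: dict) -> list:
    """Return the vSwitches that violate the two-uplink redundancy rule."""
    return [vs for vs, nics in uplinks.items() if len(set(nics)) < 2]

assert check_redundancy(vswitch_uplinks) == []
```

Running such a check against the values recorded in Appendix B before cabling helps catch single-point-of-failure configurations early.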

- Virtual desktop port groups (used by the virtual desktops to communicate on the network)

vSphere Networking describes the procedure for configuring these settings.

Jumbo frames

A jumbo frame is an Ethernet frame whose payload exceeds the standard 1,500-byte Maximum Transmission Unit (MTU); the generally accepted maximum size for a jumbo frame is 9,000 bytes. Processing overhead is proportional to the number of frames, so enabling jumbo frames reduces processing overhead by reducing the number of frames to be sent, which increases network throughput.

Enable jumbo frames end-to-end: on the network switches, the ESXi servers, and the VNXe storage processors.

Jumbo frames can be enabled on the ESXi server at two different levels. To enable jumbo frames for all ports on a virtual switch, edit the MTU setting in the virtual switch properties from vCenter. To enable jumbo frames on specific VMkernel ports only, edit each VMkernel port under the network properties in vCenter.

To enable jumbo frames on the VNXe, in Unisphere select Settings > More Configuration > Advanced Configuration, select the appropriate I/O module and Ethernet port, and then set the MTU to 9000.

Jumbo frames may also need to be enabled on each network switch; consult your switch configuration guide for instructions.

Connect VMware datastores

Connect the datastores configured in Prepare and configure storage array to the appropriate ESXi servers. These include the datastores configured for:

- Virtual desktop storage
- Infrastructure virtual machine storage (if required)
- SQL Server storage (if required)

vSphere Storage Guide provides instructions on how to connect the VMware datastores to the ESXi host. The EMC NFS VAAI plug-in for ESXi must be installed after VMware vCenter has been deployed, as described in Install and configure VMware vCenter Server.
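The jumbo-frame rationale above can be illustrated with a quick frame-count estimate. This is a simplified sketch that ignores Ethernet and IP headers and TCP segmentation behavior; it only shows how a larger MTU shrinks the number of frames the hosts must process.

```python
import math

def frames_needed(payload_bytes: int, mtu: int) -> int:
    """Number of Ethernet frames needed to carry a payload,
    ignoring protocol headers for simplicity."""
    return math.ceil(payload_bytes / mtu)

one_gb = 10**9
std = frames_needed(one_gb, 1500)    # frames at the standard MTU
jumbo = frames_needed(one_gb, 9000)  # frames with jumbo frames enabled
print(std, jumbo, round(std / jumbo, 1))  # 666667 111112 6.0
```

A roughly six-fold reduction in frame count is why jumbo frames must be enabled consistently end to end: a single 1,500-byte hop in the path forfeits the benefit and can cause fragmentation.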
Plan virtual machine memory allocations

Server capacity is required in the solution for two purposes:

- To support the new virtualized server infrastructure
- To support the required infrastructure services, such as authentication/authorization, DNS, and database services

For the minimum infrastructure services hosting requirements, refer to Table 3 on page 41. If existing infrastructure services meet these requirements, the hardware listed for infrastructure services is not required.

Memory configuration

Proper sizing and configuration of the solution necessitates careful configuration of server memory. This section provides general guidance on memory allocation for the virtual machines and factors in vSphere overhead and the virtual machine configuration. It begins with an overview of how memory is managed in a VMware environment.

ESX/ESXi memory management

Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources, such as memory, in order to provide resource isolation across multiple virtual machines while avoiding resource exhaustion. Where advanced processors (for example, Intel processors with EPT support) are deployed, this abstraction takes place within the CPU; otherwise, it occurs within the hypervisor itself via a feature known as shadow page tables.

vSphere employs the following memory management techniques:

- Memory over-commitment: allocation of more memory to virtual machines than is physically available on the host.
- Transparent page sharing: identical memory pages shared across virtual machines are merged, and the duplicate pages are returned to the host free memory pool for reuse.
- Memory compression: ESXi stores pages that would otherwise be swapped out to disk through host swapping in a compression cache located in main memory.
- Memory ballooning: host memory exhaustion is relieved by requesting that free pages be given up by the virtual machine to the host for reuse.
- Hypervisor swapping: the host forces arbitrary virtual machine pages out to disk.

Additional information is available in the VMware vSphere documentation.
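To make over-commitment concrete, the following sketch computes the over-commitment ratio for one host. The per-desktop memory size and host capacity are illustrative assumptions, not sizing figures from this solution.

```python
def overcommit_ratio(vm_memory_gb, host_physical_gb):
    """Ratio of total configured VM memory to host physical memory.
    A value above 1.0 means the host is over-committed and relies on
    reclamation (page sharing, ballooning, compression, swapping)."""
    return sum(vm_memory_gb) / host_physical_gb

# Illustrative example: 125 desktops at 2 GB each on a 192 GB host.
ratio = overcommit_ratio([2] * 125, 192)
print(round(ratio, 2))  # 1.3 -> modestly over-committed
```

Keeping the computed ratio modest leaves the lightweight reclamation techniques room to work before the host resorts to hypervisor swapping.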

Virtual machine memory concepts

Figure 19 shows the memory settings parameters of a virtual machine.

Figure 19. Virtual machine memory settings

- Configured memory: physical memory allocated to the virtual machine at the time of creation.
- Reserved memory: memory that is guaranteed to the virtual machine.
- Touched memory: memory that is active or in use by the virtual machine.
- Swappable memory: memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines, via ballooning, compression, or swapping.

The following are recommended best practices for memory allocation:

- Do not disable the default memory reclamation techniques. These lightweight processes enable flexibility with minimal impact on workloads.
- Size virtual machine memory intelligently. Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing resources.
- Over-committing can lead to resource exhaustion if the hypervisor cannot procure memory resources. In severe cases, when hypervisor swapping occurs, virtual machine performance is likely to be adversely affected. Having performance baselines for your virtual machine workloads assists this process.
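The relationships among these parameters can be captured in a small consistency check. This is a sketch based on the definitions above, not on any VMware API; the field names are taken from the text.

```python
def check_vm_memory(configured, reserved, touched):
    """Validate the basic invariants implied by the definitions above:
    the reservation and the active working set can never exceed the
    configured memory. Returns the swappable amount (the portion above
    the reservation that the host may reclaim under pressure)."""
    if not 0 <= reserved <= configured:
        raise ValueError("reserved memory must be within configured memory")
    if not 0 <= touched <= configured:
        raise ValueError("touched memory must be within configured memory")
    return configured - reserved

swappable = check_vm_memory(configured=4096, reserved=1024, touched=2048)
print(swappable)  # 3072 MB eligible for ballooning/compression/swapping
```

Such a check is useful when reviewing desktop pool templates against the baselines recommended above.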


More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 3.0 This document supports the version of each product listed and supports

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

Introducing VMware Validated Designs for Software-Defined Data Center

Introducing VMware Validated Designs for Software-Defined Data Center Introducing VMware Validated Designs for Software-Defined Data Center VMware Validated Design for Software-Defined Data Center 4.0 This document supports the version of each product listed and supports

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 BUILDING AN EFFICIENT AND FLEXIBLE VIRTUAL INFRASTRUCTURE Umair Riaz vspecialist 2 Waves Of Change Mainframe Minicomputer PC/ Microprocessor Networked/ Distributed Computing Cloud Computing 3 EMC s Mission

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010 DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010 EMC VSPEX Abstract This describes how to design virtualized Microsoft Exchange Server 2010 resources on the appropriate EMC VSPEX Proven Infrastructures

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012

EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT SQL SERVER 2012 EMC VSPEX Abstract This describes how to design virtualized Microsoft SQL Server resources on the appropriate EMC VSPEX Proven Infrastructure

More information

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN

VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN White Paper VMware vstorage APIs FOR ARRAY INTEGRATION WITH EMC VNX SERIES FOR SAN Benefits of EMC VNX for Block Integration with VMware VAAI EMC SOLUTIONS GROUP Abstract This white paper highlights the

More information

Surveillance Dell EMC Storage with Bosch Video Recording Manager

Surveillance Dell EMC Storage with Bosch Video Recording Manager Surveillance Dell EMC Storage with Bosch Video Recording Manager Sizing and Configuration Guide H13970 REV 2.1 Copyright 2015-2017 Dell Inc. or its subsidiaries. All rights reserved. Published December

More information

Dell EMC. VxBlock Systems for VMware NSX 6.3 Architecture Overview

Dell EMC. VxBlock Systems for VMware NSX 6.3 Architecture Overview Dell EMC VxBlock Systems for VMware NSX 6.3 Architecture Overview Document revision 1.1 March 2018 Revision history Date Document revision Description of changes March 2018 1.1 Updated the graphic in Logical

More information

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0

Storage Considerations for VMware vcloud Director. VMware vcloud Director Version 1.0 Storage Considerations for VMware vcloud Director Version 1.0 T e c h n i c a l W H I T E P A P E R Introduction VMware vcloud Director is a new solution that addresses the challenge of rapidly provisioning

More information

VxRack System SDDC Enabling External Services

VxRack System SDDC Enabling External Services VxRack System SDDC Enabling External Services May 2018 H17144 Abstract This document describes how to enable external services for a VxRack System SDDC. Use cases included are Dell EMC Avamar-based backup

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 EMC VSPEX CHOICE WITHOUT COMPROMISE 2 Waves Of Change Mainframe Minicomputer PC/ Microprocessor Networked/ Distributed Computing Cloud Computing 3 Cloud A New Architecture Old World Physical New World

More information

vsan Management Cluster First Published On: Last Updated On:

vsan Management Cluster First Published On: Last Updated On: First Published On: 07-07-2016 Last Updated On: 03-05-2018 1 1. vsan Management Cluster 1.1.Solution Overview Table Of Contents 2 1. vsan Management Cluster 3 1.1 Solution Overview HyperConverged Infrastructure

More information

CVE-400-1I Engineering a Citrix Virtualization Solution

CVE-400-1I Engineering a Citrix Virtualization Solution CVE-400-1I Engineering a Citrix Virtualization Solution The CVE-400-1I Engineering a Citrix Virtualization Solution course teaches Citrix engineers how to plan for and perform the tasks necessary to successfully

More information

Stellar performance for a virtualized world

Stellar performance for a virtualized world IBM Systems and Technology IBM System Storage Stellar performance for a virtualized world IBM storage systems leverage VMware technology 2 Stellar performance for a virtualized world Highlights Leverages

More information

Surveillance Dell EMC Storage with FLIR Latitude

Surveillance Dell EMC Storage with FLIR Latitude Surveillance Dell EMC Storage with FLIR Latitude Configuration Guide H15106 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published June 2016 Dell believes the information

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 BUILDING AN EFFICIENT AND FLEXIBLE VIRTUAL INFRASTRUCTURE Storing and Protecting Wouter Kolff Advisory Technology Consultant EMCCAe 2 Waves Of Change Mainframe Minicomputer PC/ Microprocessor Networked/

More information

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS

EMC VNX FAMILY. Next-generation unified storage, optimized for virtualized applications. THE VNXe SERIES SIMPLE, EFFICIENT, AND AFFORDABLE ESSENTIALS EMC VNX FAMILY Next-generation unified storage, optimized for virtualized applications ESSENTIALS Unified storage for multi-protocol file, block, and object storage Powerful new multi-core Intel CPUs with

More information

Features. HDX WAN optimization. QoS

Features. HDX WAN optimization. QoS May 2013 Citrix CloudBridge Accelerates, controls and optimizes applications to all locations: datacenter, branch offices, public and private clouds and mobile users Citrix CloudBridge provides a unified

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 White Paper Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vsphere 5.5 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 18 Introduction Executive

More information

Surveillance Dell EMC Storage with Digifort Enterprise

Surveillance Dell EMC Storage with Digifort Enterprise Surveillance Dell EMC Storage with Digifort Enterprise Configuration Guide H15230 REV 1.1 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published August 2016 Dell believes the

More information

VMware vsphere 4. The Best Platform for Building Cloud Infrastructures

VMware vsphere 4. The Best Platform for Building Cloud Infrastructures Table of Contents Get the efficiency and low cost of cloud computing with uncompromising control over service levels and with the freedom of choice................ 3 Key Benefits........................................................

More information

VMware vsphere 6.5: Install, Configure, Manage (5 Days)

VMware vsphere 6.5: Install, Configure, Manage (5 Days) www.peaklearningllc.com VMware vsphere 6.5: Install, Configure, Manage (5 Days) Introduction This five-day course features intensive hands-on training that focuses on installing, configuring, and managing

More information

Avaya Collaboration Pod 4200 Series

Avaya Collaboration Pod 4200 Series Highlights Simplify your installation with a pre-integrated virtualized solution: VMware-based solution helps reduce infrastructure costs and time to deploy Faster time to service: Ready-made, pre-tested

More information

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP

EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP DESIGN GUIDE EMC VSPEX FOR VIRTUALIZED ORACLE DATABASE 12c OLTP Enabled by EMC VNXe and EMC Data Protection VMware vsphere 5.5 Red Hat Enterprise Linux 6.4 EMC VSPEX Abstract This describes how to design

More information

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture EMC Solutions for Microsoft Exchange 2007 EMC Celerra NS20 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008 EMC Corporation. All rights

More information

CMB-207-1I Citrix Desktop Virtualization Fast Track

CMB-207-1I Citrix Desktop Virtualization Fast Track Page1 CMB-207-1I Citrix Desktop Virtualization Fast Track This fast-paced course covers select content from training courses CXA-206: Citrix XenApp 6.5 Administration and CXD-202: Citrix XenDesktop 5 Administration

More information

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0 Proven Solutions Guide EMC INFRASTRUCTURE FOR VMWARE VIEW 5.0 EMC VNX Series (NFS), VMware vsphere 5.0, VMware View 5.0, VMware View Persona Management, and VMware View Composer 2.7 Simplify management

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange Server

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information

Virtual Desktop Infrastructure (VDI) Bassam Jbara

Virtual Desktop Infrastructure (VDI) Bassam Jbara Virtual Desktop Infrastructure (VDI) Bassam Jbara 1 VDI Historical Overview Desktop virtualization is a software technology that separates the desktop environment and associated application software from

More information

Pivot3 Acuity with Microsoft SQL Server Reference Architecture

Pivot3 Acuity with Microsoft SQL Server Reference Architecture Pivot3 Acuity with Microsoft SQL Server 2014 Reference Architecture How to Contact Pivot3 Pivot3, Inc. General Information: info@pivot3.com 221 West 6 th St., Suite 750 Sales: sales@pivot3.com Austin,

More information

Reduce costs and enhance user access with Lenovo Client Virtualization solutions

Reduce costs and enhance user access with Lenovo Client Virtualization solutions SYSTEM X SERVERS SOLUTION BRIEF Reduce costs and enhance user access with Lenovo Client Virtualization solutions Gain the benefits of client virtualization while maximizing your Lenovo infrastructure Highlights

More information

VMware vcloud Air User's Guide

VMware vcloud Air User's Guide vcloud Air This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent editions of this document,

More information

vsphere Networking Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 EN

vsphere Networking Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 EN Update 2 VMware vsphere 5.5 VMware ESXi 5.5 vcenter Server 5.5 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition.

More information

Symantec Reference Architecture for Business Critical Virtualization

Symantec Reference Architecture for Business Critical Virtualization Symantec Reference Architecture for Business Critical Virtualization David Troutt Senior Principal Program Manager 11/6/2012 Symantec Reference Architecture 1 Mission Critical Applications Virtualization

More information

Dell EMC Ready Architectures for VDI

Dell EMC Ready Architectures for VDI Dell EMC Ready Architectures for VDI Designs for VMware Horizon 7 on Dell EMC XC Family September 2018 H17387 Deployment Guide Abstract This deployment guide provides instructions for deploying VMware

More information

Cisco HyperFlex Hyperconverged Infrastructure Solution for SAP HANA

Cisco HyperFlex Hyperconverged Infrastructure Solution for SAP HANA Cisco HyperFlex Hyperconverged Infrastructure Solution for SAP HANA Learn best practices for running SAP HANA on the Cisco HyperFlex hyperconverged infrastructure (HCI) solution. 2018 Cisco and/or its

More information

vsan Mixed Workloads First Published On: Last Updated On:

vsan Mixed Workloads First Published On: Last Updated On: First Published On: 03-05-2018 Last Updated On: 03-05-2018 1 1. Mixed Workloads on HCI 1.1.Solution Overview Table of Contents 2 1. Mixed Workloads on HCI 3 1.1 Solution Overview Eliminate the Complexity

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010 WITH MICROSOFT HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010 WITH MICROSOFT HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2010 WITH MICROSOFT HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange 2010

More information