Reference Architecture: Lenovo Client Virtualization with Citrix XenDesktop and ThinkSystem Servers


Last update: 22 January 2018
Version 1.2
Configuration Reference Number: VDICX01XX73

Reference Architecture for Citrix XenDesktop

- Contains performance data and sizing recommendations for servers, storage, and networking
- Describes a variety of storage models including SAN storage and hyper-converged systems
- Contains detailed bill of materials for servers, storage, networking, and racks

Authors: Mike Perks, Kenny Bain, Pawan Sharma

Table of Contents

1 Introduction
2 Architectural overview
3 Component model
3.1 Citrix XenDesktop provisioning
3.1.1 Provisioning Services
3.1.2 Machine Creation Services
3.2 Storage model
3.3 VMware vSAN
3.3.1 VMware vSAN storage policies
4 Operational model
4.1 Operational model scenarios
4.1.1 Enterprise operational model
4.1.2 Hyper-converged operational model
4.2 Hypervisor support
4.2.1 VMware ESXi
4.2.2 Citrix XenServer
4.3 Compute servers for virtual desktops
4.3.1 Intel Xeon Scalable processor servers
4.4 Compute servers for VMware vSAN
4.4.1 Intel Xeon Scalable processor servers
4.5 Graphics acceleration
4.6 Management servers
4.7 Systems management
4.8 Shared storage
4.8.1 Lenovo ThinkSystem DS6200 storage array
4.9 Networking
4.9.1 10 GbE networking
4.9.2 1 GbE administration networking
4.10 Racks
4.11 Deployment models
4.11.1 Deployment example 1: vSAN hyper-converged using ThinkSystem servers

5 Appendix: Bill of materials
5.1 BOM for enterprise and SMB compute servers
5.2 BOM for hyper-converged compute servers
5.3 BOM for enterprise and SMB management servers
5.4 BOM for shared storage
5.5 BOM for networking
5.6 BOM for Flex System chassis
5.7 BOM for rack
6 Resources

1 Introduction

The intended audience for this document is technical IT architects, system administrators, and managers who are interested in server-based desktop virtualization and server-based computing (terminal services or application virtualization) that uses Citrix XenDesktop. In this document, the term client virtualization is used to refer to all of these variations. Compare this term to server virtualization, which refers to the virtualization of server-based business logic and databases.

This document describes the reference architecture for Citrix XenDesktop 7.15 and also supports the previous versions of Citrix XenDesktop 7.x. This document should be read with the Lenovo Client Virtualization (LCV) base reference architecture document that is available at this website: lenovopress.com/lp0756. The business problem, business value, requirements, and hardware details are described in the LCV base reference architecture document and are not repeated here for brevity.

This document gives an architecture overview and logical component model of Citrix XenDesktop. The document also provides the operational model of Citrix XenDesktop by combining Lenovo hardware platforms such as ThinkSystem servers and RackSwitch networking with storage such as ThinkSystem DS6200 storage or VMware vSAN. The operational model presents performance benchmark measurements and discussion, sizing guidance, and some example deployment models. The last section contains detailed bill of materials configurations for each piece of hardware.

See also the Reference Architecture for Workloads using the Lenovo ThinkAgile HX Series appliances for Citrix XenDesktop specific information. This document is available at this website: lenovopress.com/lp

2 Architectural overview

Figure 1 shows all of the main features of the Lenovo Client Virtualization reference architecture with Citrix XenDesktop. This reference architecture does not address the issues of remote access and authorization, data traffic reduction, traffic monitoring, and general issues of multi-site deployment and network management. This document limits the description to the components that are inside the customer's intranet.

Figure 1: LCV reference architecture with Citrix XenDesktop

3 Component model

Figure 2 is a layered view of the LCV solution that is mapped to the Citrix XenDesktop virtualization infrastructure.

Figure 2: Component model with Citrix XenDesktop

Citrix XenDesktop features the following main components:

Desktop Studio: Desktop Studio is the main administrator GUI for Citrix XenDesktop. It is used to configure and manage all of the main entities, including servers, desktop pools and provisioning, policy, and licensing.

Storefront: Storefront provides the user interface to the XenDesktop environment. The Web Interface brokers user authentication, enumerates the available desktops and, upon start, delivers a .ica file to the Citrix Receiver on the user's local device to start a connection. The Independent Computing Architecture (ICA) file contains configuration information for the Citrix Receiver to communicate with the virtual desktop. Because the Web Interface is a critical component, redundant servers must be available to provide fault tolerance.

Delivery controller: The Delivery controller is responsible for maintaining the proper level of idle desktops to allow for instantaneous connections, monitoring the state of online and connected desktops, and shutting down desktops as needed. A XenDesktop farm is a larger grouping of virtual machine servers. Each delivery controller in the XenDesktop farm acts as an XML server that is responsible for brokering user authentication, resource enumeration, and desktop starting. Because a failure in the XML service results in users being unable to start their desktops, it is recommended that you configure multiple controllers per farm.

PVS and MCS: Provisioning Services (PVS) is used to provision stateless desktops at a large scale. Machine Creation Services (MCS) is used to provision dedicated or stateless desktops in a quick and integrated manner. For more information, see the Citrix XenDesktop provisioning section.

License Server: The Citrix License Server is responsible for managing the licenses for all XenDesktop components. XenDesktop has a 30-day grace period that allows the system to function normally for 30 days if the license server becomes unavailable. This grace period offsets the complexity of otherwise building redundancy into the license server.

XenDesktop SQL Server: Each Citrix XenDesktop site requires an SQL Server database that is called the data store, which is used to centralize farm configuration information and transaction logs. The data store maintains all static and dynamic information about the XenDesktop environment. Because the XenDesktop SQL server is a critical component, redundant servers must be available to provide fault tolerance.

vCenter Server: By using a single console, vCenter Server provides centralized management of the virtual machines (VMs) for the VMware ESXi hypervisor. VMware vCenter can be used to perform live migration (called VMware vMotion), which allows a running VM to be moved from one physical server to another without downtime. Redundancy for vCenter Server is achieved through VMware high availability (HA). The vCenter Server also contains a licensing server for VMware ESXi.

vCenter SQL Server: vCenter Server for the VMware ESXi hypervisor requires an SQL database. The vCenter SQL server might be Microsoft Data Engine (MSDE), Oracle, or SQL Server. Because the vCenter SQL server is a critical component, redundant servers must be available to provide fault tolerance. Customer SQL databases (including respective redundancy) can be used.

Client devices: Citrix XenDesktop supports a broad set of devices and all major device operating platforms, including Apple iOS, Google Android, and Google ChromeOS. XenDesktop enables a rich, native experience on each device, including support for gestures and multi-touch features, which customizes the experience based on the type of device. Each client device has a Citrix Receiver, which acts as the agent to communicate with the virtual desktop by using the ICA/HDX protocol.

VDA: Each VM needs a Citrix Virtual Desktop Agent (VDA) to capture desktop data and send it to the Citrix Receiver in the client device. The VDA also emulates keyboard and gestures sent from the Receiver. ICA is the Citrix remote display protocol for VDI.

Hypervisor: XenDesktop has an open architecture that supports the use of several different hypervisors, such as VMware ESXi (vSphere), Citrix XenServer, and Microsoft Hyper-V.

Accelerator VM: The accelerator VM is an optional component.

Shared storage: Shared storage is used to store user profiles and user data files. Depending on the provisioning model that is used, different data is stored for VM images. For more information, see the Storage model section. Shared storage can also be distributed storage as used in hyper-converged systems.

For more information, see the Lenovo Client Virtualization base reference architecture document that is available at this website: lenovopress.com/lp0756.

3.1 Citrix XenDesktop provisioning

Citrix XenDesktop features the following primary provisioning models:

- Provisioning Services (PVS)
- Machine Creation Services (MCS)

3.1.1 Provisioning Services

Hosted VDI desktops can be deployed with or without Citrix PVS. The advantage of the use of PVS is that you can stream a single desktop image to create multiple virtual desktops on one or more servers in a data center. Figure 3 shows the sequence of operations that are run by XenDesktop to deliver a hosted VDI virtual desktop.

When the virtual disk (vdisk) master image is available from the network, the VM on a target device no longer needs its local hard disk drive (HDD) to operate; it boots directly from the network and behaves as if it were running from a local drive on the target device, which is why PVS is recommended for stateless virtual desktops. PVS often is not used for dedicated virtual desktops because the write cache is not stored on shared storage. PVS is also used with Microsoft Roaming Profiles (MSRPs) so that the user's profile information can be separated out and reused. Profile data is available from shared storage.

It is a best practice to use snapshots for changes to the master VM images and also keep copies as a backup.

Figure 3: Using PVS for a stateless model

3.1.2 Machine Creation Services

Unlike PVS, MCS does not require more servers. Instead, it uses integrated functionality that is built into the hypervisor (VMware ESXi, Citrix XenServer, or Microsoft Hyper-V) and communicates through the respective APIs. Each desktop has one difference disk and one identity disk (as shown in Figure 4). The difference disk is used to capture any changes that are made to the master image. The identity disk is used to store information, such as device name and password.

Figure 4: MCS image and difference/identity disk storage model

The following types of image assignment models for MCS are available:

- Pooled-random: Desktops are assigned randomly. When they log off, the desktop is free for another user. When rebooted, any changes that were made are destroyed.
- Pooled-static: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes that are made are destroyed.
- Dedicated: Desktops are permanently assigned to a single user. When a user logs off, only that user can use the desktop, regardless of whether the desktop is rebooted. During reboots, any changes that are made persist across subsequent restarts.

MCS thin provisions each desktop from a master image by using built-in technology to provide each desktop with a unique identity. Only changes that are made to the desktop use more disk space. For this reason, MCS is used for dedicated desktops.

There is a new caching option in Citrix XenDesktop 7.9 onwards for Pooled and Hosted Shared desktops. Figure 5 shows a screenshot of the option.

Figure 5: Caching option for Pooled and Hosted Shared desktops

3.2 Storage model

This section describes the different types of shared or distributed data stored for stateless and dedicated desktops. Stateless and dedicated virtual desktops should have the following common shared storage items:

- The master VM image and snapshots are stored by using Network File System (NFS) or block I/O shared storage.
- The paging file (or vswap) is transient data that can be redirected to NFS storage. In general, it is recommended to disable swapping, which reduces storage use (shared or local). The desktop memory size should be chosen to match the user workload rather than depending on a smaller image and swapping, which reduces overall desktop performance.

- User profiles (from MSRP) are stored by using Common Internet File System (CIFS).
- User data files are stored by using CIFS.

Dedicated virtual desktops or stateless virtual desktops that need mobility require the following items to be on NFS or block I/O shared storage:

- Difference disks are used to store the user's changes to the base VM image. The difference disks are per user and can become quite large for dedicated desktops.
- Identity disks are used to store the computer name and password and are small.

Stateless desktops can use local solid-state drive (SSD) storage for the PVS write cache, which is used to store all image writes on local SSD storage. These image writes are discarded when the VM is shut down.

3.3 VMware vSAN

VMware vSAN is a Software Defined Storage (SDS) solution embedded in the ESXi hypervisor. vSAN pools flash caching devices and magnetic disks across three or more 10 GbE connected servers into a single shared datastore that is resilient and simple to manage.

vSAN can be scaled to 64 servers, with each server supporting up to 5 disk groups, with each disk group consisting of a single flash caching device (SSD) and up to 7 HDDs. Performance and capacity can easily be increased simply by adding more components: disks, flash, or servers.

The flash cache is used to accelerate both reads and writes. Frequently read data is kept in read cache; writes are coalesced in cache and destaged to disk efficiently, greatly improving application performance.

vSAN manages data in the form of flexible data containers that are called objects. The following types of objects for VMs are available:

- VM Home
- VM swap (.vswp)
- VMDK (.vmdk)
- Snapshots (.vmsn)

Internally, VM objects are split into multiple components that are based on performance and availability requirements that are defined in the VM storage profile. These components are distributed across multiple hosts in a cluster to tolerate simultaneous failures and meet performance requirements. vSAN uses a distributed RAID architecture to distribute data across the cluster. Components are distributed with the use of the following two storage policies:

- Number of stripes per object. It uses the RAID 0 method.
- Number of failures to tolerate. It uses either the RAID-1 or RAID-5/6 method. RAID-5/6 is currently supported for the all-flash configuration only.
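The cluster and disk-group limits described at the start of this section translate directly into raw capacity. The following is a minimal sketch of that arithmetic, assuming a uniform cluster; the function name and parameters are illustrative, not part of any vSAN API.

```python
# Minimal sketch: validate a vSAN disk-group layout against the limits
# described above (3-64 servers, up to 5 disk groups per server, 1 cache
# device plus up to 7 capacity drives per group) and estimate raw capacity.

def raw_vsan_capacity_tb(servers: int, disk_groups_per_server: int,
                         capacity_drives_per_group: int,
                         drive_size_tb: float) -> float:
    """Return raw (pre-replication) capacity in TB for a uniform cluster."""
    if not 3 <= servers <= 64:
        raise ValueError("vSAN clusters span 3 to 64 servers")
    if not 1 <= disk_groups_per_server <= 5:
        raise ValueError("each server supports up to 5 disk groups")
    if not 1 <= capacity_drives_per_group <= 7:
        raise ValueError("each disk group holds up to 7 capacity drives")
    return servers * disk_groups_per_server * capacity_drives_per_group * drive_size_tb

# Example: 4 servers, 2 all-flash disk groups each, 2 x 3.84 TB capacity
# SSDs per group (the test configuration used later in this document).
print(raw_vsan_capacity_tb(4, 2, 2, 3.84))  # 61.44 TB raw, before FTT overhead
```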

3.3.1 VMware vSAN storage policies

VMware vSAN uses the Storage Policy-based Management (SPBM) function in vSphere to enable policy-driven virtual machine provisioning, and uses vSphere APIs for Storage Awareness (VASA) to expose vSAN's storage capabilities to vCenter. This approach means that storage resources are dynamically provisioned based on the requested policy, and not pre-allocated as with many traditional storage solutions. Storage services are precisely aligned to VM boundaries; change the policy, and vSAN will implement the changes for the selected VMs.

VMware vSAN has the following storage policies:

Failure Tolerance Method (default: RAID-1; maximum: N/A)
Defines a method used to tolerate failures. RAID-1 uses mirroring and RAID 5/6 uses parity blocks (erasure encoding) to provide space efficiency. RAID-5/6 is supported only for all-flash configurations. RAID 5 requires a minimum of 4 hosts and RAID 6 requires a minimum of 6 hosts. When RAID 5/6 is chosen, RAID 5 is used when FTT=1 and RAID 6 is used when FTT=2.

Number of failures to tolerate (default: 1; maximum: 3)
Defines the number of host, disk, or network failures a VM object can tolerate. For n failures tolerated, n+1 copies of the VM object are created and 2n+1 hosts with storage are required. For example, with FTT=1, RAID-1 uses 2x the storage and RAID-5/6 uses 1.33x the storage. When FTT=2, RAID-1 uses 3x the storage and RAID-5/6 uses 1.5x the storage.

Number of disk stripes per object (default: 1; maximum: 12)
The number of HDDs across which each replica of a VM object is striped. A value higher than 1 might result in better performance, but can result in higher use of resources.

Object space reservation (default: 0%; maximum: 100%)
Percentage of the logical size of the object that should be reserved (or thick provisioned) during VM creation. The rest of the storage object is thin provisioned. If your disk is thick provisioned, 100% is reserved automatically. When deduplication and compression is enabled, this should be set to either 0% (do not apply) or 100%.

Flash read cache reservation (default: 0%; maximum: 100%)
SSD capacity reserved as read cache for the VM object. Specified as a percentage of the logical size of the object. Should be used only to address read performance issues. Reserved flash capacity cannot be used by other objects. Unreserved flash is shared fairly among all objects.

Force provisioning (default: No; maximum: N/A)
If the option is set to Yes, the object is provisioned even if the storage policy cannot be satisfied by the data store. Use this parameter in bootstrapping scenarios and during an outage when standard provisioning is no longer possible. The default of No is acceptable for most production environments.

IOPS limit for object (default: 0; maximum: user defined)
Defines the IOPS limit for a disk and assumes a default block size of 32 KB. Read, write, and cache operations are all considered equivalent. When the IOPS exceeds the limit, I/O is throttled.

Disable object checksum (default: No; maximum: Yes)
Detects corruption caused by hardware/software components, including memory and drives, during read or write operations. Object checksums carry a small disk I/O, memory, and compute overhead and can be disabled on a per-object basis.

The following storage policies and default values are recommended for Citrix XenDesktop linked clones and full clones. Table 1 lists the default storage policies for linked clones.

Table 1: Default storage policy values for linked clones

                                     | Dedicated                                      | Stateless
Storage Policy                       | VM_HOME | Replica | OS Disk | Persistent Disk  | VM_HOME | Replica | OS Disk
Number of disk stripes per object    | 1       | 1       | 1       | 1                | 1       | 1       | 1
Flash-memory read cache reservation  | 0%      | 10%     | 0%      | 0%               | 0%      | 10%     | 0%
Number of failures to tolerate (FTT) | 1       | 1       | 1       | 1                | 1       | 1       | 1
Failure Tolerance Method             | N/A     | N/A     | N/A     | N/A              | N/A     | N/A     | N/A
Force provisioning                   | No      | No      | No      | No               | No      | No      | No
Object-space reservation             | 0%      | 0%      | 0%      | 100%             | 0%      | 0%      | 0%
Disable object checksum              | N/A     | N/A     | N/A     | N/A              | N/A     | N/A     | N/A
IOPS limit for object                | N/A     | N/A     | N/A     | N/A              | N/A     | N/A     | N/A

Table 2 lists the default storage policies for full clones.

Table 2: Default storage policy values for full clones

Storage Policy                       | VM_HOME | Dedicated Full Clone Disk
Number of disk stripes per object    | 1       | 1
Flash-memory read cache reservation  | 0%      | 0%
Number of failures to tolerate (FTT) | 1       | 1
Failure Tolerance Method             | N/A     | N/A
Force provisioning                   | No      | No
Object-space reservation             | 0%      | 100%
Disable object checksum              | N/A     | N/A
IOPS limit for object                | N/A     | N/A
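The capacity arithmetic behind the "Number of failures to tolerate" policy can be made concrete. The sketch below encodes the multipliers and minimum host counts from the policy descriptions above; the helper names are hypothetical.

```python
# Minimal sketch of the FTT capacity math: RAID-1 keeps FTT+1 full copies
# and needs 2n+1 hosts; RAID-5/6 (all-flash only) uses parity for 1.33x
# (FTT=1, RAID 5) or 1.5x (FTT=2, RAID 6) overhead.

def vsan_storage_multiplier(ftt: int, method: str = "RAID-1") -> float:
    if method == "RAID-1":
        return ftt + 1.0               # e.g. FTT=1 -> 2x, FTT=2 -> 3x
    if method == "RAID-5/6":
        return {1: 4 / 3, 2: 1.5}[ftt]  # RAID 5 at FTT=1, RAID 6 at FTT=2
    raise ValueError("unknown failure tolerance method")

def min_hosts(ftt: int, method: str = "RAID-1") -> int:
    if method == "RAID-1":
        return 2 * ftt + 1             # 2n+1 hosts with storage
    return {1: 4, 2: 6}[ftt]           # RAID 5 needs 4 hosts, RAID 6 needs 6

# 100 GB of VM data with FTT=1:
print(100 * vsan_storage_multiplier(1, "RAID-1"))    # 200.0 GB consumed
print(100 * vsan_storage_multiplier(1, "RAID-5/6"))  # ~133.3 GB consumed
```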

4 Operational model

This section describes the options for mapping the logical components of a client virtualization solution onto hardware and software. The Operational model scenarios section gives an overview of the available mappings and has pointers into the other sections for the related hardware. Each subsection contains performance data, has recommendations on how to size for that particular hardware, and a pointer to the BOM configurations that are described in section 5. The last part of this section contains some deployment models for example customer scenarios.

4.1 Operational model scenarios

Figure 6 shows the following operational models (solutions) in Lenovo Client Virtualization: enterprise, small-medium business (SMB), and hyper-converged.

Figure 6: Operational model scenarios

Enterprise (>600 users):
- Traditional: ThinkSystem SR630/SR650 compute/management servers; ESXi hypervisor; NVIDIA GRID 2.0 M60 graphics acceleration; ThinkSystem DS6200 shared storage; 10 GbE networking with 8 or 16 Gb FC.
- Converged: Flex System chassis with ThinkSystem SN550 compute/management servers; ESXi hypervisor; ThinkSystem DS6200 shared storage; 10 GbE networking with 8 or 16 Gb FC.
- Hyper-converged: ThinkSystem SR630/SR650 compute/management servers; ESXi hypervisor; NVIDIA GRID 2.0 M60 graphics acceleration; shared storage not applicable; 10 GbE networking.

SMB (<600 users):
- Traditional: ThinkSystem SR630/SR650 compute/management servers; ESXi hypervisor; NVIDIA GRID 2.0 M60 graphics acceleration; ThinkSystem DS6200 shared storage; 10 GbE networking.
- Converged: not recommended.
- Hyper-converged: as for the enterprise model (this solution spans both halves).

The vertical axis is split into two halves: greater than 600 users is termed Enterprise and less than 600 is termed SMB. The 600 user split is not exact and provides rough guidance between Enterprise and SMB. The last column in Figure 6 (labelled hyper-converged) spans both halves because a hyper-converged solution can be deployed in a linear fashion from a small number of users (100) up to a large number of users (>4000).

The horizontal axis is split into three columns. The left-most column represents traditional rack-based systems with top-of-rack (TOR) switches and shared storage. The middle column represents converged systems where the compute, networking, and sometimes storage are converged into a chassis, such as the Flex

System. The right-most column represents hyper-converged systems and the software that is used in these systems. For the purposes of this reference architecture, the traditional and converged columns are merged for enterprise solutions; the only significant differences are the networking, form factor, and capabilities of the compute servers.

Converged systems are not generally recommended for the SMB space because the converged hardware chassis can add more overhead when only a few compute nodes are needed. Other compute nodes in the converged chassis can be used for other workloads to make this hardware architecture more cost-effective.

4.1.1 Enterprise operational model

For the enterprise operational model, see the following sections for more information about each component, its performance, and sizing guidance:

- 4.2 Hypervisor support
- 4.3 Compute servers for virtual desktops
- 4.5 Graphics acceleration
- 4.6 Management servers
- 4.7 Systems management
- 4.8 Shared storage
- 4.9 Networking
- 4.10 Racks

To show the enterprise operational model for different sized customer environments, four different sizing models are provided for supporting 300, 600, 1200, and 3000 users. The SMB model is the same as the Enterprise model for traditional systems.

4.1.2 Hyper-converged operational model

For the hyper-converged operational model, see the following sections for more information about each component, its performance, and sizing guidance:

- 4.2 Hypervisor support
- 4.4 Compute servers for VMware vSAN
- 4.5 Graphics acceleration
- 4.6 Management servers
- 4.7 Systems management
- 4.9.1 10 GbE networking
- 4.10 Racks

To show the hyper-converged operational model for different sized customer environments, four different sizing models are provided for supporting 300, 600, 1200, and 3000 users. The management server VMs for a hyper-converged cluster can either be in a separate hyper-converged cluster or on traditional shared storage.

4.2 Hypervisor support

This reference architecture was tested with the VMware ESXi 6.5 U1 and Citrix XenServer 7.3 hypervisors.

4.2.1 VMware ESXi

The hypervisor is convenient because it can start from an M.2 boot drive and does not require any additional local storage. Alternatively, ESXi can be booted from SAN shared storage. For the VMware ESXi hypervisor, the number of vCenter clusters depends on the number of compute servers. To use the latest version of ESXi, download the Lenovo custom image from the following website: my.vmware.com/web/vmware/info/slug/datacenter_cloud_infrastructure/vmware_vsphere/6_5#custom_iso

4.2.2 Citrix XenServer

The hypervisor is convenient because it can start from an M.2 boot drive and does not require any additional local storage.

4.3 Compute servers for virtual desktops

This section describes stateless and dedicated virtual desktop models. In some customer environments, both stateless and dedicated desktop models might be required, which requires a hybrid implementation. In this section, stateless refers to MCS Pooled-Random and dedicated refers to MCS Dedicated.

Compute servers are servers that run a hypervisor and host virtual desktops. There are several considerations for the performance of the compute server, including the processor family and clock speed, the number of processors, the speed and size of main memory, and local storage options.

For stateless users, the typical range of memory that is required for each desktop is 2 GB - 6 GB. For dedicated users, the range of memory for each desktop is 2 GB - 8 GB. Designers and engineers that require graphics acceleration might need 8 GB - 16 GB of RAM per desktop. In general, power users that require larger memory sizes also require more virtual processors. The virtual desktop memory should be large enough so that swapping is not needed and vswap can be disabled. This reference architecture standardizes on 2 GB per desktop as the absolute minimum requirement of a Windows 10 desktop.

Windows 10 was used for all of the performance testing. In general, Windows 10 requires 10% to 20% more compute power than Windows 7. The following optimizations were applied to the Windows 10 base image:

- Applied the #VDILIKEAPRO tuning template (developed by Login VSI); see the following for more details: loginvsi.com/blog/520-the-ultimate-windows-10-tuning-template-for-any-vdi-environment
- Set Adobe Acrobat as the default app for PDF files by using the steps in the following webpage: adobe.com/devnet-docs/acrobatetk/tools/adminguide/pdfviewer.html
- Disabled the Windows Modules Installer service on the base image because the CPU utilization can remain high after rebooting all the VMs. By default, this service is set to manual rather than disabled.

For more information, see the BOM for enterprise and SMB compute servers section.

4.3.1 Intel Xeon Scalable processor servers

This section shows the performance results and sizing guidance for Lenovo ThinkSystem compute servers based on the Intel Xeon Scalable processor (Purley/Skylake).

VMware ESXi 6.5 U1 performance results

Table 3 lists the VSImax performance results for different Login VSI 4.1 workloads with ESXi 6.5 U1 and Windows 10.

Table 3: Login VSI Performance for ESXi

Processor | Workload | Stateless | Dedicated
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Office worker | 276 users | 271 users
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Knowledge worker | 260 users | 258 users
Two Scalable 8176 processors, 2.10 GHz, 28C, 165W | Knowledge worker | 418 users | 420 users
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Power worker | 202 users | 195 users
Two Scalable 8176 processors, 2.10 GHz, 28C, 165W | Power worker | 347 users | 347 users

These results indicate the comparative processor performance. The following conclusions can be drawn:

- The performance for stateless and dedicated virtual desktops is similar.
- The Scalable 8176 processor has significantly better performance than the Scalable 6130 processor and is directly proportional to the additional 12 cores, but at an added cost.

Citrix XenServer 7.3 performance results

Table 4 lists the VSImax performance results for different Login VSI 4.1 workloads with XenServer 7.3 and Windows 10.

Table 4: Login VSI Performance for XenServer

Processor | Workload | Stateless | Dedicated
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Office worker | 240 users | 244 users
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Knowledge worker | 223 users | 224 users

Compute server recommendation

The Xeon Scalable 6130 processor has a good cost/user density ratio. Many configurations are bound by memory; therefore, a faster processor might not provide any added value. Some users require a very fast processor and for those users, the Xeon Scalable 8176 processor is a good choice. The default recommendation is two Xeon Scalable 6130 processors and 768 GB of system memory because this configuration provides the best coverage and density for a range of users.

For an office worker, Lenovo testing shows that 200 users per server is a good baseline and has an average of 65% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 240 users per server has an average of 78% usage of the processors. It is important to keep a 25% or more headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1.

For a knowledge worker, Lenovo testing shows that 165 users per server is a good baseline and has an average of 53% usage of the processors in the server. For the degraded failover case, Lenovo testing shows that 198 users per server have an average of 62% usage of the processors.

For a power user, Lenovo testing shows that 125 users per server is a good baseline and has an average of 57% usage of the processors in the server. For the degraded failover case, Lenovo testing shows that 150 users per server have an average of 69% usage of the processors. Moving up to two Xeon Scalable 8176 processors shows those densities increasing to 200 users and 240 users respectively with lower CPU utilizations.

Table 5 summarizes the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 5: Processor usage for ESXi

Processor | Workload | Users per Server | CPU Utilization
Two 6130 | Office worker | 200 users, Normal Mode | 65%
Two 6130 | Office worker | 240 users, Failover Mode | 78%
Two 6130 | Knowledge worker | 165 users, Normal Mode | 53%
Two 6130 | Knowledge worker | 198 users, Failover Mode | 62%
Two 6130 | Power worker | 125 users, Normal Mode | 57%
Two 6130 | Power worker | 150 users, Failover Mode | 69%

Table 6 summarizes the processor usage with XenServer for the recommended user counts for normal mode and failover mode.

Table 6: Processor usage for XenServer

Processor | Workload | Users per Server | CPU Utilization
Two 6130 | Office worker | 180 users, Normal Mode | 71%
Two 6130 | Office worker | 216 users, Failover Mode | 89%
Two 6130 | Knowledge worker | 165 users, Normal Mode | 72%
Two 6130 | Knowledge worker | 198 users, Failover Mode | 90%
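The server counts in the tables that follow come from straightforward arithmetic on these measured densities. The sketch below shows that calculation, using the ESXi densities from Table 5; the function and dictionary names are illustrative.

```python
import math

# Minimal sketch of the sizing arithmetic: normal mode uses the baseline
# density; the failover figure checks how many servers can still carry the
# full user load at the higher (degraded) density after one server fails.

DENSITY = {  # users per server: (normal, failover), ESXi figures from Table 5
    "office":    (200, 240),
    "knowledge": (165, 198),
    "power":     (125, 150),
}

def servers_needed(users: int, workload: str) -> tuple[int, int]:
    normal, failover = DENSITY[workload]
    return math.ceil(users / normal), math.ceil(users / failover)

# Example: 1200 office workers need 6 servers normally; 5 servers can carry
# the load in the degraded case, which is the recommended 5:1 failover ratio.
print(servers_needed(1200, "office"))  # (6, 5)
```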

Table 7 lists the recommended number of virtual desktops per server for different workload types and VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 7: Recommended number of virtual desktops per server

Workload | Office worker | Knowledge worker | Power worker
Processor | Two 6130 | Two 6130 | Two 6130
VM memory size | 3 GB | 4 GB | 5 GB
System memory | 768 GB | 768 GB | 768 GB
Desktops per server (normal mode) | 200 | 165 | 125
Desktops per server (failover mode) | 240 | 198 | 150

Table 8 lists the approximate number of compute servers that are needed for different numbers of users and Office worker workloads.

Table 8: Compute servers needed for Office workers and different numbers of users

Office workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @200 users (normal) | 2 | 3 | 6 | 15
Compute servers @240 users (failover) | 2 | 3 | 5 | 13

Table 9 lists the approximate number of compute servers that are needed for different numbers of users and Knowledge worker workloads.

Table 9: Compute servers needed for Knowledge workers and different numbers of users

Knowledge workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @165 users (normal) | 2 | 4 | 8 | 19
Compute servers @198 users (failover) | 2 | 4 | 7 | 16

Table 10 lists the approximate number of compute servers that are needed for different numbers of users and Power worker workloads.

Table 10: Compute servers needed for Power workers and different numbers of users

Power workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @125 users (normal) | 3 | 5 | 10 | 24
Compute servers @150 users (failover) | 2 | 4 | 8 | 20

4.4 Compute servers for VMware vSAN

This section presents the compute servers for the hyper-converged system using VMware vSAN 6.6 with ESXi 6.5 U1. In this section, stateless refers to MCS Pooled-Random and dedicated refers to MCS Dedicated.

Additional processing and memory are required to support vSAN. Systems with two disk groups need approximately 32 GB and those with four disk groups need 50 GB. See the following for more details on the calculation of vSAN system memory: kb.vmware.com/selfservice/microsites/search.do?cmd=displaykc&externalid=

As the price per GB for flash memory continues to reduce, there is a trend to use all SSDs to provide the overall best performance for vSAN. For more information, see the BOM for hyper-converged compute servers section.

4.4.1 Intel Xeon Scalable processor servers

VMware vSAN was tested by using Login VSI 4.1. Four Lenovo ThinkSystem SR630 servers with two Xeon Scalable 6130 processors and 768 GB of system memory were networked together by using a 10 GbE TOR switch with two 10 GbE connections per server.

Each server was configured with two all-flash disk groups because a single disk group does not provide the necessary resiliency. Each disk group had one 800 GB SSD and two 3.84 TB SSDs. This provided more than enough capacity for linked clone or full clone VMs. Additional 3.84 TB SSDs could be added for more capacity but may not be needed when all-flash de-duplication is used.

Table 11 lists the per server VSImax performance results for different Login VSI 4.1 workloads with ESXi 6.5 U1 and Windows 10. Linked clones were used with the VMware default storage policy of number of failures to tolerate (FTT) of 0 and stripe of 1. No de-duplication or compression was used.

Table 11: Login VSI Performance

Processor | Workload | Stateless | Dedicated
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Office worker | 266 users | 269 users
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Knowledge worker | 210 users | 210 users
Two Scalable 6130 processors, 2.10 GHz, 16C, 125W | Power worker | 185 users | 185 users

These results indicate the comparative processor performance. The following conclusions can be drawn:

- The performance for dedicated virtual desktops is slightly better than stateless desktops.
- The Xeon Scalable 6130 processor has a good cost/user density ratio. Many configurations are bound by memory; therefore, a faster processor might not provide any added value.

The default recommendation is two Xeon Scalable 6130 processors and 768 GB of system memory because this configuration provides the best coverage and density for a range of users.

For an office worker, Lenovo testing shows that 200 users per server is a good baseline and has an average of 69% usage of the processors in the server. If a server goes down, users on that server must be transferred to the remaining servers. For this degraded failover case, Lenovo testing shows that 240 users per server have an average of 83% usage of the processors. It is important to keep this 25% headroom on servers to cope with possible failover scenarios. Lenovo recommends a general failover ratio of 5:1. By using a target of 200 users per server, the maximum number of office workers is 12,800 in a 64 node cluster.

For a knowledge worker, Lenovo testing shows that 155 users per server is a good baseline and has an average of 66% usage of the processors in the server. For the degraded failover case, Lenovo testing shows that 186 users per server have an average of 81% usage of the processors. By using a target of 155 users per server, the maximum number of knowledge workers is 9,920 in a 64 node cluster.

For a power user, Lenovo testing shows that 125 users per server is a good baseline and has an average of 72% usage of the processors in the server. For the degraded failover case, Lenovo testing shows that 150

users per server have an average of 85% usage of the processors. By using a target of 125 users per server, the maximum number of power users is 8,000 in a 64 node cluster.

Table 12 summarizes the processor usage with ESXi for the recommended user counts for normal mode and failover mode.

Table 12: Processor usage

Processor | Workload | Users per Server | CPU Utilization
Two 6130 | Office worker | 200 users, Normal Mode | 69%
Two 6130 | Office worker | 240 users, Failover Mode | 83%
Two 6130 | Knowledge worker | 155 users, Normal Mode | 66%
Two 6130 | Knowledge worker | 186 users, Failover Mode | 81%
Two 6130 | Power worker | 125 users, Normal Mode | 72%
Two 6130 | Power worker | 150 users, Failover Mode | 85%

Table 13 lists the recommended number of virtual desktops per server for different workload types and VM memory sizes. The number of users is reduced in some cases to fit within the available memory and still maintain a reasonably balanced system of compute and memory.

Table 13: Recommended number of virtual desktops per server

Workload | Office worker | Knowledge worker | Power worker
Processor | Two 6130 | Two 6130 | Two 6130
VM memory size | 3 GB | 4 GB | 5 GB
System memory | 768 GB | 768 GB | 768 GB
Memory overhead of vSAN | 32 GB | 32 GB | 32 GB
Desktops per server (normal mode) | 200 | 155 | 125
Desktops per server (failover mode) | 240 | 186 | 150

Table 14 lists the approximate number of compute servers that are needed for different numbers of users and Office worker workloads.

Table 14: Compute servers needed for Office workers and different numbers of users

Office workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @200 users (normal) | 3 | 3 | 6 | 15
Compute servers @240 users (failover) | 2 | 3 | 5 | 13

Table 15 lists the approximate number of compute servers that are needed for different numbers of users and Knowledge worker workloads.

Table 15: Compute servers needed for Knowledge workers and different numbers of users

Knowledge workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @155 users (normal) | 3 | 4 | 8 | 20
Compute servers @186 users (failover) | 2 | 4 | 7 | 17

Table 16 lists the approximate number of compute servers that are needed for different numbers of users and Power worker workloads.

Table 16: Compute servers needed for Power workers and different numbers of users

Power workers | 300 users | 600 users | 1200 users | 3000 users
Compute servers @125 users (normal) | 3 | 5 | 10 | 24
Compute servers @150 users (failover) | 2 | 4 | 8 | 20

The Login VSI measurements above use a default login interval of 30 seconds. Table 17 shows what happens when the login rate for a cluster of 4 all-flash servers is increased, as might happen during a login storm. In order to maintain the 4 second response time, the number of supported desktops would need to be reduced.

Table 17: Performance during login storm

Workload | Number of VMs | Login Interval | Stateless Linked Clones | Dedicated Linked Clones
Knowledge worker | 940 | Every 30 seconds | 842 users | 842 users
Knowledge worker | 940 | Every 10 seconds | 742 users | 835 users
Knowledge worker | 940 | Every 5 seconds | 528 users | 735 users
Knowledge worker | 940 | Every 2 seconds | 243 users | 456 users

A boot storm occurs when a substantial number of VMs are all started within a short period of time. Booting a large number of VMs simultaneously requires large IOPS; otherwise, the VMs become slow and unresponsive.

Different numbers of VMs were booted on a cluster of 4 vSAN all-flash servers. The VMs were powered off in vCenter and the boot storm was created by powering on all of the VMs simultaneously. The time for all of the VMs to become visible was measured.

Figure 7 shows the boot times for different numbers of VMs in the 4 node cluster. With an even spread of VMs on each node, the boot time for the VMs on each node was similar to the overall cluster boot time. These results show that even a large number of VMs can be very quickly restarted using a vSAN all-flash cluster.

Figure 7: Boot storm performance (boot time in minutes versus number of virtual machines)

4.5 Graphics acceleration

The VMware ESXi hypervisor supports the following options for graphics acceleration:

- Dedicated GPU with one GPU per user, which is called virtual dedicated graphics acceleration (vDGA) mode.
- GPU hardware virtualization (vGPU) that partitions each GPU.
- Shared GPU with users sharing a GPU, which is called virtual shared graphics acceleration (vSGA) mode and is not recommended because of user contention for shared use of the GPU.

VMware also provides software emulation of a GPU, which can be processor-intensive and disruptive to other users who have a choppy experience because of reduced processor performance. Software emulation is not recommended for any user who requires graphics acceleration.

The XenServer hypervisor supports the following options for graphics acceleration:

- Pass-through mode (which is similar to vDGA under ESXi)
- GPU hardware virtualization (vGPU) that partitions each GPU (similar to vGPU under ESXi)

The vDGA (or pass-through) option has a low user density as it restricts a single user to access each very powerful GPU. This option is not flexible and is no longer cost effective even for high-end power users. Therefore, vDGA is no longer recommended, especially given that the performance of the equivalent vGPU mode is similar.

When using the vGPU option with ESXi 6.5 and the latest drivers from NVIDIA, it is necessary to change the default GPU mode from Shared (vSGA) to Shared Direct (vGPU) for each GPU using VMware vCenter. This enables the correct GPU support for the VMs, which would otherwise result in the VM not powering on correctly and a "standard graphics resources not available" error message. The host needs to be rebooted for the changes to take effect.

The performance of graphics acceleration was tested using the Lenovo ThinkSystem SR650 servers. Each server supports up to two GPU adapters. The Heaven benchmark is used to measure the per user frame rate for different GPUs, resolutions, and image quality. This benchmark is graphics-heavy and is fairly realistic for designers and engineers. Power users or knowledge workers usually have less intense graphics workloads

and can achieve higher frame rates. Table 18 lists the results of the Heaven benchmark as FPS that are available to each user with the GRID 2.0 M60 adapter by using vGPU mode with DirectX 11.

Table 18: Performance of GRID 2.0 M60 vGPU modes with DirectX 11

Quality | Tessellation | Anti-Aliasing | Resolution | M60-8Q | M60-4Q | M60-2Q | M60-4A | M60-2A
High | Normal | 0 | 1280x1024 | | | | Untested | Untested
High | Normal | 0 | 1680x1050 | | | Untested | N/A | N/A
High | Normal | 0 | 1920x1200 | | | Untested | N/A | N/A
Ultra | Extreme | 8 | 1280x1024 | | | | Untested | Untested
Ultra | Extreme | 8 | 1680x1050 | | | | N/A | N/A
Ultra | Extreme | 8 | 1920x1200 | | | | N/A | N/A
Ultra | Extreme | 8 | 2560x1440 | | | Untested | N/A | N/A
Ultra | Extreme | 8 | 2560x1600 | | | Untested | N/A | N/A

Lenovo recommends that a medium to high powered CPU, such as the Xeon Scalable 6130, is used because accelerated graphics applications tend to place extra load on the processor. For vGPU mode, Lenovo recommends at least 384 GB of server memory. Because there are many variables when graphics acceleration is used, Lenovo recommends that testing is done in the customer environment to verify the performance for the required user workloads.

For more information about the bill of materials (BOM) for GRID 2.0 M60 GPUs for Lenovo ThinkSystem SR650 servers, see the following corresponding BOMs:

- BOM for enterprise and SMB compute servers
- BOM for hyper-converged compute servers
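For planning vGPU user density, the per-board arithmetic is simple: a GRID 2.0 M60 board carries two GPUs with 8 GB of framebuffer each, and each vGPU profile reserves a fixed framebuffer slice. The sketch below uses standard NVIDIA GRID profile sizes rather than figures from Table 18; the names are illustrative.

```python
# Minimal sketch of vGPU user density for the GRID 2.0 M60 (two GPUs per
# board, 8 GB of framebuffer each). Density per profile is the framebuffer
# divided by the profile size; these are standard NVIDIA GRID numbers,
# not measurements from this document.

M60_GPUS_PER_BOARD = 2
M60_FB_PER_GPU_GB = 8

PROFILE_FB_GB = {"M60-8Q": 8, "M60-4Q": 4, "M60-2Q": 2, "M60-4A": 4, "M60-2A": 2}

def users_per_board(profile: str) -> int:
    return M60_GPUS_PER_BOARD * (M60_FB_PER_GPU_GB // PROFILE_FB_GB[profile])

for p in PROFILE_FB_GB:
    print(p, users_per_board(p))  # M60-8Q: 2 users ... M60-2Q/M60-2A: 8 users
```

With two M60 boards in an SR650, these densities double per server, which is why the lighter profiles are usually preferred for knowledge workers.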

4.6 Management servers

Management servers should have the same hardware specification as compute servers so that they can be used interchangeably in a worst-case scenario. The Citrix XenDesktop management servers also use the same hypervisor, but have management VMs instead of user desktops. Table 19 lists the VM requirements and performance characteristics of each management service.

Table 19: Characteristics of XenDesktop and ESXi management services

Management service VM | Virtual processors | System memory | Storage | Windows OS | HA needed | Performance characteristic
Delivery controller | 4 | 8 GB | 60 GB | 2016 | Yes | 5000 user connections
Web Interface | 4 | 4 GB | 60 GB | 2016 | Yes | 30,000 connections per hour
Citrix licensing server | 2 | 4 GB | 60 GB | 2016 | No | 170 licenses per second
XenDesktop SQL server | 2 | 8 GB | 60 GB | 2016 | Yes | 5000 users
PVS servers | 4 | 32 GB | 60 GB (depends on number of images) | 2016 | Yes | Up to 1000 desktops; memory should be a minimum of 2 GB plus 1.5 GB per image served
vCenter server | 8 | 16 GB | 60 GB | 2016 | No | Up to 2000 desktops
vCenter SQL server | 4 | 8 GB | 200 GB | 2016 | Yes | Double the virtual processors and memory for more than 2500 users

PVS servers often are run natively on Windows servers. The testing showed that they can run well inside a VM, if it is sized per Table 19. The disk space for PVS servers is related to the number of provisioned images.

Table 20 lists the number of management VMs for each size of users following the high availability and performance characteristics. The number of vCenter servers is half of the number of vCenter clusters because each vCenter server can handle two clusters of up to 1000 desktops.

Table 20: Management VMs needed

XenDesktop management service VM | 300 users | 600 users | 1200 users | 3000 users
Delivery Controllers | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)
Includes Citrix Licensing server | Y | Y | N | N
Includes Web server | Y | Y | N | N
Web Interface | N/A | N/A | 2 (1+1) | 2 (1+1)
Citrix licensing servers | N/A | N/A | 1 | 1
XenDesktop SQL servers | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)
PVS servers for stateless case only | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)

ESXi management service VM | 300 users | 600 users | 1200 users | 3000 users
vCenter servers | 1 | 1 | 1 | 2
vCenter SQL servers | 2 (1+1) | 2 (1+1) | 2 (1+1) | 2 (1+1)

Each management VM requires a certain amount of virtual processors, memory, and disk. There is enough capacity in the management servers for all of these VMs. Table 21 lists an example mapping of the management VMs to the two physical management servers for 3000 users.

Table 21: Management server VM mapping (3000 users)

Management service for 3000 stateless users | Management server 1 | Management server 2
vCenter servers (2) | 1 | 1
vCenter SQL server (2) | 1 | 1
XenDesktop SQL server (2) | 1 | 1
Delivery controller (2) | 1 | 1
Web Interface (2) | 1 | 1
License server (1) | 1 |
PVS servers for stateless desktops (2) | 1 | 1

It is assumed that common services, such as Microsoft Active Directory, Dynamic Host Configuration Protocol (DHCP), domain name server (DNS), and Microsoft licensing servers exist in the customer environment.

For shared storage systems that support block data transfers only, it is also necessary to provide some file I/O servers that support CIFS or NFS shares and translate file requests to the block storage system. For high availability, two or more Windows storage servers are clustered.

Based on the number and type of desktops, Table 22 lists the recommended number of physical management servers. In all cases, there is redundancy in the physical management servers and the management VMs. Note that for a small number of users, the management VMs could be executed on the same servers as the virtual desktop VMs.

Table 22: Management servers needed

Management servers | 300 users | 600 users | 1200 users | 3000 users
Stateless desktop model | 2 | 2 | 2 | 2
Dedicated desktop model | 2 | 2 | 2 | 2
Windows Storage Server | 2 | 2 | 2 | 2

For more information, see the BOM for enterprise and SMB management servers section.
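A quick way to validate a mapping such as Table 21 is to total the Table 19 resource sizes for the VMs placed on one server. A minimal sketch, using the 3000-user stateless placement for management server 1; the dictionary and variable names are illustrative.

```python
# Minimal sketch: sum the Table 19 vCPU and memory sizes for the management
# VMs that Table 21 places on management server 1 (3000 stateless users).

VM_SIZE = {  # name: (virtual processors, memory in GB), from Table 19
    "vcenter": (8, 16), "vcenter_sql": (4, 8), "xd_sql": (2, 8),
    "delivery_controller": (4, 8), "web_interface": (4, 4),
    "license_server": (2, 4), "pvs": (4, 32),
}

server1 = ["vcenter", "vcenter_sql", "xd_sql", "delivery_controller",
           "web_interface", "license_server", "pvs"]

vcpus = sum(VM_SIZE[vm][0] for vm in server1)
memory = sum(VM_SIZE[vm][1] for vm in server1)
print(f"{vcpus} vCPUs, {memory} GB")  # 28 vCPUs, 80 GB: an easy fit on a
# compute-class server with two 16-core processors and 768 GB of memory
```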

4.7 Systems management

Lenovo XClarity Administrator is a centralized resource management solution that reduces complexity, speeds up response, and enhances the availability of Lenovo server systems and solutions. The Lenovo XClarity Administrator provides agent-free hardware management for Lenovo's ThinkSystem and System x rack servers and Flex System compute nodes and components, including the Chassis Management Module (CMM) and Flex System I/O modules. Figure 8 shows the Lenovo XClarity Administrator interface, where Flex System components and rack servers are managed and are seen on the dashboard. Lenovo XClarity Administrator is a virtual appliance that is quickly imported into a virtualized environment.

Figure 8: XClarity Administrator interface

4.8 Shared storage

VDI workloads, such as virtual desktop provisioning, VM loading across the network, and access to user profiles and data files, place huge demands on network shared storage.

Experimentation with VDI infrastructures shows that the input/output operations per second (IOPS) performance takes precedence over storage capacity. This precedence means that more slower-speed drives are needed to match the performance of fewer higher-speed drives. Even with the fastest HDDs available today (15k rpm), there can still be excess capacity in the storage system because extra spindles are needed to provide the IOPS performance. From experience, this extra storage is more than sufficient for the other types of data such as SQL databases and transaction logs.

The large rate of IOPS, and therefore the large number of drives, needed for dedicated virtual desktops can be ameliorated to some extent by caching data in flash memory or SSD drives. The storage configurations are based on the peak performance requirement, which usually occurs during the so-called logon storm. This is when all workers at a company arrive in the morning and try to start their virtual desktops, all at the same time.

It is always recommended that user data files (shared folders) and user profile data are stored separately from the user image. By default, this has to be done for stateless virtual desktops and should also be done for dedicated virtual desktops. It is assumed that 100% of the users at peak load times require concurrent access to user data and profiles.

In View 5.1, VMware introduced the View Storage Accelerator (VSA) feature that is based on the ESXi Content-Based Read Cache (CBRC). VSA provides a per-host RAM-based solution for VMs, which considerably reduces the read I/O requests that are issued to the shared storage. It is recommended to use the maximum VSA cache size of 2 GB. Performance measurements by Lenovo show that VSA has a negligible effect on the number of virtual desktops that can be used on a compute server while it can reduce the read requests to storage by one-fifth.

Table 23 summarizes the peak IOPS and disk space requirements for virtual desktops on a per-user basis. Dedicated virtual desktops require a high number of IOPS and a large amount of disk space for the linked clones. Note that the linked clones also can grow in size over time.

Table 23: Shared virtual desktop shared storage performance requirements

Shared virtual desktops | Protocol | Size | IOPS | Write %
Master image | NFS or Block | 30 GB | |
Difference disks and user AppData folder | NFS or Block | 10 GB | 18 | 85%
vswap (recommended to be disabled) | NFS or Block | | |
User files | CIFS/NFS | 5 GB | 1 | 75%
User profile (through MSRP) | CIFS | 100 MB | |

The sizes and IOPS for user data files and user profiles that are listed in Table 23 can vary depending on the customer environment. For example, power users might require 10 GB and five IOPS for user files because of the applications they use.
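Aggregating the per-user figures in Table 23 gives a first-order estimate of the peak IOPS a storage array must sustain. A minimal sketch, assuming 100% concurrency at peak and applying the dedicated-desktop difference-disk figure to every user; the names are illustrative.

```python
# Minimal sketch: aggregate the per-user peak IOPS figures from Table 23,
# assuming every user is active concurrently during the logon storm.

PER_USER_IOPS = {            # from Table 23 (dedicated desktops)
    "difference_appdata": 18,  # difference disks and user AppData, 85% writes
    "user_files": 1,           # user files over CIFS/NFS
}

def peak_iops(users: int) -> int:
    return users * sum(PER_USER_IOPS.values())

print(peak_iops(600))   # 11,400 IOPS for 600 dedicated users at peak
print(peak_iops(3000))  # 57,000 IOPS for 3000 dedicated users at peak
```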

It is assumed that 100% of the users at peak load times require concurrent access to user data files and profiles.

Many customers need a hybrid environment of stateless and dedicated desktops for their users. The IOPS for dedicated users outweigh those for stateless users; therefore, it is best to bias towards dedicated users in any storage controller configuration. The storage configurations that are presented in this section include conservative assumptions about the VM size, changes to the VM, and user data sizes to ensure that the configurations can cope with the most demanding user scenarios.

4.8.1 Lenovo ThinkSystem DS6200 storage array

The Lenovo ThinkSystem DS6200 storage array supports up to 240 drives by using up to 9 expansion enclosures. Drives can be HDDs or SSDs in a variety of sizes and speeds. An SSD-based cache can be provided for each of the redundant controllers. This tiered storage support also allows a mixture of different disk drives. Slower drives can be used for shared folders and profiles; faster drives and SSDs can be used for dedicated virtual desktops and desktop images.

To support file I/O (CIFS and NFS) into Lenovo storage, Windows storage servers must be added, as described in the Management servers section.

Experimentation with the DS6200 shows that an array of 20 SSDs can support at least 3576 stateless users, assuming a new login every 2 seconds. Figure 9 shows the IOPS graphs for the two controllers during the Login VSI test of 4000 users. The graphs show a maximum of 22,000 IOPS, or about 6 per VM.

Figure 9: DS6200 IOPS for all SSD array

Figure 10 shows the Login VSI chart for the test of 3590 users. The response time linearly increases but never consistently exceeds the 4 second threshold. The maximum number of stateless desktops is approximately 3600.

Figure 10: Login VSI chart for all SSD array

Testing with a hybrid array of SSDs and HDDs shows good performance until the SSD cache is full and then the performance drops sharply. Lenovo recommends using all SSDs to provide the best performance both in terms of general usage and for boot and login storms.

Note that de-duplication and compression features are not available. Therefore, the required disk capacity can be determined from the storage capacity of the linked clone or full clone VMs, master images, and user storage.

Table 24 lists the storage configuration that is needed for each of the different numbers of stateless users. It is assumed that the master images are 40 GB (actual) with linked clones of 3 GB each. Each user requires up to 5 GB of persistent storage. The total storage includes 30% capacity for growth.

Table 24: Storage configuration for stateless users

Stateless storage | 300 users | 600 users | 1200 users | 3000 users
Master image count | 1 | 1 | 2 | 4
Master image per controller with backup | 160 GB | 160 GB | 320 GB | 640 GB
Stateless linked clones (no growth) | 1800 GB | 3600 GB | 7200 GB | 18000 GB
User storage | 1500 GB | 3000 GB | 3600 GB |
Total storage (with 30% growth) | 4.4 TB | 8.6 TB | 14 TB | 33 TB
Number of 800 GB SSDs (RAID 5) | | | |
Number of 3.84 TB SSDs (RAID 5) | | | |
ThinkSystem DS6200 control enclosures | | | |
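The totals in Table 24 follow from simple arithmetic over the stated assumptions. The sketch below reproduces the 300-user case; the 6 GB per-user linked-clone figure is inferred from the table rows rather than stated in the text, so treat all parameters as illustrative.

```python
# Minimal sketch of the stateless sizing arithmetic behind Table 24:
# master images (with backup, per controller) + linked clones + user
# storage, plus 30% headroom for growth. Parameters are assumptions.

def stateless_storage_gb(users: int, masters_gb: float = 160,
                         clone_gb: float = 6, user_gb: float = 5,
                         growth: float = 0.30) -> float:
    raw = masters_gb + users * (clone_gb + user_gb)
    return raw * (1 + growth)

print(stateless_storage_gb(300) / 1024)  # ~4.4 TB, matching Table 24
```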

Table 25 lists the storage configuration that is needed for each of the different numbers of dedicated users. It is assumed that the master images are 40 GB (actual) with linked clone growth of up to 50%. Each user requires up to 5 GB of persistent storage.

Table 25: Storage configuration for dedicated users

Dedicated storage | 300 users | 600 users | 1200 users | 3000 users
Master image count | 1 | 1 | 2 | 4
Master image per controller with backup | 160 GB | 160 GB | 320 GB | 640 GB
Dedicated linked clones (50% growth) | 6900 GB | 13800 GB | 27600 GB | 69000 GB
User storage | 1500 GB | 3000 GB | 3600 GB |
Total storage (no growth) | 9 TB | 16.6 TB | 31 TB | 83 TB
Number of 800 GB SSDs (RAID 5) | | | |
Number of 3.84 TB SSDs (RAID 5) | | | |
ThinkSystem DS6200 control enclosures | | | |

Refer to the BOM for shared storage for more details.

4.9 Networking

The main driver for the type of networking that is needed for VDI is the connection to storage. If the shared storage is block-based (such as the Lenovo ThinkSystem DS6200 storage array), it is likely that a SAN based on 8 or 16 Gbps FC or 10 GbE iSCSI connectivity is needed. Other types of storage can be network attached by using 1 Gb or 10 Gb Ethernet. For vSAN, 10 GbE networking should always be used. Also, user and management virtual local area networks (VLANs) are available that require 1 Gb or 10 Gb Ethernet, as described in the Lenovo Client Virtualization reference architecture, which is available at this website: lenovopress.com/lp0756.

Automated failover and redundancy of the entire network infrastructure and shared storage is important. This failover and redundancy is achieved by having at least two of everything and ensuring that there are dual paths between the compute servers, management servers, and shared storage.

If only a single Flex System Enterprise Chassis is used, the chassis switches are sufficient and no other TOR switch is needed. For rack servers or more than one Flex System Enterprise Chassis, TOR switches are required.

For more information, see the BOM for networking section.

4.9.1 10 GbE networking

For 10 GbE networking and the use of CIFS, NFS, or iSCSI, the Lenovo RackSwitch G8124E and G8272 TOR switches are recommended because they support VLANs by using Virtual Fabric. Redundancy and automated failover is available by using link aggregation, such as Link Aggregation Control Protocol (LACP), and two of everything. For the Flex System chassis, pairs of the EN4093R switch should be used and connected to a G8124 or G8272 TOR switch. The TOR 10 GbE switches are needed for multiple Flex chassis or external connectivity. iSCSI also requires converged network adapters (CNAs) that have the LOM extension. Table 26 lists the TOR 10 GbE network switches for each user size.

Table 26: TOR 10 GbE network switches needed

10 GbE TOR network switch | 300 users | 600 users | 1200 users | 3000 users
G8124E 24-port switch | | | |
G8272 switch | | | |

4.9.2 1 GbE administration networking

A 1 GbE network should be used to administer all of the other devices in the system. Separate 1 GbE switches are used for the IT administration network. Lenovo recommends that redundancy is also built into this network at the switch level. At minimum, a second switch should be available in case a switch goes down. Table 27 lists the number of 1 GbE switches for each user size.

Table 27: TOR 1 GbE network switches needed

1 GbE TOR network switch | 300 users | 600 users | 1200 users | 3000 users
G8052 48-port switch | | | |

Table 28 shows the number of 1 GbE connections that are needed for the administration network and switches for each type of device. The total number of connections is the sum of the device counts multiplied by the number of each device.

Table 28: 1 GbE connections needed

Device | Number of 1 GbE connections for administration
ThinkSystem rack server | 1
Flex System Enterprise Chassis CMM | 2
Flex System Enterprise Chassis switches | 1 per switch (optional)
Lenovo ThinkSystem DS6200 storage array | 4
TOR switches | 1

4.10 Racks

The number of racks and chassis for Flex System compute nodes depends upon the precise configuration that is supported and the total height of all of the component parts: servers, storage, networking switches, and Flex System Enterprise Chassis (if applicable). The number of racks for ThinkSystem servers is also dependent on the total height of all of the components. For more information, see the BOM section.

4.10 Racks
The number of racks and chassis for Flex System compute nodes depends on the precise configuration that is supported and on the total height of all of the component parts: servers, storage, networking switches, and Flex System Enterprise Chassis (if applicable). The number of racks for ThinkSystem servers also depends on the total height of all of the components. For more information, see the BOM section.

4.11 Deployment models
This section contains the following example deployment model: vSAN hyper-converged using ThinkSystem servers.

Deployment example 1: vSAN hyper-converged using ThinkSystem servers
This deployment example supports 1200 dedicated virtual desktops with VMware vSAN in hybrid mode (SSDs and HDDs). Each VM needs 3 GB of RAM. Six servers are needed, assuming 200 users per server with the Office worker profile. The design tolerates one complete server failure, with enough capacity remaining to run all 1200 users on the five surviving servers (ratio of 5:1).

Each virtual desktop with Windows 10 takes 40 GB. The total capacity would be 126 TB, assuming full clones of 1200 VMs, one complete mirror of the data (FTT=1), space for swapping, and 30% extra capacity. However, the use of linked clones for vSAN, or of deduplication for all-flash configurations, means that a substantially smaller capacity is sufficient.

This example uses six ThinkSystem SR630 1U servers and two G8124E RackSwitch TOR switches. Each server has two Xeon Scalable 6130 processors, 768 GB of memory, two network cards, two 800 GB SSDs, and six 1.2 TB HDDs. Two disk groups, each with one SSD and three HDDs, are used for vSAN. Figure 11 shows the deployment configuration for this example.

Figure 11: Deployment configuration for 1200 dedicated users using vSAN on ThinkSystem servers
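The 126 TB figure can be sanity-checked with a few lines of arithmetic. In the sketch below, the desktop data is mirrored for FTT=1 while the 3 GB per-VM swap space is counted once; that split is an assumption about how the figure was derived, not a statement of vSAN policy, but it lands on the same total.

```python
# Sanity check of the ~126 TB capacity figure for 1200 dedicated full-clone
# desktops on vSAN. Mirroring only the 40 GB desktop data (FTT=1) and counting
# the 3 GB per-VM swap once is an assumed derivation, not documented policy.

VMS = 1200
DESKTOP_GB = 40       # Windows 10 full clone
SWAP_GB = 3           # matches the 3 GB of RAM per VM
MIRROR = 2            # FTT=1 keeps one complete extra copy of the data
SLACK = 1.30          # 30% extra capacity

total_gb = (VMS * DESKTOP_GB * MIRROR + VMS * SWAP_GB) * SLACK
print(f"Required raw capacity: ~{total_gb / 1024:.0f} TB")  # ~126 TB
```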
