DELL EMC XTREMIO X2 WITH CITRIX XENDESKTOP 7.16


REFERENCE ARCHITECTURE

Abstract

This reference architecture evaluates the best-in-class performance and scalability delivered by Dell EMC XtremIO X2 for Citrix XenDesktop 7.16 VDI on VMware vSphere 6.5 infrastructure. We present data quantifying performance at scale for thousands of desktops in each stage of the VDI lifecycle. The datacenter design elements, both hardware and software, that combine to achieve the optimum results are also discussed in detail.

March, 2018

Contents

Abstract
Executive Summary
Business Case
Overview
Test Results
    Summary
    Deployment Performance Results
        Citrix Machine Creation Services (MCS)
        Citrix Provisioning Services (PVS)
        MCS Full Clone Provisioning
        MCS Linked Clone Provisioning
    Production Use Performance Results
        Boot Storms
        LoginVSI Results
Solution's Hardware Layer
    Storage Array: Dell EMC XtremIO X2 All-Flash Array
        XtremIO X2 Overview
        Architecture and Scalability
        XIOS and the I/O Flow
        System Features
        XtremIO Management Server
    Test Setup
    Compute Hosts: Dell PowerEdge Servers
    Storage Configuration
        Zoning
        Storage Volumes
        Initiator Groups and LUN Mapping
    Storage Networks
Solution's Software Layer
    Hypervisor Management Layer
        vCenter Server Appliance
        Hypervisor ESX Clusters
        Network Configuration
        Storage Configuration, EMC SIS and VSI
    Virtual Desktop Management Layer: Citrix XenDesktop
        Citrix XenDesktop Components
        Machine Creation Services (MCS)
        Provisioning Services (PVS)
        PVS Write Cache
        Personal vDisk
        Citrix XenDesktop 7.16 Configurations and Tuning
        XenDesktop Delivery Controller
        Microsoft Windows 10 Desktop Configuration and Optimization
Conclusion
References
Appendix A - Test Methodology
How to Learn More

Executive Summary

This paper describes a reference architecture for deploying a Citrix XenDesktop 7.16 Virtual Desktop Infrastructure (VDI) environment and published applications using the Dell EMC XtremIO X2 storage array. It also discusses design considerations for deploying such an environment. Based on the data presented herein, we firmly establish the value of XtremIO X2 as a best-in-class all-flash array for Citrix XenDesktop enterprise deployments.

This reference architecture presents a complete VDI solution for Citrix XenDesktop 7.16, delivering virtualized 32-bit Windows 10 desktops using MCS and PVS technologies with applications such as Microsoft Office 2016, Adobe Reader 11, Java, Internet Explorer and other common desktop user applications. It discusses design considerations that give you a reference point for successfully deploying a VDI project using XtremIO X2, and describes the tests performed by XtremIO to validate and measure the operation and performance of the recommended solution.

Business Case

A well-known objective of virtualizing desktops is lowering the Total Cost of Ownership (TCO). TCO generally includes capital expenditures from purchasing hardware such as storage, servers, and networking switches and routers, in addition to software licensing and maintenance costs. The main goals in virtualizing desktops are to improve economics and efficiency in desktop delivery, ease maintenance and management, and improve desktop security. In addition to these goals, a key objective of a successful VDI deployment, and one that probably matters the most, is the end user experience. It is imperative for VDI deployments to demonstrate parity with physical workstations when it comes to the end user experience.

The overwhelming value of virtualizing desktops in a software-defined datacenter and the need to deliver a rich end-user experience compel us to select best-of-breed infrastructure components for our VDI deployment. Selecting a best-in-class, performant storage system that is also easy to manage helps to achieve our long-term goal of lowering the TCO, and hence is a critical piece of the infrastructure. The shared storage infrastructure in a VDI solution should be robust enough to deliver consistent performance and scalability for thousands of desktops, regardless of the desktop delivery mechanism (linked clones, full clones, etc.). XtremIO brings tremendous value by providing consistent performance at scale with features such as always-on inline deduplication, compression, thin provisioning and unique data protection capabilities. Seamless interoperability with VMware vSphere is achieved by using VMware APIs for Array Integration (VAAI). The ease of management offered by Dell EMC Solutions Integration Service (SIS) and Virtual Storage Integrator (VSI) makes choosing this best-of-breed all-flash array even more attractive for desktop virtualization applications.

XtremIO is a scale-out storage system that can grow in storage capacity, compute resources and bandwidth whenever the environment's storage requirements grow. With the advent of multi-core server systems with an increasing number of CPU cores per processor (following Moore's law), we are able to consolidate a growing number of desktops on a single enterprise-class server. When combined with the XtremIO X2 All-Flash Array, we can consolidate vast numbers of virtualized desktops on a single storage array, thereby achieving high consolidation at great performance from both a storage and a compute perspective.

The solution is based on Citrix XenDesktop 7.16, which provides a complete end-to-end solution delivering Microsoft Windows virtual desktops or server-based hosted shared sessions to users on a wide variety of endpoint devices. Virtual desktops are dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time they log on. Citrix XenDesktop 7.16 provides a complete virtual desktop delivery system by integrating several distributed components with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure.

Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops while managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce that can improve productivity. With Citrix XenDesktop 7.16, IT can effectively control app and desktop provisioning while securing data assets and lowering capital and operating expenses.

Overview

It is well known that implementing a complete VDI solution is a multi-faceted effort, with nuances encompassing compute, memory, network and, most importantly, storage. Our focus in this reference architecture is on XtremIO X2's capabilities and benefits in such a solution; however, we intend to give a complete picture of a VDI solution.

An XtremIO X2 cluster provides sufficient storage capacity and adequate performance for servicing the I/O requests and storage bandwidth required for a scale of thousands, and tens of thousands, of virtual desktops. This includes desktop delivery, management operations, login and boot storms, and production use at scale. The XtremIO X2 Storage Array provides top-class performance when deploying virtual desktops and running management operations on them, as well as when subjected to live user emulation tests using LoginVSI (Login Virtual Session Indexer, software that simulates user workloads for Windows-based virtualized desktops).

The XtremIO All-Flash Storage Array is based upon a scale-out architecture. It is comprised of building blocks called X-Bricks, which can be clustered together to grow performance and capacity as required. An X-Brick is the basic building block of an XtremIO cluster. Each X-Brick is a highly available, high-performance unit that consists of dual Active-Active Storage Controllers, with CPU and RAM resources, Ethernet, FC and iSCSI connections, and a Disk Array Enclosure (DAE) containing the SSDs that hold the data. With XtremIO X2, a single X-Brick can service the storage capacity and bandwidth requirements of 4000 desktops, with capacity to spare.

The XtremIO X2 All-Flash Array is designed to provide high responsiveness for the increasing data usage of thousands of users and is extremely beneficial for VDI projects. In subsequent sections of this reference architecture, we present XtremIO's compounding returns from its data reduction capabilities and the high performance it provides to VDI environments with thousands of desktops. We will see the benefits in terms of data reduction and storage performance when deploying a Full Clone desktop pool as well as a Linked Clone desktop pool.

XtremIO's scale-out architecture allows scaling any environment, in our case VDI environments, in a linear way that satisfies both the capacity and performance needs of the growing infrastructure. An XtremIO X2 cluster can start with any number of X-Bricks required to service the current or initial loads, and can grow linearly (up to 4 X-Bricks in a cluster) to appropriately service the growing environment (to be increased to 8 X-Bricks in the future, depending on the cluster's type). With X2, in addition to its scale-out capabilities, an XtremIO storage array can scale up by adding extra SSDs to an X-Brick. An X-Brick can contain between 18 and 72 SSDs (in increments of 6) of fixed sizes (400GB or 1.92TB, depending on the cluster's type, with future versions allowing 3.84TB-sized SSDs).

In developing this VDI solution, we selected VMware vSphere 6.5 Update 1 as the virtualization platform, and Citrix XenDesktop 7.16 for virtual desktop delivery and management. Windows 10 (32-bit) is the virtual desktops' operating system. EMC VSI (Virtual Storage Integrator) 7.3 and the vSphere Web Client are used to apply best practices pertaining to XtremIO storage Volumes and the general environment.

To some degree, the data in subsequent sections of this reference architecture helps us quantify the end user experience for a desktop user, and also demonstrates the efficiency in management operations that a datacenter administrator may achieve when deploying a VDI environment on the XtremIO X2 all-flash array. We begin the reference architecture by discussing test results, which are classified into the following categories:

- Management Operations: resource consumption and time to complete Citrix XenDesktop management operations.
- Production Use: resource consumption patterns and time to complete a boot storm, and resource consumption patterns and responsiveness when desktops in the pool are subjected to LoginVSI "Knowledge Worker" workloads, emulating real users' workloads.

After presenting and analyzing the test results of our VDI environment, we discuss the different elements of our infrastructure, beginning with the hardware layer and moving up to the software layer, including the features and best practices we recommend for the environment. This includes extensive details of the XtremIO X2 storage array, storage network equipment and host details at the hardware level, and the VMware vSphere Hypervisor (ESXi), vCenter Server and Citrix XenDesktop environment at the software level. The details of the virtual machine settings and the LoginVSI workload profile provide us with the complete picture of how all building blocks of a VDI environment function together.

Test Results

In this section, we elaborate on the tests performed on our VDI environment and their results. We start with a summary of the results and related conclusions, and then dive deeper into each test's detailed results and analyzed data and statistics (including various storage and compute metrics such as bandwidth, latency, IOPS, and CPU and RAM utilization).

Summary

Citrix XenDesktop delivers virtual Windows desktops and applications as secure services on any device. It provides a native touch-enabled look and feel that is optimized for the device type as well as the network. A Citrix XenDesktop desktop pool has the following basic lifecycle stages:

- Provisioning
- Production work by active users
- Maintenance operations

We show summary and detailed test results for these stages, divided into two types of lifecycle phases: Management Operations (provisioning and maintenance operations) and Production Use.

From the perspective of datacenter administrators, operational efficiency translates to the time needed to complete management operations. The less time it takes to provision desktops and perform maintenance operations, the faster VDI desktop pools become available for production. It is for this reason that the storage array's throughput performance deserves special attention: the more throughput the system can provide, the faster those management operations complete. Storage array throughput is measured in terms of IOPS or bandwidth, which manifest as data transfer rate.

During production, desktops are in actual use by end users via remote sessions. Two events are tested to examine the infrastructure's performance and ability to serve VDI users: a virtual desktop boot storm, and heavy workloads produced by a high percentage of users using their desktops. Boot storms are measured by time to complete, and heavy workloads by the "user experience". The criteria dictating "user experience" are the applications' responsiveness and overall desktop experience. We use the proven LoginVSI tests (explained further in this paper) to evaluate user experience, and track storage latency during those LoginVSI tests.

Table 1 shows a summary of the test results for all stages of a VDI desktop pool lifecycle with 4000 desktops for MCS Linked Clone, MCS Full Clone and PVS Clone desktops, when deployed on an XtremIO X2 cluster as the storage array. Note that the Recompose and Refresh maintenance operations are not applicable for Linked Clone desktops.

Table 1. VDI Performance Tests with XtremIO X2 - Results Summary

4000 DESKTOPS          MCS LINKED CLONES   MCS FULL CLONES   PVS CLONES
Elapsed Time
  Deployment           50 Minutes          65 Minutes        N/A
  Boot Storm           10 Minutes          10 Minutes        10 Minutes
LoginVSI
  VSI Baseline
  VSI Average
  VSI Max              Not Reached         Not Reached       Not Reached

We notice the excellent results for deployment time, boot storm performance, and maintenance operation time, as well as the accomplished LoginVSI results (detailed in LoginVSI Results below) that emulate production work by active users.

We suggest a scale-out approach for VDI environments, in which we add compute and memory resources (more ESX hosts) as we scale up the number of desktops. In our tests, we deployed virtual desktops with two vCPUs and 4GB of RAM (not all of it utilized, since we are using a 32-bit operating system) per desktop. After performing a number of tests to understand the appropriate scaling, we concluded the appropriate scale to be 125 desktops per single ESX host (with the given host configuration listed in Table 2). Using this scale, we deployed 4000 virtual desktops on 32 ESX hosts. For storage volume size, the selected scale was 125 virtual desktops per XtremIO Volume of 3TB (the maximum number of desktops per single LUN when provisioned with VAAI is 500). As we will see next, the total of 32 volumes and 96TB was easily handled by our single X-Brick X2 cluster, both in terms of capacity and performance (IOPS, bandwidth and latency).

In the rest of this section, we take a deeper look into the data collected from our storage array and other environment components during each of the management operation tests, as well as during boot storms and LoginVSI's "Knowledge Worker" workload tests. A data-driven understanding of our XtremIO X2 storage array's behavior provides us with evidence of a rich user experience and efficiency in management operations when using this effective all-flash array, manifested by performance-at-scale for thousands of desktops. The data collected below includes statistics of storage bandwidth, IOPS, I/O latency, CPU utilization and more. Performance statistics were collected from the XtremIO Management Server (XMS) by using the XtremIO RESTful API (Representational State Transfer Application Program Interface). This API is a powerful feature that enables performance monitoring while executing management operation tests and running LoginVSI workloads; a minimal sketch of polling these counters appears at the end of this subsection. These results provided a clear view of the exceptional capabilities of XtremIO X2 for VDI environments.

Deployment Performance Results

In this section, we take a deeper look at performance statistics from our XtremIO X2 array when used in a VDI environment for performing management operations such as MCS Full Clone and MCS Linked Clone desktop provisioning. PVS provisioning is performed synchronously, and the resources consumed are mostly the CPU and memory of the hosts. Since it is not impacted by storage performance, it is not detailed in this section.

Citrix Machine Creation Services (MCS)

Machine Creation Services (MCS) is a centralized provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle. MCS enables the management of several types of machines within a catalog in Citrix Studio. Desktop customization is persistent for machines that use the Personal vDisk (PvDisk or PvD) feature, while non-Personal vDisk machines are appropriate if desktop changes are discarded when the user logs off.

Desktops provisioned using MCS share a common base image within a catalog. Because of the XtremIO X2 architecture, the base image is stored only once in the storage array, providing efficient data storage and maximizing the utilization of flash disks, while providing exceptional performance and optimal I/O response time for the virtual desktops.

Figure 1. Logical Representation of an MCS-based Disk and Linked Clone
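As mentioned above, the performance counters in this paper were pulled from the XMS via its RESTful API. The following is a minimal sketch of such a polling loop. The XMS address, credentials, and the exact attribute names (iops, bw, rd-latency, wr-latency) are assumptions that may vary by XMS version; consult the XtremIO RESTful API guide for the authoritative endpoint schema.

    # Minimal sketch: poll cluster-level performance counters from the XMS REST API.
    # XMS address, credentials and attribute names below are assumptions; verify
    # them against your XMS version's RESTful API guide.
    import time
    import requests

    XMS_HOST = "xms.example.com"      # hypothetical XMS address
    AUTH = ("rest_user", "password")  # hypothetical read-only credentials
    URL = f"https://{XMS_HOST}/api/json/v2/types/clusters/1"

    def poll(interval_sec=5, samples=12):
        for _ in range(samples):
            # verify=False only because many XMS deployments use self-signed certs
            content = requests.get(URL, auth=AUTH, verify=False).json()["content"]
            print("{ts}: iops={iops} bw={bw} rd-lat={rd} wr-lat={wr}".format(
                ts=time.strftime("%H:%M:%S"),
                iops=content.get("iops"),
                bw=content.get("bw"),
                rd=content.get("rd-latency"),
                wr=content.get("wr-latency")))
            time.sleep(interval_sec)

    if __name__ == "__main__":
        poll()

Sampling at a short fixed interval like this is what produced the time-series graphs shown throughout the Test Results section.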

Citrix Provisioning Services (PVS)

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between the software and the hardware on which it runs. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS lets organizations reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing. Because machines stream disk data dynamically in real time from a single shared image, machine image consistency is ensured. In addition, large pools of machines can completely change their configuration, applications, and even the operating system during a reboot operation.

Figure 2. Boot Process of a PVS Target Device

MCS Full Clone Provisioning

The operational efficiency of datacenter administrators is determined mainly by the completion rate of desktop delivery (provisioning) and management operations. It is critical for datacenter administrators that provisioning and maintenance operations on VDI desktops finish in a timely manner, so that the desktops are ready for production users. The time it takes to provision the desktops is directly related to the storage's performance capabilities. As shown in Figure 3, XtremIO X2 handles storage bandwidths as high as ~20GB/s with over 100K IOPS (read + write) during the provisioning phase of 4000 Full Clone desktops, resulting in quick and efficient desktop delivery (65 minutes for all 4000 Full Clone desktops).

Figure 3. XtremIO X2 IOPS and I/O Bandwidth - 4000 Full Clone Desktops Provisioning

It took 65 minutes for the system to finish the provisioning and OS customization of all 4000 desktops with our X2 array. We can deduce that desktops were provisioned in our test at an excellent rate of about 62 desktops per minute, or roughly one desktop provisioned every second.
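A quick sanity check of the rate quoted above, using only the figures from this test:

    # Provisioning-rate check for the 4000-desktop Full Clone deployment.
    desktops, minutes = 4000, 65
    per_minute = desktops / minutes          # ~61.5 desktops per minute
    seconds_per_desktop = 60 / per_minute    # ~0.98s, i.e. about one per second
    print(f"{per_minute:.1f} desktops/min, one every {seconds_per_desktop:.2f}s")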

Figure 4 shows the block size distribution during the Full Clone provisioning process. We can see that most of the bandwidth used is 256KB and >1MB blocks, as these are the block sizes that were configured at the software level (VMware) for use with our storage array.

Figure 4. XtremIO X2 Bandwidth by Block Size - 4000 Full Clone Desktops Provisioning

In Figure 5, we can see the IOPS and latency statistics during the Full Clone provisioning process of 4000 desktops. The graph shows again that IOPS are well over 100K, but that the latency for all I/O operations remains less than 0.1 msec, yielding the excellent performance and fast-paced provisioning of our virtual desktop environment.

Figure 5. XtremIO X2 Latency vs. IOPS - 4000 Full Clone Desktops Provisioning

Figure 6 shows the CPU utilization of our Storage Controllers during the Full Clone provisioning process. Unlike Linked Clone provisioning, this process writes a significant amount of data, and the CPU utilization of the Storage Controllers accordingly remains at around 60%. We can also see the excellent synergy across our X2 cluster, as all of our Active-Active Storage Controllers' CPUs share the load, with CPU utilization virtually equal between all Controllers for the entire process.

Figure 6. XtremIO X2 CPU Utilization - 4000 Full Clone Desktops Provisioning

Figure 7 shows XtremIO's incredible storage savings for the scenario of 4000 provisioned Full Clone desktops (each with about 13.5GB of used space in its 40GB C: drive volume). Notice that the physical capacity footprint of the 4000 desktops after XtremIO deduplication and compression is roughly 800GB (the 51.95TB logical capacity divided by the reduction factor). This is a direct result of an extraordinary data reduction factor reaching 65.5:1 (32.4:1 for deduplication and 2.0:1 for compression). Thin provisioning further adds to the storage efficiency, bringing the aggregate factor to 391.1:1.

Figure 7. XtremIO X2 Data Savings - 4000 Full Clone Desktops Provisioning
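The relationship between these ratios can be verified with a few lines of arithmetic. The sketch below simply recomputes the physical footprint and the thin provisioning contribution from the values quoted above (note that the per-stage ratios are rounded, so 32.4 x 2.0 comes out near, not exactly at, the reported 65.5).

    # Worked check of the data reduction figures reported in Figure 7.
    logical_tb = 51.95                       # logical capacity written by the hosts
    reported_drr = 65.5                      # overall dedupe x compression ratio
    dedup, compression = 32.4, 2.0           # per-stage ratios (rounded; 32.4*2.0 ~ 65)
    physical_tb = logical_tb / reported_drr  # ~0.79TB, i.e. roughly 800GB on flash
    overall = 391.1                          # aggregate efficiency incl. thin provisioning
    thin_factor = overall / reported_drr     # ~6:1 contributed by thin provisioning
    print(f"physical ~ {physical_tb:.2f}TB, thin provisioning factor ~ {thin_factor:.1f}:1")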

MCS Linked Clone Provisioning

As with MCS Full Clones, we also examined storage statistics while provisioning 4000 Linked Clone desktops. As Figure 8 shows, our X2 array handles roughly 4K IOPS of mostly small I/O operations. This I/O pattern is a result of Linked Clones' use of VMware snapshots, which means that almost no actual data is written to the array; instead, pointers and VMware metadata are used. Unlike the process of deploying Linked Clones via VMware Horizon View, XenDesktop creates the computer accounts in advance and associates them with the virtual desktops during their initial power-on. This mechanism saves a lot of resources during the deployment of the pool, and the entire provisioning process for the 4000 desktops took 50 minutes. This translates to a rate of 80 desktops per minute, about 30% higher than the Full Clone provisioning rate (4000 desktops in 65 minutes).

Figure 8. XtremIO X2 IOPS and I/O Bandwidth - 4000 Linked Clone Desktops Provisioning

Figure 9 shows the block size distribution during the Linked Clone provisioning process. We can see about 400MB/s of I/O operations with 512KB blocks, which are generated during desktop power-on.

Figure 9. XtremIO X2 Bandwidth by Block Size - 4000 Linked Clone Desktops Provisioning

Examining the IOPS and latency statistics during the Linked Clone provisioning process of the 4000 desktops, we can see in Figure 10 a latency of mostly below 0.2 msec, with some higher peaks, almost entirely under 0.4 msec. These high-performance numbers are the reason for the excellent provisioning rate achieved in our test.

Figure 10. XtremIO X2 Latency vs. IOPS - 4000 Linked Clone Desktops Provisioning

Figure 11 shows the CPU utilization of the Storage Controllers during the Linked Clone provisioning process. This process hardly loads the storage array, due to the significantly smaller amount of data written, as controlled by the Citrix platform. We can see that the CPU utilization of the Storage Controllers normally stays at around 2%.

Figure 11. XtremIO X2 CPU Utilization - 4000 Linked Clone Desktops Provisioning

Figure 12 shows the incredible storage capacity efficiency that is achieved when using Linked Clones on XtremIO X2. The 4000 provisioned Linked Clone desktops take up a logical footprint of 51.62TB, while the physical footprint is only 1.01TB, as a result of an impressive data reduction factor of 51.4:1 (21.5:1 for deduplication and 2.4:1 for compression). Thin provisioning is also a great saving factor, especially with Linked Clones (here with almost 100% savings), as the desktops are merely VMware snapshots of an original parent machine and consume no space until changes are made.

Figure 12. XtremIO X2 Data Savings - 4000 Linked Clone Desktops Provisioning

Production Use Performance Results

This section examines how an XtremIO X2 single X-Brick cluster delivers a best-in-class user experience with high performance during a boot storm and during the actual work of virtual desktop users, as emulated by LoginVSI's "Knowledge Worker" workload, which emulates more advanced users (details below).

Boot Storms

The rebooting of VDI desktops at a large scale is a process often orchestrated by administrators by invoking the vSphere task of rebooting virtual machines asynchronously (albeit sequentially), but it can also be performed by the end user. It is necessary, for instance, in scenarios where new applications or operating system updates are installed and need to be deployed to the virtual desktops. Desktops are issued a reboot without waiting for previous ones to finish booting up. As a result, multiple desktops boot up at the same time. The number of concurrent reboots is also affected by the limit configured in the vCenter Server configuration. This configuration can be altered after some experimentation to determine how many concurrent operations a given vCenter Server is capable of handling. (A minimal orchestration sketch appears after Figure 14.)

Figure 13 shows the storage bandwidth consumption and IOPS for rebooting 4000 Linked Clone virtual desktops simultaneously. The entire process took about 10 minutes when processed on a single X-Brick X2 cluster.

Figure 13. XtremIO X2 IOPS and I/O Bandwidth - 4000 Linked Clone Desktops Boot Storm

The 10 minutes it took to reboot the 4000 desktops translates to an amazing rate of 6.67 desktops every second, or one desktop boot per 150 milliseconds. Looking closely at the figures above, we can see that even though the process with Linked Clones required more IOPS at a lower bandwidth, it was still able to complete in 10 minutes, the same time required for the reboot with Full Clones. We explain this next using the block distribution graphs and XtremIO X2's advanced Write Boost feature.

Figure 14 shows the block distribution during the 4000 Linked Clone desktop boot storm. We can see that the I/Os per block size remain the same for most sizes during the operation.

Figure 14. XtremIO X2 Bandwidth by Block Size - 4000 Linked Clone Desktops Boot Storm
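The boot-storm orchestration described above can be scripted. Below is a minimal sketch using the community pyVmomi SDK that issues asynchronous reset tasks against every powered-on desktop whose name matches a pool prefix. The vCenter address, credentials, and the "W10-" prefix are hypothetical; in practice, vCenter's concurrent-operation limits throttle how many resets actually run at once.

    # Minimal boot-storm sketch using pyVmomi: issue asynchronous resets for all
    # powered-on desktops in a pool, without waiting for each task to complete.
    # Hostname, credentials and the "W10-" name prefix are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], recursive=True)

    tasks = []
    for vm in view.view:
        if vm.name.startswith("W10-") and vm.runtime.powerState == "poweredOn":
            tasks.append(vm.ResetVM_Task())  # asynchronous; do not wait

    print(f"Issued {len(tasks)} reset tasks")
    view.Destroy()
    Disconnect(si)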

Figure 15 shows the CPU utilization for the 4000 Linked Clone boot storm. We can see that the CPU is well utilized, in a range between 65% and 75%, mainly due to the increase in I/O operations and the use of Write Boost when booting up Linked Clones.

Figure 15. XtremIO X2 CPU Utilization - 4000 MCS Linked Clone Desktops Boot Storm

LoginVSI Results

In this section, we present the LoginVSI "Knowledge Worker" workload results for the 4000 MCS Full Clone, MCS Linked Clone and PVS Clone desktops. The "Knowledge Worker" profile of LoginVSI emulates user actions such as opening a Word document, modifying an Excel spreadsheet, browsing a PDF document, web browsing or streaming a webinar. This emulates typical "advanced" user behavior and helps characterize XtremIO's performance in such scenarios. While characterizing the user experience in those scenarios, any I/O latency that is detected in the storage array is of the utmost importance, as this parameter directly influences the end user experience. Other parameters impacting user experience are CPU and memory usage on the ESX hosts and storage network bandwidth utilization.

Figure 16. LoginVSI's "Knowledge Worker" Workload Profile

We chose Microsoft Windows 10 build 1709 (32-bit) as the desktop operating system. The Office 2016 suite, Adobe Reader 11, the latest Oracle JRE, Internet Explorer 11, and Doro PDF Printer were installed and used by LoginVSI's "Knowledge Worker" workloads.

Figure 17, Figure 18 and Figure 19 show the LoginVSI results of our 4000 MCS Full Clone, MCS Linked Clone, and PVS Clone desktops respectively. LoginVSI scores are determined by observing average application latencies, highlighting the speed at which user operations are completed. This helps quantify user experience, since the measurements considered are at the application level. As a case in point, the blue line in each of the LoginVSI charts follows the progression of the "VSI average" against the number of active sessions. This is an aggregated metric, using average application latencies as more desktop sessions are added over time. The factor to be observed in these graphs is the VSImax threshold, which represents the point beyond which LoginVSI's methodology indicates that the user experience has deteriorated to the point where the maximum number of desktops that can be consolidated on a given VDI infrastructure has been reached.

Figure 17. LoginVSI's "Knowledge Worker" Results - 4000 MCS Full Clone Desktops

Figure 18. LoginVSI's "Knowledge Worker" Results - 4000 MCS Linked Clone Desktops

Figure 19. LoginVSI's "Knowledge Worker" Results - 4000 PVS Clone Desktops

From the averages shown in the graphs (the blue lines), the quantified application latency is much lower than the VSImax threshold watermark for the 4000 active users (~1100 average vs. a ~840 baseline). This demonstrates how an XtremIO X2 all-flash single X-Brick cluster provides best-in-class delivery of user experience for up to 4000 VDI users, with room to scale further. More details about the LoginVSI test methodology can be found in Appendix A - Test Methodology and in the LoginVSI documentation. These LoginVSI results help us understand the user experience and are a testimony to the scalability and performance that manifest in an optimal end user experience with XtremIO X2. The obvious reason, as highlighted by Figure 20, Figure 21 and Figure 22, is none other than the outstanding storage latency demonstrated by XtremIO X2.

Figure 20. XtremIO X2 Latency vs. IOPS - 4000 MCS Linked Clone Desktops In-Use

Figure 21. XtremIO X2 Latency vs. IOPS - 4000 MCS Full Clone Desktops In-Use

Figure 22. XtremIO X2 Latency vs. IOPS - 4000 PVS Clone Desktops In-Use

For all three desktop methods, we can see a steady and remarkable ~0.2 msec latency for the entire LoginVSI workload test. We see a small rise in latency as IOPS accumulate, but it never exceeds 0.3 msec. These numbers yield the great LoginVSI results described above and provide a superb user experience for our VDI users.
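As a rough illustration of the headroom implied by the VSI numbers quoted above (~1100 average vs. a ~840 baseline), the sketch below applies the commonly cited VSImax v4.1-style rule of thumb in which the threshold sits roughly 1000 ms above the measured baseline. LoginVSI's real gating logic is more involved, so treat this as an approximation, not the product's algorithm.

    # Rough headroom check using the VSImax v4.1-style rule of thumb
    # (threshold ~ baseline + 1000 ms). Approximation only; LoginVSI's actual
    # methodology is more involved.
    baseline_ms = 840         # measured VSI baseline (approximate, from the graphs)
    vsi_average_ms = 1100     # observed VSI average at 4000 sessions (approximate)
    threshold_ms = baseline_ms + 1000

    headroom = threshold_ms - vsi_average_ms
    print(f"threshold~{threshold_ms} ms, average~{vsi_average_ms} ms, "
          f"headroom~{headroom} ms -> VSImax not reached")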

Figure 23, Figure 24 and Figure 25 present the total IOPS and bandwidth seen during the LoginVSI "Knowledge Worker" profile workload on our 4000 MCS Linked Clone desktops, 4000 MCS Full Clone desktops, and 4000 PVS Clone desktops respectively. In all cases, the bandwidth at the peak of the workload reaches about ~1.5GB/s.

Figure 23. XtremIO X2 IOPS and I/O Bandwidth - 4000 MCS Linked Clone Desktops In-Use

Figure 24. XtremIO X2 IOPS and I/O Bandwidth - 4000 MCS Full Clone Desktops In-Use

Figure 25. XtremIO X2 IOPS and I/O Bandwidth - 4000 PVS Clone Desktops In-Use

Figure 26, Figure 27 and Figure 28 show the CPU utilization of our X2 storage array during the LoginVSI "Knowledge Worker" profile workload of 4000 MCS Full Clone, Linked Clone and PVS Clone desktops. We can see that the CPU utilization at the peak of the workload reaches about 30% and 20% in the MCS scenarios respectively, while it reaches 13% for PVS Clones. This emphasizes that, although they save much space and provide various advantages, MCS Linked Clones are slightly lighter on the array than MCS Full Clones, since they are based on the same master image and in-memory metadata. As for PVS Clones, since some of the workload runs in memory, the storage CPU utilization is lower; but as a result, the memory utilization at the host level is higher.

Figure 26. XtremIO X2 CPU Utilization - 4000 MCS Full Clone Desktops In-Use

Figure 27. XtremIO X2 CPU Utilization - 4000 MCS Linked Clone Desktops In-Use

Figure 28. XtremIO X2 CPU Utilization - 4000 PVS Clone Desktops In-Use

Figure 29, Figure 30 and Figure 31 show the block size distribution of the 4000 MCS Linked Clone, MCS Full Clone and PVS Clone desktops respectively during the LoginVSI "Knowledge Worker" profile workload. We can see that the I/Os per block size remain the same for most sizes, while the bandwidth usage increases as more users log in to their virtual desktops.

Figure 29. XtremIO X2 Bandwidth by Block Size - 4000 MCS Linked Clone Desktops In-Use

Figure 30. XtremIO X2 Bandwidth by Block Size - 4000 MCS Full Clone Desktops In-Use

Figure 31. XtremIO X2 Bandwidth by Block Size - 4000 PVS Clone Desktops In-Use

Examining all the graphs collected during the LoginVSI "Knowledge Worker" profile workload test, we see that the X2 single X-Brick is more than capable of managing and servicing 4000 VDI workstations, with room to serve additional volumes and workloads.

We also took a deeper look at the ESXi hosts to see whether our scaling fits from a compute-resources perspective as well. Specifically, we checked both the CPU utilization of our ESX hosts and their memory utilization (Figure 32) during the LoginVSI "Knowledge Worker" profile workload test on the 4000 desktops. Please note that using RAM Write Cache for PVS Clones (described later) increases the memory utilization drastically, since storage workload is offloaded to RAM.

Figure 32. ESX Hosts CPU and Memory Utilization - 4000 MCS Linked Clone Desktops In-Use

We can see an approximate 65% utilization of both the CPU and memory resources of the ESX hosts, indicating a well-utilized environment and good resource consumption, leaving room for extra VMs in the environment and spare resources for vMotion of VMs (due to host failures, planned upgrades, etc.).

In Figure 33 below, we see the change in CPU utilization of a single ESX host in the environment as the LoginVSI "Knowledge Worker" profile workload test progresses. The test creates logins and workloads on the virtual desktops in a cumulative way, emulating a typical working environment in which users log in over a span of a few dozen minutes, not all at the same time. This behavior is seen clearly in the figure, as the CPU utilization of this ESX host increases as time passes, until all virtual desktops on the host are in use and CPU utilization reaches about 70%.

Figure 33. A Single ESX Host CPU Utilization - 4000 Desktops In-Use
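To close this section, here is the back-of-the-envelope consolidation math behind the 125-desktops-per-host scale described earlier (2 vCPUs and 4GB RAM allocated per desktop). These are allocated figures, not consumed ones; the ~65% measured utilization above reflects overcommit headroom on the hosts.

    # Consolidation math for the 125-desktops-per-host scale used in this solution.
    desktops_per_host = 125
    vcpus_allocated = desktops_per_host * 2   # 250 vCPUs allocated per host
    ram_gb_allocated = desktops_per_host * 4  # 500GB RAM allocated per host
    hosts = 4000 // desktops_per_host         # 32 hosts for the full 4000-desktop pool
    print(vcpus_allocated, ram_gb_allocated, hosts)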

Solution's Hardware Layer

Based on the data presented above, it is evident that storage/virtualization administrators must strive to achieve an optimal user experience for their VDI desktop end users. The following sections discuss how the hardware and software synergize to achieve these goals. We begin at the hardware layer, taking a wide look at our XtremIO X2 array and the features and benefits it provides to VDI environments. We continue by discussing the details of our ESX hosts, based on Dell PowerEdge servers, on which our entire environment runs, and then review our storage configuration and the networks that connect the servers to the storage array, thereby encompassing all of the hardware components of the solution. We follow this up with details of the software layer, providing configuration details for VMware vSphere, Citrix XenDesktop, Dell and EMC plugins for VMware, and configuration settings on the "parent" virtual machine from which the VDI desktops are deployed.

Storage Array: Dell EMC XtremIO X2 All-Flash Array

Dell EMC's XtremIO is an enterprise-class scalable all-flash storage array that provides rich data services with high performance. It is designed from the ground up to unlock flash technology's instant performance potential by uniquely leveraging the characteristics of SSDs, and uses advanced inline data reduction methods to reduce the physical data that must be stored on the disks.

XtremIO's storage system uses industry-standard components and proprietary intelligent software to deliver unparalleled levels of performance, achieving consistent low latency for up to millions of IOPS. It comes with a simple, easy-to-use interface for storage administrators and fits a wide variety of use cases for customers in need of a fast and efficient storage system for their datacenters, requiring very little planning to set up before provisioning.

XtremIO leverages flash to deliver value across multiple dimensions:

- Performance: consistent low latency and up to millions of IOPS.
- Scalability: a scale-out and scale-up architecture.
- Storage Efficiency: data reduction techniques such as deduplication, compression and thin provisioning.
- Data Protection: a proprietary flash-optimized algorithm named XDP.
- Environment Consolidation: XtremIO Virtual Copies and VMware XCOPY support.

We further review XtremIO X2's features and capabilities below.

XtremIO X2 Overview

XtremIO X2 is the new generation of Dell EMC's All-Flash Array storage system. It adds enhancements and flexibility in several aspects to the already proficient and high-performing former generation. Features such as scale-up for a more flexible system, Write Boost for a more responsive and higher-performing array, NVRAM for improved data availability, and a new web-based UI for managing the storage array and monitoring its alerts and performance statistics add the extra value and advancements required in the evolving world of computer infrastructure.

The XtremIO X2 Storage Array uses building blocks called X-Bricks. Each X-Brick has its own compute, bandwidth and storage resources, and can be clustered together with additional X-Bricks to grow in both performance and capacity (scale-out). Each X-Brick can also grow individually in terms of capacity, with an option to add up to 72 SSDs in each brick.

XtremIO's architecture is based on a metadata-centric, content-aware system, which helps streamline data operations efficiently without requiring any movement of data post-write for any maintenance reason (data protection, data reduction, etc. are all done inline). The system lays out the data uniformly across all SSDs in all X-Bricks in the system, using unique fingerprints of the incoming data, and controls access using metadata tables. This contributes to an extremely balanced system across all X-Bricks in terms of compute power, storage bandwidth and capacity.

Using the same unique fingerprints, XtremIO is equipped with exceptional always-on inline data deduplication abilities, which highly benefit virtualized environments. Together with its data compression and thin provisioning capabilities (both also inline and always-on), it achieves incomparable data reduction rates.

System operation is controlled by storage administrators via a stand-alone dedicated Linux-based server called the XtremIO Management Server (XMS). An intuitive user interface is used to manage and monitor the storage cluster and its performance. The XMS can be either a physical or a virtual server and can manage multiple XtremIO clusters.

With its intelligent architecture, XtremIO provides a storage system that is easy to set up, needs zero tuning by the client, and does not require complex capacity or data protection planning; all of this is handled autonomously by the system.

Architecture and Scalability

An XtremIO X2 Storage System is comprised of a set of X-Bricks that together form a cluster; the X-Brick is the basic building block of an XtremIO array. There are two types of X2 X-Bricks available: X2-S and X2-R. X2-S is for environments whose storage needs are more I/O intensive than capacity intensive, as it uses smaller SSDs and less RAM. An effective use of X2-S is for environments that have high data reduction ratios (a high compression ratio or a great deal of duplicated data), which lower the capacity footprint of the data significantly. X2-R X-Brick clusters are made for capacity-intensive environments, with bigger disks, more RAM and a bigger expansion potential in future releases. The two X-Brick types cannot be mixed together in a single system, so the decision as to which type is suitable for your environment must be made in advance.

Each X-Brick is comprised of:

- Two 1U Storage Controllers (SCs), each with:
    - Two dual-socket Haswell CPUs
    - 346GB RAM (for X2-S) or 1TB RAM (for X2-R)
    - Two 1/10GbE iSCSI ports
    - Two user-interchangeable ports (either 4/8/16Gb FC or 1/10GbE iSCSI)
    - Two 56Gb/s InfiniBand ports
    - One 100/1000/10000 Mb/s management port
    - One 1Gb/s IPMI port
    - Two redundant power supply units (PSUs)
- One 2U Disk Array Enclosure (DAE), containing:
    - Up to 72 SSDs of size 400GB (for X2-S) or 1.92TB (for X2-R)
    - Two redundant SAS interconnect modules
    - Two redundant power supply units (PSUs)

Figure 34. An XtremIO X2 X-Brick (4U total: two 1U Storage Controllers and a 2U DAE)

The Storage Controllers on each X-Brick are connected to their DAE via redundant SAS interconnects. An XtremIO storage array can have one or multiple X-Bricks. Multiple X-Bricks are clustered together into an XtremIO array using an InfiniBand switch and the Storage Controllers' InfiniBand ports for back-end connectivity between Storage Controllers and DAEs across all X-Bricks in the cluster. The system uses the Remote Direct Memory Access (RDMA) protocol for this back-end connectivity, ensuring a highly available, ultra-low-latency network for communication between all components of the cluster. The InfiniBand switches are the same size (1U) for both X2-S and X2-R cluster types, but include 12 ports for X2-S and 36 ports for X2-R. By leveraging RDMA, an XtremIO system is essentially a single shared-memory space spanning all of its Storage Controllers.

The 1Gb/s port for management is configured with an IPv4 address. The XMS, which is the cluster's management software, communicates with the Storage Controllers via the management interface. Through this interface, the XMS sends storage management requests, such as creating an XtremIO Volume or mapping a Volume to an Initiator Group. The second 1Gb/s port, for IPMI, interconnects the X-Brick's two Storage Controllers. IPMI connectivity is strictly within the bounds of an X-Brick and is never connected to an IPMI port of a Storage Controller in another X-Brick in the cluster.

With X2, an XtremIO cluster has both scale-out and scale-up capabilities. Scale-out is implemented by adding X-Bricks to an existing cluster. The addition of an X-Brick linearly increases the cluster's compute power, bandwidth and capacity. Each X-Brick that is added to the cluster brings with it two Storage Controllers, each with its CPU power, RAM and FC/iSCSI ports to service the clients of the environment, together with a DAE with SSDs to increase the capacity provided by the cluster. Adding an X-Brick to scale out an XtremIO cluster is intended for environments that grow in both capacity and performance needs, such as in the case of an increase in the number of active users and their data, or a database which grows in data and complexity.

An XtremIO cluster can start with any number of X-Bricks that fits the environment's initial needs and can currently grow to up to 4 X-Bricks (for both X2-S and X2-R). Future code upgrades of XtremIO X2 will support up to 8 X-Bricks for X2-R arrays.

Figure 35. Scale-Out Capabilities - Single to Multiple X2 X-Brick Clusters

Scale-up of an XtremIO cluster is implemented by adding SSDs to existing DAEs in the cluster. This is intended for environments that grow in capacity needs without needing extra performance. For example, this may occur when the same number of users have an increasing amount of data to save, or when an environment grows in both capacity and performance needs but has so far reached only its capacity limits, with additional performance still available from its current infrastructure.

Each DAE can hold up to 72 SSDs and is divided into two groups of SSDs called Data Protection Groups (DPGs). Each DPG can hold a minimum of 18 SSDs and can grow in increments of 6 SSDs up to a maximum of 36 SSDs. In other words, 18, 24, 30 or 36 SSDs may be installed per DPG, and up to 2 DPGs can occupy a DAE. SSDs are 400GB per drive for X2-S clusters and 1.92TB per drive for X2-R clusters. Future releases will allow customers to populate their X2-R clusters with 3.84TB drives, doubling the physical capacity available in their clusters.

Figure 36. Scale-Up Capabilities - Up to 2 DPGs and 72 SSDs per DAE

For more details on XtremIO X2, see the XtremIO X2 Specifications [2] and the XtremIO X2 Datasheet [3].
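The DPG sizing rules above are easy to encode. The following small sketch validates a proposed drive layout and reports the raw capacity of a DAE under those rules, using the X2-S and X2-R drive sizes quoted in this section (raw capacity only; usable capacity after XDP overhead is lower).

    # Sketch of the DAE scale-up rules described above: each DPG holds 18, 24,
    # 30 or 36 SSDs, up to 2 DPGs per DAE; drive sizes depend on cluster type.
    VALID_DPG_SIZES = {18, 24, 30, 36}
    DRIVE_TB = {"X2-S": 0.4, "X2-R": 1.92}   # 400GB vs. 1.92TB drives

    def dae_raw_capacity(cluster_type: str, dpg_sizes: list) -> float:
        if not 1 <= len(dpg_sizes) <= 2:
            raise ValueError("a DAE holds 1 or 2 DPGs")
        if any(n not in VALID_DPG_SIZES for n in dpg_sizes):
            raise ValueError("each DPG must hold 18, 24, 30 or 36 SSDs")
        return sum(dpg_sizes) * DRIVE_TB[cluster_type]

    # Example: a fully populated X2-R DAE (2 DPGs x 36 SSDs x 1.92TB ~ 138TB raw)
    print(dae_raw_capacity("X2-R", [36, 36]))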

XIOS and the I/O Flow

Each Storage Controller within the XtremIO cluster runs a specially customized, lightweight, Linux-based operating system as the base platform of the array. The XtremIO Operating System (XIOS) handles all activities within a Storage Controller and runs on top of the Linux-based operating system. XIOS is optimized for handling high I/O rates and manages the system's functional modules, RDMA communication, monitoring, etc.

Figure 37. X-Brick Components

XIOS has a proprietary process scheduling-and-handling algorithm designed to meet the specific requirements of a content-aware, low-latency, high-performing storage system. It provides efficient scheduling and data access, instant exploitation of CPU resources, optimized inter-sub-process communication, and minimized dependency between sub-processes that run on different sockets.

The XtremIO Operating System gathers a variety of metadata tables on incoming data, including the data fingerprint, location in the system, mappings and reference counts. The metadata is used as the fundamental reference for performing system operations, such as laying out incoming data uniformly, implementing inline data reduction services, and accessing data on read requests. The metadata is also involved in communication with external applications (such as VMware XCOPY and Microsoft ODX) to optimize integration with the storage system.

Regardless of which Storage Controller receives an I/O request from a host, multiple Storage Controllers on multiple X-Bricks cooperate to process the request. The data layout in the XtremIO system ensures that all components share the load and participate evenly in processing I/O operations.

An important functionality of XIOS is its data reduction capability, achieved by using inline data deduplication and compression. Data deduplication and data compression complement each other: deduplication removes redundancies, whereas compression compresses the already deduplicated data before it is written to the flash media. XtremIO is an always-on, thin-provisioned storage system that further realizes storage savings by never writing a block of zeros to the disks.

XtremIO integrates with existing SANs through 16Gb/s Fibre Channel or 10Gb/s Ethernet iSCSI connectivity to service hosts' I/O requests. Details of the XIOS architecture and its data reduction capabilities are available in the Introduction to Dell EMC XtremIO X2 Storage Array document [4].

XtremIO Write I/O Flow

In a write operation to the storage array, the incoming data stream reaches any one of the Active-Active Storage Controllers and is broken into data blocks. For every data block, the array fingerprints the data with a unique identifier and stores it in the cluster's mapping table. The mapping table maps the host Logical Block Addresses (LBAs) to the block fingerprints, and each block fingerprint to its physical location in the array (the DAE, SSD and offset at which the block is located). The fingerprint of a block has two objectives: to determine whether the block is a duplicate of a block that already exists in the array, and to distribute blocks uniformly across the cluster. The array divides the list of potential fingerprints among Storage Controllers, assigning each its own fingerprint range. The mathematical process that calculates the fingerprints results in a uniform distribution of fingerprint values, so fingerprints and blocks are evenly distributed across all Storage Controllers in the cluster.

A write operation works as follows:

1. A new write request reaches the cluster.
2. The write is broken into data blocks.
3. For each data block:
   a. A fingerprint is calculated for the block.
   b. An LBA-to-fingerprint mapping is created for this write request.
   c. The fingerprint is checked to see if it already exists in the array.
   d. If it exists, the reference count for this fingerprint is incremented by one.
   e. If it does not exist:
      1. A location is chosen on the array where the block will be written (distributed uniformly across the array according to fingerprint value).
      2. A fingerprint-to-physical-location mapping is created.
      3. The data is compressed.
      4. The data is written.
      5. The reference count for the fingerprint is set to one.
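To make the flow concrete, here is a minimal, self-contained sketch of a content-addressed block store with the same write-path bookkeeping (fingerprint, LBA-to-fingerprint and fingerprint-to-location maps, reference counts, inline compression), plus the inverse lookups used by the read flow described in the next section. This is an illustration of the technique, not XtremIO's actual implementation; the fingerprint function and block size are arbitrary choices for the sketch.

    # Minimal sketch of the write path above and the read path described in the
    # next section: fingerprint-based deduplication with LBA->fingerprint and
    # fingerprint->location maps, reference counts, and inline compression.
    # Illustrative only; not XtremIO's implementation.
    import hashlib
    import zlib

    BLOCK = 8192  # arbitrary block size chosen for this sketch

    class ContentStore:
        def __init__(self):
            self.lba_to_fp = {}   # host LBA -> block fingerprint
            self.fp_to_loc = {}   # fingerprint -> "physical" location
            self.refcount = {}    # fingerprint -> number of referencing LBAs
            self.media = {}       # location -> compressed block bytes

        def write(self, lba, data):
            for i in range(0, len(data), BLOCK):
                block, blk_lba = data[i:i + BLOCK], lba + i // BLOCK
                fp = hashlib.sha256(block).hexdigest()   # 3a: fingerprint
                self.lba_to_fp[blk_lba] = fp             # 3b: LBA mapping
                if fp in self.refcount:                  # 3c/3d: duplicate block,
                    self.refcount[fp] += 1               #        no data written
                else:                                    # 3e: unique block
                    loc = fp[:16]                        # placement derived from fp
                    self.fp_to_loc[fp] = loc
                    self.media[loc] = zlib.compress(block)  # compress, then write
                    self.refcount[fp] = 1

        def read(self, lba):
            fp = self.lba_to_fp[lba]                     # LBA -> fingerprint
            loc = self.fp_to_loc[fp]                     # fingerprint -> location
            return zlib.decompress(self.media[loc])      # fetch and decompress

    store = ContentStore()
    store.write(0, b"A" * BLOCK * 2)   # two identical blocks are stored once
    print(len(store.media), "unique block(s) on media;", store.read(1)[:3])

Note how the duplicate branch touches only metadata (a refcount), which is exactly why deduplicated writes complete so quickly, as explained next.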

Deduplicated writes are, of course, much faster than original writes. Once the array identifies a write as a duplicate, it updates the LBA-to-fingerprint mapping for the write and updates the reference count for this fingerprint. No further data is written to the array and the operation completes quickly, adding an extra benefit of inline deduplication. Figure 38 shows an example of an incoming data stream which contains duplicate blocks with identical fingerprints.

Figure 38. Incoming Data Stream Example with Duplicate Blocks

As mentioned, fingerprints also help to decide where to write the block in the array. Figure 39 shows the incoming stream demonstrated in Figure 38, after duplicates were removed, as it is being written to the array. The blocks are divided among their appointed Storage Controllers according to their fingerprint values, which ensures a uniform distribution of the data across the cluster. The blocks are transferred to their destinations in the array using Remote Direct Memory Access (RDMA) via the low-latency InfiniBand network.

Figure 39. Incoming Deduplicated Data Stream Written to the Storage Controllers
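The even spread shown in Figure 39 follows directly from the uniformity of the fingerprint function. The sketch below demonstrates the idea by hashing 100,000 synthetic blocks and counting how many land on each controller; mapping via a simple modulo stands in for XtremIO's fingerprint-range assignment, which the sketch does not reproduce.

    # Sketch of fingerprint-based block placement: a uniform fingerprint
    # (here SHA-256) spreads blocks evenly across Storage Controllers.
    # Modulo placement is a stand-in for XtremIO's fingerprint-range scheme.
    import hashlib
    from collections import Counter

    controllers = 4  # e.g. a two-X-Brick cluster with four Storage Controllers
    counts = Counter()
    for i in range(100_000):
        fp = hashlib.sha256(f"block-{i}".encode()).digest()
        counts[int.from_bytes(fp[:4], "big") % controllers] += 1
    print(counts)  # roughly 25,000 blocks per controller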

The actual write of the data blocks to the SSDs is carried out asynchronously. At the time of the application write, the system places the data blocks in the in-memory write buffer and protects them using journaling to local and remote NVRAMs. Once the data is written to the local NVRAM and replicated to a remote one, the Storage Controller returns an acknowledgment to the host. This guarantees a quick response to the host, ensures low latency of I/O traffic, and preserves the data in case of system failure (power-related or otherwise). When enough blocks are collected in the buffer (to fill a full stripe), the system writes them to the SSDs on the DAE. Figure 40 demonstrates the phase of writing the data to the DAEs after a full stripe of data blocks is collected in each Storage Controller.

Figure 40. Full Stripe of Blocks Written to the DAEs

XtremIO Read I/O Flow

In a read operation, the system first performs a look-up of the logical address in the LBA-to-fingerprint mapping. The fingerprint found is then looked up in the fingerprint-to-physical mapping, and the data is retrieved from the correct physical location. Just as with writes, the read load is evenly shared across the cluster, as blocks are evenly distributed and all volumes are accessible across all X-Bricks. If the requested block size is larger than the data block size, the system performs parallel data block reads across the cluster and assembles them into bigger blocks before returning them to the application. A compressed data block is decompressed before it is delivered to the host.

XtremIO has a memory-based read cache in each Storage Controller. The read cache is organized by content fingerprint. Blocks whose contents are more likely to be read are placed in the read cache for fast retrieval.

A read operation works as follows:

1. A new read request reaches the cluster.
2. The read request is analyzed to determine the LBAs of all data blocks, and a buffer is created to hold the data.
3. For each LBA:
   a. The LBA-to-fingerprint mapping is checked to find the fingerprint of each data block to be read.
   b. The fingerprint-to-physical-location mapping is checked to find the physical location of each of the data blocks.
   c. The requested data block is read from its physical location (the read cache or a location on disk) and transmitted, via RDMA over InfiniBand, to the buffer created in step 2 on the Storage Controller that processes the request.
4. The system assembles the requested read from all data blocks transmitted to the buffer and sends it back to the host.

System Features

The XtremIO X2 Storage Array offers a wide range of built-in features that require no special license. The architecture and implementation of these features is unique to XtremIO and is designed around the capabilities and limitations of flash media. We list some key features included in the system below.

Inline Data Reduction

XtremIO's unique Inline Data Reduction is achieved by two mechanisms: Inline Data Deduplication and Inline Data Compression.

Data Deduplication

Inline Data Deduplication is the removal of duplicate I/O blocks from a stream of data prior to it being written to the flash media. XtremIO's inline deduplication is always on, meaning no configuration is needed for this important feature. The deduplication is at a global level, meaning no duplicate blocks are written across the entire array. Being an inline and global process, no resource-consuming background processes or additional reads and writes (which are mainly associated with post-processing deduplication) are necessary for the feature's activity, thus increasing SSD endurance and eliminating performance degradation.

As mentioned earlier, deduplication on XtremIO is performed using the content's fingerprints (see XtremIO Write I/O Flow above). The fingerprints are also used for uniform distribution of data blocks across the array, thus providing inherent load balancing for performance and enhancing flash wear-level efficiency, since the data never needs to be rewritten or rebalanced.

XtremIO uses a content-aware, globally deduplicated Unified Data Cache for highly efficient data deduplication. The system's unique content-aware storage architecture provides a substantially larger cache size with a small DRAM allocation. Therefore, XtremIO is the ideal solution for difficult data access patterns, such as the "boot storms" common in VDI environments.

XtremIO has excellent data deduplication ratios, especially for virtualized environments. With it, SSD usage is smarter, flash longevity is maximized, logical storage capacity is multiplied (see Figure 7 and Figure 12 for examples) and total cost of ownership is reduced.

Data Compression

Inline Data Compression is the compression of data before it is written to the flash media. XtremIO automatically compresses data after all duplicates are removed, ensuring that compression is performed only on unique data blocks. The compression is performed in real time and not as a post-processing operation. This way, it does not overuse the SSDs or impact performance. Compressibility rates depend on the type of data written.

Data Compression complements Data Deduplication in many cases and saves storage capacity by storing only unique data blocks, in the most efficient manner. The benefits and capacity savings of the deduplication-compression combination are demonstrated in Figure 41, and real ratios are shown in the Test Results section in Figure 7 and Figure 12.

Figure 41. Data Deduplication and Data Compression Demonstrated (3:1 data deduplication combined with 2:1 data compression yields a 6:1 total data reduction; only the reduced, unique data written by the host reaches the flash media)

Thin Provisioning

XtremIO storage is natively thin provisioned, using a small internal block size. All volumes in the system are thin provisioned, meaning that the system consumes capacity only when it is needed. No storage space is ever pre-allocated before writing. Because of XtremIO's content-aware architecture, blocks can be stored at any location in the system (with the metadata referring to their location), and data is written only when unique blocks are received. Therefore, as opposed to disk-oriented architectures, no space creeping or garbage collection is necessary on XtremIO, volume fragmentation does not occur in the array, and defragmentation utilities are not needed. This XtremIO feature enables consistent performance and data management across the entire life cycle of a volume, regardless of the system capacity utilization or the write patterns of clients.

Integrated Copy Data Management

XtremIO pioneered the concept of integrated Copy Data Management (icdm): the ability to consolidate both primary data and its associated copies on the same scale-out all-flash array for unprecedented agility and efficiency. XtremIO is one of a kind in its ability to consolidate multiple workloads and entire business processes safely and efficiently, providing organizations with a new level of agility and self-service for on-demand procedures. XtremIO provides consolidation, supporting on-demand copy operations at scale, while still delivering all performance SLAs in a consistent and predictable way.

Consolidation of primary data and its copies in the same array has numerous benefits:
1. It can make development and testing activities up to 50% faster, creating copies of production code quickly for development and testing purposes, and then refreshing the output back into production for the full cycle of code upgrades in the same array. This dramatically reduces complexity, infrastructure needs and development risks, and increases product quality.
2. Production data can be extracted and pushed to all downstream analytics applications on demand, as a simple in-memory operation. The copies are high-performance and receive the same SLAs as production, without compromising production SLAs. XtremIO offers this on demand, as both self-service and automated workflows, for both application and infrastructure teams.
3. Operations such as patches, upgrades and tuning tests can be performed quickly using copies of production data. Application and database problems can be diagnosed on these copies, and changes can be applied and refreshed back to production. The same process can be used for testing new technologies and introducing them into production environments.
4. icdm can also be used for data protection purposes, as it enables creating many copies at short point-in-time intervals for recovery. Application integration and orchestration policies can be set to auto-manage data protection, using different SLAs.

XtremIO Virtual Copies

XtremIO uses its own implementation of snapshots for all icdm purposes, called XtremIO Virtual Copies (XVCs). XVCs are created by capturing the state of data in volumes at a particular point in time, allowing users to access that data when needed regardless of the state of the source volume (even deletion). They allow any access type and can be taken from either a source volume or another Virtual Copy.

XtremIO's Virtual Copy technology is implemented by leveraging the content-aware capabilities of the system, and is optimized for SSDs with a unique metadata tree structure that directs I/O to the right data timestamp. This allows efficient copy creation that sustains high performance while maximizing media endurance.

Figure 42. A Metadata Tree Structure Example of XVCs

When creating a Virtual Copy, the system only generates a pointer to the ancestor metadata of the actual data in the system, making the operation very quick. This operation has no impact on the system and consumes no capacity at the point of creation, unlike traditional snapshots, which may need to reserve space or copy the metadata for each snapshot. Virtual Copy capacity consumption occurs only when changes are made to a copy of the data. The system then updates the metadata of the changed volume to reflect the new write, and stores the blocks in the system using the standard write flow process.

The system supports the creation of Virtual Copies on a single volume as well as on a set of volumes. All Virtual Copies of the volumes in the set are cross-consistent and contain the exact same point in time. This can be done manually by selecting a set of volumes for copying, or by placing volumes in a Consistency Group and making copies of that Group. A hedged example of driving such a copy through the XMS RESTful API follows.
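As an illustration, XVCs can also be created programmatically through the XMS RESTful API (described later in this paper). The sketch below is a minimal example, assuming an XMS reachable at xms.example.local, a Consistency Group named VDI-CG, and the v2 JSON snapshots endpoint; the body field names are our assumptions and should be verified against the XtremIO RESTful API guide for the XMS version in use.

```powershell
# Hedged sketch: create a crash-consistent Virtual Copy of a Consistency Group
# via the XMS RESTful API. Endpoint and body fields are assumptions to verify
# against the XtremIO RESTful API guide for your XMS version.
$xms  = "https://xms.example.local"               # hypothetical XMS address
$cred = Get-Credential                            # XMS user with snapshot rights

$body = @{
    "consistency-group-id" = "VDI-CG"             # source CG (assumed name)
    "snapshot-set-name"    = "VDI-CG.$(Get-Date -Format yyyyMMdd-HHmmss)"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Credential $cred `
    -Uri "$xms/api/json/v2/types/snapshots" `
    -ContentType "application/json" -Body $body
```

A call like this could be placed on a scheduler to complement the built-in Protection Scheduler for application-orchestrated copies.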

Virtual Copy deletions are lightweight and proportional only to the amount of changed blocks between the entities. The system uses its content-aware capabilities to handle copy deletions. Each data block has a counter that indicates the number of instances of that block in the system. A block that is referenced by any copy of the data is not deleted. Any block whose counter value reaches zero is marked as deleted and will be overwritten when new unique data enters the system.

With XVCs, XtremIO's icdm offers the following tools and workflows to provide its consolidation capabilities:
- Consistency Groups (CGs): groupings of volumes that allow Virtual Copies to be taken on a group of volumes as a single entity.
- Snapshot Sets: groups of Virtual Copy volumes taken together using CGs, or groups of manually-chosen volumes.
- Protection Copies: immutable read-only copies created for data protection and recovery purposes.
- Protection Scheduler: used for local protection of a volume or a CG. It can be defined using intervals of seconds/minutes/hours, or set to a specific time of day or week, with a retention policy based on the number of copies needed or the permitted age of the oldest snapshot.
- Restore from Protection: restores a production volume or CG from one of its descendant Snapshot Sets.
- Repurposing Copies: Virtual Copies configured with changing access types (read-write / read-only / no-access) for alternating purposes.
- Refresh a Repurposing Copy: refreshes a Virtual Copy of a volume or a CG from the parent object or other related copies with relevant updated data. The refresh requires no volume provisioning changes, only host-side logical volume management operations to discover the changes.

XtremIO Data Protection

XtremIO Data Protection (XDP) provides "self-healing" double-parity data protection with very high efficiency. It requires very little capacity overhead and metadata space, and does not require dedicated spare drives for rebuilds. Instead, XDP leverages the "hot space" concept, where any free space available in the array can be utilized for failed drive reconstructions. The system always reserves sufficient distributed capacity for performing at least a single drive rebuild. In the rare case of a double SSD failure, the second drive is rebuilt only if there is enough space for it, or when one of the failed SSDs is replaced.

The XDP algorithm provides:
- N+2 drive protection.
- Capacity overhead of only 5.5%-11% (depending on the number of disks in the protection group); the short calculation below shows where this range comes from.
- 60% better write efficiency than RAID1.
- Superior flash endurance compared to any RAID algorithm, due to the smaller number of writes and the even distribution of data.
- Automatic rebuilds that are faster than traditional RAID algorithms.
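The overhead range quoted above follows directly from the stripe geometry. As a hedged back-of-the-envelope check (our reading of Figure 43: two columns of parity, P and Q, per Data Protection Group of N SSDs, with N = 18 assumed as the smallest group size that yields the quoted upper bound):

\[
\text{capacity overhead} \approx \frac{2}{N}, \qquad
\frac{2}{36} \approx 5.5\% \ \text{(full DPG of 36 SSDs)}, \qquad
\frac{2}{18} \approx 11\% \ \text{(minimum-size DPG)}.
\]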

As shown in Figure 43, XDP uses a variation of N+2 row and diagonal parity which provides protection from two simultaneous SSD errors. An X-Brick DAE may contain up to 72 SSDs, organized in two Data Protection Groups (DPGs). XDP is managed independently at the DPG level. A DPG of 36 SSDs results in a capacity overhead of only 5.5% for its data protection needs.

Figure 43. N+2 Row and Diagonal Parity (data columns D0-D4 with row parity P and diagonal parity Q over a k = 5 prime stripe)

Data at Rest Encryption

Data at Rest Encryption (DARE) provides a solution for securing critical data even when the media is removed from the array, for customers who require such security. XtremIO arrays utilize a high-performance inline encryption technique to ensure that all data stored on the array is unusable if the SSD media is removed. This prevents unauthorized access in the event of theft or loss during transport, and makes it possible to return or replace failed components containing sensitive data. DARE has been established as a mandatory requirement in several industries, such as health care, banking, and government institutions.

At the heart of XtremIO's DARE solution is Self-Encrypting Drive (SED) technology. An SED has dedicated hardware that encrypts and decrypts data as it is written to or read from the drive. Offloading the encryption task to the SSDs enables XtremIO to maintain the same software architecture whether encryption is enabled or disabled on the array. All of XtremIO's features and services (including Inline Data Reduction, XtremIO Data Protection, Thin Provisioning, XtremIO Virtual Copies, etc.) are available on encrypted clusters as well as on non-encrypted clusters, and performance is not impacted when encryption is used.

A unique Data Encryption Key (DEK) is created during the drive manufacturing process and never leaves the drive. The DEK can be erased or changed, rendering the drive's current data unreadable forever. To ensure that only authorized hosts can access the data on the SED, the DEK is protected by an Authentication Key (AK) that resides on the Storage Controller. Without the AK, the DEK is encrypted and cannot be used to encrypt or decrypt data.

Figure 44. Data at Rest Encryption in XtremIO

Write Boost

In the new X2 storage array, the write flow algorithm was significantly improved to boost array performance, keeping pace with the rise in compute power and disk speeds, and accounting for common applications' I/O patterns and block sizes. As mentioned in the discussion of the write I/O flow, the commit to the host is now asynchronous to the actual writing of the blocks to disk. The commit is sent after the changes are written to local and remote NVRAMs for protection; the blocks are written to disk only later, at a time that best optimizes the system's activity.

In addition to the shortened procedure from write to commit, the new algorithm addresses an issue relevant to many applications and clients: a high percentage of small I/Os creating load on the storage system and increasing latency, especially for bigger I/O blocks. Examining customers' applications and I/O patterns, it was found that many I/Os from common applications come in small blocks, smaller than 16KB, creating high loads on the storage array. Figure 45 shows the block size histogram from the entire XtremIO install base; the high percentage of blocks smaller than 16KB is evident. The new algorithm addresses this issue by aggregating small writes into bigger blocks in the array before writing them to disk, making them less demanding on the system, which is now more capable of handling bigger I/Os faster. The test results for the improved algorithm are impressive: the improvement in latency in several cases is around 400%, allowing XtremIO X2 to address application requirements of 0.5 msec or lower latency.

Figure 45. XtremIO Install Base Block Size Histogram

VMware APIs for Array Integration (VAAI)

VAAI was first introduced as VMware's improvement to host-based VM cloning. It offloads the work of cloning a VM to the storage array, making cloning much more efficient. Instead of copying all blocks of a VM from the array and back to it for the creation of a new cloned VM, the application lets the array perform the copy internally, utilizing the array's features and saving host and network resources that are no longer involved in the actual cloning of data. This offloading to the storage array is backed by the X-copy (extended copy) command, which is used when cloning large amounts of complex data.

XtremIO is fully VAAI compliant, allowing the array to communicate directly with vSphere and provide accelerated Storage vMotion, VM provisioning, and thin provisioning functionality. In addition, XtremIO's VAAI integration improves X-copy efficiency even further by making the whole operation metadata-driven. Due to its inline data reduction features and in-memory metadata, no actual data blocks are copied during an X-copy command; the system only creates new pointers to the existing data within the Storage Controllers' memory. The operation therefore saves host and network resources and does not consume storage resources, leaving no impact on the system's performance, as opposed to other implementations of VAAI and the X-copy command. Performance tests of XtremIO during X-copy operations, and a comparison between X1 and X2 with different block sizes, can be found in a dedicated post on XtremIO's CTO blog [9].

Figure 46 illustrates the X-copy operation when performed against an XtremIO storage array and shows the efficiency of metadata-based cloning: no data blocks are copied; new pointers to the existing data on SSD are simply created in the Storage Controllers' RAM.

Figure 46. VAAI X-Copy with XtremIO

The XtremIO features for VAAI support include:
- Zero Blocks / Write Same: used for zeroing-out disk regions and providing accelerated volume formatting.
- Clone Blocks / Full Copy / X-Copy: used for copying or migrating data within the same physical array; an almost instantaneous operation on XtremIO due to its metadata-driven operations.
- Record-Based Locking / Atomic Test & Set (ATS): used during the creation and locking of files on VMFS volumes and during power-up and power-down of VMs.
- Block Delete / Unmap / Trim: used for reclamation of unused space via the SCSI UNMAP command.

A hedged PowerCLI check of these primitives on the ESX hosts appears after the feature lists below.

Other features of XtremIO X2 (some described in previous sections) include:
- Scalability (scale-up and scale-out)
- Even data distribution (uniformity)
- High availability (no single point of failure)
- Non-disruptive upgrade and expansion
- RecoverPoint integration (for replication to local or remote arrays)
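On the vSphere side, the VAAI primitives correspond to host advanced settings that can be verified, and enabled if needed, with PowerCLI. The sketch below uses the stock vSphere setting names (DataMover.HardwareAcceleratedMove for X-copy, DataMover.HardwareAcceleratedInit for Write Same, and VMFS3.HardwareAcceleratedLocking for ATS); the vCenter name vc.example.local and the cluster name "VDI" are placeholders.

```powershell
# Verify/enable the VAAI primitives on every ESX host in the VDI cluster.
# Setting names are standard vSphere advanced settings; a value of 1 = enabled.
Connect-VIServer -Server vc.example.local        # placeholder vCenter name

$vaaiSettings = "DataMover.HardwareAcceleratedMove",   # Clone Blocks / X-Copy
                "DataMover.HardwareAcceleratedInit",   # Zero Blocks / Write Same
                "VMFS3.HardwareAcceleratedLocking"     # ATS

foreach ($esx in Get-Cluster "VDI" | Get-VMHost) {     # "VDI" cluster name assumed
    foreach ($name in $vaaiSettings) {
        Get-AdvancedSetting -Entity $esx -Name $name |
            Where-Object { $_.Value -ne 1 } |
            Set-AdvancedSetting -Value 1 -Confirm:$false
    }
}
```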

XtremIO Management Server

The XtremIO Management Server (XMS) is the component that manages XtremIO clusters (up to 8 clusters). It comes pre-installed with CLI, GUI and RESTful API interfaces, and can be installed on a dedicated physical server or a VMware virtual machine. The XMS manages the cluster via the management ports on both Storage Controllers of the first X-Brick in the cluster, using a standard TCP/IP connection to communicate with them. It is not part of the XtremIO data path, and can therefore be disconnected from an XtremIO cluster without jeopardizing data I/O tasks. A failure on the XMS affects only monitoring and configuration activities, such as creating and attaching volumes. A virtual XMS is naturally less vulnerable to such failures.

The GUI is based on a new Web User Interface (WebUI), which is accessible with any browser and provides easy-to-use tools for performing most system operations (certain management operations must be performed using the CLI). Some of the most useful features of the new WebUI are described below.

Dashboard

The Dashboard window presents an overview of the cluster. It has three panels:
1. Health: provides an overview of the system's health status and alerts.
2. Performance (shown in Figure 47): provides an overview of the system's overall performance and the top used Volumes and Initiator Groups.
3. Capacity (shown in Figure 48): provides an overview of the system's physical capacity and data savings.

Note that these figures represent views available in the dashboard, not the test results shown in earlier figures.

Figure 47. XtremIO WebUI Dashboard Performance Panel

Figure 48. XtremIO WebUI Dashboard Capacity Panel

The main navigation menu bar is located on the left side of the UI. Users can select one of the navigation menu options related to XtremIO's management actions. The main menus contain options for the Dashboard, Notifications, Configuration, Reports, Hardware and Inventory.

Notifications

In the Notifications menu, we can navigate to the Events window (shown in Figure 49) and the Alerts window, showing major and minor issues related to the cluster's health and operations.

Figure 49. XtremIO WebUI Notifications Events Window

Configuration

The Configuration window displays the cluster's logical components: Volumes (shown in Figure 50), Consistency Groups, Snapshot Sets, Initiator Groups, Initiators, and Protection Schedulers. From this window, we can create and modify these entities using the action panel on the top right.

Figure 50. XtremIO WebUI Configuration

Reports

In the Reports menu, we can navigate to different windows showing graphs and data on different aspects of the system's activities, mainly related to performance and resource utilization. Menu options include: Overview, Performance, Blocks, Latency, CPU Utilization, Capacity, Savings, Endurance, SSD Balance, Usage and User Defined reports. We can view reports using different time resolutions and components. Entities to be viewed are selected with the "Select Entity" option in the Report menu (shown in Figure 51). In addition, pre-defined or custom time intervals can be selected for the report, as shown in Figure 52. The Test Results graphs shown earlier in this document were generated with these menu options.

Figure 51. XtremIO WebUI Reports: Selecting Specific Entities to View

Figure 52. XtremIO WebUI Reports: Selecting Specific Times to View

The Overview window shows basic reports on the system, including performance, weekly I/O patterns and storage capacity information. The Performance window shows extensive performance reports, mainly Bandwidth, IOPS and Latency information. The Blocks window shows block distribution and statistics of I/Os going through the system. The Latency window (shown in Figure 53) shows latency reports per block size and IOPS metrics. The CPU Utilization window shows the CPU utilization of all Storage Controllers in the system.

Figure 53. XtremIO WebUI Reports: Latency Window

The Capacity window (shown in Figure 54) shows capacity statistics and the change in storage capacity over time. The Savings window shows data reduction statistics and their change over time. The Endurance window shows the SSDs' endurance status and statistics. The SSD Balance window shows data balance and variance between the SSDs. The Usage window shows Bandwidth and IOPS usage, both overall and separately for reads and writes. The User Defined window allows users to define their own reports.

Figure 54. XtremIO WebUI Reports: Capacity Window

Monitoring

Monitoring, managing and optimizing storage health are critical to ensuring the performance of a VDI infrastructure. Simplicity and ease of use have always been design principles of the XtremIO Management Server (XMS). With XIOS 6.0, the XMS delivers an HTML5 user interface offering consumer-grade simplicity with enterprise-class features. The improved user interface includes:
- Contextual, automated workflow suggestions for management activities.
- Advanced reporting and analytics that make troubleshooting easy.
- Global search, to quickly find that proverbial needle in the haystack.

The simple yet powerful user interface drives efficiency by enabling administrators to manage, monitor, receive notifications, and set alerts on the storage. With the XMS, key system metrics are displayed in an easy-to-read graphical dashboard. From the main dashboard, you can easily monitor the overall system health, performance and capacity metrics, and drill down to each object for additional details. This information allows you to quickly identify potential issues and take corrective actions.

XtremIO X2 collects real-time and historical data (up to 2 years) for a rich set of statistics. These statistics are collected at the Cluster/Array level as well as at the object level (Volumes, Initiator Groups, Targets, etc.). This data collection is available from day one, enabling the XMS to provide advanced analytics for storage environments running VDI infrastructures. The same statistics are also reachable programmatically; a hedged REST example follows.

Figure 55. XtremIO WebUI Blocks Distribution Window
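As a hedged illustration of programmatic monitoring, the cluster-level counters can be pulled from the XMS RESTful API. The endpoint shape below follows the v2 JSON API, but the property names (data-reduction-ratio, ud-ssd-space-in-use, iops) are assumptions to confirm in the XtremIO RESTful API guide for your XMS version; xms.example.local is a placeholder.

```powershell
# Hedged sketch: pull cluster-level capacity and performance counters from XMS.
# Property names below are assumptions to verify against the RESTful API guide.
$xms  = "https://xms.example.local"      # hypothetical XMS address
$cred = Get-Credential

$cluster = (Invoke-RestMethod -Method Get -Credential $cred `
    -Uri "$xms/api/json/v2/types/clusters/1").content

# Example fields of interest (names assumed):
$cluster | Select-Object name, 'data-reduction-ratio', 'ud-ssd-space-in-use', iops
```

Polling a call like this on an interval is one way to feed XtremIO metrics into an external VDI monitoring dashboard alongside the WebUI reports described above.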

Advanced Analytics Reporting

VDI desktops' data access patterns vary based on many factors, such as desktop application behavior, boot storms, login storms, and OS updates. This greatly complicates storage sizing for VDI environments. XMS built-in reporting tracks data traffic patterns, thereby significantly simplifying the sizing effort. With the X2 release, the XMS provides a built-in reporting widget that tracks the weekly data traffic pattern. You can easily discover the IOPS pattern on each day and hour of the week, and understand whether the pattern is sporadic or consistent over a period of time.

Figure 56. XtremIO WebUI Weekly Patterns Reporting Widget

The CHANGE button on the widget tracks and displays changes (increases or decreases) in the past week relative to the past 8 weeks. If there is no major change (i.e., the hourly pattern of the past week did not change relative to the past 8 weeks), there is no up/down arrow indication. However, if the traffic of this week increased or decreased relative to the past 8 weeks, a visual arrow indication appears.

Figure 57. XtremIO WebUI Weekly Patterns Reporting on Relative Changes in Data Pattern

Hardware

The Hardware menu provides a picture of the physical cluster and the installed X-Bricks. When viewing the FRONT panel, we can select and highlight any component of the X-Brick and view related detailed information in the panel on the right. Figure 58 shows a hardware view of Storage Controller #1 in X-Brick #1, including installed disks and status LEDs. We can further click the "OPEN DAE" button to see a visual illustration of the X-Brick's DAE and its SSDs, and view additional information on each SSD and Row Controller.

Figure 58. XtremIO WebUI Hardware Front Panel

Figure 59 shows the back panel view, including the physical connections to and within the X-Brick: FC, Power, iSCSI, SAS, Management, IPMI and InfiniBand. Connections can be filtered with the "Show Connections" list at the top right.

Figure 59. XtremIO WebUI Hardware Back Panel Show Connections

Inventory

The Inventory menu shows all components in the environment, together with related information. This includes: XMS, Clusters, X-Bricks, Storage Controllers, Local Disks, Storage Controller PSUs, XEnvs, Data Protection Groups, SSDs, DAEs, DAE Controllers, DAE PSUs, DAE Row Controllers, InfiniBand Switches and NVRAMs.

XMS Menus

The XMS menus are global system menus that can be accessed from the tools at the top right of the interface. We can use them to search components in the system, view the health status of managed components, view major alerts, view and configure system settings (shown in Figure 60), and use the User Menu to view login information (and log out) and support options.

Figure 60. XtremIO WebUI XMS Menus System Settings

As mentioned, other interfaces are also available to monitor and manage an XtremIO cluster via the XMS server. The system's Command Line Interface (CLI) can be used for everything the GUI provides and more. A RESTful API is another pre-installed interface, allowing HTTP-based commands to manage clusters. For Windows PowerShell console users, a PowerShell API Module is also available for XtremIO management.

Test Setup

We used an XtremIO cluster with a single X2-S X-Brick as the storage array for our environment. The X-Brick had 36 drives of 400GB each which, after leaving capacity for parity calculations and other needs, amounts to about 11.2TB of physical capacity. As we saw in the Test Results section, this is more than enough capacity for our 4000 virtual desktops.

36 drives are half the number that can fit in a single X-Brick. This means that, in terms of capacity, we can grow to a maximum of 8x the capacity of this test setup with our scale-up (up to 72 drives per X-Brick) and scale-out (up to 4 X-Bricks per cluster) capabilities for X2-S. For X2-R, we currently provide drives which are about 5 times bigger, yielding a much higher capacity. X2-R drives will soon be 10 times bigger, and X2-R clusters will be able to grow to up to 8 X-Bricks.

Performance-wise, we can also see from the Test Results section that our single X2-S X-Brick was enough to service our VDI environment of 4000 desktops, with excellent storage traffic metrics (latency, bandwidth, IOPS) and resource consumption metrics (CPU, RAM) throughout all of the VDI environment's processes. X2-R clusters would deliver even higher compute performance, as they have 3x the RAM of X2-S.

Compute Hosts: Dell PowerEdge Servers

The test setup includes a homogenous cluster of 32 ESX servers for hosting the Citrix desktops, and 2 ESX servers for the virtual appliances used to manage the Citrix and vSphere infrastructure. We chose Dell's PowerEdge FC630 as our ESX hosts, as they have the compute power to deal with an environment at such a scale (125 virtual desktops per ESX host) and are a good fit for virtualization environments. Dell PowerEdge servers work with the Dell OpenManage systems management portfolio, which simplifies and automates server lifecycle management and can be integrated with VMware vSphere via a dedicated plugin.

Table 2 lists the ESX host details in our environment.

Table 2. ESX Hosts Details Used for VDI Desktops and Infrastructure

PROPERTIES               ESX HOSTS
System make              Dell
Model                    PowerEdge FC630
Processor sockets        2 CPUs x 2.10GHz
Processor type           Intel Xeon E5 @ 2.10GHz
Cores per socket         18
Logical processors       72
Memory                   524 GB
Ethernet NICs            4 x QLogic 10Gb
iSCSI NICs               4 x QLogic 10Gb
FC adapters              4 x QLE2742 Dual Port 32Gb
On-board SAS controller  1

In our test, we used FC connectivity to attach XtremIO LUNs to the ESX hosts, but iSCSI connectivity could have been used in the same manner.

We highly recommend selecting and purchasing servers after verifying the vendor, make and model against VMware's Hardware Compatibility List (HCL). It is also recommended to install the latest firmware for the server and its adapters, and to use the latest GA release of VMware vSphere ESXi, including the latest update releases or express patches. For more information on the Dell EMC PowerEdge FC630, see its specification sheet [12].

Storage Configuration

This section outlines the storage configuration in our test environment, highlighting zoning considerations, XtremIO Volumes, Initiator Groups, and the mapping between Volumes and Initiator Groups.

Zoning

In a single X-Brick cluster configuration, a host equipped with a dual-port storage adapter may have up to four paths per device. Figure 61 shows the logical connection topology for four paths. Each XtremIO Storage Controller has two Fibre Channel paths that connect to the physical host via redundant SAN switches.

Figure 61. Dual Port HBA on an ESX Host to a Single X2 X-Brick Cluster Zoning

As recommended in the EMC Host Connectivity Guide for VMware ESX Server [6], the following connectivity guidelines should be followed:
- Use multiple HBAs on the servers.
- Use at least two SAN switches to provide redundant paths between the servers and the XtremIO cluster.
- Restrict zoning to four paths to the storage ports from a single host.
- Use a single-target-per-single-initiator (1:1) zoning scheme.

Storage Volumes

We provisioned XtremIO Volumes as follows:
- 1 Volume of 4TB for hosting all virtual machines serving the management functions of the VDI environment.
- 32 x 3TB Volumes for hosting PVS/MCS Linked Clone desktops.
- 32 x 10TB Volumes for hosting MCS Full Clone desktops.

We highly recommend leveraging the capabilities of the EMC VSI plugin for the vSphere Web Client to provision multiple XtremIO Volumes.

Initiator Groups and LUN Mapping

We configured a 1:1 mapping between Initiator Groups and ESX hosts in our test environment. Each of our ESX hosts has a dual-port FC HBA, so each Initiator Group contains two Initiators mapped to the two WWNs of the FC HBA. Altogether, 34 Initiator Groups were created, as follows:
- 2 Initiator Groups for mapping volumes to the 2 management servers.
- 32 Initiator Groups for mapping volumes to the 32 ESX hosts hosting VDI desktops.

The Initiator Groups and Volumes were mapped as follows:
- 1 Volume (size = 2TB) mapped to the 2 management infrastructure Initiator Groups.
- 32 Volumes (3TB for PVS/MCS Linked Clones, 10TB for MCS Full Clones) mapped to the Initiator Groups of the 32 ESX hosts hosting virtual desktops.

Storage Networks

We used FC connectivity between our X2 storage array and the ESX hosts to provision LUNs, but our environment was also iSCSI-ready. For the SAN fabric, we used Brocade G620 switches connecting the HBAs on the hosts to the Storage Controllers on the X-Brick. Important Brocade G620 details are summarized in Table 3. For more details on the FC switch, refer to the Brocade G620 Switch Datasheet.

Table 3. Brocade G620 FC Switch Details

Make/Model                   Brocade G620
Form factor                  1U
FC Ports                     64
Port Speed                   32Gb
Maximum Aggregate Bandwidth  2048Gbps Full Duplex
Supported Media              128Gbps, 32Gbps, 16Gbps, 10Gbps

For iSCSI connectivity, we used Mellanox MSX1016 switches connecting host ports to the Storage Controllers on the X-Brick. Important Mellanox MSX1016 details are summarized in Table 4. For more details on the iSCSI switch, refer to the Mellanox MSX1016 Switch Product Brief.

Table 4. Mellanox MSX1016 10GbE Switch Details

Make/Model       Mellanox MSX1016 10GbE
Form factor      1U
Ports            64
Port Speed       10G
Jumbo Frames     Supported (9216 byte size)
Supported Media  1GbE, 10GbE

We highly recommend installing the most recent FC and iSCSI switch firmware for datacenter deployments.

Solution's Software Layer

This section describes the software layers of our VDI environment, including configurations pertaining to the VMware vSphere Hypervisor (ESXi), the vCenter Server and the Citrix XenDesktop suite. We also detail the virtual machines that enact specific roles in the software-defined datacenter for VDI.

Hypervisor Management Layer

The following section describes the configuration settings adopted for the Hypervisor Management Layer of the environment, namely VMware ESXi 6.5 Update 1 components, including relevant plugins.

vCenter Server Appliance

We installed a single instance of VMware vCenter Server (VCS) 6.5 Update 1, deployed with SQL Server 2017 and a Platform Services Controller (PSC) with embedded Single Sign-On service (SSO). For production, we recommend deploying 2 VCSs, one for the management infrastructure and one for the VDI infrastructure, and deploying a lightweight PSC and SSO appliance, with the two VCSs binding to the same SSO domain. The configuration of the VCS used in our environment is outlined in Table 5.

Table 5. vCenter Server Virtual Appliance Settings

PROPERTIES            VCSA
No. of vCPUs          32
Memory                64GB
Database              SQL Server 2017
No. of virtual disks  1
Total storage         60GB

Hypervisor

All VMware ESXi servers hosting virtual desktops, and the two servers hosting the virtual machines for management functions, run VMware vSphere Hypervisor (ESXi 6.5 Update 1). We highly recommend installing the most recent ESXi 6.5 update release. A single virtual datacenter was created to hold both the management cluster and the VDI cluster.

ESX Clusters

Management Cluster

A management ESX cluster for the VDI environment was created with two ESX servers, hosting the server virtual machines and virtual appliances for the following functions:
- DHCP server
- Active Directory, co-located with a DNS server
- VMware vCenter Server
- SQL Server 2017 database for the VMware vCenter and Citrix Studio databases
- 2 Citrix Studio servers
- 5 Citrix PVS servers
- EMC Solutions Integration Service
- LoginVSI file shares
- LoginVSI management
- LoginVSI launchers

The virtual machines for LoginVSI file shares, LoginVSI management and LoginVSI launchers are part of the test platform and should not be factored in when planning a production VDI infrastructure. Details on the LoginVSI infrastructure are discussed in Appendix A Test Methodology and in LoginVSI's documentation.

VDI Cluster

A VDI ESX cluster was created for the 32 ESX servers hosting the virtual desktops. It is highly recommended to enable DRS (Distributed Resource Scheduler) for the cluster. Collectively, the cluster has approximately the equivalent of the following resources:
- Compute resources totaling 2.42THz
- Memory resources totaling 16TB

Network Configuration

For ease of management and a unified view of the network configuration, we recommend vSphere Distributed Switches (VDS) for production environments. For best results, we highly recommend segregating network traffic using multiple vSwitch port groups, each backed by its own physical NICs. For production environments, we recommend configuring a VMkernel port group, using a 1GbE (or faster) interface, for the management traffic between the ESX hosts and the vCenter Server Appliance. A second VMkernel port group, located on a separate vSwitch and backed by a separate 1GbE (or faster) interface, should be configured for vMotion traffic. Lastly, we recommend placing the virtual machine port group on another vSwitch, backed by a 10GbE interface, for all virtual machine traffic. If iSCSI is used as the storage protocol, another VMkernel port group, backed by a separate 10GbE NIC, is needed. A hedged PowerCLI sketch of these cluster and network settings follows.
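The following is a minimal PowerCLI sketch of the recommendations above. The cluster name "VDI", the distributed switch name "VDS01", the vCenter name and the port group names are all placeholders, and the flat one-port-group-per-traffic-type layout is illustrative rather than prescriptive.

```powershell
# Hedged sketch: enable DRS on the VDI cluster and create the recommended
# port groups on an existing vSphere Distributed Switch. Names are placeholders.
Connect-VIServer -Server vc.example.local

# Enable fully automated DRS on the VDI cluster.
Get-Cluster "VDI" | Set-Cluster -DrsEnabled $true `
    -DrsAutomationLevel FullyAutomated -Confirm:$false

# Create one distributed port group per traffic type on the existing VDS.
$vds = Get-VDSwitch -Name "VDS01"
New-VDPortgroup -VDSwitch $vds -Name "PG-Management"   # ESX/vCenter management
New-VDPortgroup -VDSwitch $vds -Name "PG-vMotion"      # vMotion VMkernel traffic
New-VDPortgroup -VDSwitch $vds -Name "PG-VM"           # virtual machine traffic
New-VDPortgroup -VDSwitch $vds -Name "PG-iSCSI"        # only if iSCSI is used
```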

Figure 62 shows our non-production environment's in-band network configuration, where management traffic and virtual machine traffic flow through the same 10GbE NIC. The switch connected to the NICs was isolated at the physical layer from the outside world to avoid interference. We strongly advise following VMware's best practices for networking in vSphere environments.

Figure 62. In-Band Virtual Machine Networking Used in Test Environment

Storage Configuration, EMC SIS and VSI

EMC Solutions Integration Service 7.3 (EMC SIS) provides unique storage integration capabilities between VMware vSphere 6.5 and EMC XtremIO X2 (XMS and above). The EMC VSI (Virtual Storage Integrator) 7.3 plugin for the VMware vSphere Web Client can be registered via EMC SIS. Multiple new VMFS (Virtual Machine File System) datastores, backed by XtremIO Volumes, can be created using the VSI plugin at the click of a button. The VSI plugin interacts with EMC XtremIO to create Volumes of the required size, map them to the appropriate Initiator Groups, and create a VMFS datastore on vSphere, ready for use. Figure 63 shows an example of using the EMC plugin for datastore creation.

Figure 63. Create a Datastore Using the EMC VSI Plugin

The VSI plugin can also be used to modify ESXi host/cluster storage-related settings, set multipath management and policies, and invoke space-reclaim operations from an ESX server or from a cluster. The VSI plugin is the best way to enforce the following EMC-recommended best practices on ESX servers (see Figure 64):
- Enable VAAI.
- Set the queue depth on the FC HBA to 256.
- Set the multipathing policy to Round Robin on each of the XtremIO SCSI disks.
- Set the I/O path-switching parameter to 1.
- Set the outstanding I/O request limit to 256.
- Set the "SchedQuantum" parameter to 64.
- Set the maximum limit on disk I/O size.

For administrators who prefer scripting these settings, an equivalent hedged PowerCLI/esxcli sketch follows.
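Where the VSI plugin is not available, the same host settings can be applied from the command line. The sketch below is illustrative: the QLogic module name and parameter (qlnativefc / ql2xmaxqdepth) apply to QLogic HBAs such as the QLE2742 used here, the Disk.DiskMaxIOSize value of 4096 is our assumption of the commonly recommended XtremIO value, and the device filtering by the XtremIO vendor string should be verified in your environment.

```powershell
# Hedged PowerCLI sketch of the EMC-recommended host settings (verify values
# against the current EMC Host Connectivity Guide before applying).
$esx = Get-VMHost "esx01.example.local"          # placeholder host name

# Round Robin multipathing, switching paths after every I/O, on XtremIO LUNs.
Get-ScsiLun -VmHost $esx -LunType disk |
    Where-Object { $_.Vendor -eq "XtremIO" } |
    Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1

# Disk scheduling quantum and maximum I/O size (4096 is an assumed value).
Get-AdvancedSetting -Entity $esx -Name "Disk.SchedQuantum" |
    Set-AdvancedSetting -Value 64 -Confirm:$false
Get-AdvancedSetting -Entity $esx -Name "Disk.DiskMaxIOSize" |
    Set-AdvancedSetting -Value 4096 -Confirm:$false

# FC HBA queue depth (QLogic driver; run in an ESXi shell, then reboot the host):
#   esxcli system module parameters set -m qlnativefc -p "ql2xmaxqdepth=256"
```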

Figure 64. Configuring EMC Recommended Settings Using the VSI Plugin

In our environment, we provisioned 32 XtremIO Volumes for storing all relevant virtual machine data. The VSI plugin can be used to enforce the above-mentioned configuration settings on all of these datastores, across all ESX servers in the cluster, at the click of a button.

As discussed in Storage Configuration on page 49, we have a dual-port 16G FC HBA. An XtremIO X2 single X-Brick cluster configuration has two Storage Controllers, resulting in four Targets. In accordance with XtremIO X2 best practices for zoning, each LUN will have four paths. The same logic should be used when provisioning storage using iSCSI connectivity. For environments where bulk provisioning is scripted rather than done through the plugin, a hedged REST-based sketch follows.
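As an alternative to the VSI plugin for bulk provisioning, volumes and LUN mappings can be scripted against the XMS RESTful API. This is a hedged sketch: the endpoints follow the v2 JSON API, but the body field names (vol-name, vol-size, vol-id, ig-id) are assumptions to verify against the XtremIO RESTful API guide, and the volume and Initiator Group names mirror the linked-clone layout used in this paper.

```powershell
# Hedged sketch: create the 32 x 3TB linked-clone volumes and map each one to
# every ESX host's Initiator Group so the whole cluster sees the datastores.
$xms  = "https://xms.example.local"      # hypothetical XMS address
$cred = Get-Credential

for ($i = 1; $i -le 32; $i++) {
    $volName = "VDI-LC-{0:D2}" -f $i
    # Create the volume (size expressed here as a string with units).
    Invoke-RestMethod -Method Post -Credential $cred `
        -Uri "$xms/api/json/v2/types/volumes" -ContentType "application/json" `
        -Body (@{ "vol-name" = $volName; "vol-size" = "3T" } | ConvertTo-Json)

    # Map the new volume to each host's Initiator Group (assumed IG names).
    for ($h = 1; $h -le 32; $h++) {
        Invoke-RestMethod -Method Post -Credential $cred `
            -Uri "$xms/api/json/v2/types/lun-maps" -ContentType "application/json" `
            -Body (@{ "vol-id" = $volName; "ig-id" = ("ESX-IG-{0:D2}" -f $h) } | ConvertTo-Json)
    }
}
```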

Virtual Desktop Management Layer: Citrix XenDesktop 7.16

The following section describes the configuration settings adopted for Citrix XenDesktop 7.16 MCS and PVS pools.

Citrix provides a complete virtual app and desktop solution to meet customers' needs from a single, easy-to-deploy platform. Citrix XenApp and XenDesktop 7.16 integrate Citrix XenApp application delivery technologies and XenDesktop desktop virtualization technologies into a single architecture and management experience. This architecture unifies both management and delivery components to enable a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as secure mobile services to users anywhere, on any device.

Figure 65. XenDesktop 7.16 Architecture Components

Citrix XenDesktop

The solution described in this reference architecture is based on Citrix XenDesktop 7.16, which provides a complete end-to-end solution delivering Microsoft Windows virtual desktops to users on a wide variety of endpoint devices. Virtual desktops are dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time they log on.

Citrix XenDesktop provides a complete virtual desktop delivery system by integrating several distributed components with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure. The XenDesktop 7.16 release offers these benefits:
- Comprehensive virtual desktop delivery for any use case. The XenDesktop 7.16 release incorporates the full power of XenApp, delivering full desktops or just applications to users. Administrators can deploy both XenApp published applications and desktops (to maximize IT control at low cost) or personalized VDI desktops (with simplified image management) from the same management console. Citrix XenDesktop 7.16 leverages common policies and cohesive tools to govern both infrastructure resources and user access.

- Simplified support and choice of BYO (Bring Your Own) devices. XenDesktop 7.16 brings thousands of corporate Microsoft Windows-based applications to mobile devices with a native-touch experience, including intensive design and engineering applications.
- Lower cost and complexity of application and desktop management. XenDesktop 7.16 helps IT organizations take advantage of agile and cost-effective cloud offerings, allowing the virtualized infrastructure to flex and meet seasonal demands or sudden capacity changes. IT organizations can deploy XenDesktop application and desktop workloads to private or public clouds.
- Protection of sensitive information through centralization. XenDesktop decreases the risk of corporate data loss, enabling access while securing intellectual property and centralizing applications, since assets reside in the datacenter.
- Virtual Delivery Agent improvements. Universal print server and driver enhancements, and support for HDX 3D Pro graphics acceleration for Windows 10, are key additions in XenDesktop 7.16.
- Improved high-definition user experience. XenDesktop 7.16 continues the evolutionary display protocol leadership with the enhanced Thinwire display remoting protocol and Framehawk support for HDX 3D Pro.

Citrix XenDesktop Components

The XenDesktop 7.16 architecture includes the following components:
- Studio: the management console that enables you to configure and manage your deployment, eliminating the need for separate consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.
- Delivery Controller (DC): installed on servers in the data center, the Controller authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops. The Controller also manages the state of desktops, starting and stopping them based on demand and administrative configuration.
- Database: at least one Microsoft SQL Server database is required for every XenApp or XenDesktop Site, to store configuration and session information. The Delivery Controller must have a persistent connection to the database, as it stores data collected and managed by the Controller services.
- Director: a web-based tool that enables IT support teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users. You can also view and interact with a user's sessions using Microsoft Remote Assistance. Starting with version 7.16, Director includes detailed descriptions for connection and machine failures, one-month historical data (Enterprise edition), custom reporting, and SNMP trap notifications.
- Receiver: installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user's devices, including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, Web, and Software as a Service (SaaS) applications. For devices that cannot install the Receiver software, Citrix Receiver for HTML5 provides connectivity through an HTML5-compatible web browser.
- StoreFront: authenticates users to sites hosting resources, and manages stores of desktops and applications accessed by users.
  StoreFront version 3.8 (released with XenDesktop 7.16) and above includes the ability to create and use multiple IIS websites, each having its own domain name.
- License Server: the Citrix License Server is an essential component of any Citrix-based solution. Every Citrix product environment must have at least one shared or dedicated license server. License servers are computers that are either partly or completely dedicated to storing and managing licenses. Citrix products request licenses from a license server when users attempt to connect.
- Machine Creation Services (MCS): a collection of services that work together to create virtual servers and desktops from a master image on demand, optimizing storage utilization and providing a pristine virtual machine to users every time they log on. Machine Creation Services is fully integrated and administered in Citrix Studio.

- Provisioning Services (PVS): the Provisioning Services infrastructure is based on software-streaming technology. This technology allows computers to be provisioned and re-provisioned in real time from a single shared-disk image.
- Virtual Delivery Agent (VDA): a transparent plugin that is installed on every virtual desktop or XenApp host (RDSH) and enables the direct connection between the virtual desktop and users' endpoint devices. Windows and Linux VDAs are available.

Machine Creation Services (MCS)

Citrix Machine Creation Services is the native provisioning mechanism within Citrix XenDesktop for virtual desktop image creation and management. Machine Creation Services uses the hypervisor APIs to create, start, stop, and delete virtual desktop images. Desktop images are organized in a Machine Catalog, and within that catalog there are a number of options available for creating and deploying virtual desktops:
- Random non-persistent desktops, also known as pooled VDI desktops. Each time users log on to use one of these desktops, they connect to a dynamically selected desktop in a pool of desktops based on a single master image. All changes to the desktop are lost when the machine reboots.
- Static non-persistent desktops. The first time a user logs on to use one of these desktops, he is assigned a desktop from a pool of desktops based on a single master image. After the first use, each subsequent logon connects the user to the same desktop assigned on first use. All changes to the desktop are lost when the machine reboots.
- Static persistent desktops, also known as VDI with Personal vDisk. Unlike other types of VDI desktops, these desktops can be fully personalized by users. The first time a user logs on to use one of these desktops, he is assigned a desktop from a pool of desktops based on a single master image. Subsequent logons from that user connect to the same desktop that was assigned on first use. Changes to the desktop are retained across machine reboots, because they are stored in a Personal vDisk.

Figure 66. XenDesktop 7.16 MCS Architecture Components

All the desktops in a random or static catalog are based on a master desktop template, which is selected during the catalog creation process. MCS then takes snapshots of the master template and layers two additional virtual disks on top: an Identity vDisk and a Difference vDisk. The Identity vDisk includes all the desktop-specific identity information, such as host names and passwords. The Difference vDisk is where all writes and changes to the desktop are stored. These Identity and Difference vDisks are stored for each desktop on the same datastore as their related clone.

While traditionally used for small to medium-sized XenDesktop deployments, MCS offers substantial storage cost savings because of the snapshot/identity/difference disk methodology. The disk space requirements of the Identity and Difference disks, layered on top of a master image snapshot, are far less than those of a dedicated desktop architecture.

In addition to provisioning MCS Linked Clone pools, XenDesktop 7.16 supports provisioning MCS Full Clone pools, which was not possible in earlier releases. Full Clone desktops highlight XtremIO X2's outstanding advantages for VDI environments, including its exceptional deduplication and compression mechanisms, thus reducing the storage cost.

Figure 67. XenDesktop 7.16 MCS 4000 Full Clones Pool

With Citrix Studio, we can configure the interface to the vSphere environment that will be used to populate our virtual desktops, selecting the resources the environment will run on, including datastores, vCenter, ESXi hosts, virtual networks, etc.

Figure 68. Configuring the Interface

Provisioning Services (PVS)

PVS is an alternative image provisioning method which uses streaming to share a single base vDisk image, instead of copying images to VMs. PVS is used to deliver shared vDisk images to physical or virtual machines. Provisioning Services enables real-time streamed provisioning and re-provisioning, so that administrators do not need to manage and patch individual systems. After the PVS components are installed and configured, a vDisk is created from a device's hard drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the network. A device used for this process is referred to as a master target device. The devices that use the vDisks are called target devices. vDisks can exist on a PVS server or file share, and in larger deployments, on a storage system with which PVS can communicate. vDisks can be assigned to a single target device in Private Image mode, or to multiple target devices in Standard Image mode.

When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server. Unlike thin-client technology, processing takes place on the target device. The target device downloads the boot file from a Provisioning Server and boots. Based on the boot configuration settings, the appropriate vDisk is mounted on the Provisioning Server. The vDisk software is then streamed to the target device as needed, appearing as a regular hard drive to the system. Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging solutions), the data is brought across the network in real time, as needed. This approach allows a target device to get a completely new operating system and a new software version in the time it takes to reboot. It also dramatically decreases the amount of network bandwidth required, making it possible to support a larger number of target devices on a network without impacting performance.

Figure 69. XenDesktop 7.16 PVS Architecture Components

Desktop images are organized in a Machine Catalog. Within that catalog, there are a number of options available for creating and deploying virtual or physical desktops:
- Random: virtual or physical desktops are assigned randomly as users connect. When they log off, the desktop is reset to its original state and becomes available to other users. Any changes made by the user are discarded at log off.
- Static: virtual desktops are assigned to the same user every time, with user changes stored on a separate Personal vDisk.

Using Provisioning Services, vDisk images are configured in Standard Image mode (read-only) or Private Image mode (read/write). A vDisk in Standard Image mode allows multiple desktops to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is otherwise required (non-persistent). Private Image mode vDisks are equivalent to dedicated hard disks and can only be used by one target device at a time (persistent). The Provisioning Server runs on a virtual instance of Windows Server 2012 R2 or Windows Server 2016 on the Management Server(s).

PVS Write Cache

Citrix Provisioning Services delivery of standard images relies on write caches to store any writes made by the target OS. The most common write-cache implementation places the write cache on the target machine's storage. Independent of the physical or virtual nature of the target machine, this storage has to be allocated and formatted to be usable. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk. Instead, it is written to a write cache file in one of the following locations:
- Cache on device hard drive. The write cache exists as a file in NTFS format, located on the target device's hard drive. This option frees up the Provisioning Server, since it does not have to process write requests, and does not have the finite limitation of RAM.
- Cache in device RAM. The write cache can exist as a temporary file in the target device's RAM. This provides the fastest method of disk access, since memory access is always faster than disk access.
- Cache in device RAM with overflow on hard disk. This method uses the VHDX differencing format and is only available for Windows 10 and later. When RAM is not available, the target device write cache is written to the local disk. When RAM is available, the target device write cache is written to RAM first. When RAM is full, the data block not accessed for the longest time is written to the local differencing disk to accommodate newer data in RAM. The amount of RAM specified is the non-paged kernel memory that the target device will consume.
- Cache on a server. The write cache can exist as a temporary file on a Provisioning Server. In this configuration, all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write cache file persists on the hard drive between reboots, encrypted data provides protection in the event a hard drive is stolen.

For large-scale production environments, we recommend the Cache on device hard drive method, which takes advantage of XtremIO X2's exceptional performance, deduplication and compression capabilities, rather than overloading the physical servers' resources.

Figure 70. XenDesktop 7.16 PVS vDisk Properties

Personal vDisk

Citrix Personal vDisk is an enterprise workspace virtualization solution built into Citrix XenDesktop. Personal vDisk provides the user customization and personalization benefits of a persistent desktop image, with the storage savings and performance of a single/shared image. Used in conjunction with a static desktop experience, Citrix Personal vDisk allows each user to receive personal storage in the form of a layered vDisk (3GB minimum). This personal vDisk enables users to personalize and save their desktop environment, while providing storage for any user or departmental apps.

Personal vDisk provides the following benefits to XenDesktop:
- Persistent personalization of user profiles, settings and data.
- Deployment and management of user-installed and entitlement-based applications.
- Full compatibility with Microsoft SCCM and App-V.
- 100% persistence with VDI pooled storage management.
- Near-zero management overhead.

For this reference architecture, we used 5 PVS servers in order to load balance them during boot storms. Each PVS server contains a copy of the Windows 10 vDisk in Standard Image mode. This allows multiple desktops to boot from it simultaneously, greatly reducing the number of images that must be maintained and the amount of storage that is otherwise required. In addition, a 6GB Personal vDisk was attached to each virtual desktop as an additional vmdk file, providing dedicated storage space for user profiles, settings and data.

Figure 71. XenDesktop 7.16 PVS Device Collection Balanced Across Multiple PVS Servers

Citrix XenDesktop 7.16 Configurations and Tuning

Citrix XenDesktop 7.16 customizations were quite minimal. Some of the tuning done is highlighted in this section.

XenDesktop Delivery Controller

The default limits for concurrent XenDesktop operations are defined in the Hosting Configuration section of Citrix Studio. These default values are quite conservative and can be increased. XtremIO X2 best practices for maximum operations include tuning the Delivery Controller as follows:
- Max new actions per minute (recommended value = 200).
- Max simultaneous actions (all types) (recommended value = 200).

The higher values drastically reduce the time spent on operations such as creating and updating machine catalogs. A hedged PowerShell equivalent of this tuning follows.

Figure 72. Advanced Connection Settings in Citrix XenDesktop 7.16
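For scripted deployments, the same limits can be adjusted with the Citrix Broker PowerShell SDK. This is a hedged sketch: the property names MaxAbsoluteNewActionsPerMinute and MaxAbsoluteActiveActions are our reading of the hosting-connection throttle settings, and the connection name "XD-vCenter" is a placeholder; verify both against the XenDesktop SDK documentation for your release.

```powershell
# Hedged sketch: raise the hosting-connection throttles via the Broker SDK.
# Property names and the connection name are assumptions to verify.
Add-PSSnapin Citrix*                         # load the XenDesktop PowerShell SDK

Get-BrokerHypervisorConnection -Name "XD-vCenter" |
    Set-BrokerHypervisorConnection `
        -MaxAbsoluteNewActionsPerMinute 200 `
        -MaxAbsoluteActiveActions 200
```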

Microsoft Windows 10 Desktop Configuration and Optimization

Login VSI provides guidelines for the suggested hardware configuration of the base Windows 10 image for each level of workload to be exercised. Since we wanted to mirror production customer environments as closely as possible in our testing, we elected to use Windows 10 Enterprise 32-bit, Office 2016 and the Knowledge Worker workload, as we find that environment is most common in our present customer base. The Windows 10 32-bit OS configuration for MCS and PVS is listed in more detail in Table 6.

Table 6. Windows 10 32-bit Desktop Configuration

COMPONENT                                  DESCRIPTION
Desktop                                    Windows 10 Enterprise 1709 (32-bit)
Hardware Version                           13
vCPU                                       2 (1 Socket, 2 Cores)
Memory                                     4GB
vNIC Adapter                               VMXNET 3
SCSI Controller                            Paravirtual
Virtual Disk                               32GB, Thin Provision
XenDesktop Virtual Delivery Agent (VDA)    Citrix VDA 7.16
Installed Applications                     Microsoft Office 2016, Adobe Reader 11, Flash Player 11 ActiveX, Doro 1.82, Internet Explorer, 7-Zip, Windows Media Player

To optimize the desktop OS, we used Citrix Optimizer (CTXO), a Windows-based tool that helps Citrix administrators optimize various components in their environment, most notably the OS running the Virtual Delivery Agent (VDA). The tool is PowerShell-based, but also includes a graphical UI. Citrix Optimizer can run in three different modes:
- Analyze: analyze the current system against a specified template and display any differences.
- Execute: apply the optimizations from the template.
- Rollback (available in PowerShell only for the Beta release): revert previously applied optimization changes.

Citrix Optimizer currently supports several actions/modules that can be defined in templates (the sketch following this list illustrates the Analyze/Execute flow):
- Removal of built-in Windows applications (UWP)
- Enabling/disabling of Windows services
- Enabling/disabling of scheduled tasks
- Registry changes
- Custom PowerShell code
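The sketch below illustrates, in Python, how a template-driven optimizer of this kind separates its Analyze and Execute modes. It is purely illustrative — Citrix Optimizer itself is PowerShell-based and consumes XML templates — and the template fields and service names here are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class OptimizerTemplate:
    """Hypothetical stand-in for an optimizer template definition."""
    disable_services: list = field(default_factory=list)
    disable_tasks: list = field(default_factory=list)
    remove_uwp_apps: list = field(default_factory=list)

def run(template: OptimizerTemplate, system_state: dict, mode: str = "Analyze") -> list:
    """Analyze reports drift from the template; Execute also applies it."""
    drift = [svc for svc in template.disable_services
             if system_state["services"].get(svc) == "running"]
    if mode == "Analyze":
        return drift                    # report-only: no changes made
    if mode == "Execute":
        for svc in drift:
            system_state["services"][svc] = "disabled"
        return drift                    # the changes that were applied
    raise ValueError(f"unsupported mode: {mode}")

# Example: a template that disables two services often unneeded in VDI.
tmpl = OptimizerTemplate(disable_services=["XblGameSave", "MapsBroker"])
state = {"services": {"XblGameSave": "running", "MapsBroker": "disabled"}}
print(run(tmpl, state, "Analyze"))   # ['XblGameSave'] -- drift only
print(run(tmpl, state, "Execute"))   # applies the change in place
```

A Rollback mode can be built the same way: record each change made during Execute and replay the inverse operations.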

Figure 73. Citrix Optimizer (CTXO) for Windows 10 Build 1709

We strongly recommend running this tool regularly and optimizing the master images by disabling unnecessary Windows features in order to obtain maximum performance.

Conclusion

From the results presented herein, we conclude that Dell EMC's XtremIO X2 All-Flash Storage Array offers best-in-class performance and fulfills all storage capacity and storage I/O processing requirements for VDI environments. This reference architecture details the hardware and software components of our VDI infrastructure and published applications, along with their configurations, giving datacenter architects a solid starting point for designing VDI environments that perform at scale.

References

1. Dell EMC XtremIO Main Page
2. Dell EMC XtremIO X2 Specifications
3. Dell EMC XtremIO X2 Datasheet
4. Dell EMC XtremIO X2 Storage Array
5. XtremIO X2 vSphere Demo
6. EMC Host Connectivity Guide for VMware ESX Server
7. XtremIO X2 with VDI Benefits Video
8. XtremIO X2 VM Boot Demo
9. XtremIO CTO Blog (with product announcements and technology deep dives)
10. XtremIO VDI Reference Architecture
11. Dell EMC Virtual Storage Integrator (VSI) Product Page
12. Dell EMC PowerEdge FC630 Specification Sheet
13. VMware vSphere 6.5 Configuration Maximums Guide
14. Performance Best Practices for VMware vSphere
15. LoginVSI Main Documentation Page
16. LoginVSI VSImax and Test Methodology Explained
17. Citrix XenApp and XenDesktop Main Page
18. Citrix Optimizer
19. XenApp and XenDesktop 7.16 Release Notes
20. Citrix Products Documentation

Appendix A Test Methodology

We used the LoginVSI "Knowledge Worker" profile to emulate workloads on our VDI desktops. LoginVSI effectively emulates a typical office worker profile that logs into a VDI desktop and performs activities representative of a desktop user, such as opening a Word document, modifying an Excel spreadsheet, and browsing a PDF file or a web page.

A LoginVSI setup has the following components driving the LoginVSI workload:
- A LoginVSI management server
- A LoginVSI file share server
- LoginVSI launcher machines

The LoginVSI management server has applications that launch the tests and monitor user session status, and an analyzer application that calculates and presents the VSI baseline, VSI average and VSI threshold shown in Figure 17, Figure 18 and Figure 19; a simplified sketch of this threshold arithmetic closes this appendix. The file share server contains folders and files that are accessed as part of the workload. The launcher machines launch the user sessions that initiate a connection to the desktop, which then starts the workload.

LoginVSI is a vendor-independent tool that helps characterize the desktop "user experience" of an environment, regardless of the VDI vendor or the protocol used for remote sessions. The "Desktop Direct Connect" (DDC) mode is specifically targeted at storage vendors. DDC enables a user to log on to the desktop directly, bypassing the remote protocol but still driving the I/O operations produced by a user logon and by subsequent application launches.

For scaling to 4000 desktop connections, we needed three "launcher" virtual machine instances. To ensure that we were not limited by file share limitations, we used ten file server instances; LoginVSI distributes file access across the servers. Table 7 shows the launcher and file server virtual machine settings.

Table 7. Launcher and File Server Virtual Machine Settings

PROPERTIES                   3 LAUNCHER SERVERS + 10 FILE SERVERS
Operating System             64-bit Windows Server 2016
VM Version                   13
No. of vCPUs                 2
Memory                       4GB
No. of vNICs                 1
No. of Virtual Disks         1
Total Storage Provisioned    40GB

The management cluster, consisting of two ESXi servers, hosted the launcher virtual machines, which are part of the LoginVSI test platform that initiates connections to the VDI desktops. For more details on LoginVSI component functionality, see the LoginVSI documentation.
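For readers unfamiliar with the VSI metrics mentioned above, the following Python sketch shows a simplified version of the baseline-and-threshold arithmetic. It assumes the commonly documented VSImax v4 rule of thumb — saturation is declared when the rolling average response time exceeds the baseline plus roughly 1000 ms — and is not the exact LoginVSI algorithm (see reference 16 for that).

```python
def vsi_baseline(response_times_ms: list, sample: int = 15) -> float:
    """Baseline: average of the lowest response-time samples taken
    while the system is still lightly loaded (simplified)."""
    return sum(sorted(response_times_ms)[:sample]) / sample

def vsimax_reached(avg_response_ms: float, baseline_ms: float,
                   allowance_ms: float = 1000.0) -> bool:
    """Simplified VSImax rule: saturation once the rolling average
    response time exceeds baseline + allowance."""
    return avg_response_ms > baseline_ms + allowance_ms

# With a roughly 800 ms baseline, sessions count as responsive until
# the rolling average crosses roughly 1800 ms.
samples = [802, 795, 810, 790, 785, 805, 798, 788, 792, 800,
           796, 801, 787, 793, 799]
base = vsi_baseline(samples)
print(round(base), vsimax_reached(1650, base), vsimax_reached(1900, base))
# -> 796 False True
```

The closer the per-session response curve stays to this baseline as session counts scale, the better the underlying infrastructure — storage included — is holding up under load.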

How to Learn More

For a detailed presentation explaining the XtremIO X2 Storage Array's capabilities and how XtremIO X2 substantially improves performance, operational efficiency, ease of use and total cost of ownership, please contact XtremIO at XtremIO@emc.com. We will schedule a private briefing in person or via a web meeting. XtremIO X2 provides benefits in many environments and mixed-workload consolidations, including virtual server, cloud, virtual desktop, database, analytics and business applications.

Learn more about Dell EMC XtremIO | Contact a Dell EMC Expert | View more resources | Join the conversation with #XtremIO

All Rights Reserved. Dell, EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be trademarks of their respective owners. Reference Number H17019
