NetApp All-Flash FAS Solution


Technical Report

NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View

Chris Gebhardt, Chad Morgenstern, Rachel Zhu, NetApp
August 2014 | TR-4307

TABLE OF CONTENTS

1 Executive Summary
   1.1 Reference Architecture Objectives
   1.2 Solution Overview
2 Introduction
   2.1 Document Overview
   2.2 NetApp All-Flash FAS Overview
   2.3 VMware Horizon View
   2.4 Login VSI
3 Solution Infrastructure
   3.1 Hardware Infrastructure
   3.2 Software Components
   3.3 VMware vSphere 5.5
   3.4 NetApp Virtual Storage Console
   3.5 Virtual Desktops
   3.6 Login VSI Server
   3.7 Login VSI Launcher VM
   3.8 Microsoft Windows Infrastructure VM
4 Storage Design
   4.1 Storage Design Overview
   4.2 Aggregate Layout
   4.3 Volume Layout
   4.4 NetApp Virtual Storage Console for VMware vSphere
5 Network Design
   5.1 Network Switching
   5.2 Host Server Networking
   5.3 Storage Networking
6 Horizon View Design
   6.1 Overview
   6.2 User Assignment
   6.3 Automated Desktop Pools
   6.4 Linked-Clones Desktops
   6.5 Creating VMware Horizon View Desktop Pools
7 Login VSI Workload
   7.1 Login VSI Components
8 Testing and Validation: Linked-Clones Desktops
   8.1 Overview
   8.2 Test Results Overview
   8.3 Storage Efficiency
   8.4 Provisioning 2,000 VMware Horizon View Linked Clones
   8.5 Boot Storm Test
   8.6 Boot Storm During Storage Failover Test
   8.7 Steady-State Login VSI Test
   8.8 Refresh Test
   8.9 Recompose Test
9 Conclusion
   9.1 Key Findings
References
Future Document Revisions
Version History
Acknowledgements

LIST OF TABLES

Table 1) Test results
Table 2) FAS8000 storage system technical specifications
Table 3) VMware Horizon View Connection VM configuration
Table 4) Horizon View Composer VM configuration
Table 5) Hardware components of server categories
Table 6) Solution software components
Table 7) VMware vCenter Server VM configuration
Table 8) Microsoft SQL Server database VM configuration
Table 9) NetApp VSC VM configuration
Table 10) Virtual desktop configuration
Table 11) Login VSI Server configuration
Table 12) Login VSI launcher VM configuration
Table 13) Microsoft Windows infrastructure VM
Table 14) VMware Horizon View configuration options
Table 15) Test results overview
Table 16) Efficiency results
Table 17) Results for linked-clones provisioning
Table 18) Results for linked-clones boot storm
Table 19) Power-on method, storage latency, and boot time
Table 20) Results for linked-clone boot storm during storage failover
Table 21) Results for linked-clones Login VSI initial login and workload
Table 22) Results for linked-clones Login VSI initial login and workload during storage failover
Table 23) Results for linked-clones Monday morning login and workload
Table 24) Results for linked-clones Monday morning login and workload during storage failover
Table 25) Results for linked-clone Tuesday morning login and workload during storage failover
Table 26) Results for linked-clones refresh operation
Table 27) Results for linked-clones recompose operation

LIST OF FIGURES

Figure 1) Clustered Data ONTAP
Figure 2) Horizon View deployment (graphic supplied by VMware)
Figure 3) VMware View linked clone using View Composer
Figure 4) View concurrent operation limits
Figure 5) Solution infrastructure
Figure 6) Setting the uuid.action in the vmx file with Windows PowerShell
Figure 7) VMware OS Optimization tool
Figure 8) Login VSI launcher configuration
Figure 9) Multipath HA to DS2246 shelves of SSD
Figure 10) SSD layout
Figure 11) Volume layout
Figure 12) Network topology of storage to server
Figure 13) VMware Horizon View pool and desktop-to-datastore relationship
Figure 14) Windows PowerShell script to create 4 pools of 500 desktops each
Figure 15) Login VSI components
Figure 16) Desktop-to-launcher relationship
Figure 17) Creating 500 VMs in one pool named vdi01n
Figure 18) Throughput and IOPS for linked-clones creation
Figure 19) Storage controller CPU utilization for linked-clones creation
Figure 20) Throughput and IOPS for linked-clones boot storm
Figure 21) Storage controller CPU utilization for linked-clones boot storm
Figure 22) Read/write IOPS for linked-clones boot storm
Figure 23) Read/write ratio for linked-clones boot storm
Figure 24) Throughput and IOPS for linked-clones boot storm during storage failover
Figure 25) Storage controller CPU utilization for linked-clones boot storm during storage failover
Figure 26) Read/write IOPS for linked-clones boot storm during storage failover
Figure 27) Read/write ratio for linked-clones boot storm during storage failover
Figure 28) VSImax results for linked-clones Login VSI initial login and workload
Figure 29) Scatterplot for linked-clones Login VSI login times
Figure 30) Throughput, latency, and IOPS for linked-clones Login VSI initial login and workload
Figure 31) Storage controller CPU utilization for linked-clones Login VSI initial login and workload
Figure 32) Read/write IOPS for linked-clones Login VSI initial login and workload
Figure 33) Read/write ratio for linked-clones Login VSI initial login and workload
Figure 34) VSImax results for linked-clones Login VSI initial login and workload during storage failover
Figure 35) Scatterplot for linked-clones Login VSI initial login times during storage failover
Figure 36) Throughput, latency, and IOPS for linked-clones Login VSI initial login and workload during storage failover
Figure 37) Storage controller CPU utilization for linked-clones Login VSI initial login and workload during storage failover
Figure 38) Read/write IOPS for linked-clones Login VSI initial login and workload during storage failover
Figure 39) Read/write ratio for linked-clones Login VSI initial login and workload during storage failover
Figure 40) VSImax results for linked-clones Monday morning login and workload
Figure 41) Scatterplot of linked-clones Monday morning login times
Figure 42) Throughput, latency, and IOPS for linked-clones Monday morning login and workload
Figure 43) Storage controller CPU utilization for linked-clones Monday morning login and workload
Figure 44) Read/write IOPS for linked-clones Monday morning login and workload
Figure 45) Read/write ratio for linked-clones Monday morning login and workload
Figure 46) VSImax results for linked-clones Monday morning login and workload during storage failover
Figure 47) Scatterplot of linked-clones Monday morning login times during storage failover
Figure 48) Throughput, latency, and IOPS for linked-clones Monday morning login and workload during storage failover
Figure 49) Storage controller CPU utilization for linked-clones Monday morning login and workload during storage failover
Figure 50) Read/write IOPS for linked-clones Monday morning login and workload during storage failover
Figure 51) Read/write ratio for linked-clones Monday morning login and workload during storage failover
Figure 52) VSImax results for linked-clones Tuesday morning login and workload during storage failover
Figure 53) Scatterplot of linked-clones Tuesday morning login times during storage failover
Figure 54) Throughput, latency, and IOPS for linked-clones Tuesday morning login and workload during storage failover
Figure 55) Storage controller CPU utilization for linked-clones Tuesday morning login and workload during storage failover
Figure 56) Read/write IOPS for linked-clones Tuesday morning login and workload during storage failover
Figure 57) Read/write ratio for linked-clones Tuesday morning login and workload during storage failover
Figure 58) Windows PowerShell commands to refresh all four pools of desktops
Figure 59) Throughput and IOPS for linked-clones refresh operation
Figure 60) Storage controller CPU utilization for linked-clones refresh operation
Figure 61) Read/write IOPS for linked-clones refresh operation
Figure 62) Read/write ratio for linked-clones refresh operation
Figure 63) Throughput and IOPS for linked-clones recompose operation
Figure 64) Storage controller CPU utilization for linked-clones recompose operation
Figure 65) Read/write IOPS for linked-clones recompose operation
Figure 66) Read/write ratio for linked-clones recompose operation

1 Executive Summary

The decision to virtualize desktops affects multiple aspects of an IT organization, including infrastructure and storage requirements, application delivery, end-user devices, and technical support. In addition, correctly architecting, deploying, and managing a virtual desktop infrastructure (VDI) can be challenging because of the large number of solution components in the architecture. Therefore, it is critical to build the solution on industry-proven platforms such as NetApp storage and FlexPod converged infrastructure, along with industry-proven software solutions from VMware. VMware and NetApp provide leading desktop virtualization and storage solutions, respectively, that help customers meet these challenges and gain the numerous benefits of a VDI solution, such as workspace mobility, centralized management, consolidated and secure delivery of data, and device independence.

New products are constantly being introduced that promise to solve all VDI challenges of performance, cost, or complexity. Each new product introduces more choices, complexities, and risks to your business in an already complicated solution. NetApp, founded in 1993, has been delivering enterprise-class storage solutions for virtual desktops since 2006, and it offers real answers to these problems.

One criterion for determining the success of a VDI implementation is end-user experience. The end-user experience must be as good as or better than any previous experience on a physical PC or virtual desktop. The VMware Horizon View desktop virtualization solution delivers an excellent end-user experience and strong performance over LAN, WAN, and extreme WAN through the adaptive technology of the Horizon View PCoIP display protocol. In addition, VMware has repeatedly enhanced the protocol to deliver 3D applications, improve the real-time audio-video experience, and provide improved HTML5 and mobility features for small-form-factor devices. Storage is often the leading cause of end-user performance problems. The NetApp all-flash FAS solution with the FAS8000 platform solves the performance problems commonly found in VDI deployments.

Another determinant of project success is solution cost. The original promise that virtual desktops could save companies endless amounts of money proved incorrect. Storage has often been the most expensive part of the VDI solution, especially when storage efficiency and flash acceleration technologies were lacking. It was also common practice to forgo an assessment. Skipping this critical step meant that companies often overbought or undersized the storage infrastructure, because accurate information is the key to making sound architectural decisions that result in wise IT spending.

NetApp has many technologies that help customers reduce the storage cost of a VDI solution. Technologies such as deduplication, thin provisioning, and compression reduce the total amount of storage required for VDI. The NetApp Virtual Storage Tier (VST), extended with Flash Cache and Flash Pool, helps accelerate the end-user experience while reducing the amount of spinning media required. Storage platforms that scale up and scale out with clustered Data ONTAP help deliver the right architecture to meet the customer's price and performance requirements. NetApp can help achieve the customer's cost and performance goals while providing rich data management features. NetApp customers might pay as little as US$55 per desktop for storage when deploying at scale. This figure includes the cost of hardware, software, and three years of 24/7 premium support with 4-hour parts replacement.

With VMware and NetApp, companies can accelerate the VDI end-user experience by using NetApp all-flash FAS storage for Horizon View. NetApp all-flash FAS storage, powered by the FAS8000 system, is the optimal platform for using high-performing solid-state disks (SSDs) without adding risk to desktop virtualization initiatives.

When a storage failure prevents users from working, that inactivity translates into lost revenue and productivity. That is why what used to be considered a tier 3 or tier 4 application is now critical to business operations. Having a storage system with a robust set of data management and availability features is key to keeping users working, and it lessens the risk to the business. NetApp clustered Data ONTAP has multiple built-in features that improve availability, such as active-active high availability (HA) and nondisruptive operations that seamlessly move data within the storage cluster without user impact.

NetApp also provides the ability to easily increase storage system capacity by simply adding disks or shelves. There is no need to purchase additional controllers in order to add users when additional capacity is required. When the platform requires expansion, additional nodes can be added in a scale-out fashion and managed within the same management framework and interface. Workloads can then be nondisruptively migrated or balanced across the nodes in the cluster without users ever noticing.

1.1 Reference Architecture Objectives

In this reference architecture, NetApp tested VMware Horizon View user and administrator workloads to demonstrate how the NetApp all-flash FAS solution eliminates the most common barriers to virtual desktop adoption. The testing covered common administrative tasks on 2,000 desktops, or the equivalent of 4,000 desktops when tests were performed in a failed-over state. Including tasks such as provisioning, booting, and performing refresh and recompose maintenance activities made it possible to understand time to complete, storage response, and storage utilization. We also included end-user workloads and reviewed how different types of logins (initial, cold, and warm) affected login time and end-user experience. Most of these login and workload scenarios took place not only during normal operations but also during storage failover.

1.2 Solution Overview

The reference architecture is based on VMware vSphere 5.5, VMware Horizon View 5.3.1, and VMware View Composer 5.3.1, which were used to host, provision, and run 2,000 Microsoft Windows 7 virtual desktops. The 2,000 desktops were hosted by a NetApp all-flash FAS8060 storage system running the NetApp Data ONTAP operating system (OS) configured with 36 SSDs. Four Fibre Channel (FC) datastores were presented from the NetApp system to the VMware ESXi hosts for use by the desktops. Host-to-host communication took place over a 10GbE network through the VMware virtual network adapters. Virtual machines (VMs) were used for core infrastructure components such as Active Directory, database servers, and other services.

In all tests, end-user login time, guest response time, and maintenance activity performance were excellent. The NetApp all-flash FAS system performed well, reaching a combined peak of 147,147 IOPS while averaging 50% CPU utilization during most operations. All test categories demonstrated that, based on the 2,000-user workload and maintenance operations, the all-flash FAS8060 system should be capable of doubling the workload to 4,000 users while still being able to fail over in the event of a failure. At a density of 4,000 VMs on an all-flash FAS8060 system with the same I/O profile, storage for VDI might be as low as US$55 per desktop. This figure includes the cost of hardware, software, and three years of 24/7 premium support with 4-hour parts replacement.

Table 1 lists the results obtained during testing.

Table 1) Test results.

Test | Time to Complete | Peak IOPS | Peak Throughput | Average Storage Latency
Provisioning 2,000 desktops | 140 min | 43, | GB/sec | 0.431ms
Boot storm test (VMware vCenter power-on operations) | 6 min, 50 sec | 147, | GB/sec | 14.50ms
Boot storm test (50 VMware Horizon View concurrent power-on operations) | 10 min, 3 sec | 98, | GB/sec | 2.6ms
Boot storm during failover | 10 min, 7 sec | 90, | GB/sec | 23.40ms
Login VSI initial login and workload | 23 sec/VM | 69, | GB/sec | 0.595ms
Login VSI initial login and workload during failover | 25 sec/VM | 62, | GB/sec | 0.712ms
Login VSI Monday morning login and workload | 8.1 sec/VM | 36, | GB/sec | 0.557ms
Login VSI Monday morning login and workload during failover | 8.5 sec/VM | 39, | GB/sec | 0.657ms
Login VSI Tuesday morning login and workload during failover | 8.1 sec/VM | 31, | GB/sec | 0.698ms
Refresh operation | 45 min | 121, | GB/sec | 1.009ms
Recompose operation | 4 hr, 25 min | 59, | GB/sec | 0.440ms

2 Introduction

This section provides an overview of the NetApp all-flash FAS solution for Horizon View, explains the purpose of this document, and introduces Login VSI.

2.1 Document Overview

This document describes the solution components used in a 2,000-seat VMware Horizon View deployment on a NetApp all-flash FAS reference architecture. It covers the hardware and software used in the validation, the configuration of that hardware and software, the use cases that were tested, and the performance results of the completed tests. During these performance tests, many different scenarios were tested to validate the performance of the storage during the lifecycle of a virtual desktop deployment. The testing included the following scenarios:

- Provisioning 2,000 VMware Horizon View linked-clone desktops
- Boot storm test of 2,000 desktops (with and without storage failover)
- Login VSI initial login and steady-state workload (with and without storage failover)
- Monday morning login and steady-state workload with Login VSI 4.1 RC3 (with and without storage failover)
- Tuesday morning login and steady-state workload with Login VSI 4.1 RC3 (with storage failover)
- Refresh operation on 2,000 desktops
- Recompose operation on 2,000 desktops

Note: In this document, Login VSI 4.1 RC3 is referred to as Login VSI 4.1.

Storage performance and end-user acceptance were the main focus of the testing. If a bottleneck occurred within any component of the infrastructure, it was identified and remediated if possible. There were multiple exceptions to this rule: the execution of certain tests (such as provisioning, refresh, and recompose) was limited by the software and not by the storage. This was evident because no component in the infrastructure became the bottleneck. In addition, other reference architectures in the industry achieved identical results with no bottleneck in the storage systems or other components.
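Two of the scenarios listed above are boot storms, driven either by vCenter power-on operations or by Horizon View powering on 50 desktops concurrently. As a rough illustration of the vCenter-driven method, the following PowerCLI sketch powers on every desktop VM in the Desktops cluster. It is illustrative rather than the exact test harness; the cluster name and VM naming convention are taken from the scripts shown later in this document.

# Illustrative PowerCLI sketch of a vCenter-driven boot storm (not the exact test harness).
# Assumes the desktop VMs live in a cluster named "Desktops" and are named vdi*.
Connect-VIServer vc1.ra.rtp.netapp.com

# Power on every powered-off desktop asynchronously so that vCenter issues the
# power-on tasks as quickly as it can accept them.
Get-Cluster Desktops | Get-VM vdi* |
    Where-Object { $_.PowerState -eq 'PoweredOff' } |
    Start-VM -RunAsync -Confirm:$false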

2.2 NetApp All-Flash FAS Overview

Built on more than 20 years of innovation, Data ONTAP has evolved to meet the changing needs of customers and help drive their success. Clustered Data ONTAP provides a rich set of data management features and clustering for scale-out, operational efficiency, and nondisruptive operations, offering customers one of the most compelling value propositions in the industry. The IT landscape is undergoing a fundamental shift to IT as a service, a model that requires a pool of compute, network, and storage resources that can serve a wide range of applications and deliver a wide range of services. Innovations such as clustered Data ONTAP are fueling this revolution.

Outstanding Performance

The NetApp all-flash FAS solution shares the same unified storage architecture, Data ONTAP OS, management interface, rich data services, and advanced feature set as the rest of the fabric-attached storage (FAS) product families. This unique combination of all-flash media with Data ONTAP delivers the consistent low latency and high IOPS of all-flash storage, together with the industry-leading clustered Data ONTAP OS. In addition, it offers proven enterprise availability, reliability, and scalability; storage efficiency proven in thousands of VDI deployments; unified storage with multiprotocol access; advanced data services; and operational agility through tight application integrations.

FAS8000 Technical Specifications

Table 2 provides the technical specifications for the four FAS8000 series models: FAS8080 EX, FAS8060, FAS8040, and FAS8020.

Note: All data in Table 2 applies to active-active dual-controller configurations.

Table 2) FAS8000 storage system technical specifications.

Feature | FAS8080 EX | FAS8060 | FAS8040 | FAS8020
Maximum raw capacity | 5760TB | 4800TB | 2880TB | 1920TB
Maximum number of drives | 1,440 | 1,200 | 720 | 480
Controller form factor | Two 6U chassis, each with 1 controller and an IOXM | Single-enclosure HA; 2 controllers in single 6U chassis | Single-enclosure HA; 2 controllers in single 6U chassis | Single-enclosure HA; 2 controllers in single 3U chassis
Memory | 256GB | 128GB | 64GB | 48GB
Maximum Flash Cache | 24TB | 8TB | 4TB | 3TB
Maximum Flash Pool | 36TB | 18TB | 12TB | 6TB
Combined flash total | 36TB | 18TB | 12TB | 6TB
NVRAM | 32GB | 16GB | 16GB | 8GB
Optical SAS support | Yes | Yes | Yes | Yes

Storage networking supported (all models): FC, FCoE, iSCSI, NFS, pNFS, CIFS/SMB, HTTP, FTP

OS version: Data ONTAP 8.2.2 RC1 or later (FAS8080 EX); Data ONTAP 8.2.1 RC2 or later (FAS8060, FAS8040, FAS8020)

All four models also provide PCIe expansion slots and onboard I/O: UTA2 (10GbE/FCoE, 16Gb FC), 10GbE, GbE, and 6Gb SAS ports.

Scale-Out

Data centers require agility. In a data center, each storage controller has limits on CPU, memory, and disk shelves. Scale-out means that as the storage environment grows, additional controllers can be added seamlessly to the resource pool residing on a shared storage infrastructure. Host and client connections as well as datastores can be moved seamlessly and nondisruptively anywhere within the resource pool.

The benefits of scale-out include:

- Nondisruptive operations
- The ability to keep adding thousands of users to the virtual desktop environment without downtime
- Operational simplicity and flexibility

As Figure 1 shows, clustered Data ONTAP offers a way to meet the scalability requirements of a storage environment. A clustered Data ONTAP system can scale up to 24 nodes, depending on platform and protocol, and can contain different disk types and controller models in the same storage cluster.

Figure 1) Clustered Data ONTAP.

Note: The storage virtual machine (SVM) referred to in Figure 1 was formerly known as Vserver.

Nondisruptive Operations

The move to shared infrastructure has made it nearly impossible to schedule downtime for routine maintenance. NetApp clustered Data ONTAP is designed to eliminate the planned downtime needed for maintenance and lifecycle operations, as well as the unplanned downtime caused by hardware and software failures. Three standard tools make this elimination of downtime possible:

- DataMotion for Volumes (vol move) allows you to move data volumes from one aggregate to another on the same or a different cluster node.
- Logical interface (LIF) migrate allows you to virtualize the physical Ethernet interfaces in clustered Data ONTAP. LIF migrate lets you move LIFs from one network port to another on the same or a different cluster node.
- Aggregate relocate (ARL) allows you to transfer complete aggregates from one controller in an HA pair to the other without data movement.

Used individually and in combination, these tools make it possible to nondisruptively perform a full range of operations, from moving a volume from a faster to a slower disk tier all the way up to a complete controller and storage technology refresh. As storage nodes are added to the system, all physical resources (CPUs, cache memory, network I/O bandwidth, and disk I/O bandwidth) can easily be kept in balance. Clustered Data ONTAP systems enable users to:

- Add or remove storage shelves (over 23PB in an 8-node cluster and up to 69PB in a 24-node cluster)
- Move data between storage controllers and tiers of storage without disrupting users and applications
- Dynamically assign, promote, and retire storage while providing continuous access to data as administrators upgrade or replace storage

These capabilities allow administrators to increase capacity while balancing workloads, and they can reduce or eliminate storage I/O hot spots without the need to remount shares, modify client settings, or stop running applications.
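As an illustration of how the first of these operations can be scripted, the following sketch uses the NetApp Data ONTAP PowerShell Toolkit (listed in Table 6) to move a desktop datastore volume to a different aggregate while the desktops stay online. This is a minimal sketch, not part of the validated configuration: the cluster management LIF, SVM, volume, and aggregate names are hypothetical, and the cmdlet and parameter names should be verified against the toolkit version in use.

# Minimal sketch (assumed names): nondisruptively move a datastore volume to
# another node's aggregate with the Data ONTAP PowerShell Toolkit.
Import-Module DataONTAP

# Connect to the cluster management LIF (hypothetical name and credentials)
Connect-NcController -Name cluster1-mgmt -Credential (Get-Credential)

# Start the volume move; desktops keep running while data is copied.
# Cmdlet and parameter names are assumptions; verify against your toolkit version.
Start-NcVolMove -Name vdi01n01 -VserverContext vdi -DestinationAggregate aggr_ssd_02

# Check progress of active volume moves until the operation completes
Get-NcVolMove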

Availability

Shared storage infrastructure provides services to thousands of virtual desktops, and in such environments downtime is not an option. The NetApp all-flash FAS solution eliminates sources of downtime and protects critical data against disaster through two key features:

- High availability (HA). A NetApp HA pair provides seamless failover to its partner in case of a hardware failure. Each of the two identical storage controllers in the HA pair serves data independently during normal operation. During an individual storage controller failure, the data service process is transferred from the failed storage controller to the surviving partner.
- RAID-DP. During any virtualized desktop deployment, data protection is critical, because a RAID failure might disconnect hundreds to thousands of end users from their desktops, resulting in lost productivity. RAID-DP provides performance comparable to that of RAID 10, yet it requires fewer disks to achieve equivalent protection. RAID-DP protects against double disk failure, in contrast to RAID 5, which can protect against only one disk failure per RAID group, in effect providing RAID 10 performance and protection at a RAID 5 price point.

Optimized Writes

The NetApp WAFL (Write Anywhere File Layout) file system enables NetApp to process writes efficiently. When the Data ONTAP OS receives an I/O, it stores the I/O in battery-backed NVRAM and sends back an acknowledgement (ACK), notifying the sender that the write is committed. Acknowledging the write before writing to disk allows Data ONTAP to perform many functions to optimize the data layout and coalesce writes. Before being written to disk, I/Os are coalesced into larger blocks, because larger sequential blocks require less CPU for each operation.

Enhancing Flash

Data ONTAP has been leveraging flash technologies since 2009 and has supported SSDs since 2010. This relatively long experience with SSDs has allowed NetApp to tune Data ONTAP features to optimize SSD performance and enhance flash media endurance.

As described in the previous sections, because Data ONTAP acknowledges writes after they are in DRAM and logged to NVRAM, SSDs are not in the critical write path. Therefore, write latencies are very low. Data ONTAP also enables efficient use of SSDs when destaging cache by coalescing writes into a single sequential stripe across all SSDs at once. Data ONTAP writes to free space whenever possible, minimizing overwrites for every dataset, not only for deduplicated or compressed data.

This wear-leveling behavior of Data ONTAP is native to the architecture, and it also leverages the wear-leveling and garbage-collection algorithms built into the SSDs to extend the life of the devices. Therefore, NetApp provides up to a five-year warranty with all SSDs (a three-year standard warranty plus an optional two-year extended warranty, with no restrictions on the number of drive writes).

The parallelism built into Data ONTAP, combined with the multicore CPUs and large system memories in the FAS8000 storage controllers, takes full advantage of SSD performance and powered the test results described in this document.

Advanced Data Management Capabilities

This section describes the storage efficiencies, multiprotocol support, VMware integrations, and replication capabilities of the NetApp all-flash FAS solution.

Storage Efficiencies

Most desktop virtualization implementations deploy thousands of desktops from a small number of golden VM images, resulting in large amounts of duplicate data. This is especially true of the VM operating system.

The NetApp all-flash FAS solution includes built-in thin provisioning, data deduplication, compression, and zero-cost cloning with FlexClone technology, offering multilevel storage efficiency across virtual desktop data, installed applications, and user data. This comprehensive storage efficiency enables a significantly reduced storage footprint for virtualized desktop implementations, with a capacity reduction of up to 10:1, or 90% (based on existing customer deployments and NetApp solutions lab validation). Three features make this storage efficiency possible:

- Thin provisioning allows multiple applications to share a single pool of on-demand storage, eliminating the need to provision more storage for one application while another application still has plenty of allocated but unused storage.
- Deduplication saves space on primary storage by removing redundant copies of blocks in a volume that hosts hundreds of virtual desktops. This process is transparent to the application and the user, and it can be enabled and disabled on the fly. To eliminate any potential concern that postprocess deduplication causes additional wear on the SSDs, NetApp provides up to a five-year warranty with all SSDs (a three-year standard warranty plus an optional two-year extended warranty, with no restrictions on the number of drive writes).
- FlexClone offers hardware-assisted rapid creation of space-efficient, writable, point-in-time images of individual VM files, LUNs, or flexible volumes. It is fully integrated with VMware vSphere vStorage APIs for Array Integration (VAAI) and Microsoft Offloaded Data Transfer (ODX). The use of FlexClone technology in VDI deployments provides high levels of scalability and significant cost, space, and time savings. Both file-level and volume-level cloning are tightly integrated with VMware vCenter Server through the NetApp VSC Provisioning and Cloning vCenter plug-in and native VM cloning offload with VMware VAAI and Microsoft ODX. The VSC provides the flexibility to rapidly provision and redeploy thousands of VMs, with hundreds of VMs in each datastore.

Multiprotocol Support

By supporting all common NAS and SAN protocols on a single platform, NetApp unified storage enables:

- Direct access to storage by each client
- Network file sharing across different platforms without the need for protocol-emulation products such as SAMBA, NFS Maestro, or PC-NFS
- Simple and fast data storage and data access for all client systems
- Fewer storage systems
- Greater efficiency from each system deployed

Clustered Data ONTAP can support several protocols concurrently in the same storage system; Data ONTAP 7G and 7-Mode versions also support multiple protocols. Unified storage is important to VMware Horizon View solutions, for example, CIFS/SMB for user data, NFS or SAN for the VM datastores, and guest-connected iSCSI LUNs for Windows applications. The following protocols are supported:

- NFS v3, v4, and v4.1, including pNFS
- iSCSI
- FC
- Fibre Channel over Ethernet (FCoE)
- CIFS

VMware Integrations

The complexity of deploying and managing thousands of virtual desktops could be daunting without the right tools. NetApp Virtual Storage Console (VSC) for VMware vSphere is tightly integrated with VMware vCenter for rapidly provisioning, managing, configuring, and backing up a VMware Horizon View implementation.

NetApp VSC significantly increases operational efficiency and agility by simplifying the deployment and management process for thousands of virtual desktops. The following plug-ins and software features simplify deployment and administration of virtual desktop environments:

- The NetApp VSC Provisioning and Cloning plug-in enables customers to rapidly provision, manage, import, and reclaim the space of thinly provisioned VMs and to redeploy thousands of VMs.
- The NetApp VSC Backup and Recovery plug-in integrates VMware snapshot functionality with NetApp Snapshot functionality to protect VMware Horizon View environments.

Replication

The NetApp Backup and Recovery plug-in for Virtual Storage Console (VSC) is a unique, scalable, integrated data protection solution for persistent-desktop VMware Horizon View environments. The Backup and Recovery plug-in allows customers to combine VMware snapshot functionality with NetApp array-based, block-level Snapshot copies to provide consistent backups for the virtual desktops. The plug-in is integrated with NetApp SnapMirror replication technology, which preserves the deduplicated storage savings from the source to the destination storage array, so deduplication does not need to be rerun on the destination array. When a VMware Horizon View environment is replicated with SnapMirror, the replicated data can quickly be brought online to provide production access during a site or data center outage. In addition, SnapMirror is fully integrated with VMware Site Recovery Manager (SRM) and NetApp FlexClone technology to instantly create zero-cost writable copies of the replicated virtual desktops at the remote site that can be used for disaster recovery (DR) testing or for test and development work.

2.3 VMware Horizon View

VMware Horizon View is an enterprise-class desktop virtualization solution that delivers virtualized or remote desktops and applications to end users through a single platform. Horizon View allows IT to manage desktops, applications, and data centrally while increasing flexibility and customization at the endpoint for the user. It enables levels of availability and agility of desktop services unmatched by traditional PCs, at about half the total cost of ownership (TCO) per desktop. Horizon View is a tightly integrated, end-to-end solution built on the industry-leading virtualization platform, VMware vSphere.

Figure 2 provides an architectural overview of a Horizon View deployment, which includes seven main components:

- View Connection Server streamlines the management, provisioning, and deployment of virtual desktops by acting as a broker for client connections, authenticating and directing incoming user desktop requests. Administrators can centrally manage thousands of virtual desktops from a single console, and end users connect through View Connection Server to securely and easily access their personalized virtual desktops.
- View Security Server is an instance of View Connection Server that adds an additional layer of security between the Internet and the internal network.
- View Composer Server is an optional feature that allows you to manage pools of linked-clone desktops by creating master images that share a common virtual disk.
- View Agent service communicates between VMs and Horizon Client. View Agent is installed on all VMs managed by vCenter Server so that View Connection Server can communicate with them. View Agent also provides features such as connection monitoring, virtual printing, persona management, and access to locally connected USB devices. View Agent is installed in the guest OS.
- Horizon Clients can be installed on each endpoint device to enable end users to access their virtual desktops from devices such as zero clients, thin clients, Windows PCs, Mac computers, and iOS-based and Android-based mobile devices. Horizon Clients are available for Windows, Mac, Ubuntu Linux, iOS, and Android to provide the connection to remote desktops from the device of choice.

- View Persona Management is an optional feature that provides persistent, dynamic user profiles across user sessions on different desktops. This capability allows you to deploy pools of stateless, floating desktops while enabling users to maintain their designated settings between sessions. User profile data is downloaded as needed to speed up login and logout time, and new user settings are automatically sent to the user profile repository during desktop use.
- ThinApp is an optional software component included with Horizon that creates virtualized applications.

Figure 2) Horizon View deployment (graphic supplied by VMware).

The following sections describe the Horizon View components used in this reference architecture: Horizon View Connection Server and Horizon View Composer.

Horizon View Connection Server

VMware Horizon View Connection Server is responsible for provisioning and managing virtual desktops and for brokering the connections between clients and the virtual desktop machines. A single Connection Server instance can support up to 2,000 simultaneous connections, and five Connection Server instances can work together to support up to 10,000 virtual desktops. For increased availability, View supports using two additional Connection Server instances as standby servers. The Connection Server can optionally log events to a centralized database running either Oracle Database or Microsoft SQL Server.

Table 3 lists the components of the VMware Horizon View Connection Server VM configuration.

Note: Only one Horizon View Connection Server was used in this reference architecture. This decision created a single point of failure but provided better control during testing. Production deployments should use multiple View Connection Servers to provide broker availability.

Table 3) VMware Horizon View Connection VM configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 4 vCPUs
- Memory: 10GB
- Network adapter type: VMXNET3
- Network adapters: 2
- Hard disk size: 60GB
- Hard disk type: Thin

Horizon View Composer

The VMware Horizon View Composer server is a critical component of solutions that use VMware Horizon View linked clones. This server is responsible for the creation and maintenance operations of VMware Horizon View linked clones. It works with the View Connection Server to rapidly provision storage-efficient virtual desktops for use in the VMware Horizon View desktop environment. The linked-clone desktops created by the Composer can be either dedicated or floating virtual desktops in an automated pool. (For this reference architecture, dedicated desktops in an automated pool were created.) The Composer server is also involved during maintenance operations, such as refresh, recompose, and rebalance. These operations improve the storage efficiency, performance, security, and compliance of the virtual desktop environment. Figure 3 shows a VMware Horizon View linked clone created with VMware View Composer.

Figure 3) VMware View linked clone using View Composer.

The Composer server can be installed on the VMware vCenter Server or as a standalone server, excluding any servers participating in the VMware Horizon View environment, such as the Connection Server, the transfer server, the security server, and so on. (For this reference architecture, the Composer server was installed on a separate VM.) Table 4 lists the components of the Horizon View Composer VM configuration.

Table 4) Horizon View Composer VM configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 4 vCPUs
- Memory: 8GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 60GB
- Hard disk type: Thin

As Figure 4 shows, for these tests we increased the maximum number of concurrent View Composer maintenance operations and the maximum number of concurrent provisioning operations to 30.

Figure 4) View concurrent operation limits.

2.4 Login VSI

Login Virtual Session Indexer (Login VSI) is the industry-standard load-testing tool for testing the performance and scalability of centralized Windows desktop environments such as server-based computing (SBC) and virtual desktop infrastructure (VDI). Login VSI is used for testing and benchmarking by all major hardware and software vendors and is recommended by both leading IT analysts and the technical community. Login VSI is vendor independent and works with standardized user workloads; therefore, conclusions based on Login VSI test data are objective, verifiable, and replicable.

SBC-oriented and VDI-oriented vendor organizations that are committed to enhancing the end-user experience in the most efficient way use Login VSI as an objective method of testing, benchmarking, and improving the performance and scalability of their solutions. VSImax provides vendor-independent, industry-standard, and easy-to-understand proof that technology vendors can use to demonstrate the power, scalability, and gains of their solutions. Login VSI-based test results are published in technical white papers and presented at conferences. Login VSI is used by end-user organizations, system integrators, hosting providers, and testing companies. It is also the standard tool used in all tests executed in the internationally acclaimed Project Virtual Reality Check. For more information about Login VSI or for a free test license, refer to the Login VSI website.

3 Solution Infrastructure

This section describes the software and hardware components of the solution. Figure 5 shows the solution infrastructure.

Figure 5) Solution infrastructure.

3.1 Hardware Infrastructure

During solution testing, 24 Cisco Unified Computing System (Cisco UCS) blade servers were used to host the infrastructure and the desktop VMs. The desktops and the infrastructure servers were hosted on discrete resources so that the workload to the NetApp all-flash FAS system could be precisely measured. It is a NetApp and industry best practice to separate the desktop VMs from the infrastructure VMs, because "noisy neighbor" or "bully" virtual desktops can affect the infrastructure, which in turn can have a negative impact on all users, applications, and performance results. A separate NetApp FAS storage system (not shown) was used to host the infrastructure and launcher VMs as well as the boot LUNs of the desktop hosts. This configuration is typical for a customer environment.
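This separation of desktop, infrastructure, and launcher workloads onto their own clusters can be sanity-checked from PowerCLI. The sketch below simply counts hosts and VMs per cluster; the cluster name Desktops comes from the scripts later in this document, while the other two cluster names are hypothetical placeholders for whatever names are used in a given deployment.

# Illustrative check that desktop, infrastructure, and launcher VMs stay on their
# own clusters. "Desktops" is used elsewhere in this document; the other cluster
# names are assumptions for this sketch.
Connect-VIServer vc1.ra.rtp.netapp.com

foreach ($clusterName in 'Desktops', 'Infrastructure', 'Launchers') {
    $cluster = Get-Cluster -Name $clusterName
    [pscustomobject]@{
        Cluster = $cluster.Name
        Hosts   = ($cluster | Get-VMHost).Count
        VMs     = ($cluster | Get-VM).Count
    }
}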

Table 5 lists the hardware specifications of each server category.

Table 5) Hardware components of server categories.

Infrastructure Servers
- Server quantity: 2 Cisco UCS B200 M3 blade servers
- CPU model: Intel Xeon CPU E5-2650 v2 at 2.60GHz (8-core)
- Total number of cores: 16 cores
- Memory per server: 256GB
- Storage: One 10GB boot LUN per host

Desktop Servers
- Server quantity: 16 Cisco UCS B200 M3 blade servers
- CPU model: Intel Xeon CPU E5-2680 v2 at 2.80GHz (10-core)
- Total number of cores: 20 cores
- Memory per server: 256GB
- Storage: One 10GB boot LUN per host

Launcher Servers
- Server quantity: 6 Cisco UCS B200 M3 blade servers
- CPU model: Intel Xeon CPU E5-2650 at 2.00GHz (8-core)
- Total number of cores: 16 cores
- Memory per server: 192GB
- Storage: One 10GB boot LUN per host

Networking
- Networking switch: 2 Cisco Nexus 5548UP

Storage
- NetApp system: FAS8060 HA pair
- Disk shelf: 2 DS2246
- Disk drives: 36 SSDs

3.2 Software Components

This section describes the purpose of each software product used to test the NetApp all-flash FAS system and provides configuration details. Table 6 lists the software components and identifies the version of each component.

Table 6) Solution software components.

NetApp FAS
- Clustered Data ONTAP
- NetApp Windows PowerShell Toolkit
- NetApp System Manager RC1
- NetApp Virtual Storage Console (VSC) 5.0
- Storage protocol: Fibre Channel

Networking
- Cisco Nexus 5548UP: NX-OS software release 7.0(0)N1(1)

VMware Software
- VMware ESXi 5.5.0
- VMware vCenter Server 5.5.0
- VMware Horizon View Administrator 5.3.1
- VMware View Composer 5.3.1
- VMware Horizon View Client 2.3.3
- VMware Horizon View Agent 5.3.1
- VMware vSphere PowerCLI 5.5.0, 5836

Workload Generation Utility
- Login VSI Professional: Login VSI 4.1 RC3

Database Server
- Microsoft SQL Server 2008 R2 (64-bit)
- Microsoft SQL Server Native Client 11.0 (64-bit)

3.3 VMware vSphere 5.5

This section describes the VMware vSphere components of the solution.

VMware ESXi 5.5

The tested reference architecture used VMware ESXi 5.5 across all servers. For hardware configuration information, refer to Table 5.

VMware vCenter 5.5 Configuration

The tested reference architecture used VMware vCenter Server 5.5 running on a Windows Server 2008 R2 server. This vCenter Server instance was configured to host the infrastructure cluster, the Login VSI launcher cluster, and the desktop clusters. For the vCenter Server database, a Windows Server 2008 R2 VM was configured with Microsoft SQL Server 2008 R2. Table 7 lists the components of the VMware vCenter Server VM configuration, and Table 8 lists the components of the Microsoft SQL Server database VM configuration.

Table 7) VMware vCenter Server VM configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 8
- vCPU: 4 vCPUs
- Memory: 8GB
- Network adapter type: VMXNET3
- Network adapters: 2
- Hard disk size: 60GB
- Hard disk type: Thin

Table 8) Microsoft SQL Server database VM configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 8
- vCPU: 2 vCPUs
- Memory: 4GB
- Network adapter type: VMXNET3
- Network adapters: 2
- Hard disk size: 60GB
- Hard disk type: Thin

3.4 NetApp Virtual Storage Console

The NetApp Virtual Storage Console (VSC) is a management plug-in for VMware vCenter Server that enables simplified management and orchestration of common NetApp administrative tasks. This tested reference architecture used the VSC for the following tasks:

- Setting NetApp best practices for ESXi hosts (timeout values, host bus adapter [HBA], multipath input/output [MPIO], and Network File System [NFS] settings)
- Provisioning datastores
- Cloning infrastructure VMs and Login VSI launcher machines

The VSC can be coinstalled on the VMware vCenter Server instance when the Windows version of vCenter is used. For this reference architecture, a separate server was used to host the VSC.
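The first of these tasks, applying NetApp best-practice settings to the ESXi hosts, can be spot-checked from PowerCLI after VSC has configured the hosts. The sketch below reads a few representative ESXi advanced settings; it is illustrative only, and the authoritative list of settings and values that VSC applies should be taken from the VSC documentation for the release in use.

# Illustrative sketch: report selected ESXi advanced settings that storage best
# practices commonly adjust. The setting names below are examples, not the
# complete or authoritative list applied by VSC.
Connect-VIServer vc1.ra.rtp.netapp.com

$settingNames = 'Disk.QFullSampleSize', 'Disk.QFullThreshold', 'NFS.MaxVolumes'

foreach ($vmhost in (Get-Cluster Desktops | Get-VMHost)) {
    foreach ($name in $settingNames) {
        Get-AdvancedSetting -Entity $vmhost -Name $name |
            Select-Object @{N='Host';E={$vmhost.Name}}, Name, Value
    }
}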

Table 9 lists the components of the tested NetApp VSC VM configuration.

Table 9) NetApp VSC VM configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 2 vCPUs
- Memory: 4GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 60GB
- Hard disk type: Thin

3.5 Virtual Desktops

The desktop VM template was created with the virtual hardware and software listed in Table 10. The VM hardware and software were installed and configured according to the Login VSI documentation.

Table 10) Virtual desktop configuration.

Desktop VM
- VM quantity: 2,000
- VM hardware version: 10
- vCPU: 1 vCPU
- Memory: 2GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 24GB
- Hard disk type: Thin

Desktop Software
- Guest OS: Microsoft Windows 7 (32-bit)
- VM hardware version: ESXi 5.5 and later (VM version 10)
- VMware Tools version: 9344 (default for VMware ESXi 5.5.0)
- Microsoft Office 2010
- Microsoft .NET Framework 3.5
- Adobe Acrobat Reader
- Adobe Flash Player
- Java
- Doro PDF 1.82
- VMware Horizon View Agent
- Login VSI target software 4.1

After the desktops were provisioned, Windows PowerShell was used to set uuid.action in the vmx file of each VM in the desktop datastores so that no questions would be asked about VM movements during testing. Figure 6 shows the complete command.

Figure 6) Setting the uuid.action in the vmx file with Windows PowerShell.

Get-Cluster Desktops | Get-VM | Get-AdvancedSetting -Name uuid.action | Set-AdvancedSetting -Value "keep" -Confirm:$false

Guest Optimization

In keeping with VMware Horizon View best practices, guest OS optimizations were applied to the template VMs used in this reference architecture. Figure 7 shows the VMware OS Optimization Tool that was used to perform the guest optimizations.

Figure 7) VMware OS Optimization tool.

Although it might be possible to run desktops without guest optimizations, the impact of not optimizing must first be understood. Many recommended optimizations address services and features (such as hibernation, Windows Update, or System Restore) that do not provide value in a virtual desktop environment. Running services and features that do not add value would decrease the overall density of the solution and increase cost, because they would consume CPU, memory, and storage resources in relation to both capacity and I/O.

To achieve the most scalable, highest-performing, and most cost-effective virtual desktop deployment, NetApp recommends that each customer evaluate the optimization scripts for Horizon View and apply them based on need. The VMware Horizon View Optimization Guide for Windows 7 and Windows 8 describes the guest OS optimization process, from installing Windows 7 to preparing the VM for deployment.

3.6 Login VSI Server

The Login VSI Server is where the Login VSI binaries run; it also hosts the Windows share that contains the user data, binaries, and workload results. The tested machine was configured with the virtual hardware listed in Table 11.

Table 11) Login VSI Server configuration.

- VM quantity: 1
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 4 vCPUs
- Memory: 8GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 60GB
- Hard disk type: Thin

Figure 8 shows the Login VSI launcher configuration.

Figure 8) Login VSI launcher configuration.

3.7 Login VSI Launcher VM

Table 12 lists the components of the Login VSI launcher VM configuration.

Table 12) Login VSI launcher VM configuration.

- VM quantity: 80
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 2 vCPUs
- Memory: 4GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 60GB
- Hard disk type: Thin

3.8 Microsoft Windows Infrastructure VM

In the tested configuration, two VMs were provisioned and configured to provide Active Directory, Domain Name System (DNS), and Dynamic Host Configuration Protocol (DHCP) services for the reference architecture. These servers provided the services to both the infrastructure and the desktop VMs. Table 13 lists the components of the Microsoft Windows infrastructure VM.

Table 13) Microsoft Windows infrastructure VM.

- VM quantity: 2
- OS: Microsoft Windows Server 2008 R2 (64-bit)
- VM hardware version: 10
- vCPU: 2 vCPUs
- Memory: 4GB
- Network adapter type: VMXNET3
- Network adapters: 1
- Hard disk size: 60GB
- Hard disk type: Thin

4 Storage Design

This section provides an overview of the storage design, the aggregate and volume layout, and the VSC.

4.1 Storage Design Overview

For this configuration, shown in Figure 9, we used a 6U FAS8060 controller and two DS2246 disk shelves at 2U per shelf, for a total of 10U. Note that the image in Figure 9 is a logical view, because both nodes reside in one 6U enclosure; the diagram illustrates multipath HA.

Figure 9) Multipath HA to DS2246 shelves of SSD.

4.2 Aggregate Layout

In this reference architecture, we used 36 SSDs divided across the two nodes of a FAS8060 controller. As shown in Figure 10, each node had a 2-disk root aggregate, a 15-disk data aggregate, and one spare.
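The arithmetic behind this layout is restated in the short sketch below: each node carries a root aggregate, a RAID-DP data aggregate, and a spare, which accounts for all 36 SSDs. The sketch issues no storage commands; it only makes the disk counting explicit.

# Worked arithmetic for the tested SSD layout (per node: root aggregate, data
# aggregate, spare). No storage commands are issued; this only restates the math.
$nodes         = 2
$rootAggrDisks = 2    # per node
$dataAggrDisks = 15   # per node, RAID-DP (13 data + 2 parity)
$spareDisks    = 1    # per node

$disksPerNode = $rootAggrDisks + $dataAggrDisks + $spareDisks
$totalDisks   = $disksPerNode * $nodes

"Disks per node: $disksPerNode"   # 18
"Total SSDs:     $totalDisks"     # 36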

Figure 10) SSD layout.

4.3 Volume Layout

To adhere to NetApp best practices, all volumes were provisioned with the NetApp VSC. During these tests, only 1.3TB of the total 8TB was consumed. Figure 11 shows the volume layout.

Figure 11) Volume layout.

Note: A root volume (rootvol) for the VDI storage virtual machine (SVM, formerly known as Vserver) was present but is not depicted in Figure 11. The rootvol volume was 1GB in size, with 28MB consumed.

4.4 NetApp Virtual Storage Console for VMware vSphere

The NetApp VSC was used to provision the datastores in this reference architecture.
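After VSC has provisioned and mounted the datastores, the consumed capacity described above can be cross-checked from PowerCLI. The sketch below is illustrative; the datastore name pattern follows the vdi0#n0# convention used later in this document and should be adjusted to match the environment.

# Illustrative sketch: summarize capacity and free space on the VDI datastores.
# Datastore names assume the vdi0#n0# naming convention used elsewhere in this document.
Connect-VIServer vc1.ra.rtp.netapp.com

Get-Datastore -Name vdi* |
    Sort-Object Name |
    Select-Object Name, CapacityGB, FreeSpaceGB,
        @{N='UsedGB'; E={[math]::Round($_.CapacityGB - $_.FreeSpaceGB, 1)}}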

5 Network Design

Figure 12 shows the network topology linking the NetApp all-flash FAS8060 switchless two-node cluster to the Intel x86 servers hosting the VDI VMs.

Figure 12) Network topology of storage to server.

5.1 Network Switching

Two Cisco Nexus 5548UP switches running NX-OS software release 7.0(0)N1(1) were used in this validation. These switches were chosen because of their ability to switch both IP Ethernet and FC/FCoE on one platform. FC zoning was done in these switches, and two SAN switching fabrics (A and B) were maintained. From an Ethernet perspective, virtual port channels (vPCs) were used, allowing a port channel from storage to be spread across both switches.

5.2 Host Server Networking

Each host server had an FCoE HBA that provided two 10Gb converged Ethernet ports carrying both FCoE for FC networking and Ethernet for IP networking. FCoE from the host servers was used both for FC SAN boot of the servers and for accessing the FC VM datastores on the NetApp FAS8060 system. From an Ethernet perspective, each VMware ESXi host had a dedicated vSwitch with both Ethernet ports configured as active and with source MAC hashing.

5.3 Storage Networking

Each of the two NetApp FAS8060 storage controllers had a two-port interface group (LACP port channel) connected to a vPC across the two Cisco Nexus 5548UP switches. These switches carried both the Ethernet and the FC traffic. In addition, four 8Gb/sec FC targets were configured from each FAS8060 controller, with two going to each switch. Asymmetric Logical Unit Access (ALUA) was used to provide multipathing and load balancing of the FC links. This configuration allowed each of the two storage controllers to provide up to 32Gb/sec of aggregate FC bandwidth. Initiator groups were also configured on the FAS8060 system to map the datastore LUNs to the ESXi host servers.
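LUN masking of this kind can also be scripted. The following sketch uses the NetApp Data ONTAP PowerShell Toolkit to create an FC initiator group for one ESXi host and map a datastore LUN to it. It is a minimal sketch under stated assumptions: the SVM, igroup, WWPNs, and LUN path are hypothetical, and the cmdlet parameters should be checked against the toolkit version in use.

# Minimal sketch (assumed names and example WWPNs): create an FC igroup for one
# ESXi host and map a datastore LUN to it with the Data ONTAP PowerShell Toolkit.
Import-Module DataONTAP
Connect-NcController -Name cluster1-mgmt -Credential (Get-Credential)

# Igroup of type vmware for the host's two FC initiators (example WWPNs)
New-NcIgroup -Name esx01_fc -Protocol fcp -Type vmware -VserverContext vdi
Add-NcIgroupInitiator -Name esx01_fc -Initiator 20:00:00:25:b5:00:0a:01 -VserverContext vdi
Add-NcIgroupInitiator -Name esx01_fc -Initiator 20:00:00:25:b5:00:0b:01 -VserverContext vdi

# Map the datastore LUN so that the host can see it
Add-NcLunMap -Path /vol/vdi01n01/vdi01n01 -InitiatorGroup esx01_fc -VserverContext vdi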

6 Horizon View Design

This section provides an overview of the VMware Horizon View design and explains user assignment, automated desktop pools, linked-clones desktops, and the creation of desktop pools.

6.1 Overview

In a typical large-scale virtual desktop deployment, the limits of a VMware Horizon View Connection Server can be reached, because each Connection Server instance supports up to 2,000 simultaneous connections. When this occurs, it is necessary to add more Connection Server instances and to build additional VMware Horizon View desktop infrastructures to support additional virtual desktops. Each such desktop infrastructure is referred to as a pool of desktops (POD). A POD is a building-block approach to architecting a solution. The size of the POD is defined by the VMware Horizon View desktop infrastructure (the desktop VMs) plus any additional VMware Horizon View infrastructure resources that are necessary to support it. In some cases, it might be best to design PODs that are smaller than the maximum size, either to allow for growth in each POD or to reduce the size of the fault domain.

Using a POD-based design gives IT a simplified management model and a standardized way to scale linearly and predictably. By using clustered Data ONTAP, customers can also have smaller fault domains, which results in higher availability. In this reference architecture, the number of Horizon View Connection Servers was limited to one, keeping the test within the limits of a single POD-based design; however, the results of the testing show that it might have been possible to deploy multiple PODs on this platform.

VMware Horizon View groups desktops into discrete management units called pools. Policies and entitlements can be set for each pool so that all desktops in a pool have the same provisioning methods, user assignment policies, logout actions, display settings, data redirection settings, data persistence rules, and so forth.

6.2 User Assignment

Each desktop pool can be configured with a different user assignment. User assignments can be either dedicated or floating.

Dedicated Assignment

With dedicated assignment, users log in to the same virtual desktop each time they log in. Dedicated assignment allows users to store data either on a persistent disk (when using linked clones) or locally (when using full clones). These desktops are usually considered and used as persistent desktops; however, it is the act of refreshing or recomposing that makes them nonpersistent. User-to-desktop entitlement can be a manual or an automatic process: the administrator can entitle a given desktop to a user or can allow VMware Horizon View to automatically entitle the user to a desktop at first login.

Floating Assignment

With floating assignment, users are randomly assigned to desktops each time they log in. These desktops are usually considered and used as nonpersistent desktops; however, a user who does not log out of a desktop always returns to the same desktop.

6.3 Automated Desktop Pools

An automated desktop pool dynamically provisions virtual desktops. With this pool type, VMware Horizon View creates a portion of the desktops immediately and then, based on demand, provisions additional desktops up to the limits that were set for the pool. An automated pool can contain dedicated or floating desktops, and these desktops can be full clones or linked clones. A major benefit of using VMware Horizon View with automated pools is that additional desktops are created dynamically on demand. This automation greatly simplifies the repetitive administrative tasks associated with provisioning desktops.

6.4 Linked-Clones Desktops
To the end user, a linked-clones desktop looks and feels like a normal desktop, but it is storage efficient, consuming a fraction of the storage required for a full desktop. Because of the architecture of linked clones, three maintenance operations unique to them can be performed to improve the storage efficiency, performance, and security and compliance of the virtual desktop environment: refresh, recompose, and rebalance.
6.5 Creating VMware Horizon View Desktop Pools
Figure 13 shows how the VMs, pools, and datastores were designed in the tested reference architecture. The design used four pools with 500 VMs per pool. Each node of the NetApp all-flash FAS cluster had two VM datastores, and each pool used one datastore to host both the replica and the OS disks. Using a single datastore for both the replica and the OS disks made it possible to report on the workload of each VM as a whole. Splitting the replica and OS disks into separate datastores would have produced separate results for each, but because both are always required together, reporting on them holistically was more meaningful. Keeping them together is more taxing on the storage during provisioning, because a replica must be created in each datastore, and more storage controller cache is used during steady state.
Figure 13) VMware Horizon View pool and desktop-to-datastore relationship.
The Windows PowerShell script shown in Figure 14 creates four pools named vdi0#n0#. In the tested reference architecture, these four pools were created across the two nodes of the NetApp all-flash FAS cluster. This approach allowed the best parallelism across the storage system. The Login VSI Active Directory group was then entitled to the created pools. This Windows PowerShell script was run from the VMware Horizon View PowerCLI located on the VMware Horizon View server.

Figure 14) Windows PowerShell script to create 4 pools of 500 desktops each.

$vcserver = "vc1.ra.rtp.netapp.com"
$domain = "ra.rtp.netapp.com"
$username = "administrator"
$numvms = "500"
$parentvmpath = "/RA/vm/WIN7SP1"
$parentsnapshotpath = "/view"
$vmfolderpath = "/RA/vm"
$resourcepoolpath = "/RA/host/Desktops/Resources"
$overcommit = "Aggressive"
$persistance = "Persistent"
$OrganizationalUnit = "OU=Computers,OU=LoginVSI"

# Connect to vCenter.
Connect-VIServer $vcserver -Username $username

# Create pools below.
Write-Host "Creating $numvms desktops named vdi01n01- in datastore vdi01n01"
Get-ViewVC -serverName $vcserver | Get-ComposerDomain -domain $domain -username $username | Add-AutomaticLinkedClonePool -pool_id vdi01n01 -displayname vdi01n01 -nameprefix "vdi01n01-{n:fixed=3}" -parentvmpath $parentvmpath -parentsnapshotpath $parentsnapshotpath -vmfolderpath $vmfolderpath -resourcepoolpath $resourcepoolpath -datastorespecs "[$overcommit,OS,data]/RA/host/Desktops/vdi01n01" -HeadroomCount $numvms -UseSeSparseDiskFormat $true -SeSparseThreshold 0 -minimumcount $numvms -maximumcount $numvms -OrganizationalUnit $OrganizationalUnit -UseTempDisk $false -UseUserDataDisk $false -PowerPolicy "AlwaysOn"

Write-Host "Creating $numvms desktops named vdi02n01- in datastore vdi02n01"
Get-ViewVC -serverName $vcserver | Get-ComposerDomain -domain $domain -username $username | Add-AutomaticLinkedClonePool -pool_id vdi02n01 -displayname vdi02n01 -nameprefix "vdi02n01-{n:fixed=3}" -parentvmpath $parentvmpath -parentsnapshotpath $parentsnapshotpath -vmfolderpath $vmfolderpath -resourcepoolpath $resourcepoolpath -datastorespecs "[$overcommit,OS,data]/RA/host/Desktops/vdi02n01" -HeadroomCount $numvms -UseSeSparseDiskFormat $true -SeSparseThreshold 0 -minimumcount $numvms -maximumcount $numvms -OrganizationalUnit $OrganizationalUnit -UseTempDisk $false -UseUserDataDisk $false -PowerPolicy "AlwaysOn"

Write-Host "Creating $numvms desktops named vdi01n02- in datastore vdi01n02"
Get-ViewVC -serverName $vcserver | Get-ComposerDomain -domain $domain -username $username | Add-AutomaticLinkedClonePool -pool_id vdi01n02 -displayname vdi01n02 -nameprefix "vdi01n02-{n:fixed=3}" -parentvmpath $parentvmpath -parentsnapshotpath $parentsnapshotpath -vmfolderpath $vmfolderpath -resourcepoolpath $resourcepoolpath -datastorespecs "[$overcommit,OS,data]/RA/host/Desktops/vdi01n02" -HeadroomCount $numvms -UseSeSparseDiskFormat $true -SeSparseThreshold 0 -minimumcount $numvms -maximumcount $numvms -OrganizationalUnit $OrganizationalUnit -UseTempDisk $false -UseUserDataDisk $false -PowerPolicy "AlwaysOn"

Write-Host "Creating $numvms desktops named vdi02n02- in datastore vdi02n02"
Get-ViewVC -serverName $vcserver | Get-ComposerDomain -domain $domain -username $username | Add-AutomaticLinkedClonePool -pool_id vdi02n02 -displayname vdi02n02 -nameprefix "vdi02n02-{n:fixed=3}" -parentvmpath $parentvmpath -parentsnapshotpath $parentsnapshotpath -vmfolderpath $vmfolderpath -resourcepoolpath $resourcepoolpath -datastorespecs "[$overcommit,OS,data]/RA/host/Desktops/vdi02n02" -HeadroomCount $numvms -UseSeSparseDiskFormat $true -SeSparseThreshold 0 -minimumcount $numvms -maximumcount $numvms -OrganizationalUnit $OrganizationalUnit -UseTempDisk $false -UseUserDataDisk $false -PowerPolicy "AlwaysOn"

# Entitle pools below.
sleep 300
Add-PoolEntitlement -Pool_id vdi01n01 -Sid S
Add-PoolEntitlement -Pool_id vdi02n01 -Sid S
Add-PoolEntitlement -Pool_id vdi01n02 -Sid S
Add-PoolEntitlement -Pool_id vdi02n02 -Sid S

Prerequisites
Before testing began, the following requirements were met:
2,000 users and a group were created in Active Directory by using the Login VSI scripts. (A simplified sketch of this step follows.)
Datastores were created on the NetApp storage by using the NetApp VSC.
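In the tested configuration, the 2,000 test accounts and their group were created with the Active Directory scripts that ship with Login VSI. The sketch below is not those scripts; it is a simplified, hypothetical Windows PowerShell illustration of the same step, and the OU path, account naming convention, and password are placeholders.

Import-Module ActiveDirectory

$ou       = "OU=LoginVSI,DC=ra,DC=rtp,DC=netapp,DC=com"               # placeholder OU
$password = ConvertTo-SecureString "ChangeMe123!" -AsPlainText -Force  # placeholder password

# Group that is later entitled to the four desktop pools.
New-ADGroup -Name "LoginVSI_Users" -GroupScope Global -Path $ou

# 2,000 test users named LoginVSI0001 through LoginVSI2000.
1..2000 | ForEach-Object {
    $name = "LoginVSI{0:D4}" -f $_
    New-ADUser -Name $name -SamAccountName $name `
        -AccountPassword $password -Enabled $true -Path $ou
    Add-ADGroupMember -Identity "LoginVSI_Users" -Members $name
}

The resulting group is what the Add-PoolEntitlement commands in Figure 14 reference by SID.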

7 Login VSI Workload
Login VSI is an industry-standard workload-generation utility for VDI. The Login VSI tool works by replicating a typical user's behaviors. Multiple different workloads can be selected, and the workload can be customized for specific applications and user profiles.
7.1 Login VSI Components
As shown in Figure 15, Login VSI includes multiple components to run and analyze user workloads. The Login VSI server was used to configure the components (such as Active Directory, the user workload profile, and the test profile) and to gather the data. In addition, a CIFS share was created on the Login VSI server to hold the user files that the workload would use. When the test was executed, the Login VSI server logged into the launcher servers, which in turn logged into the target desktops and began the workload.
Figure 15) Login VSI components.
Login VSI Launcher
The tested reference architecture followed the Login VSI best practice of having 25 VMs per launcher server, which works out to 80 launchers for 2,000 desktops. PCoIP was used as the display protocol between the launcher servers and the virtual desktops. Figure 16 shows the relationship between the desktops and the launcher server.

Figure 16) Desktop-to-launcher relationship.
Workload
These tests used the Login VSI 4.1 office worker workload to simulate users working. The office worker workload, which is available in Login VSI 4.1, is a beta workload that is based on a knowledge worker workload. The team from Login VSI recommended using this workload with Login VSI 4.1 because it is very similar to the medium workload in Login VSI 3.7. The applications that were used are listed in Table 10 under the Desktop Software subheading.
8 Testing and Validation: Linked-Clones Desktops
This section describes the testing and validation of linked-clones desktops.
8.1 Overview
During testing, the VMware Horizon View configuration listed in Table 14 was used. As stated previously, a Windows PowerShell script was used for provisioning. The 2,000 desktops were provisioned with the options listed in Table 14.
Table 14) VMware Horizon View configuration options.
Component                               Configuration Option
Pool type                               Automated pool
User assignment                         Dedicated
Enable automatic assignment             Yes
Clone type                              View Composer linked clones
Maximum number of desktops              500 per pool
Number of spare (powered-on) desktops   500 per pool
View Composer disks                     Do not redirect disposable files

Replica disks (separate datastores for replica and OS disks)   No
User data disk                          No
Use View Storage Accelerator            No
Reclaim VM disk space                   No (deselect other options)
Datastore selection                     1 datastore per pool
Storage overcommit                      Aggressive
Customization method                    QuickPrep
Power policy                            Always on
Dedicated Desktops
The reference architecture used dedicated desktops with automated assignment so that any workload issues could easily be pinpointed. This approach also allowed users to be assigned specific desktops and enabled measurement of both the first login, in which the user profile is created (representing either a fresh desktop or a floating desktop), and the second login to the same desktop. The measured behaviors are referred to as Monday and Tuesday morning logins, as referenced in NetApp TR-3949: NetApp and VMware View 5,000-Seat Performance Report.
8.2 Test Results Overview
Table 15 lists the high-level results that were achieved during the reference architecture testing.
Table 15) Test results overview.
Test                                                                       Time to Complete   Peak IOPS   Peak Throughput   Average Storage Latency
Provisioning 2,000 desktops                                                140 min            43,344      1166MB/sec        0.431ms
Boot storm test (VMware vCenter power-on operations)                       6 min, 50 sec      147,147     5.2GB/sec         14.50ms
Boot storm test (VMware Horizon View 50 concurrent power-on operations)    10 min, 3 sec      98,         GB/sec            2.6ms
Boot storm during failover                                                 10 min, 7 sec      90,763      2.1GB/sec         23.40ms
Login VSI initial login and workload                                       23 sec/VM          69,427      1.0GB/sec         0.595ms
Login VSI initial login and workload during failover                       25 sec/VM          62,116      1.0GB/sec         0.712ms
Login VSI Monday morning login and workload                                8.1 sec/VM         36,450      759MB/sec         0.557ms
Login VSI Monday morning login and workload during failover                8.5 sec/VM         39,445      837MB/sec         0.657ms
Login VSI Tuesday morning login and workload during failover               8.1 sec/VM         31,164      597MB/sec         0.698ms

Test                                                                       Time to Complete   Peak IOPS   Peak Throughput   Average Storage Latency
Refresh operation                                                          45 min             121,435     2.2GB/sec         1.009ms
Recompose operation                                                        4 hr, 25 min       59,420      1,468MB/sec       0.440ms
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
8.3 Storage Efficiency
During the tests, deduplication was enabled so that storage efficiency could be measured. On average, a 2:1 deduplication ratio, or 50% deduplication, was observed. Because of the synthetic nature of the data used to perform these tests, these savings are not typical of real-world savings. In addition, although thin provisioning was used for each volume and LUN, thin provisioning is not a storage-reduction technology and therefore was not reported on. Table 16 lists the efficiency results from the testing.
Table 16) Efficiency results.
Point of Measurement                    Total   Used    Dedupe Savings   Dedupe Percent   Ratio
After 2,000 desktops were provisioned   GB      170GB   GB               49%              2:1
8.4 Provisioning 2,000 VMware Horizon View Linked Clones
This section describes test objectives and methodology and provides results from testing the provisioning of 2,000 VMware Horizon View linked clones.
Test Objectives and Methodology
The objective of this test was to determine how long it would take to provision 2,000 VMware Horizon View virtual desktops. This scenario is most applicable to the initial deployment of a new POD or the reprovisioning of an existing environment. To set up for the tests, 2,000 VMware Horizon View native linked clones were created by using a Windows PowerShell script for simplicity and repeatability. Figure 17 shows one line of the script completely filled out to demonstrate what was done for one pool of 500 VMs. Figure 14 (in section 6.5, Creating VMware Horizon View Desktop Pools) contains the entire script that was used to create the pools.
Figure 17) Creating 500 VMs in one pool named vdi01n01.
Get-ViewVC -serverName "vc1.ra.rtp.netapp.com" | Get-ComposerDomain -domain "ra.rtp.netapp.com" -username "administrator" | Add-AutomaticLinkedClonePool -pool_id vdi01n01 -displayname vdi01n01 -nameprefix "vdi01n01-{n:fixed=3}" -parentvmpath "/RA/vm/WIN7SP1" -parentsnapshotpath "/view" -vmfolderpath "/RA/vm" -resourcepoolpath "/RA/host/Desktops/Resources" -datastorespecs "[Aggressive,OS,data]/RA/host/Desktops/vdi01n01" -HeadroomCount 500 -UseSeSparseDiskFormat $true -SeSparseThreshold 0 -minimumcount 500 -maximumcount 500 -OrganizationalUnit "OU=Computers,OU=LoginVSI" -UseTempDisk $false -UseUserDataDisk $false -PowerPolicy "AlwaysOn"
For this testing, NetApp chose specific pool and provisioning settings that would stress the storage while providing the most granular reporting capabilities. NetApp does not advocate using or disabling these features, because each might provide significant value in the correct use case. NetApp recommends that customers test these features to understand their impacts before deploying with them enabled.

These features include, but are not limited to, persona management, replica tiering, user data disks, disposable file disks, space reclamation, and View Storage Accelerator. Table 17 lists the provisioning data that was gathered.
Table 17) Results for linked-clones provisioning.
Measurement                                      Data
Time to provision 2,000 linked-clones desktops   140 min (all desktops had the status Available in VMware Horizon View)
Average storage latency (ms)                     0.431ms
Peak IOPS                                        43,344
Average IOPS                                     29,500
Peak throughput                                  1166MB/sec
Average throughput                               704MB/sec
Peak storage CPU utilization                     24%
Average storage CPU utilization                  16%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Throughput and IOPS
During the provisioning test, the storage controllers had a combined peak of 43,344 IOPS, 1166MB/sec throughput, and an average of 16% utilization per storage controller with an average latency of 0.431ms. Figure 18 shows the throughput and IOPS for linked-clones creation.
Figure 18) Throughput and IOPS for linked-clones creation.
Storage Controller CPU Utilization
Figure 19 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. The utilization average was 16% with a peak of 24%.

Figure 19) Storage controller CPU utilization for linked-clones creation.
Customer Impact (Test Conclusions)
During the provisioning of 2,000 VMware linked clones, the storage controller had enough headroom to perform a significantly greater number of concurrent provisioning operations. On average, the NetApp all-flash FAS system and systems from other all-flash vendors provision at a rate of approximately 12 to 14 VMs per minute (2,000 VMs in 140 minutes). The extremely low latencies, low CPU utilization, and minimal overall work being done on the storage controller appear to indicate that storage performance is not a factor in linked-clone provisioning time and therefore should not be used to differentiate platforms.
8.5 Boot Storm Test
This section describes test objectives and methodology and provides results from boot storm testing.
Test Objectives and Methodology
The objective of this test was to determine how long it would take to boot 2,000 virtual desktops, which might happen, for example, after maintenance activities or server host failures. This test was performed by powering on all 2,000 VMs from within the VMware vCenter server (a PowerCLI sketch of this step follows Table 18) and observing when the status of all VMs in VMware Horizon View changed to Available. Table 18 lists the boot storm data that was gathered.
Table 18) Results for linked-clones boot storm.
Measurement                                  Data
Time to boot 2,000 linked-clones desktops    6 min, 50 sec (all desktops had the status Available in VMware Horizon View)
Average storage latency (ms)                 14.50ms
Peak IOPS                                    147,147
Average IOPS                                 112,960
Peak throughput                              5.2GB/sec
Average throughput                           2.6GB/sec
Peak storage CPU utilization                 63%
Average storage CPU utilization              50%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
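The unthrottled power-on in this test was initiated from the VMware vCenter server. The lines below are only a minimal PowerCLI sketch of that step, not the exact commands used during validation; the vdi0* name filter is an assumption based on the pool naming prefix defined in Figure 14.

# Power on all linked clones as fast as vCenter accepts the tasks.
# -RunAsync queues each power-on task without waiting for the VM to finish booting.
Connect-VIServer vc1.ra.rtp.netapp.com

Get-VM -Name "vdi0*" |
    Where-Object { $_.PowerState -eq "PoweredOff" } |
    Start-VM -RunAsync -Confirm:$false

A throttled boot, such as the 50-concurrent-operations case in Table 19, would instead be initiated from VMware Horizon View, which limits the number of simultaneous power-on operations.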

Throughput and IOPS
During the boot storm test, the storage controllers had a combined peak of 147,147 IOPS, 5.2GB/sec throughput, and an average of 50% CPU utilization per storage controller with an average latency of 14.50ms. Figure 20 shows the throughput and IOPS for the linked-clones boot storm.
Figure 20) Throughput and IOPS for linked-clones boot storm.
Storage Controller CPU Utilization
Figure 21 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. Utilization average was 50% with a peak of 64%.
Figure 21) Storage controller CPU utilization for linked-clones boot storm.
Read/Write IOPS
Figure 22 shows the read/write IOPS for the boot storm test.

Figure 22) Read/write IOPS for linked-clones boot storm.
Read/Write Ratio
Figure 23 shows the read/write ratio for the boot storm test.
Figure 23) Read/write ratio for linked-clones boot storm.
Customer Impact (Test Conclusions)
During the boot of 2,000 VMware linked clones, the storage controller had enough headroom to perform a significantly greater number of concurrent boot operations. The data indicates that the storage controller could boot approximately 4,000 VMware linked clones in approximately 10 minutes.
Note: For this test, we set the number of concurrent power-on tasks very high so that we could perform the boot in the shortest amount of time without regard to storage latency. When this value was set to 50 concurrent power-on operations and VMware Horizon View was used to perform the power-on operation, we achieved longer boot times but lower latencies. The objective was to see how quickly we could power on the VMs. Customers can reduce the impact to other VMs by using VMware Horizon View to throttle the number of simultaneous power-on operations. Table 19 lists the results for storage latency and boot time.

Table 19) Power-on method, storage latency, and boot time.
Power-On Method             Concurrent Power-On Operations   Storage Latency   Boot Time for 2,000 VMs
From VMware vCenter         No throttle                      14.5ms            6 min, 50 sec
From VMware Horizon View    50                               2.6ms             10 min, 3 sec
8.6 Boot Storm During Storage Failover Test
This section describes test objectives and methodology and provides results from boot storm testing during storage controller failover.
Test Objectives and Methodology
The objective of this test was to determine how long it would take to boot 2,000 virtual desktops if the storage controller had a problem and was failed over. This test used the same methodologies and process that were used in section 8.5, Boot Storm Test. Table 20 shows the data that was gathered for the boot storm during storage failover.
Table 20) Results for linked-clone boot storm during storage failover.
Measurement                                                          Data
Time to boot 2,000 linked-clones desktops during storage failover    10 min, 7 sec (all desktops had the status Available in VMware Horizon View)
Average storage latency (ms)                                         23.40ms
Peak IOPS                                                            90,763
Average IOPS                                                         80,990
Peak throughput                                                      2.1GB/sec
Average throughput                                                   1.5GB/sec
Peak storage CPU utilization                                         77%
Average storage CPU utilization                                      72%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Throughput and IOPS
During the boot storm failover test, the storage controllers had a combined peak of 90,763 IOPS, 2.1GB/sec throughput, and an average of 72% physical CPU utilization per storage controller with an average latency of 23.40ms. Figure 24 shows the throughput and IOPS.

42 Figure 24) Throughput and IOPS for linked-clones boot storm during storage failover. Storage Controller CPU Utilization Figure 25 shows the storage controller CPU utilization on one node of the two-node NetApp cluster while it was failed over. Utilization average was 72% with a peak of 77%. Figure 25) Storage controller CPU utilization for linked-clones boot storm during storage failover. Read/Write IOPS Figure 26 shows the read/write IOPS for the boot storm test during storage failover. 42 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Figure 26) Read/write IOPS for linked-clones boot storm during storage failover.
Read/Write Ratio
Figure 27 shows the read/write ratio for the boot storm test during storage failover.
Figure 27) Read/write ratio for linked-clones boot storm during storage failover.
Customer Impact (Test Conclusions)
During this test, a single storage controller booted all 2,000 VMware linked clones in 10 minutes and 7 seconds while its partner was failed over. Because one node can carry the full boot workload on its own, a cluster running 2,000 VMs on each node (for a total of 4,000 VMs) should still be able to boot its desktops in approximately 10 minutes and 7 seconds.
8.7 Steady-State Login VSI Test
This section describes test objectives and methodology and provides results from steady-state Login VSI testing.
Test Objectives and Methodology
The objective of this test was to run a Login VSI 4.1 office worker workload to determine how the storage controller performed and what the end-user experience was like. This Login VSI workload first had the users log in to their desktops and begin working. The login phase occurred over a 30-minute period.

Three different login scenarios were included because each has a different I/O profile. We measured storage performance as well as login time and VSImax, a Login VSI value that represents the maximum number of users who can be deployed on the given platform. VSImax was not reached in any of the Login VSI tests. The following sections define the login scenarios.
Login VSI Initial Login and Workload Test
In this scenario, 2,000 users logged in and downloaded the Login VSI VSI_Content package containing the user data to be used by Login VSI during the test. This content package is approximately 800MB. Therefore, during the first iteration of the Login VSI test, over 1.6TB of data was downloaded from the storage during the initial 30 minutes and copied to the VMs. Table 21 lists the initial login and workload results.
Table 21) Results for linked-clones Login VSI initial login and workload.
Measurement                        Data
Desktop login time                 23 sec (800MB of data per user)
Average storage latency (ms)       0.595ms
Peak IOPS                          69,427
Average IOPS                       34,505
Peak throughput                    1.0GB/sec
Average throughput                 0.6GB/sec
Peak storage CPU utilization       58%
Average storage CPU utilization    34%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Login VSI VSImax Results
Because the Login VSI VSImax v4.1 was not reached, more VMs could be deployed on this infrastructure. Figure 28 shows the VSImax results.
Figure 28) VSImax results for linked-clones Login VSI initial login and workload.

45 Desktop Login Time During the Login VSI initial login, it took approximately 23 seconds to log in because 800MB of data had to be copied from the Login VSI share to each desktop. Figure 29 shows a scatterplot of the login times. Figure 29) Scatterplot for linked-clones Login VSI login times. Throughput, Latency, and IOPS During the Login VSI initial login and workload test, the storage controllers had a combined peak of 69,427 IOPS, 1.0GB/sec throughput, and an average of 34% CPU utilization per storage controller with an average latency of 0.595ms. Figure 30 shows the login and workload throughput, latency, and IOPS. Figure 30) Throughput, latency, and IOPS for linked-clones Login VSI initial login and workload. Storage Controller CPU Utilization Figure 31 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. Utilization average was 34% with a peak of 58%. 45 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

46 Figure 31) Storage controller CPU utilization for linked-clones Login VSI initial login and workload. Read/Write IOPS Figure 32 shows the initial login and workload read/write IOPS. Figure 32) Read/write IOPS for linked-clones Login VSI initial login and workload. Read/Write Ratio Figure 33 shows the initial login and workload read/write ratio. 46 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Figure 33) Read/write ratio for linked-clones Login VSI initial login and workload.
Customer Impact (Test Conclusions)
Although the desktop login time of 23 seconds per desktop might be considered a fair login time, the amount of work being performed during this period in our scenario was unusually large and is not typical of customer environments. This initial login copied a significant amount of data to prepare for the Login VSI test; therefore, it is not a common situation. Assessments should be performed to determine profile size so that the impact can be understood. Given the worst-case scenario of each user downloading 800MB of data at login, the storage controller performed very well at under 1ms latency and an average of 34% CPU utilization. These numbers indicate that the storage controller is capable of doing significantly more work.
Login VSI Initial Login and Workload During Storage Failover Test
In this scenario, 2,000 users logged in during a storage failover and downloaded the Login VSI VSI_Content package containing the user data to be used by Login VSI during the test. This content package is approximately 800MB. Therefore, during the first iteration of the Login VSI test, over 1.6TB of data was downloaded from the storage during the initial 30 minutes and copied to the VMs. Table 22 lists the results for initial login and workload during storage failover.
Table 22) Results for linked-clones Login VSI initial login and workload during storage failover.
Measurement                        Data
Desktop login time                 25 sec (800MB of data per user)
Average storage latency (ms)       0.712ms
Peak IOPS                          62,116
Average IOPS                       31,068
Peak throughput                    1.0GB/sec
Average throughput                 0.5GB/sec
Peak storage CPU utilization       85%
Average storage CPU utilization    56%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.

Login VSI VSImax Results
Because the Login VSI VSImax v4.1 was not reached, more VMs could be deployed on this infrastructure. Figure 34 shows the VSImax results for initial login and workload during storage failover.
Figure 34) VSImax results for linked-clones Login VSI initial login and workload during storage failover.
Desktop Login Time
During this Login VSI initial login, it took approximately 25 seconds to log in because 800MB of data had to be copied from the Login VSI share to each desktop. Figure 35 shows a scatterplot for the login times.
Figure 35) Scatterplot for linked-clones Login VSI initial login times during storage failover.
Throughput, Latency, and IOPS
During the Login VSI initial login and workload test during storage failover, the storage controllers had a combined peak of 62,116 IOPS, 1.0GB/sec throughput, and an average of 56% CPU utilization per storage controller with an average latency of 0.712ms. Figure 36 shows throughput, latency, and IOPS for the initial login and workload during storage failover.

49 Figure 36) Throughput, latency, and IOPS for linked-clones Login VSI initial login and workload during storage failover. Storage Controller CPU Utilization Figure 37 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. Utilization average was 56% with a peak of 85%. Figure 37) Storage controller CPU utilization for linked-clones Login VSI initial login and workload during storage failover. Read/Write IOPS Figure 38 shows the read/write IOPS for initial login and workload during storage failover. 49 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

50 Figure 38) Read/write IOPS for linked-clones Login VSI initial login and workload during storage failover. Read/Write Ratio Figure 39 shows the read/write ratio for initial login and workload during storage failover. Figure 39) Read/write ratio for linked-clones Login VSI initial login and workload during storage failover. Customer Impact (Test Conclusions) This scenario would be the most extreme login workload, in which all 2,000 desktops log in to a storage controller during a 30-minute period and copy 800MB of data for a total of 1.6TB. Even given the extreme workload, the storage latency was under 1ms, and the average CPU utilization was 56%. As these excellent performance numbers and fair login time indicate, more work could still be performed on the storage controller. Monday Morning Login and Workload Test In this scenario, 2,000 users logged in after the VMs had been rebooted. During this type of login, user and profile data, application binaries, and libraries had to be read from disk because they were not already contained in the VM memory. Table 23 shows the results. 50 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Table 23) Results for linked-clones Monday morning login and workload.
Measurement                        Data
Desktop login time                 8.1 sec
Average storage latency (ms)       0.557ms
Peak IOPS                          36,450
Average IOPS                       17,146
Peak throughput                    759MB/sec
Average throughput                 354MB/sec
Peak storage CPU utilization       39%
Average storage CPU utilization    20%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Login VSI VSImax Results
Because the Login VSI VSImax v4.1 was not reached, more VMs could be deployed on this infrastructure. Figure 40 shows the VSImax results for Monday morning login and workload.
Figure 40) VSImax results for linked-clones Monday morning login and workload.
Desktop Login Time
Average desktop login time was 8.1 seconds, which is considered a good login time. Figure 41 shows a scatterplot of the Monday morning login times.

52 Figure 41) Scatterplot of linked-clones Monday morning login times. Throughput, Latency, and IOPS During the Monday morning login test, the storage controllers had a combined peak of 36,450 IOPS, 759MB/sec throughput, and an average of 20% CPU utilization per storage controller with an average latency of 0.557ms. Figure 42 shows the throughput, latency, and IOPS for Monday morning login and workload. Figure 42) Throughput, latency, and IOPS for linked-clones Monday morning login and workload. Storage Controller CPU Utilization Figure 43 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. Utilization average was 20% with a peak of 39%. 52 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

53 Figure 43) Storage controller CPU utilization for linked-clones Monday morning login and workload. Read/Write IOPS Figure 44 shows the read/write IOPS for Monday morning login and workload. Figure 44) Read/write IOPS for linked-clones Monday morning login and workload. Read/Write Ratio Figure 45 shows the read/write ratio for Monday morning login and workload. 53 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Figure 45) Read/write ratio for linked-clones Monday morning login and workload.
Customer Impact (Test Conclusions)
During the Monday morning login test, the storage controller performed very well. The CPU utilization was not high during this test, latencies were under 1ms, and desktop performance was excellent. These results suggest that it might be possible to double the storage controller workload to 4,000 users or more and still maintain excellent end-user performance. The Monday morning login during storage failover test described in the following section reinforces that point.
Monday Morning Login and Workload During Storage Failover Test
In this scenario, 2,000 users logged in for the first time after the VMs had been rebooted but during a storage failover event. During this type of login, user and profile data, application binaries, and libraries had to be read from disk because they were not already contained in the VM memory. Table 24 lists the results for Monday morning login and workload during storage failover.
Table 24) Results for linked-clones Monday morning login and workload during storage failover.
Measurement                                   Data
Desktop login time during storage failover    8.5 sec
Average storage latency (ms)                  0.657ms
Peak IOPS                                     39,445
Average IOPS                                  23,543
Peak throughput                               837MB/sec
Average throughput                            482MB/sec
Peak storage CPU utilization                  74%
Average storage CPU utilization               48%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.

55 Login VSI VSImax Results Because the Login VSI VSImax v4.1 limit was not reached, more VMs could be deployed on this infrastructure. Figure 46 shows the VSImax results for Monday morning login and workload during storage failover. Figure 46) VSImax results for linked-clones Monday morning login and workload during storage failover. Desktop Login Time Average desktop login time was 8.5 seconds, which is considered a good login time, especially during a failover situation. Figure 47 shows a scatterplot of the Monday morning login times during storage failover. Figure 47) Scatterplot of linked-clones Monday morning login times during storage failover. Throughput, Latency, and IOPS During the test of Monday morning login during storage failover, the storage controllers had a combined peak of 39,445 IOPS, 837MB/sec throughput, and an average of 48% CPU utilization per storage controller with an average latency of 0.657ms. Figure 48 shows the throughput, latency, and IOPS for Monday morning login and workload during storage failover. 55 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

56 Figure 48) Throughput, latency, and IOPS for linked-clones Monday morning login and workload during storage failover. Storage Controller CPU Utilization Figure 49 shows the storage controller CPU utilization on one node of the two-node NetApp cluster while it was failed over. Utilization average was 48% with a peak of 74%. Figure 49) Storage controller CPU utilization for linked-clones Monday morning login and workload during storage failover. Read/Write IOPS Figure 50 shows the read/write IOPS for Monday morning login and workload during storage failover. 56 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

57 Figure 50) Read/write IOPS for linked-clones Monday morning login and workload during storage failover. Read/Write Ratio Figure 51 shows the read/write ratio for Monday morning login and workload during storage failover. Figure 51) Read/write ratio for linked-clones Monday morning login and workload during storage failover. Customer Impact (Test Conclusions) During the Monday morning login test during storage failover, the storage controller performed very well. The CPU utilization averaged less than 50%, latencies were under 1ms, and desktop performance was excellent. These results suggest that for this type of workload it might be possible to double the storage controller workload to 4,000 users total (2,000 per node) with excellent end-user performance and with the ability to tolerate a storage failover. Tuesday Morning Login and Workload During Storage Failover Test In this scenario, 2,000 users logged in to virtual desktops that had been logged into previously and that had not been power-cycled, and the storage controller was failed over. In this situation, VMs retain user and profile data, application binaries, and libraries in memory, which reduces the impact on storage. Table 25 lists the results for Tuesday morning login and workload during storage failover. 57 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Table 25) Results for linked-clone Tuesday morning login and workload during storage failover.
Measurement                        Data
Desktop login time                 8.1 sec
Average storage latency (ms)       0.698ms
Peak IOPS                          31,164
Average IOPS                       14,623
Peak throughput                    597MB/sec
Average throughput                 321MB/sec
Peak storage CPU utilization       63%
Average storage CPU utilization    38%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Login VSI VSImax Results
Because the Login VSI VSImax v4.1 was not reached, more VMs could be deployed on this infrastructure. Figure 52 shows the VSImax results for Tuesday morning login and workload during storage failover.
Figure 52) VSImax results for linked-clones Tuesday morning login and workload during storage failover.
Desktop Login Time
Average desktop login time was 8.1 seconds, which is considered a good login time. Figure 53 shows a scatterplot of the Tuesday morning login times during storage failover.

59 Figure 53) Scatterplot of linked-clones Tuesday morning login times during storage failover. Throughput, Latency, and IOPS During the test of Tuesday morning login during storage failover, the storage controllers had a combined peak of 31,164 IOPS, 597MB/sec throughput, and an average of 38% CPU utilization per storage controller with an average latency of 0.698ms. Figure 54 shows throughput, latency, and IOPS for Tuesday morning login and workload during storage failover. Figure 54) Throughput, latency, and IOPS for linked-clones Tuesday morning login and workload during storage failover. Storage Controller CPU Utilization Figure 55 shows the storage controller CPU utilization on one node of the two-node NetApp cluster while it was failed over. Utilization average was 38% with a peak of 63%. 59 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

60 Figure 55) Storage controller CPU utilization for linked-clones Tuesday morning login and workload during storage failover. Read/Write IOPS Figure 56 shows the read/write IOPS for Tuesday morning login and workload during storage failover. Figure 56) Read/write IOPS for linked-clones Tuesday morning login and workload during storage failover. Read/Write Ratio Figure 57 shows the read/write ratio for Tuesday morning login and workload during storage failover. 60 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Figure 57) Read/write ratio for linked-clones Tuesday morning login and workload during storage failover.
Customer Impact (Test Conclusions)
We performed this test only during failover because the initial and Monday morning login scenarios are much more intensive workloads. The purpose of this test was to demonstrate that an ordinary login can be performed during a failover event. This is one of the easier workloads for any storage controller to perform.
8.8 Refresh Test
This section describes test objectives and methodology and provides results from refresh operation testing.
Test Objectives and Methodology
The objective of this test was to determine how long it took to perform the refresh maintenance operation on all 2,000 desktops and the impact on the storage system. For this test, we used the Windows PowerShell cmdlets for simplicity and repeatability. Figure 58 shows the syntax that was used to perform the refresh.
Note: We used the optional -stoponerror $false flag so that if an error did occur, it would not halt the entire operation; however, there were no errors during the refresh operation, so this flag could have been omitted.
Figure 58) Windows PowerShell commands to refresh all four pools of desktops.
Get-DesktopVM -pool_id vdi01n01 | Send-LinkedCloneRefresh -schedule "May :20" -stoponerror $false
Get-DesktopVM -pool_id vdi01n02 | Send-LinkedCloneRefresh -schedule "May :20" -stoponerror $false
Get-DesktopVM -pool_id vdi02n01 | Send-LinkedCloneRefresh -schedule "May :20" -stoponerror $false
Get-DesktopVM -pool_id vdi02n02 | Send-LinkedCloneRefresh -schedule "May :20" -stoponerror $false
Table 26 lists the results for the refresh operation.

Table 26) Results for linked-clones refresh operation.
Measurement                                    Data
Time to refresh 2,000 linked-clones desktops   45 min (all desktops had the status Available in VMware Horizon View)
Average storage latency (ms)                   1.009ms
Peak IOPS                                      121,435
Average IOPS                                   57,498
Peak throughput                                2.2GB/sec
Average throughput                             1.2GB/sec
Peak storage CPU utilization                   50%
Average storage CPU utilization                31%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
Throughput and IOPS
During the refresh test, the storage controllers had a combined peak of 121,435 IOPS, 2.2GB/sec throughput, and an average of 31% CPU utilization per storage controller with an average latency of 1.009ms. Figure 59 shows the throughput and IOPS for the refresh operation.
Figure 59) Throughput and IOPS for linked-clones refresh operation.
Storage Controller CPU Utilization
Figure 60 shows the storage controller CPU utilization across both nodes of the two-node NetApp cluster. Utilization average was 31% with a peak of 50%.

63 Figure 60) Storage controller CPU utilization for linked-clones refresh operation. Read/Write IOPS Figure 61 shows the read/write IOPS for the refresh operation. Figure 61) Read/write IOPS for linked-clones refresh operation. Read/Write Ratio Figure 62 shows the read/write ratio for the refresh operation. 63 NetApp All-Flash FAS Solution for Nonpersistent Desktops with VMware Horizon View 2014 NetApp, Inc. All Rights Reserved.

Figure 62) Read/write ratio for linked-clones refresh operation.
Customer Impact (Test Conclusions)
A refresh operation can be performed on all 2,000 desktops in 45 minutes. Given the low utilization on the storage controller, it might be possible to perform the refresh operation during a storage failover event without affecting controller performance. There are limits to how quickly the refresh operation can occur, but this test demonstrated that storage performance was not the limiting factor.
8.9 Recompose Test
This section describes test objectives and methodology and provides results from recompose operation testing.
Test Objectives and Methodology
The objective of this test was to determine how long it took to perform the recompose maintenance operation on all 2,000 desktops and the impact on the storage system. For this test, we used the VMware Horizon View Administrator interface and recomposed all four pools by setting a schedule for the task (a PowerCLI alternative is sketched after Table 27). Table 27 lists the results for the recompose operation.
Table 27) Results for linked-clones recompose operation.
Measurement                                      Data
Time to recompose 2,000 linked-clones desktops   4 hr, 24 min (all desktops had the status Available in VMware Horizon View)
Average storage latency (ms)                     0.440ms
Peak IOPS                                        59,420
Average IOPS                                     17,504
Peak throughput                                  1,468MB/sec
Average throughput                               377MB/sec
Peak storage CPU utilization                     29%
Average storage CPU utilization                  10%
Note: CPU and latency measurements are based on the average across both nodes of the cluster. IOPS and throughput are based on a combined total of each.
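The recompose itself was scheduled from the View Administrator interface. For completeness, the sketch below shows what an equivalent View PowerCLI invocation might look like for one pool, assuming the Send-LinkedCloneRecompose cmdlet that accompanies Get-DesktopVM on the Connection Server; the /view-v2 snapshot name is a placeholder for a new snapshot of the parent VM.

# Schedule a recompose of pool vdi01n01 to a new parent VM snapshot.
$when = (Get-Date).AddMinutes(10)   # start time for the scheduled task

Get-DesktopVM -pool_id vdi01n01 |
    Send-LinkedCloneRecompose -schedule $when `
        -parentVMPath "/RA/vm/WIN7SP1" `
        -parentSnapshotPath "/view-v2" `
        -forceLogoff $true -stopOnError $false

The same command would be repeated for the other three pools, as was done for the refresh operation in Figure 58.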

Throughput and IOPS
During the recompose test, the storage controllers had a combined peak of 59,420 IOPS, 1.4GB/sec throughput, and an average of 10% CPU utilization per storage controller with an average latency of 0.440ms. Figure 63 shows the throughput and IOPS for the recompose operation.
Figure 63) Throughput and IOPS for linked-clones recompose operation.
Storage Controller CPU Utilization
Figure 64 shows the storage controller CPU utilization for the recompose operation.
Figure 64) Storage controller CPU utilization for linked-clones recompose operation.
Read/Write IOPS
Figure 65 shows the read/write IOPS for the recompose operation.

Figure 65) Read/write IOPS for linked-clones recompose operation.
Read/Write Ratio
Figure 66 shows the read/write ratio for the recompose operation.
Figure 66) Read/write ratio for linked-clones recompose operation.
Customer Impact (Test Conclusions)
A recompose operation can be performed on all 2,000 desktops in 4 hours and 24 minutes. Given the low utilization on the storage controller, it might be possible to perform the recompose operation during a storage failover event without affecting controller performance. There are limits to how quickly the recompose operation can occur, but this test demonstrated that storage performance was not the limiting factor.
9 Conclusion
In all tests, end-user login times, guest response times, and maintenance operation performance were excellent. The NetApp all-flash FAS system performed very well, reaching a peak of 147,147 IOPS during the boot storm while averaging 50% CPU utilization. All test categories demonstrated that, with the 2,000-user workload and maintenance operations, the all-flash FAS8060 storage system should be capable of doubling the workload to 4,000 users while still being able to fail over in the event of a failure.


More information

Hitachi Virtual Storage Platform Family

Hitachi Virtual Storage Platform Family Hitachi Virtual Storage Platform Family Advanced Storage Capabilities for All Organizations Andre Lahrmann 23. November 2017 Hitachi Vantara Vorweg: Aus Hitachi Data Systems wird Hitachi Vantara The efficiency

More information

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation

ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES. Cisco Server and NetApp Storage Implementation ATTACHMENT A SCOPE OF WORK IMPLEMENTATION SERVICES I. Executive Summary Superior Court of California, County of Orange (Court) is in the process of conducting a large enterprise hardware refresh. This

More information

VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager

VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager VMware Join the Virtual Revolution! Brian McNeil VMware National Partner Business Manager 1 VMware By the Numbers Year Founded Employees R&D Engineers with Advanced Degrees Technology Partners Channel

More information

VMware vsphere with ESX 6 and vcenter 6

VMware vsphere with ESX 6 and vcenter 6 VMware vsphere with ESX 6 and vcenter 6 Course VM-06 5 Days Instructor-led, Hands-on Course Description This class is a 5-day intense introduction to virtualization using VMware s immensely popular vsphere

More information

1000 User VMware Horizon View 7.x Best Practices

1000 User VMware Horizon View 7.x Best Practices TECHNICAL WHITE PAPER 1000 User VMware Horizon View 7.x Best Practices Tintri VMstore, Cisco UCS and VMware Horizon View 7.x www.tintri.com Revision History Version Date Description Author 1.0 08/04/2016

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

IOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December

IOmark- VDI. IBM IBM FlashSystem V9000 Test Report: VDI a Test Report Date: 5, December IOmark- VDI IBM IBM FlashSystem V9000 Test Report: VDI- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VDI, IOmark- VM, VDI- IOmark,

More information

XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018

XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018 XenApp and XenDesktop 7.12 on vsan 6.5 All-Flash January 08, 2018 1 Table of Contents 1. Executive Summary 1.1.Business Case 1.2.Key Results 2. Introduction 2.1.Scope 2.2.Audience 3. Technology Overview

More information

iocontrol Reference Architecture for VMware Horizon View 1 W W W. F U S I O N I O. C O M

iocontrol Reference Architecture for VMware Horizon View 1 W W W. F U S I O N I O. C O M 1 W W W. F U S I O N I O. C O M iocontrol Reference Architecture for VMware Horizon View iocontrol Reference Architecture for VMware Horizon View Introduction Desktop management at any scale is a tedious

More information

INTRODUCING VNX SERIES February 2011

INTRODUCING VNX SERIES February 2011 INTRODUCING VNX SERIES Next Generation Unified Storage Optimized for today s virtualized IT Unisphere The #1 Storage Infrastructure for Virtualisation Matthew Livermore Technical Sales Specialist (Unified

More information

VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS

VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS VMware vsphere 5.0 STORAGE-CENTRIC FEATURES AND INTEGRATION WITH EMC VNX PLATFORMS A detailed overview of integration points and new storage features of vsphere 5.0 with EMC VNX platforms EMC Solutions

More information

ACCELERATE THE JOURNEY TO YOUR CLOUD

ACCELERATE THE JOURNEY TO YOUR CLOUD ACCELERATE THE JOURNEY TO YOUR CLOUD With Products Built for VMware Rob DeCarlo and Rob Glanzman NY/NJ Enterprise vspecialists 1 A Few VMware Statistics from Paul Statistics > 50% of Workloads Virtualized

More information

NetApp Integrated EVO:RAIL Solution

NetApp Integrated EVO:RAIL Solution Technical Report NetApp Integrated EVO:RAIL Solution Technical Overview and Best Practices Eric Railine, NetApp November 2015 TR-4470 Abstract The NetApp Integrated EVO:RAIL solution combines the robust

More information

La rivoluzione di NetApp

La rivoluzione di NetApp La rivoluzione di NetApp Clustered Data ONTAP, storage unificato e scalabile per garantire efficienza e operazioni senza interruzioni Roberto Patano Technical Manager, NetApp Italia IT Infrastructure Inhibitor

More information

Reference Architecture

Reference Architecture Reference Architecture EMC INFRASTRUCTURE FOR VIRTUAL DESKTOPS ENABLED BY EMC VNX, VMWARE vsphere 4.1, VMWARE VIEW 4.5, VMWARE VIEW COMPOSER 2.5, AND CISCO UNIFIED COMPUTING SYSTEM Reference Architecture

More information

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees

By the end of the class, attendees will have learned the skills, and best practices of virtualization. Attendees Course Name Format Course Books 5-day instructor led training 735 pg Study Guide fully annotated with slide notes 244 pg Lab Guide with detailed steps for completing all labs vsphere Version Covers uses

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING VMware Horizon View 5.3 and VMware vsphere for up to 2,000 Virtual Desktops Enabled by EMC Next-Generation VNX and EMC Powered Backup EMC VSPEX Abstract

More information

NetApp AFF. Datasheet. Leading the future of flash

NetApp AFF. Datasheet. Leading the future of flash Datasheet NetApp AFF Leading the future of flash Key Benefits Unleash the power of your data with the industry s first end-to-end NVMe-based enterprise all-flash array that delivers up to 11.4 million

More information

5,000 Persistent VMware View VDI Users on Dell EMC SC9000 Storage

5,000 Persistent VMware View VDI Users on Dell EMC SC9000 Storage 5,000 Persistent VMware View VDI Users on Dell EMC SC9000 Storage Abstract This reference architecture document records real-world workload performance data for a virtual desktop infrastructure (VDI) storage

More information

Stellar performance for a virtualized world

Stellar performance for a virtualized world IBM Systems and Technology IBM System Storage Stellar performance for a virtualized world IBM storage systems leverage VMware technology 2 Stellar performance for a virtualized world Highlights Leverages

More information

VMware vsphere 6.5 Boot Camp

VMware vsphere 6.5 Boot Camp Course Name Format Course Books 5-day, 10 hour/day instructor led training 724 pg Study Guide fully annotated with slide notes 243 pg Lab Guide with detailed steps for completing all labs 145 pg Boot Camp

More information

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE

EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE White Paper EMC XTREMCACHE ACCELERATES VIRTUALIZED ORACLE EMC XtremSF, EMC XtremCache, EMC Symmetrix VMAX and Symmetrix VMAX 10K, XtremSF and XtremCache dramatically improve Oracle performance Symmetrix

More information

THE SUMMARY. CLUSTER SERIES - pg. 3. ULTRA SERIES - pg. 5. EXTREME SERIES - pg. 9

THE SUMMARY. CLUSTER SERIES - pg. 3. ULTRA SERIES - pg. 5. EXTREME SERIES - pg. 9 PRODUCT CATALOG THE SUMMARY CLUSTER SERIES - pg. 3 ULTRA SERIES - pg. 5 EXTREME SERIES - pg. 9 CLUSTER SERIES THE HIGH DENSITY STORAGE FOR ARCHIVE AND BACKUP When downtime is not an option Downtime is

More information

2000 Persistent VMware View VDI Users on Dell EMC SCv3020 Storage

2000 Persistent VMware View VDI Users on Dell EMC SCv3020 Storage 2000 Persistent VMware View VDI Users on Dell EMC SCv3020 Storage Dell EMC Engineering September 2017 A Dell EMC Reference Architecture Revisions Date September 2017 Description Initial release Acknowledgements

More information

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect

Vblock Architecture. Andrew Smallridge DC Technology Solutions Architect Vblock Architecture Andrew Smallridge DC Technology Solutions Architect asmallri@cisco.com Vblock Design Governance It s an architecture! Requirements: Pretested Fully Integrated Ready to Go Ready to Grow

More information

Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes

Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes Data Sheet Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes Fast and Flexible Hyperconverged Systems You need systems that can adapt to match the speed of your business. Cisco HyperFlex Systems

More information

The Best Storage for Virtualized Environments

The Best Storage for Virtualized Environments The Best Storage for Virtualized Environments Paul Kessler Asia Pacific Solutions Marketing Alliances, NetApp Nov.4,2008 The Best Storage for Virtualized Environments Paul Kessler Solutions Marketing &

More information

NetApp FAS3200 Series

NetApp FAS3200 Series Systems NetApp FAS3200 Series Get advanced capabilities in a midrange storage system and easily respond to future expansion KEY benefits Best value with unmatched efficiency. The FAS3200 series delivers

More information

VMWare Horizon View 6 VDI Scalability Testing on Cisco 240c M4 HyperFlex Cluster System

VMWare Horizon View 6 VDI Scalability Testing on Cisco 240c M4 HyperFlex Cluster System VMWare Horizon View 6 VDI Scalability Testing on Cisco 240c M4 HyperFlex Cluster System First Published: August 25, 2016 Last Modified: August 31, 2016 Americas Headquarters Cisco Systems, Inc. 170 West

More information

Introducing Tegile. Company Overview. Product Overview. Solutions & Use Cases. Partnering with Tegile

Introducing Tegile. Company Overview. Product Overview. Solutions & Use Cases. Partnering with Tegile Tegile Systems 1 Introducing Tegile Company Overview Product Overview Solutions & Use Cases Partnering with Tegile 2 Company Overview Company Overview Te gile - [tey-jile] Tegile = technology + agile Founded

More information

VMware vsphere with ESX 4.1 and vcenter 4.1

VMware vsphere with ESX 4.1 and vcenter 4.1 QWERTYUIOP{ Overview VMware vsphere with ESX 4.1 and vcenter 4.1 This powerful 5-day class is an intense introduction to virtualization using VMware s vsphere 4.1 including VMware ESX 4.1 and vcenter.

More information

Citrix VDI Scalability Testing on Cisco UCS B200 M3 server with Storage Accelerator

Citrix VDI Scalability Testing on Cisco UCS B200 M3 server with Storage Accelerator Citrix VDI Scalability Testing on Cisco UCS B200 M3 server with Storage Accelerator First Published: February 19, 2014 Last Modified: February 21, 2014 Americas Headquarters Cisco Systems, Inc. 170 West

More information

NetApp AFF A300 Review

NetApp AFF A300 Review StorageReview StorageReview takes an in-depth look at features, and performance of NetApp AFF A300 storage array. 1395 Crossman Ave Sunnyvale, CA 94089 United States Table of Contents INTRODUCTION... 3-5

More information

Reference Architecture: Lenovo Client Virtualization with VMware Horizon and System x Servers

Reference Architecture: Lenovo Client Virtualization with VMware Horizon and System x Servers Reference Architecture: Lenovo Client Virtualization with VMware Horizon and System x Servers Last update: 29 March 2017 Version 1.7 Reference Architecture for VMware Horizon (with View) Contains performance

More information

Copyright 2012 EMC Corporation. All rights reserved.

Copyright 2012 EMC Corporation. All rights reserved. 1 TRANSFORMING MICROSOFT APPLICATIONS TO THE CLOUD Louaye Rachidi Technology Consultant 2 22x Partner Of Year 19+ Gold And Silver Microsoft Competencies 2,700+ Consultants Worldwide Cooperative Support

More information

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform

Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Citrix XenDesktop 5.5 on VMware 5 with Hitachi Virtual Storage Platform Reference Architecture Guide By Roger Clark August 15, 2012 Feedback Hitachi Data Systems welcomes your feedback. Please share your

More information

FlexPod Datacenter with Apprenda for PaaS Including Red Hat OpenStack 8, Docker Containers, NetApp Jenkins Plugin, and ONTAP 9

FlexPod Datacenter with Apprenda for PaaS Including Red Hat OpenStack 8, Docker Containers, NetApp Jenkins Plugin, and ONTAP 9 NetApp Verified Architecture FlexPod Datacenter with Apprenda for PaaS Including Red Hat OpenStack 8, Docker Containers, NetApp Jenkins Plugin, and ONTAP 9 NVA Design Converged Infrastructure Engineering,

More information

Midsize Enterprise Solutions Selling Guide. Sell NetApp s midsize enterprise solutions and take your business and your customers further, faster

Midsize Enterprise Solutions Selling Guide. Sell NetApp s midsize enterprise solutions and take your business and your customers further, faster Midsize Enterprise Solutions Selling Guide Sell NetApp s midsize enterprise solutions and take your business and your customers further, faster Many of your midsize customers might have tried to reduce

More information

Agenda 1) Designing EUC for Optimal Experience 2) Flash Architectures and EUC 3) Designing EUC Solutions with Converged Infrastructure 4) Selecting, D

Agenda 1) Designing EUC for Optimal Experience 2) Flash Architectures and EUC 3) Designing EUC Solutions with Converged Infrastructure 4) Selecting, D ADV3310BUS End User Computing with NetApp; AFA, CI and HCI Jeremy Hall, Solutions Architect @VMJHALL Chris Gebhardt, Principal Technical Marketing Engineer @chrisgeb #VMworld #ADV3310BUS Agenda 1) Designing

More information

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible

More information

VMware Horizon View. VMware Horizon View with Tintri VMstore. TECHNICAL SOLUTION OVERVIEW, Revision 1.1, January 2013

VMware Horizon View. VMware Horizon View with Tintri VMstore. TECHNICAL SOLUTION OVERVIEW, Revision 1.1, January 2013 VMware Horizon View VMware Horizon View with Tintri VMstore TECHNICAL SOLUTION OVERVIEW, Revision 1.1, January 2013 Table of Contents Introduction... 1 Before VDI and VMware Horizon View... 1 Critical

More information

Boost your data protection with NetApp + Veeam. Schahin Golshani Technical Partner Enablement Manager, MENA

Boost your data protection with NetApp + Veeam. Schahin Golshani Technical Partner Enablement Manager, MENA Boost your data protection with NetApp + Veeam Schahin Golshani Technical Partner Enablement Manager, MENA NetApp Product Strategy Market-leading innovations, that are NetApp Confidential Limited Use 3

More information

EMC VSPEX END-USER COMPUTING

EMC VSPEX END-USER COMPUTING IMPLEMENTATION GUIDE EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.1 and VMware vsphere for up to 500 Virtual Desktops Enabled by EMC VNXe3200 and EMC Powered Backup EMC VSPEX Abstract This describes

More information

Nutanix White Paper. Hyper-Converged Infrastructure for Enterprise Applications. Version 1.0 March Enterprise Applications on Nutanix

Nutanix White Paper. Hyper-Converged Infrastructure for Enterprise Applications. Version 1.0 March Enterprise Applications on Nutanix Nutanix White Paper Hyper-Converged Infrastructure for Enterprise Applications Version 1.0 March 2015 1 The Journey to Hyper-Converged Infrastructure The combination of hyper-convergence and web-scale

More information

VMWARE HORIZON 6 ON HYPER-CONVERGED INFRASTRUCTURES. Horizon 6 version 6.2 VMware vsphere 6U1 / VMware Virtual SAN 6U1 Supermicro TwinPro 2 4 Nodes

VMWARE HORIZON 6 ON HYPER-CONVERGED INFRASTRUCTURES. Horizon 6 version 6.2 VMware vsphere 6U1 / VMware Virtual SAN 6U1 Supermicro TwinPro 2 4 Nodes TECHNICAL WHITE PAPER SEPTEMBER 2016 VMWARE HORIZON 6 ON HYPER-CONVERGED INFRASTRUCTURES Horizon 6 version 6.2 VMware vsphere 6U1 / VMware Virtual SAN 6U1 Supermicro TwinPro 2 4 Nodes Table of Contents

More information

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2

Deploying EMC CLARiiON CX4-240 FC with VMware View. Introduction... 1 Hardware and Software Requirements... 2 Deploying EMC CLARiiON CX4-240 FC with View Contents Introduction... 1 Hardware and Software Requirements... 2 Hardware Resources... 2 Software Resources...2 Solution Configuration... 3 Network Architecture...

More information

VMware vsphere on NetApp (VVNA)

VMware vsphere on NetApp (VVNA) VMware vsphere on NetApp (VVNA) COURSE OVERVIEW: Managing a vsphere storage virtualization environment requires knowledge of the features that exist between VMware and NetApp to handle large data workloads.

More information

Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes

Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes Data Sheet Cisco HyperFlex HX220c M4 and HX220c M4 All Flash Nodes Fast and Flexible Hyperconverged Systems You need systems that can adapt to match the speed of your business. Cisco HyperFlex Systems

More information

Next Gen Storage StoreVirtual Alex Wilson Solutions Architect

Next Gen Storage StoreVirtual Alex Wilson Solutions Architect Next Gen Storage StoreVirtual 3200 Alex Wilson Solutions Architect NEW HPE StoreVirtual 3200 Storage Low-cost, next-gen storage that scales with you Start at < 5K* and add flash when you are ready Supercharge

More information

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration

An Oracle White Paper December Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration An Oracle White Paper December 2010 Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration Introduction...1 Overview of the Oracle VM Blade Cluster

More information

Virtual Desktop Infrastructure (VDI) Bassam Jbara

Virtual Desktop Infrastructure (VDI) Bassam Jbara Virtual Desktop Infrastructure (VDI) Bassam Jbara 1 VDI Historical Overview Desktop virtualization is a software technology that separates the desktop environment and associated application software from

More information

Enterprise power with everyday simplicity

Enterprise power with everyday simplicity Enterprise power with everyday simplicity QUALIT Y AWARDS STO R A G E M A G A Z I N E EqualLogic Storage The Dell difference Ease of use Integrated tools for centralized monitoring and management Scale-out

More information

EBOOK. NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD

EBOOK. NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD EBOOK NetApp ONTAP Cloud FOR MICROSOFT AZURE ENTERPRISE DATA MANAGEMENT IN THE CLOUD NetApp ONTAP Cloud for Microsoft Azure The ONTAP Cloud Advantage 3 Enterprise-Class Data Management 5 How ONTAP Cloud

More information

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018 EonStor GS Family Best Practices Guide White Paper Version: 1.1 Updated: Apr., 2018 Abstract: This guide provides recommendations of best practices for installation and configuration to meet customer performance

More information

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere

Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Virtual Storage Console, VASA Provider, and Storage Replication Adapter for VMware vsphere Administration Guide for 7.2 release June 2018 215-13169_A0 doccomments@netapp.com Table of Contents 3 Contents

More information

Tony Paikeday Sr. Solutions Marketing Manager. Chris Westphal Sr. Product Marketing Manager. C Cisco Systems, Inc.

Tony Paikeday Sr. Solutions Marketing Manager. Chris Westphal Sr. Product Marketing Manager. C Cisco Systems, Inc. Regain Control of the Desktop: Cisco Desktop Virtualization Solution with VMware View 4.6 Tony Paikeday Sr. Solutions Marketing Manager Chris Westphal Sr. Product Marketing Manager 1 Today s Agenda Cisco

More information

Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures

Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures Technical Report Validating the NetApp Virtual Storage Tier in the Oracle Database Environment to Achieve Next-Generation Converged Infrastructures Tomohiro Iwamoto, Supported by Field Center of Innovation,

More information

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE

EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Reference Architecture EMC STORAGE FOR MILESTONE XPROTECT CORPORATE Milestone multitier video surveillance storage architectures Design guidelines for Live Database and Archive Database video storage EMC

More information

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December

IOmark- VM. IBM IBM FlashSystem V9000 Test Report: VM a Test Report Date: 5, December IOmark- VM IBM IBM FlashSystem V9000 Test Report: VM- 151205- a Test Report Date: 5, December 2015 Copyright 2010-2015 Evaluator Group, Inc. All rights reserved. IOmark- VM, IOmark- VDI, VDI- IOmark, and

More information

HPE Synergy HPE SimpliVity 380

HPE Synergy HPE SimpliVity 380 HPE Synergy HPE SimpliVity 0 Pascal.Moens@hpe.com, Solutions Architect Technical Partner Lead February 0 HPE Synergy Composable infrastructure at HPE CPU Memory Local Storage LAN I/O SAN I/O Power Cooling

More information

EMC Backup and Recovery for Microsoft SQL Server

EMC Backup and Recovery for Microsoft SQL Server EMC Backup and Recovery for Microsoft SQL Server Enabled by Microsoft SQL Native Backup Reference Copyright 2010 EMC Corporation. All rights reserved. Published February, 2010 EMC believes the information

More information

SvSAN Data Sheet - StorMagic

SvSAN Data Sheet - StorMagic SvSAN Data Sheet - StorMagic A Virtual SAN for distributed multi-site environments StorMagic SvSAN is a software storage solution that enables enterprises to eliminate downtime of business critical applications

More information

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.1

EMC INFRASTRUCTURE FOR VMWARE VIEW 5.1 Proven Solutions Guide EMC INFRASTRUCTURE FOR VMWARE VIEW 5.1 EMC VNX Series (NFS), VMware vsphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View

More information

Vblock System 540. with VMware Horizon View 6.1 Solution Architecture

Vblock System 540. with VMware Horizon View 6.1 Solution Architecture Vblock System 540 with VMware Horizon View 6.1 Solution Architecture Version 1.0 September 2015 THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." VCE MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY

More information

NetApp SolidFire and Pure Storage Architectural Comparison A SOLIDFIRE COMPETITIVE COMPARISON

NetApp SolidFire and Pure Storage Architectural Comparison A SOLIDFIRE COMPETITIVE COMPARISON A SOLIDFIRE COMPETITIVE COMPARISON NetApp SolidFire and Pure Storage Architectural Comparison This document includes general information about Pure Storage architecture as it compares to NetApp SolidFire.

More information

Enterprise power with everyday simplicity

Enterprise power with everyday simplicity Enterprise power with everyday simplicity QUALIT Y AWARDS STO R A G E M A G A Z I N E EqualLogic Storage The Dell difference Ease of use Integrated tools for centralized monitoring and management Scale-out

More information