Pure Storage Design Guide for Virtualized Engineering Workstations with NVIDIA GRID


July 2016

Table of Contents

Executive Summary
Goals and Objectives
Audience
Design Guide Principles
Infrastructure Components of the Design Guide
Design Guide Solution Overview
VMware vSphere Configuration and Tuning
NVIDIA GRID K2 Card ESXi Configuration
VMware Horizon 7 Configuration and Tuning
Microsoft Windows 7 Physical Workstation Configuration
Microsoft Windows 7 Virtual Workstation Configuration
Desktop Testing Tool - Login VSI
Graphical Benchmarking Tool - SPECviewperf
Pure Storage FlashArray Configuration
Solution Validation
Scalability Results
Summary of Overall Findings
Conclusions
About the Author

© 2016 Pure Storage, Inc. All rights reserved. Pure Storage, the "P" Logo, and Pure1 are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and other countries. NVIDIA GRID K2, NVIDIA vGPU, VMware, VMware Horizon, Cisco UCS, Cisco C240-M4, SPECviewperf12, and Login VSI are registered trademarks of NVIDIA, VMware, SPEC.org and Cisco in the U.S. and other countries. The Pure Storage product described in this documentation is distributed under a license agreement and may be used only in accordance with the terms of the agreement. The license agreement restricts its use, copying, distribution, decompilation, and reverse engineering. No part of this documentation may be reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NON-INFRINGEMENT ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

Pure Storage, Inc. 650 Castro Street, Mountain View, CA

Executive Summary

Virtualizing the most demanding graphics-rich desktops of a company's engineers and designers has commonly been regarded as the last mile of Virtual Desktop Infrastructure (VDI) projects. The benefits of moving away from maintaining a high-end workstation and its unique applications underneath each engineer's or designer's desk are numerous: working from anywhere, physical resource sharing, and proprietary data security are just a few. This can be a daunting, yet critical, piece of the project's success. Despite the unknowns, there is an expectation of inevitability in transferring this workload from a physical to a virtualized state, as the benefits and cost savings (both tangible and intangible) continue to grow.

To provide insight into how to make your project successful, this document describes a methodology for deploying a VMware Horizon 7 VDI environment using NVIDIA GRID K2 cards, Cisco UCS rack servers, VMware vSphere 6.0, Login VSI (a performance testing tool for virtualized desktop simulation), SPECviewperf12 (the industry-standard tool for benchmarking graphics-rich applications), Microsoft Windows Server 2012 R2, and Microsoft Windows 7 (64-bit) as the guest operating systems. Pure Storage has validated the reference architecture within its lab; this document presents the hardware and software configuration, the test workload configuration, and testing results that include implementation and sizing guidance for a mixed graphics-intensive VDI population.

Goals and Objectives

The goal of this document is to showcase the ease of deploying virtual workstations with Cisco UCS, Cisco Nexus, NVIDIA GRID and the Pure Storage FlashArray//m20. We demonstrate the scalability and performance of graphics-rich, VMware Horizon-based virtual workstation building blocks, using Login VSI and SPECviewperf12 as our performance benchmark tools and running workloads from some of the most popular CAD tools used in production customer environments. We run the same graphics benchmarking software on a physical engineering workstation with comparable hardware characteristics to show that virtual workstations not only equal, but often exceed, the performance of a physical workstation when backed by Pure Storage. In addition, we highlight the benefits of the Pure Storage FlashArray, including data reduction, low latency and resiliency testing, that provide a high-quality user experience and give customers an ideal solution for any Horizon deployment project.

Audience

The target audience for this document includes storage, virtualization and CAD application administrators, consulting data center architects, field engineers, and desktop specialists who want to implement VMware Horizon virtual workstations on a Pure Storage FlashArray with the VMware vSphere virtualization platform. A working knowledge of VMware vSphere, VMware Horizon, Login VSI, servers, storage, networks and data center design is helpful but is not a prerequisite for reading this document.

Design Guide Principles

The guiding principles for implementing this reference architecture are:

Repeatable: Create a scalable building block that can be easily replicated at any customer site. Publish the versions of the firmware under test and resolve issues in the lab before customers deploy this solution.

Virtualized: Implement every infrastructure component as a virtual machine.

Available: Create a design that is resilient to the failure of any single component. For example, we include best practices to enforce multiple paths to storage, multiple NICs for connectivity, and high availability (HA) clustering, including vSphere Distributed Resource Scheduling (DRS). Additionally, we simulate scenarios such as a 500-desktop VDI boot storm while the graphics-rich benchmarks run in parallel, to highlight storage array resiliency under a production workload.

Efficient: Take advantage of the inline data reduction and low latency of the Pure Storage FlashArray by pushing the envelope of VM density per server and per NVIDIA GRID K2 card.

Simple: Avoid unnecessary and/or complex adjustments that make the results look better than a normal out-of-box environment.

Scalable: By reporting the near-linear scaling of VMware vSphere environments within the architecture as the number of hosts is incremented, show exceptional application experience, outstanding VM-per-host density and best-in-class flash storage performance.

Performant: Confirm that the virtualized workstations achieve SPECviewperf12 composite scores close to, or better than, those of the equivalent physical device we test. In addition, provide per-CAD-application guidance on the recommended vGPU profile, number of CPU cores and amount of RAM.

Infrastructure Components of the Design Guide

NVIDIA GRID vGPU: Delivering Scalable Graphics-Rich Virtual Desktops

NVIDIA's GRID technology delivers a high-performance, interactive visual experience remotely, making complex 2D and 3D content accessible anywhere, any time, on any device. With GRID boards in a server, virtual desktop clients can for the first time access rich visual content with the largest datasets, in high resolution and with interactive performance, whether they are at the office, on the road, or checking in off the clock via a laptop, tablet or even phone. Running on GRID-enabled servers supplied by leading OEMs, and supported by the leading virtualization solutions from VMware and Citrix, NVIDIA GRID vGPU (virtual GPU) technology delivers desktop-class performance that scales gracefully with a multitude of clients. Harnessing NVIDIA's GRID vGPU technology, professional-caliber visual computing is now ready for a shift to datacenters and clouds, to whatever degree meets the needs of the business.

Figure 1: NVIDIA GRID K2 card

Figure 2: GRID virtual GPU technology: server-side rendering of rich 3D content, delivered wherever, whenever.

With GRID vGPU technology, NVIDIA has virtualized the GPU in hardware, allowing multiple virtual machines to share one physical GPU without the need for software handholding and API abstraction. GRID GPUs provide virtualization in hardware: first and foremost a memory management unit (MMU) and

dedicated per-VM input buffers. The GRID GPU's MMU allocates, maps and translates a VM's virtual addresses to the host server's physical addresses, allowing each VM to work in its own address space without interfering with the others. Working hand in hand with the MMU are hundreds of independent input buffers, each dedicated to a different VM, isolating its graphics stream within an independent rendering context. The combination of the address-space-unifying MMU and VM-specific input buffers forms the linchpin of the industry's first truly hardware-virtualized GPU. With virtualization support embedded directly in silicon, GRID vGPU alleviates the necessity of software abstraction to share GPU resources, eliminating performance-robbing CPU overhead as well as concerns over application reliability and compatibility. Native GRID vGPU drivers leverage the same code base as NVIDIA Quadro GPUs, built on years of development and testing, running hundreds of professional applications in millions of PCs and ISV-certified workstations around the world.

GRID vGPU technology facilitates optimal GPU sharing, but it is another matter to ensure that each hosted virtual machine, and each user that machine represents, gets enough GPU resources to process that machine's workloads timely and effectively. With GRID vGPU, IT administrators balance the graphics demand of users against the available physical resources via vGPU profiles. Each profile specifies how much memory and, on average, what fraction of a GRID GPU's total available processing power that virtual machine can count on. IT administrators can ensure each user is adequately provisioned simply by selecting the appropriate profile. A system manager might share one GRID GPU with four power users, while allocating another to eight knowledge workers with lower visual processing demands. And what about the active designer or engineer with the highest graphics demands, demands that would previously have been served by the dedicated GPU pass-through model? With GRID vGPU technology, provisioning an entire physical GPU to one client requires no special-casing; it simply means choosing the proper GRID profile. For example, selecting the GRID K280Q profile dedicates all of a GRID K2 GPU's resources to a single VM.

Table 1: NVIDIA GRID K1 and K2 vGPU specifications

For more information, please visit the following link: Scalable-Graphics-Rich-Virtual-Desktops.pdf

Cisco Unified Computing System

The Cisco Unified Computing System (Cisco UCS) is a next-generation data center platform that unites compute, network, storage access, and virtualization into an organized structure aimed at reducing total cost of ownership and introducing vastly improved infrastructure deployment mechanisms at scale. UCS incorporates a unified network fabric with scalable, modular and powerful x86-architecture servers. With an innovative and proven design, Cisco UCS delivers an architecture that increases cost efficiency, agility, and flexibility beyond what traditional blade and rack-mount servers provide. Cisco makes organizations more effective by addressing the real problems that IT managers and executives face, solving them on a systemic level.

Figure 3: Cisco Unified Computing System

Greater Time-on-Task Efficiency

Automated configuration can change an IT organization's approach from reactive to proactive. The result is more time for innovation, less time spent on maintenance, and faster response times. These efficiencies allow IT staff more time to address strategic business initiatives. They also enable a better quality of life for IT staff, which means higher morale and better staff retention, both critical elements for long-term efficiency. Cisco UCS Manager is an embedded, model-based management system that allows IT administrators to set a vast range of server configuration policies, from firmware and BIOS settings to network and storage connectivity. Individual servers can be deployed in less time and with fewer steps than in traditional environments. Automation frees staff from tedious, repetitive, time-consuming chores that are often the source of errors that cause downtime, making the entire data center more cost-effective.

Easier Scaling

Automation means rapid deployment, reduced opportunity cost, and better capital resource utilization. With Cisco UCS, rack-mount and blade servers can move from the loading dock into production in a plug-and-play operation. Blade servers are configured automatically using predefined policies simply by inserting the devices into an open blade chassis slot; rack-mount servers are integrated by connecting them to top-of-rack Cisco Nexus fabric extenders. Since policies make configuration automated and repeatable, configuring 100 new servers is as straightforward as configuring one server, delivering agile, cost-effective scaling.

Virtual Blade Chassis

With a separate network and separate management for each chassis, traditional blade systems are functionally an accidental architecture based on an approach that compresses all the components of a

rack into each and every chassis. Such traditional blade systems are managed with multiple management tools that are combined to provide greater convergence for what can be a more labor-intensive, error-prone and costly delivery methodology. Rack-mount servers are not integrated and must be managed separately or through additional tool sets, adding complexity, overhead, and the burden of more time. Architecturally, Cisco UCS blade and rack-mount servers are joined into a single virtual blade chassis that is centrally managed yet physically distributed across multiple blade chassis, rack-mount servers, and even racks and rows. This capability is delivered through Cisco fabric interconnects that provide redundant connectivity, a common management and networking interface, and enhanced flexibility. This larger virtual chassis, with a single redundant point of management, results in lower infrastructure cost per server, fewer management touch points, and lower administration, capital, and operational costs.

Cisco C240-M4 Rack Servers

Figure 4: Cisco C240-M4 rack server

The Cisco UCS C240-M4 Rack Server is an enterprise-class server designed to deliver exceptional performance, expandability, and efficiency for storage and I/O-intensive infrastructure workloads. This includes big data analytics, virtualization, and graphics-rich and bare-metal applications. The UCS C240-M4 delivers outstanding levels of expandability and performance for standalone or UCS-managed environments in a two-rack-unit (2RU) form factor. It provides:

- Dual Intel Xeon E5-2600 v3 or v4 processors for improved performance suitable for nearly all two-socket applications
- Next-generation double-data-rate 4 (DDR4) memory, 12-Gbps SAS throughput, and NVMe PCIe SSD support
- Innovative Cisco UCS virtual interface card (VIC) support in PCIe or modular LAN-on-motherboard (mLOM) form factor
- Graphics-rich experiences for more virtual users with support for the latest NVIDIA graphics processing units (GPUs)

The UCS C240-M4 server also offers maximum reliability, availability, and serviceability (RAS) features, including:

- Tool-free CPU insertion

- Easy-to-use latching lid
- Hot-swappable and hot-pluggable components
- Redundant Cisco Flexible Flash SD cards

The Cisco UCS C240-M4 server can be deployed standalone or as part of the Cisco Unified Computing System (UCS). Cisco UCS unifies computing, networking, management, virtualization, and storage access into a single integrated architecture that enables end-to-end server visibility, management, and control in both bare-metal and virtualized environments. With a Cisco UCS-managed deployment, the UCS C240-M4 takes advantage of standards-based unified computing innovations to significantly reduce customers' TCO and increase business agility.

Pure Storage FlashArray//m

FlashArray//m makes server and workload investments more productive, while also lowering storage spend. With FlashArray//m, organizations can dramatically reduce the complexity of storage to make IT more agile and efficient, accelerating the journey to the cloud. FlashArray//m's performance can also make your business smarter by unleashing the power of real-time analytics, driving customer loyalty, and creating new, innovative customer experiences that simply weren't possible with disk.

Figure 5: Pure Storage GUI and FlashArray//m

FlashArray//m enables you to transform your data center, cloud, or entire business with an affordable all-flash array capable of consolidating and accelerating all your key applications.

Mini Size: Reduce power, space and complexity by 90%
- 3U base chassis with TBs of usable capacity
- ~1 kW of power
- 6 cables

Mighty Performance: Transform your datacenter, cloud, or entire business
- Up to 300,000 32K IOPS
- Up to 9 GB/s bandwidth
- <1 ms average latency

Modular Scale: Scale FlashArray//m inside and outside of the chassis for generations
- Expandable to ~½ PB usable via expansion shelves

- Upgrade controllers and drives to expand performance and/or capacity

Meaningful Simplicity: Appliance-like deployment with worry-free operations
- Plug-and-go deployment that takes minutes, not days
- Non-disruptive upgrades and hot-swap everything
- Fewer parts = more reliability

The FlashArray//m expands upon the FlashArray's modular, stateless architecture, designed to enable expandability and upgradability for generations. The FlashArray//m leverages a chassis-based design with customizable modules, enabling both capacity and performance to be independently improved over time with advances in compute and flash, to meet your business needs today and tomorrow.

The Pure Storage FlashArray is ideal for:

Accelerating Databases and Applications: Speed transactions by 10x with consistent low latency, enable online data analytics across wide datasets, and mix production, analytics, dev/test, and backup workloads without fear.

Virtualizing and Consolidating Workloads: Easily accommodate the most IO-hungry Tier 1 workloads, increase consolidation rates (thereby reducing servers), simplify VI administration, and accelerate common administrative tasks.

Delivering the Ultimate Virtual Desktop Experience: Support demanding users with better performance than physical desktops, scale without disruption from pilot to thousands of users, and experience all-flash performance for under $100/desktop.

Protecting and Recovering Vital Data Assets: Provide always-on protection for business-critical data, maintain performance even under failure conditions, and recover instantly with FlashRecover.

Pure Storage FlashArray sets the benchmark for all-flash enterprise storage arrays. It delivers:

Consistent Performance: FlashArray delivers consistent <1 ms average latency. Performance is optimized for real-world application workloads that are dominated by I/O sizes of 32K or larger, rather than 4K/8K hero benchmarks. Full performance is maintained even under failures and updates.

Less Cost than Disk: Inline deduplication and compression deliver 5-10x space savings across a broad set of I/O workloads, including databases, virtual machines and Virtual Desktop Infrastructure (VDI).

Mission-Critical Resiliency: FlashArray delivers >99.999% proven availability, as measured across the Pure Storage installed base, and does so with non-disruptive everything and no performance impact.

Disaster Recovery Built-In: FlashArray offers native, fully-integrated, data-reduction-optimized backup and disaster recovery at no additional cost. Set up disaster recovery with policy-based automation within minutes, and recover instantly from local, space-efficient snapshots or remote replicas.

Simplicity Built-In: FlashArray offers game-changing management simplicity that makes storage installation, configuration, provisioning and migration a snap. No more managing performance, RAID, tiers

or caching. Achieve optimal application performance without any tuning at any layer. Manage the FlashArray the way you like it: web-based GUI, CLI, VMware vCenter, REST API, or OpenStack.

Table 2: Pure Storage FlashArray//m series controller specifications

* Effective capacity assumes HA, RAID, and metadata overhead, GB-to-GiB conversion, and includes the benefit of data reduction with always-on inline deduplication, compression, and pattern removal. Average data reduction is calculated at 5-to-1.

** Why does Pure Storage quote 32K, not 4K IOPS? The industry commonly markets 4K IOPS benchmarks to inflate performance numbers, but real-world environments are dominated by I/O sizes of 32K or larger. FlashArray adapts automatically to 512B-32KB I/O for superior performance, scalability, and data reduction.

*** The //m20 can be expanded beyond the 3U base chassis with expansion shelves.

Purity Operating Environment

Purity implements advanced data reduction, storage management and flash management features, and all features of Purity are included in the base cost of the FlashArray//m.

Storage Software Built for Flash: The FlashCare technology virtualizes the entire pool of flash within the FlashArray, allowing Purity to both extend the life and ensure the maximum performance of consumer-grade MLC flash.

Granular and Adaptive: Purity Core is based upon a 512-byte variable block size metadata layer. This fine-grain metadata enables all of Purity's data and flash management services to operate at the highest efficiency.

Best Data Reduction Available: FlashReduce implements five forms of inline and post-process data reduction to offer the most complete data reduction in the industry. Data reduction operates at a 512-byte-aligned variable block size, enabling effective reduction across a wide range of mixed workloads without tuning.

Highly Available and Resilient: FlashProtect implements high availability, dual-parity RAID-3D, non-disruptive upgrades, and encryption, all of which are designed to deliver full performance to the FlashArray during any failure or maintenance event.

Backup and Disaster Recovery Built In: FlashRecover combines space-saving snapshots, replication, and protection policies into an end-to-end data protection and recovery solution that protects data against loss locally and globally. All FlashRecover services are fully integrated in the FlashArray and leverage the native data reduction capabilities.

Pure1

Pure1 Manage: By combining local web-based management with cloud-based monitoring, Pure1 Manage allows you to manage your FlashArray wherever you are with just a web browser.

Pure1 Connect: A rich set of APIs, plug-ins, application connectors, and automation toolkits enable you to connect FlashArray//m to all your data center and cloud monitoring, management, and orchestration tools.

Pure1 Support: FlashArray//m is constantly cloud-connected, enabling Pure Storage to deliver the most proactive support experience possible. Highly trained staff combined with big data analytics help resolve problems before they start.

Pure1 Collaborate: Extend your development and support experience online, leveraging the Pure1 Collaborate community to get peer-based support and to share tips, tricks, and scripts.

Experience Evergreen Storage

Tired of the 3-5 year array replacement merry-go-round? Say hello to storage that behaves like SaaS and the cloud. You can deploy it once and keep expanding and improving it for 10 years or more, all without any downtime, performance impact or data migrations. Our new Right Size capacity guarantee helps you get started knowing you'll get the effective capacity your applications need. And our new Capacity Consolidation program keeps your media modern and dense as you expand. With the new Evergreen Storage you'll never re-buy a TB you already own.

VMware vSphere 6.0

VMware vSphere is a leading virtualization platform for building cloud infrastructures. It enables IT to meet SLAs (service-level agreements) for demanding business-critical applications at a low TCO (total cost of ownership). vSphere accelerates the shift to cloud computing for existing data centers and also underpins compatible public cloud offerings, forming the foundation for a hybrid cloud model.

The VMware vSphere Hypervisor architecture provides a robust, production-proven, high-performance virtualization layer. It enables multiple virtual machines to share hardware resources with performance that can match native throughput. Each vSphere Hypervisor 6.0 instance can support as many as 480 logical CPUs, 12 TB of RAM, and 1,024 virtual machines. By leveraging the newest hardware advances, ESXi 6.0 enables the virtualization of applications that were once thought to be non-virtualizable.

VMware ESXi 6.0 has dramatically increased the scalability of the platform. With vSphere Hypervisor 6.0, clusters can scale to as many as 64 hosts, up from 32 in previous releases. With 64 hosts in a cluster, vSphere 6.0 can support 8,000 virtual machines in a single cluster. This enables greater consolidation ratios, more efficient use of VMware vSphere Distributed Resource Scheduler (vSphere DRS), and fewer clusters that must be separately managed.

VMware vSphere Virtual Machine File System (VMFS) allows virtual machines to access shared storage devices (Fibre Channel, iSCSI, etc.) and is a key enabling technology for other vSphere components such as VMware vSphere Storage vMotion. VMware vSphere Storage APIs provide integration with supported third-party data protection, multipathing and storage array solutions.

Pure Storage vSphere Web Client Plugin

Beginning with vSphere 5.1, Pure Storage offers a direct plugin to the vSphere Web Client that allows for full management of a FlashArray as well as a variety of integrated menu options to provision and manage storage. Prior to use, the Web Client Plugin must be installed and configured on the target vCenter server. There is no requirement to go to an external web site to download the plugin; it is stored on the FlashArray controllers.

1. Display storage details: allows VMware users to see the underlying details of a volume hosting a VMFS. Information like data reduction and performance metrics can be easily identified for a VMFS inside the vSphere Web Client.

2. Provision new volumes: creates a new volume on the Pure Storage FlashArray and presents it to a host or cluster. The plugin will automatically rescan the host(s) and then format the volume as VMFS. Optionally, the wizard can add the new volume to a pre-existing Protection Group. A Protection Group is a management object on the FlashArray that provides a local snapshot and/or remote replication schedule for the volumes in that group.

Figure 6: Creating a new volume with the Pure Storage vSphere Web Client plugin

3. Expand existing volumes: any existing volume on an authorized array can be expanded non-disruptively. The plugin will resize the FlashArray volume to the new size, rescan the host(s) and then automatically resize the hosted VMFS to encompass the new capacity.

4. Destroy volumes: unmounts and removes a given VMFS volume and then destroys it on the FlashArray. A destroyed volume can be recovered through the Pure GUI or CLI for 24 hours after destruction.

5. Manage snapshots: users can create snapshots of a datastore or a set of datastores, restore volumes from snapshots, create new volumes from snapshots and delete snapshots.

6. Adjust datastore protection: datastores can be added to or removed from FlashArray Protection Groups to start or cease local and/or remote replication for that datastore.

7. Rename volumes: underlying FlashArray volumes can be renamed from within the Web Client.

8. Configure multipathing: all of the FlashArray datastores in a host or a host cluster can be quickly configured to use the Round Robin multipathing policy. It is important to note that the Web Client Plugin does not currently alter the I/O Operations limit, so it is left at the default of 1,000.

9. Check storage health: ESXi hosts can be quickly checked to make sure host storage limits are not being exceeded.

VMware Horizon 7

Horizon is a family of desktop and application virtualization solutions which provide a streamlined approach to delivering, protecting, and managing Windows desktops and applications to the end user so they can work anytime, anywhere, on any device. An architectural diagram detailing how Horizon integrates with NVIDIA GRID is shown in Figure 7.

Key Features

Horizon 7 leverages the desktop virtualization capabilities of View and builds on them, allowing IT to deliver virtualized and remote desktops and applications through a single platform, and supporting users with access to all of their Windows and online resources through one unified workspace. Horizon 7 supports the following key functionality:

Desktops and Applications Delivered through a Single Platform: Deliver virtual or remote desktops and applications through a single platform to streamline management and easily entitle end users.

Access to Data Can Easily Be Restricted: Sensitive data can be prevented from being copied onto a remote employee's home computer.

Unified Workspace: Securely deliver desktops, applications, and online services to end users through a unified workspace, providing a consistent user experience across devices, locations, media, and connections.

Closed-Loop Management and Automation: Consolidated control, delivery and protection of user compute resources with cloud analytics and automation, cloud orchestration and self-service features.

Administration Tasks and Management Chores Are Reduced: Administrators can patch and upgrade applications and operating systems without touching a user's physical PC.

Optimization with the Software-Defined Data Center: Allocate resources dynamically with virtual storage, compute, and networking to manage and deliver desktop services on demand.

Central Image Management: Central image management for physical, virtual, and BYO devices.

Hybrid-Cloud Flexibility: Provides an architecture built for onsite and cloud-based deployment.

Just-in-Time Delivery with Instant Clone Technology: Reduce infrastructure requirements while enhancing security with Instant Clone technology and App Volumes. Instantly deliver brand-new personalized desktop and application services to end users every time they log in.

Transformational User Experience with Blast Extreme: A new protocol built for the mobile cloud gives end users a better desktop experience across any network or location, and on more devices than ever before.

Modernize Application Lifecycle Management with App Volumes: Transform application management from a slow, cumbersome process into a highly scalable, nimble delivery mechanism that provides faster application delivery and application management while reducing IT costs by up to 70%.

Smart Policies with Streamlined Access: Improve end-user satisfaction by simplifying authentication across all desktop and app services while improving security with smarter, contextual, role-based policies tied to a user, device or location.

VMware Horizon Architecture and Components

This section describes the components and VMware products that interact with Horizon View. Horizon View includes seven main components:

1. Horizon Connection Server
2. Horizon Composer Server
3. Horizon View Agent
4. Horizon Clients
5. Horizon User Web Portal
6. View Persona Management
7. NVIDIA Graphics Driver / vGPU Manager

Figure 7: VMware Horizon architectural diagram with NVIDIA GRID

Login VSI

Login VSI, the industry-standard load-testing solution for virtualized desktop environments, is a tool designed to simulate a large-scale deployment of virtualized desktop systems and study its effects on an entire virtualized infrastructure. The tool is scalable from a few virtual machines running on one VMware ESXi (or other supported hypervisor) host up to hundreds or even thousands of virtual machines distributed across a cluster of ESXi hosts. Moreover, in addition to capturing the performance characteristics of the virtual desktops themselves, the tool also accurately shows the maximum number of virtual machines that can be deployed on a given host or cluster of hosts. This is accomplished by using Launcher Windows machines that simulate one or more end-point devices connecting to the target VDI cluster and executing either pre-defined or customized classes of workloads that closely mimic real-world users. Login VSI assists in the setup and configuration of the testing infrastructure, runs a set of application operations selected to be representative of real-world user applications, and reports data on the latencies of those operations, thereby accurately modeling the expected end-user experience provided by the environment.

Login VSI consists of the following components:

- A number of desktop virtual workstations running on one or more UCS ESXi hosts, to be exercised by the benchmarking tool and measured for performance against the selected workload.
- A number of client launcher virtual machines running on one or more ESXi hosts in an entirely separate cluster, to simulate end users connecting into the VDI environment.
- A Management Console on a Windows Server OS.

SPECviewperf12

SPECviewperf 12 is the worldwide standard for measuring graphics performance based on professional applications. The latest version extends performance measurement from physical to virtualized workstations. SPECgpc members at the time of release include AMD, Dell, Fujitsu, HP, Intel, Lenovo, Micron and NVIDIA.

SPECviewperf 12 measures the 3D graphics performance of systems running under the OpenGL and DirectX application programming interfaces. The benchmark's workloads, called viewsets, represent graphics content and behavior from actual applications. SPECviewperf 12 has been tested and is supported under the 64-bit version of Microsoft Windows 7. Results from SPECviewperf 12 cannot be compared to those from previous versions of the benchmark.

The benchmarks exercised within this design guide focus on the most popular CAD applications found within the SPECviewperf12 suite, including:

- Dassault CATIA V6 R2012
- PTC Creo Parametric 2.0

- Autodesk Showcase 2013
- Siemens NX 8.0
- Dassault SolidWorks 2013 SP1

Design Guide Solution Overview

Our vGPU implementation consists of a combined stack of hardware (storage, network and compute) and software (Cisco UCS Manager, VMware Horizon, VMware vCenter/ESXi, NVIDIA GRID vGPU drivers and the Pure Storage GUI).

Network: 2 Cisco Nexus 9396 switches, 2 Cisco MDS 9148 switches and 2 Cisco UCS 6248UP Fabric Interconnects for external and internal connectivity of the IP and FC networks.

Storage: 1 Pure Storage FlashArray//m20 with Fibre Channel connectivity.

Compute: 4 Cisco UCS C240-M4 rack-mount servers for virtual workstations, plus a chassis with 8 B200 M4 blade servers for infrastructure and load generators.

Graphics: 8 NVIDIA GRID K2 cards (2 cards per C240-M4 rack server).

Figure 8: Design Guide connectivity diagram

Cisco UCS Server Configuration

A pair of Cisco UCS 6248UP Fabric Interconnects and four identical Intel CPU-based Cisco UCS C-Series C240-M4 rack servers were deployed for hosting the virtual workstations. An additional Cisco UCS 5108 chassis with 8 UCS B200-M4 servers was used to host the VMware, Login VSI and Windows management components. The UCS Manager, the UCS Fabric Interconnects, the rack servers and the components in the chassis were all upgraded to the 3.1.1e firmware level. Each C240-M4 rack server had two mLOM-CSC-02 adapters with Cisco VIC 1227 cards, connected with two ports from the C240-M4 to the Cisco Fabric Interconnects; those were in turn connected to the Cisco Nexus 9396 switches for upstream connectivity to access the Pure Storage FlashArray LUNs. This highly resilient design prevents any single point of failure from taking down the environment. The server configuration is detailed in Table 3.

Table 3: Cisco C240-M4 server hardware configuration

Cisco UCS Service Profile Configuration

To facilitate rapid deployment of UCS servers, a service profile template was created with the following characteristics:

1. We configured a boot-from-SAN policy so that each server booted from a Pure Storage boot LUN (see Figure 9 below).

Figure 9: Cisco UCS service profile template with boot from SAN policy configuration

2. We kept every other setting at its default; we did not change any parameters.

3. The Ethernet and FC adapter policies were set to the VMware policy.

4. The BIOS defaults were used for the C240-M4 rack servers.

5. We configured two vHBA FC adapters and four vNIC Ethernet adapters on the Cisco VIC cards to avoid any single point of failure.

6. We deployed four service profiles from the template and associated one with each rack server.

Figure 10 below shows the Cisco UCS Manager snapshot of the service profile setup for the tests.

Figure 10: Service Profile association with C240-M4 rack servers

Network Configuration

Two virtual switches, each containing two vmnics, were used on each host. We went with standard vSwitches for this design. The redundant NICs were teamed in active/active mode, and VLAN configurations were done on the upstream Cisco Nexus 9396 switches. The virtual switch configuration and properties are shown in Figure 11 and Figure 12, with a minimal command-line sketch of the layout after the figures.

Figure 11: ESXi server network configuration on all servers (vSwitch1 for Horizon View desktops)

Figure 12: ESXi server network configuration on all servers (vSwitch0 for host management)
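The same layout can be reproduced from the ESXi command line. The following is a minimal sketch rather than the exact commands used in our lab; the port group name and VLAN ID are hypothetical and should be replaced with your own values:

# Create the desktop vSwitch and add its two redundant uplinks (teamed active/active)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1

# Create the desktop port group and tag it with the (hypothetical) desktop VLAN
esxcli network vswitch standard portgroup add --portgroup-name=Horizon-Desktops --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=Horizon-Desktops --vlan-id=100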

VMware vSphere Configuration and Tuning

In this section, we discuss the ESXi 6.0 cluster configuration, the network configuration and the ESXi tuning for the system configuration.

ESXi Cluster and Storage Configuration

A datacenter and a cluster with four hosts were configured with the VMware High Availability (HA) clustering and Distributed Resource Scheduling (DRS) features. DRS was set to partially automated mode with power management turned off. The host EVC policy was set to Intel Haswell. The default BIOS for the C240-M4 was chosen for all the service profiles. We had to create two datastores for the ESXi cluster to make the HA cluster datastore heartbeating work correctly.

Due to the simplicity of both the Pure Storage FlashArray and Cisco UCS, the VMware ESXi best practice configuration is accordingly simple. ESXi uses its Native Multipathing Plugin architecture to manage I/O multipathing to underlying SAN storage volumes. Pure Storage FlashArray volumes (while the FlashArray is not actually an ALUA array, it is active/active) are claimed by default by the Storage Array Type Plugin (SATP) for ALUA devices. Therefore all devices would, by default, inherit the Most Recently Used (MRU) Path Selection Policy (PSP). This would limit I/O to a single path and would be very detrimental to performance, as leveraging only a single path/port to the array would remove the active/active nature and performance advantage of the FlashArray. All the ESXi servers were therefore configured to change the default PSP for Pure devices from MRU to Round Robin (with advanced configuration to alternate paths after every I/O). The following command was run on each ESXi server prior to the presentation of FlashArray devices:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"
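As a quick sanity check, the rule and the resulting path policy can be verified from the same SSH session once FlashArray devices have been presented. This is a minimal sketch; the device names in the output will differ per environment:

# Confirm the new claim rule is registered for Pure devices
esxcli storage nmp satp rule list | grep -i PURE

# Confirm presented FlashArray devices use Round Robin with iops=1
esxcli storage nmp device list | grep -i -A3 "PURE Fibre Channel Disk"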

Figure 13 shows a properly configured Pure Storage LUN.

Figure 13: Properly configured Pure Storage LUN
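For reference, the host and volume configuration behind Figure 14 can also be built from the Purity CLI. The following is a minimal sketch with hypothetical object names and WWNs; sizes and names should be adapted to your environment, and the same steps are available in the GUI and the vSphere Web Client plugin:

# Register an ESXi host by its HBA WWNs and create a per-host boot volume (names/WWNs hypothetical)
purehost create --wwnlist 21:00:00:24:FF:40:AA:00,21:00:00:24:FF:40:AA:01 esx-host1
purevol create --size 50G esx-host1-boot
purehost connect --vol esx-host1-boot esx-host1

# Group the four hosts and connect one shared volume for the workstation datastore
purehgroup create --hostlist esx-host1,esx-host2,esx-host3,esx-host4 vdi-cluster
purevol create --size 10T vdi-workstations
purehgroup connect --vol vdi-workstations vdi-cluster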

Figure 14: FlashArray host configuration

NVIDIA GRID K2 Card ESXi Installation

Before using the NVIDIA GRID K2 cards with vGPU shared application graphics, we were required to install the vGPU Manager vSphere Installation Bundle (VIB) onto all four of our C240-M4 rack servers. The first step was downloading the latest GRID vGPU driver bundle from NVIDIA's website. After extracting the zip file, we used WinSCP to transfer the vGPU Manager installation files to each of the four servers. After putting each server into maintenance mode, we installed the vGPU Manager software with the following ESXi command while logged in as root via SSH:

esxcli software vib install -v /path_to_vib/NVIDIA_VIB

Upon confirmation that the install was successful, each host was rebooted and then taken out of maintenance mode to complete the installation. For these experiments we were running the latest NVIDIA GRID driver available at the time of testing. Further instructions on how to implement this product are available from VMware's knowledge base.
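After the reboot, the install can be verified directly on each host. A minimal sketch (the exact VIB name and the output will vary with the driver version):

# Confirm the vGPU Manager VIB is present
esxcli software vib list | grep -i nvidia

# Confirm the host driver can see both GPUs on each GRID K2 card
nvidia-smi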

VMware Horizon 7 Configuration and Tuning

The VMware Horizon 7 configuration changes were quite minimal; the tuning applied is highlighted in this section.

Horizon View Connection Server Tuning

1. Use the SE sparse virtual disk format

VMware Horizon View 5.2 and above support a VMDK disk format called Space-Efficient (SE) sparse virtual disks, which was introduced in vSphere 5.1. The advantages of SE sparse virtual disks can be summarized as follows:

- They grow and shrink dynamically, which prevents VMDK bloat as desktops rewrite and delete data.
- They are available for View Composer-based linked-clone desktops (not for persistent desktops) with VM hardware version 9 or later.
- There is no need to do a refresh/recompose to reclaim space.
- There is no need to set blackout periods, as UNMAPs are handled efficiently.

We recommend using this disk format for deploying linked-clone desktops on Pure Storage due to the space efficiencies and the prevention of VMDK bloat. Appendix A has screenshots showing how to configure VM disk space reclamation in VMware Horizon View.

2. Disable the View Storage Accelerator

The View Storage Accelerator (VSA) is a feature in VMware Horizon View 5.1 and up based on VMware vSphere Content-Based Read Caching (CBRC). There are several advantages to enabling VSA, including containing boot storms by utilizing host-side caching of commonly used blocks. It even helps the steady-state performance of desktops that use the same applications. Because the Pure Storage FlashArray provides abundant IOPS at very low latency, the extra layer of caching at the host level is not needed. The biggest disadvantage is the added time it takes to recompose and refresh desktops, as every time you change the image file the disk digest file has to be rebuilt. VSA also consumes host-side memory for caching and host CPU for building digest files. For shorter desktop recompose times, we recommend turning off VSA. Appendix A has a screenshot showing VSA disabled in the vCenter Server storage settings.

3. Tune the maximum concurrent vCenter operations

The default limits on concurrent vCenter operations are defined in the View configuration's advanced vCenter settings. These values are quite conservative and can be increased. The Pure Storage FlashArray can withstand more operations, including:

- Max concurrent vCenter provisioning operations (recommended value >= 50)
- Max concurrent power operations (recommended value >= 50)
- Max concurrent View Composer operations (recommended value >= 50)

The higher values will drastically cut down the amount of time needed to accomplish typical View administrative tasks such as recomposing or creating a new pool.

Figure 15: Tuning maximum concurrent vCenter operations

Some caveats include:

1. These settings are global and will affect all pools. Pools on other, slower disk arrays will suffer if you set these values higher, so raising them can have adverse effects elsewhere.

2. These settings have implications for the vCenter configuration, especially the number of vCPUs, the amount of memory, and the backing storage. In order to attain the performance levels described in this white paper, it is important to size the vCenter configuration according to VMware's sizing guidelines and increase resources as needed if you notice a resource becoming saturated.

Microsoft Windows 7 Physical Workstation Configuration

One of the challenges of benchmarking graphics-rich applications is that there are not many publicly available results showing an apples-to-apples comparison of various hardware platforms with realistic datasets. Factors such as the CPU, the amount of RAM, the hard drive type and the GPU type dictate CAD application performance. As the purpose of this paper is to show that virtualized workstations can offer equivalent or better performance than their physical counterparts, we elected to build a comparable physical engineering workstation and run the SPECviewperf12 benchmark against it in order to provide a baseline score against which to evaluate our scaled virtual workstations. Care was taken to select a physical workstation that aligns closely with those used in current customer production CAD environments. In many such instances, 7200-rpm spinning mechanical drives are still required and used today because of their low cost per GB and the space requirements of designers and engineers for applications, models and other engineering CAD data.

Table 4: Physical workstation hardware configuration

Microsoft Windows 7 Virtual Workstation Configuration

Our virtualized Windows 7 workstations were built using the above physical configuration as a baseline blueprint in order to show the relative performance similarities and differences between the two on a per-application basis. A huge benefit of running virtualized workstations on top of a hypervisor is that it enables the elastic addition and subtraction of almost any hardware resource. The five notable hardware components that we vary on our virtual workstations are:

CPU: The physical workstation uses 100% of a single-socket Xeon E5-2670 v3 processor. When virtualized with ESXi, we were able to run many more VMs on this same CPU, and we vary the number of cores that our virtual workstations use in order to track the performance impact.

Memory: Each C240-M4 rack server has 256 GB of memory. We vary the amount of RAM per virtual workstation in our benchmark testing.

Storage: The virtualized workstations use Pure Storage as an all-flash local drive.

vGPU profile: We also vary the NVIDIA GRID vGPU profiles in order to see how each CAD application performs, and what hardware configuration and user density per GRID K2 card and per C240-M4 server is recommended on a per-application basis.

Frame rate limiter: NVIDIA vGPU incorporates a performance-balancing feature known as the Frame Rate Limiter (FRL), which is enabled by default and limits the frames per second (FPS) of each virtual workstation. Since the SPECviewperf12 benchmark uses FPS as the key metric for determining application performance, we elected to disable this function for the majority of our simulations, because the physical workstation we are comparing against has no such limitation. The cost of changing this setting was the introduction of more variability in our results. To disable FRL, we edited the advanced settings of the powered-off template VM and added the following configuration parameter (Add row):

Name: pcipassthru0.cfg.frame_rate_limiter
Value: 0

FRL can be re-enabled by setting this value to 1 or deleting the row entirely. Note that the VM needs to be powered off to change this setting.
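For reference, once added via Edit Settings, the parameter above is persisted in the powered-off template VM's .vmx file as the following key/value pair (a minimal sketch; set the value to "1", or remove the line, to re-enable FRL):

pcipassthru0.cfg.frame_rate_limiter = "0"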

We ran the VMware OS Optimization Tool in order to apply all VMware best practices simply and easily to our virtual workstation template image. A more in-depth description of the tool and a download link are available from VMware. A few additional optimizations were applied in order to achieve maximum performance for this benchmark:

- The page file was set to a static value equal to double the amount of RAM on the template VM (e.g. 16 GB of RAM meant a 32 GB page file).
- VSync was disabled from within the NVIDIA control panel, as when enabled it can limit GPU performance to the monitor refresh rate.
- In the template VM settings, "Expose hardware assisted virtualization to the guest OS" was enabled.

The initial configuration of the Windows 7 64-bit virtual workstations can be seen in more detail in Table 5 below.

Table 5: Windows 7 64-bit virtual workstation configuration

Desktop Testing Tool - Login VSI

Login VSI is a third-party tool, and the industry standard, designed to simulate real-world deployments of virtualized desktop systems and study their effects on an entire virtualized infrastructure. The tool is scalable from a few virtual machines running on one VMware vSphere host up to tens or even hundreds of thousands of virtual machines distributed across clusters of vSphere hosts.

For this set of tests, we elected to use the Login VSI custom workload generator, which enabled us to run the SPECviewperf12 benchmark on multiple virtualized workstations in parallel in order to observe performance at scale. SPECviewperf12 provides the scoring for our graphics-rich applications.

Graphical Benchmarking Tool - SPECviewperf

SPECviewperf provides an automated framework for exercising some of the most popular CAD tools using representative real-world model files. The benchmark is also very customizable, allowing us to run just a subset of the graphics-intensive CAD tools offered in the overall benchmark suite from the command line (an example invocation appears after the viewset descriptions below). Individual tests produce an average frames-per-second figure, which is recorded, weighted and used to calculate a composite score: a single number, on a per-application basis, that can easily be used to compare and contrast results for that individual benchmark across platforms in different configurations.

It should be noted that while the datasets and benchmarks used in SPECviewperf12 closely mirror production environments, generally speaking they represent a worst-case scenario in terms of graphics load. That is, typical engineers and designers will usually work on smaller pieces of an assembly rather than rotating the entire build in parallel with their co-workers. As such, in actual production use we believe that GPU utilization will likely be less than the results gathered during our simulations. Additionally, as the benchmark data itself is static, it is primarily a read-only workload from a storage perspective.

For this design guide we elected to use the following SPECviewperf12 CAD benchmarks:

CATIA Viewset (catia-04)

The catia-04 viewset was created from traces of the graphics workload generated by the CATIA V6 R2012 application from Dassault Systèmes. Model sizes range from 5.1 to 21 million vertices. The viewset includes numerous rendering modes supported by the application, including wireframe, anti-aliasing, shaded, shaded with edges, depth of field, and ambient occlusion.

Viewset tests:

1. Race car shaded with ambient occlusion and depth-of-field effect
2. Race car shaded with pencil effect
3. Race car shaded with ambient occlusion
4. Airplane shaded with ambient occlusion and depth-of-field effect
5. Airplane wireframe
6. Airplane shaded with pencil effect
7. Airplane shaded
8. Airplane shaded with edges
9. Airplane shaded with ambient occlusion
10. SUV1 vehicle shaded with ground reflection and ambient occlusion
11. SUV2 vehicle shaded with ground shadow
12. SUV2 vehicle shaded with ground reflection and ambient occlusion
13. Jet plane shaded with ground reflection and ambient occlusion
14. Jet plane shaded with edges with ground reflection and ambient occlusion

Figure 16: CATIA race car benchmark
Figure 17: CATIA airplane benchmark
Figure 18: CATIA SUV benchmark

Creo Viewset (creo-01)

The creo-01 viewset was created from traces of the graphics workload generated by the Creo 2 application from PTC. Model sizes range from 20 to 48 million vertices. The viewset includes numerous rendering modes supported by the application, including wireframe, anti-aliasing, shaded, shaded with edges, and shaded reflection modes.

Viewset tests:

1. Worldcar in shaded mode
2. Worldcar in wireframe with anti-aliasing enabled
3. Worldcar in shaded edges mode
4. Worldcar in hidden mode
5. Worldcar in shaded reflection mode
6. Worldcar in shaded mode
7. Worldcar in no-hidden mode with anti-aliasing enabled
8. Worldcar in shaded mode with anti-aliasing enabled
9. Plane in shaded mode
10. Plane in shaded edges mode
11. Plane in hidden mode
12. Plane in shaded mode with anti-aliasing enabled
13. Plane in shaded edges mode with high-quality edges enabled

Figure 19: Creo worldcar benchmark
Figure 20: Creo plane benchmark

Showcase Viewset (showcase-01)

The showcase-01 viewset was created from traces of Autodesk's Showcase 2013 application. The model used in the viewset consists of 8 million vertices. This is the first viewset in SPECviewperf to feature DirectX rendering. Rendering modes included in the viewset include shading, projected shadows, and self-shadows.

The following tests are included in the viewset:

1. Shaded with self-shadows
2. Shaded with self-shadows and projected shadows
3. Shaded
4. Shaded with projected shadows

Figure 21: Showcase race car benchmark

Siemens NX (snx-02)

The snx-02 viewset was created from traces of the graphics workload generated by the NX 8.0 application from Siemens PLM. Model sizes range from 7.15 to 8.45 million vertices. The viewset includes numerous rendering modes supported by the application, including wireframe, anti-aliasing, shaded, shaded with edges, and studio mode.

The following tests are included in the viewset:

1. Powertrain in advanced studio mode
2. Powertrain in shaded mode
3. Powertrain in shaded-with-edges mode
4. Powertrain in studio mode
5. Powertrain in wireframe mode
6. SUV in advanced studio mode
7. SUV in shaded mode
8. SUV in shaded-with-edges mode
9. SUV in studio mode
10. SUV in wireframe mode

Figure 22: Siemens NX powertrain benchmark
Figure 23: Siemens NX SUV benchmark

SolidWorks Viewset (sw-03)

The sw-03 viewset was created from traces of Dassault Systèmes' SolidWorks 2013 SP1 application. Models used in the viewset range in size from 2.1 to 21 million vertices. The viewset includes numerous rendering modes supported by the application, including shaded mode, shaded with edges, ambient occlusion, shaders, and environment maps.

The following tests are included in the viewset:

1. Vehicle in shaded mode, normal shader with environment cubemap
2. Vehicle in shaded mode, bump parallax mapping with environment cubemap
3. Vehicle in shaded mode, ambient occlusion enabled with normal shader with environment map
4. Vehicle in shaded-with-edges mode, normal shader with environment cubemap
5. Vehicle in wireframe mode
6. Rally car in shaded mode, ambient occlusion enabled with normal shader with environment map
7. Rally car in shaded mode, normal shader with environment cubemap
8. Rally car in shaded-with-edges mode, normal shader with environment cubemap
9. Tesla Tower in shaded mode, ambient occlusion enabled with normal shader with environment map
10. Tesla Tower in shaded mode, normal shader with environment cubemap
11. Tesla Tower in shaded-with-edges mode, normal shader with environment cubemap

For additional information on SPECviewperf12 and to learn about other benchmarking software offered by SPEC, please visit SPEC.org.

Figure 24: SolidWorks rally car benchmark
Figure 25: SolidWorks Tesla tower benchmark
Figure 26: SolidWorks vehicle benchmark
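Having chosen these five viewsets, each virtual workstation ran only that subset rather than the full suite. As an illustration only (the binary name and flags below are assumptions based on the SPECviewperf 12 documentation; verify the options shipped with your build before relying on them), a single viewset can be launched from the Windows command line in the benchmark's install directory:

RunViewperf.exe -viewset catia -nogui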

Pure Storage FlashArray Configuration

The FlashArray//m20 required no special tuning or configuration changes from its defaults. The array's twenty drive bays are fully populated with two 256 GB SSDs each, plus two NVRAM devices, for 10 TB of raw capacity in total. The UCS hosts are redundantly connected to the controllers over Fibre Channel, with two connections to each controller from the two HBAs in each host, for a total of eight logical paths. A host group was configured containing all of the ESXi hosts, and a private boot-from-SAN volume was created for each host. One 10 TB LUN was shared across the entire host group for hosting the virtual workstations (a hedged CLI sketch of this provisioning appears at the end of the Test Setup lists below).

Solution Validation

Deploying and scaling virtualized workstations with acceptable performance requires a proper hardware and software design, a good test plan, and clear success criteria. This section describes the test infrastructure, hardware configuration, and infrastructure VM setup we had in place for this reference architecture.

Test Setup

The VMware Horizon 7 servers and other infrastructure components were placed on a dedicated infrastructure cluster, completely separate from the virtual workstations other than running under the same vCenter instance, so that we could focus specifically on graphics-rich application performance. In addition, the physical engineering workstation used for baseline scoring was kept separate from our test infrastructure. The infrastructure cluster included:

8 dedicated infrastructure servers in an HA-enabled cluster, used to host all of the infrastructure virtual machines:
- Active Directory VM, DNS VM, and DHCP VM
- 2 VMware Horizon Connection Servers
- 1 VMware Horizon Composer Server
- 1 VMware vSphere vCenter Server
- 1 Microsoft SQL Server for vCenter
- 1 Login VSI Management Console
- 10 Login VSI Launcher VMs

One 5.5 TB (raw) FlashArray FA providing:
- 8 x 50 GB boot volumes for the 8 infrastructure Cisco UCS blade servers
- 1 x 20 TB volume for the virtual server infrastructure components listed above
- 1 x 20 TB volume for the Login VSI Launcher VMs
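The host-group and volume layout described in the FlashArray Configuration section above maps to only a handful of Purity CLI operations. The sketch below is illustrative rather than a transcript of our build: host names, volume names, and WWNs are invented, and exact option names should be checked against your Purity release.

    # Register each ESXi host by the WWNs of its two HBAs (WWNs invented)
    purehost create --wwnlist 21:00:00:24:ff:00:00:01,21:00:00:24:ff:00:00:02 esx-ws-01

    # Private 50 GB boot-from-SAN volume per host
    purevol create --size 50G esx-ws-01-boot
    purehost connect --vol esx-ws-01-boot esx-ws-01

    # Host group spanning the four workstation hosts, plus the shared
    # 10 TB workstation volume connected to the whole group
    purehgroup create --hostlist esx-ws-01,esx-ws-02,esx-ws-03,esx-ws-04 ws-cluster
    purevol create --size 10T vdi-workstations
    purehgroup connect --vol vdi-workstations ws-cluster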

The vGPU virtual workstation test cluster included:

One 10 TB Pure Storage FlashArray//m20 in HA configuration, including two controllers and two 5 TB data packs for 10 TB raw:
- 1 x 10 TB volume provisioned on the Pure Storage FlashArray for the VDI workstations
- 4 x 50 GB boot volumes for the four Cisco UCS rack servers

We used 4 Cisco UCS C240-M4 series rack servers, each based on dual-socket Intel 2.3 GHz processors with 256 GB of memory and running ESXi 6.0 as the hypervisor hosting the desktops. Each server had 2 nvidia GRID K2 cards installed.

Test Plan and Success Criteria

The test procedure was broken up into the following segments:

1. Baseline scoring on the physical engineering workstation was performed first using the SPECviewperf12 benchmark. The baseline tests were run 3 times (with a system reboot between each run) and the SPECviewperf12 composite scores for our CAD applications of interest were averaged. The results presented are the average scores from those three runs.

2. Starting with our virtualized workstations in the configuration shown in the Windows 7 Desktop for Virtualized Workstation Configuration section, we repeated the same SPECviewperf12 test 3 times and averaged the composite scores across all virtualized workstations in the simulation. After this initial run, the following parameters were varied in order to see how each individual CAD application's performance was impacted (a hedged configuration sketch follows this list):
   a. Amount of RAM (16 GB vs. 24 GB)
   b. vGPU profile (direct Pass-Thru vs. 280q vs. 260q)
   c. VMware Horizon remote display protocol (PCoIP vs. VMware Blast Extreme)
   d. Frame Rate Limiter (FRL) enabled vs. disabled. By default this option is enabled and limits the frame rate of each shared-graphics user in order to maintain a consistent user experience across a GRID K2 card. However, since our benchmark uses frames per second to tabulate its scores, we elected to run with FRL disabled, as the physical workstation had no such restriction in place. Further instructions on how to disable this setting can be found in the Windows 7 x64 Virtual Workstation Configuration section of this document.
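Both the vGPU profile and the FRL setting are applied through the virtual machine's advanced configuration. The following is a minimal sketch of the relevant .vmx entries, assuming the GRID vGPU plumbing of vSphere 6.0 as described in nvidia's GRID documentation of the era; verify the exact key names against the deployment guide for your driver release.

    # Shared PCI device backed by a vGPU profile (two 280q users per K2 card)
    pciPassthru0.present = "TRUE"
    pciPassthru0.vgpu = "grid_k280q"

    # Benchmarking only: disable the Frame Rate Limiter for this VM;
    # nvidia recommends leaving FRL enabled in production deployments
    pciPassthru0.cfg.frame_rate_limiter = "0"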

We will gather and display relevant performance metrics from the ESXi hosts (host CPU, memory, and GRID K2 card utilization) throughout each run to ensure that no resource saturation occurs.

The success criteria are as follows:

1. We will provide recommended virtual workstation hardware configurations on a per-CAD-application basis based upon our composite scores. In addition, the number of virtualized workstations per C240-M4 server will also be shared. Virtual workstation CAD application composite scores must be within 15% of, or exceed, our physical baseline scores in order to be acceptable and recommended.

2. The backend Pure Storage FlashArray must keep up with the I/O demand, delivering below 1 millisecond latency for 100% of the simulation run.

At each point during our virtualized workstation tests we captured the ESXi server CPU/memory/disk utilization along with the Pure Storage FlashArray utilization data. SPECviewperf12 outputs composite scores for each test run, which determined whether our environment was properly configured and whether we were seeing reasonable CAD application performance.

Scalability Results

This section highlights the test results for the scalability testing, starting with the physical workstation baseline used for comparison purposes in the subsequent virtualized workstation testing. The results are broken up into the following sections:

1. Initial baseline performance metrics for the physical engineering workstation:
   I. The SPECviewperf12 tests will be run 3 separate times (with a reboot between each run) and the composite scores will be averaged in order to provide our baseline scoring parameters and a point of comparison.

2. Virtualized workstation testing will then be performed, with important hardware components (e.g. amount of RAM and vGPU profile) varied, and the results shown and discussed.
   I. SPECviewperf12 performance testing will be run on our GRID-enabled VMware Horizon cluster with the maximum number of VMs possible per GRID card based upon the vGPU profile in use. The test will be repeated 3 times (with a reboot between each run) in order to confirm consistency of results. SPECviewperf12 composite scores from all VMs running the workload will be tabulated and averaged. By launching SPECviewperf from the command line, we were able to randomize the individual CAD application launch order to ensure minimal common operations occurring at once (a hedged launch sketch follows this list).
   From a few representative runs we will show (we found that ESXi memory, CPU, and storage utilization did not vary much and never approached saturation in any testing):
   II. Charts from the ESXi cluster showing CPU utilization, overall memory utilization (including any ballooning), and nvidia GRID K2 card utilization.
   III. The Pure Storage dashboard showing array performance during the entire simulation run, including latency, IOPS, bandwidth, storage space used, and data reduction.
   From all runs we will show:
   IV. SPECviewperf composite scores for each CAD application, tabulated and shown relative to our baseline performance scores.

3. Recommended virtual workstation HW configuration for each CAD application:
   I. Taking our results from the previous sections, we provide recommended HW configurations for the virtual workstations on a per-CAD-application basis.
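As noted in item 2 above, we drove SPECviewperf12 from the command line so that each virtual workstation could run the viewsets in a different order. A minimal sketch of such a launch script is shown below; the flag names follow the SPECviewperf 12 documentation, while the install path and viewset order are illustrative, so verify both against your own install.

    :: Launch the five viewsets of interest back to back (order shuffled per VM)
    cd C:\SPEC\SPECgpc\SPECviewperf12
    RunViewperf.exe -viewset snx-02 -nogui
    RunViewperf.exe -viewset creo-01 -nogui
    RunViewperf.exe -viewset sw-03 -nogui
    RunViewperf.exe -viewset showcase-01 -nogui
    RunViewperf.exe -viewset catia-04 -nogui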

Test Results: Baseline Physical Workstation

Our physical Windows 7 64-bit workstation was used to provide a baseline point of reference for virtualized workstation testing with the SPECviewperf12 OpenGL and DirectX applications. SPECviewperf12 was run three times (with a reboot between each run) and the composite scores below were averaged for our CAD applications of interest. The results show generally good performance for all CAD applications. Further comparisons with both physical and virtual hardware configurations from various vendors can be made at the SPEC.org results website.

SPECviewperf12 captures and reports the configuration of the system being tested. Our physical workstation configuration can be seen in Figure 27.

Figure 27: Baseline physical workstation configuration (specific vendor information redacted)

Average baseline scores from our 3 SPECviewperf12 runs on the physical workstation can be seen in Figure 28.

Figure 28: SPECviewperf12 composite score results for the physical workstation

Test Results: Baseline Virtual Workstation

With our physical workstation baseline scores established, we moved on to testing 16 virtualized workstations running the SPECviewperf12 benchmark in parallel on our C240-M4 rack servers, in as similar a configuration to the physical workstation as possible. The SPECviewperf12 runs were randomized by launching the various CAD applications in a different order on each virtualized workstation, and the entire simulation across all desktops completed within approximately 30 minutes. Not surprisingly, we found that some CAD tools performed better than others in this initial setup, as each CAD application functions differently and accordingly puts a higher emphasis on certain hardware components. For this initial run we were limited to 16 VMs on our test infrastructure, as the 280q vGPU profile allows a maximum of 2 virtual workstations per GRID K2 card and we had a total of 8 cards installed across our 4 C240-M4 servers.

First, SPECviewperf outputs a configuration listing of the virtualized platform, which can be seen in Figure 29.

Figure 29: SPECviewperf12 hardware configuration for the baseline virtual workstation

In the chart below we can see an initial comparison of each CAD tool on our physical workstation vs. the average of our 16 virtualized workstations over 3 runs. In most cases initial application performance is near equivalent; only the SolidWorks benchmark shows our physical workstation enjoying a large advantage over the virtualized workstation. Meanwhile, the virtualized workstation performed significantly better than its physical counterpart in the Autodesk Showcase benchmark.

(Chart: composite scores for the Catia-04, Creo-01, Showcase-01, SNX-02, and SW-03 SPECviewperf12 benchmarks, baseline physical vs. baseline virtual workstation)

Figure 30: Comparison of SPECviewperf12 scoring for the physical workstation vs. the virtual workstation using the 280q vGPU profile

The following view of the Pure Storage GUI, taken during the simulation run, confirms that the array maintained sub-millisecond latency throughout the simulation. Even though we were only running 16 virtual workstations, we were driving around 10K IOPS and close to 400 MB/s of sustained bandwidth (approximately 625 IOPS and 25 MB/s of bandwidth per virtual workstation), which highlights the highly storage-intensive nature of these applications.

Figure 31: Pure Storage GUI showing storage performance of 16 virtual workstations during SPECviewperf12 testing
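The same latency, IOPS, and bandwidth series shown in the GUI can also be captured from the array itself for offline analysis. A hedged sketch follows: the monitor command is part of Purity's standard CLI, but option availability and the REST endpoint version vary by release, and the array name is ours.

    # Stream live performance counters from the array during a run
    ssh pureuser@flasharray-m20 purearray monitor

    # Equivalent REST call (API version illustrative); the response includes
    # usec_per_read_op, usec_per_write_op, IOPS, and bandwidth counters
    # GET https://flasharray-m20/api/1.6/array?action=monitor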

Test Results: Baseline Virtual Workstation with Frame Rate Limiter (FRL) Disabled

As we discussed earlier, Frame Rate Limiter (FRL) is a technology nvidia introduced to keep a consistent graphics experience for all users sharing the same GRID GPU. This setting limits the frames per second (FPS) of each vGPU profile on a per-user basis so as to maintain a consistent user experience for all. As the SPECviewperf12 benchmark uses FPS as the key factor in determining the composite score, we elected to disable this setting for the remainder of our tests, since it would otherwise artificially limit our SPECviewperf12 scoring. The downside is that we did encounter more variability in our individual virtual workstation composite scores with the setting disabled. In the end, customers will need to experiment with their own applications and datasets in order to determine the optimal configuration for their own unique environment, but it is worth noting that nvidia does recommend keeping the FRL setting enabled in production environments.

We first repeated our previous test of the baseline virtual workstation configuration with the FRL setting disabled in order to determine how much improvement we would experience. As we can see from the chart below, the results improved dramatically, and in every instance were near to, or exceeded, the performance of our baseline physical workstation once FRL was disabled.

Figure 32: SPECviewperf12 composite score comparison between the physical workstation, the virtual workstation with the 280q vGPU profile, and the virtual workstation with the 280q vGPU profile with FRL disabled

Storage performance remained more or less equivalent to the previous run once FRL was disabled.

Figure 33: Pure Storage GUI showing performance during the 16 virtual workstation SPECviewperf12 simulation

Finally, given that we were running only 4 virtualized workstations per ESXi host, CPU utilization was correspondingly minimal, with a maximum of only 27% during the test. We estimate that approximately 50 additional standard Knowledge Worker desktops per UCS C240-M4 host (200 total) could also be hosted on this infrastructure in this configuration.

Figure 34: ESXi CPU performance from the 4-node C240-M4 rack server cluster

Figure 35: ESXi memory performance from the 4-node C240-M4 rack server cluster showing no ballooning or swapping
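The host-side counters behind the charts above were gathered from each ESXi host. A minimal sketch of how they can be sampled is shown below; nvidia-smi is installed with the GRID vGPU manager VIB, and the output path is illustrative.

    # GPU utilization for both GPUs on each GRID K2 card, sampled every 5 seconds
    nvidia-smi -l 5

    # CPU and memory counters in esxtop batch mode (5-second samples,
    # 360 iterations = 30 minutes) for later charting
    esxtop -b -d 5 -n 360 > /vmfs/volumes/datastore1/esxtop-run1.csv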

Test Results: Virtual Workstation Hardware Variations

Since virtual workstations provide much more flexibility than their physical counterparts in adding and removing hardware components, we next changed our virtualized workstation hardware configuration in several ways and repeated the SPECviewperf12 testing to see how each CAD tool would react to the addition or subtraction of a given resource. This section includes the results of those variations.

The vGPU profile selected was the factor that determined the size of the virtual workstation pool we could run. In addition, we wanted to compare the GRID Pass-Thru option, which allows the virtual workstation to connect directly to the graphics card without any interaction or overhead from the hypervisor. We also varied the vGPU profile, the amount of RAM, and the VMware Horizon remote protocol. All of these additional tests were performed with the FRL setting disabled. Table 6 below summarizes the tests that were performed and the hardware configuration details for each scenario. Earlier tests showed no significant performance impact from increasing the number of vCPUs beyond 10, so that parameter was not changed in this testing.

Table 6: Virtual Workstation test configurations

Each of the above test groups will be plotted alongside our baseline physical and virtual workstation (FRL off) configurations in order to show what performance differences are found, if any. Using these results, we will later provide our recommended minimum virtual workstation configuration for each CAD application.

Test Results: Virtual Workstation with VMware Horizon Blast

For our next round of tests we wanted to determine whether there was any performance advantage in using VMware's new remote protocol, Blast Extreme, released in Horizon 7. Blast Extreme grew out of the HTML5-based Blast protocol that gained popularity via the Horizon HTML client, and that functionality has now been extended to the native Horizon Client as well. We found that while the new protocol performed almost identically to PCoIP, there was not a sufficient performance difference on any of our CAD applications to warrant recommending it over PCoIP, though it will not negatively impact user experience either.

Figure 36: SPECviewperf12 composite score results for the virtual workstation using the 280q vGPU profile and the VMware Blast Extreme protocol (gray bar)

From monitoring our ESXi cluster resource utilization, we did find a slight increase in CPU consumption when using Horizon Blast, peaking at 33.71% during the simulation.

Figure 37: ESXi CPU utilization for the virtual workstation using the 280q vGPU profile and the VMware Blast Extreme protocol

ESXi memory utilization remained identical to our previous run and showed no ballooning or swapping at any point during the simulation.

Figure 38: ESXi memory utilization for the virtual workstation using the 280q vGPU profile and the VMware Blast Extreme protocol

Test Results: Increase Virtual Workstation RAM to 24GB

We next tried increasing the amount of RAM in our pool to 24 GB. After updating the template image, we recomposed the pool of 16 virtual workstations to replicate out the increased available RAM. We also reverted to the PCoIP protocol in order to minimize the number of variables in our simulation. As we can see from the results in Figure 39, SPECviewperf12 performance neither improved nor deteriorated significantly from this change in any of our target CAD applications.

Figure 39: SPECviewperf12 benchmark comparison of the virtual workstation using 24 GB of RAM (gray bar)
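The recompose described above can be performed from the Horizon Administrator console or scripted. The sketch below uses the View PowerCLI cmdlets available on a Connection Server in this era; cmdlet and parameter names should be verified against your Horizon release, and the pool, parent VM, and snapshot names are ours, not from this build.

    # Recompose every machine in the pool against the 24 GB RAM snapshot
    Get-DesktopVM -pool_id ws-pool | Send-LinkedCloneRecompose `
        -schedule ((Get-Date).AddMinutes(5)) `
        -parentVMPath "/Datacenter/vm/Win7-WS-Gold" `
        -parentSnapshotPath "/24GB-RAM" `
        -forceLogoff $true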

Since there was no noticeable advantage in assigning additional RAM to the virtualized workstations, we reverted to 16 GB of RAM for the next few tests in order to continue minimizing variables relative to our physical workstation baseline.

Test Results: Virtual Workstation with nvidia GRID Pass-Thru

Next, we ran simulations using the nvidia GRID Pass-Thru feature rather than a vGPU profile. The main difference is that the Pass-Thru setting bypasses the vGPU manager agent we installed on the hypervisor, and the VMs connect directly to the graphics card. You can host an equivalent number of virtual workstations using Pass-Thru as with the 280q vGPU profile (one per GPU, and each GRID K2 card contains two GPUs). As we can see in the chart below, performance of the SPECviewperf12 applications was roughly the same as our baseline virtualized workstation configuration, and notably better in most instances than the physical workstation.

Figure 40: SPECviewperf12 benchmark comparison of the virtual workstation using nvidia GRID Pass-Thru (gray bar)
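Switching a card from vGPU to Pass-Thru is done per device from the host's PCI device passthrough settings in the vSphere Web Client, followed by a host reboot. A small hedged sketch of the host-side checks we would expect beforehand is shown below; command availability can vary by ESXi build.

    # Confirm both GPUs on each GRID K2 card are visible to the host
    esxcli hardware pci list | grep -i -A 2 nvidia

    # Confirm the nvidia vGPU manager module is no longer claiming the device
    # once the card has been marked for passthrough and the host rebooted
    vmkload_mod -l | grep -i nvidia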

Test Results: 32 Virtual Workstation Pool using the GRID 260q Profile

We subsequently wanted to see how a larger pool of virtualized workstations would perform against this benchmark, since ultimately most, if not all, VDI administrators will want to maximize user density per GRID K2 card. We changed the vGPU profile to 260q, which enables 4 users to share a GRID K2 card simultaneously, allowing us to increase our test pool size to 32 workstations. The next results chart indicates that the increased user density on the GRID cards resulted in a modest performance impact for the bulk of the CAD applications benchmarked. Encouragingly, however, in four out of five CAD tools our performance remained very close to, if not better than, that of the physical workstation.

Figure 41: SPECviewperf12 composite score comparison for 32 virtual workstations using the 260q vGPU profile (gray bar)

As we doubled the number of virtual workstations in the cluster, the CPU and memory utilization not surprisingly also approximately doubled during our simulation, though we still observed significant headroom in both cases.

Figure 42: ESXi CPU utilization for 32 virtual workstations using the 260q vGPU profile

Figure 43: ESXi memory utilization for 32 virtual workstations using the 260q vGPU profile

The Pure Storage GUI also showed significantly higher bandwidth and IOPS during the run, but the array maintained sub-millisecond latency throughout the test, helping to ensure a responsive CAD tool experience.

Figure 44: Pure Storage GUI showing storage performance for 32 virtual workstations using the 260q vGPU profile

Test Results: Virtual Workstation 260q vGPU Profile with 24GB of RAM

Finally, we elected to see whether increasing the amount of RAM for the 32 virtual workstations would improve benchmark performance. The next chart shows that no noteworthy improvements were found with the additional system RAM for this particular simulation.

Figure 45: SPECviewperf12 composite score comparison for 32 virtual workstations using the 260q vGPU profile with 24 GB RAM (red bar)

Recommended Virtual Hardware Configuration per CAD Tool

Using the previous results as a guide, we have compiled what we believe to be the minimum virtual workstation configuration requirements, on a per-CAD-tool basis, that will meet or exceed the baseline physical workstation counterpart. Somewhat surprisingly, the vGPU profile was the most important factor in shaping CAD application performance, and the configurations below reflect that fact. While we acknowledge that these results are not fully representative of production customer engineering environments, we do feel they provide a strong proof point that virtual workstations with nvidia GRID cards can now be regarded as superior to their physical counterparts in almost every aspect, while opening up greater worker flexibility, collaboration, and security.

Table 7: Recommended minimum virtual workstation HW configuration for each CAD tool

Summary of Overall Findings

We were able to meet or exceed physical workstation performance with our virtualized workstations using Pure Storage and nvidia GRID in all test cases. Despite driving hundreds of IOPS per virtual workstation, the Pure Storage array delivered sub-millisecond latency throughout all simulations, thereby providing an outstanding end-user experience for CAD engineers and designers. As expected, the vGPU profile determined the number of virtual workstations we were able to run in parallel, and it also proved to be the most important setting influencing CAD tool performance. Changing other virtual workstation hardware settings, such as the amount of RAM or the remote graphics protocol, did not significantly impact our results.

Conclusions

As with most things in the engineering space (if not in IT overall), the correct answer for a customer's unique situation is: it depends. That said, the results we have shown here using the industry-standard graphical benchmarking tool clearly make the case that virtualized workstations, when configured correctly, can not only meet but oftentimes exceed the performance of their physical counterparts. Also unsurprising is that the optimal hardware configuration for your virtualized workstation environment will vary on an application-by-application basis, since each application uses the available resources differently and some require more than others. It is important to note that an all-flash array such as the Pure Storage FlashArray is a key consideration for delivering the best possible graphics-intensive application performance to end-users: our virtual workstations drove up to several hundred IOPS per desktop, a level of performance an all-flash solution is well suited to provide.

Co-locating the compute, graphics, memory, and data in a datacenter is a surefire way to improve performance, resiliency, and, perhaps most importantly, data integrity and security. It seems like every other day there is another news story about a lost hard drive resulting in a data breach, causing a loss of proprietary information as well as embarrassment to the offending company. Pure Storage uses AES-256 encryption at rest, which provides robust protection against this kind of breach, especially when coupled with VMware Horizon's own protocols that encrypt data transmission across the wire. No longer is a lost laptop or local drive failure likely to result in data loss, as the data is safely stored in the datacenter. Furthermore, the benefit of centrally managing your engineering tools from a single master template VM, versus individually touching each desktop for updates and fixes, cannot be overstated.

Ultimately, we highly recommend testing your own unique datasets and applications internally at production scale in order to arrive at the optimal configuration for your end-users. The results of our testing will give you a big head start on which optimizations to use. Pure Storage is very POC-friendly, and setup of the array is typically completed in under an hour, allowing meaningful benchmark testing to get underway in short order.

About the Author

Kyle Grossmiller is a VDI Solutions Architect at Pure Storage, where he focuses on helping customers bring their VDI projects to the next level of success using Pure Storage's all-flash arrays. He provides technical expertise to help solve pressing client challenges and produces technical collateral containing insights and guidance on how Pure Storage delivers the best possible results for VDI. Prior to joining Pure, Kyle was at Lockheed Martin Space Systems and Enterprise Business Services for over 12 years, where he worked in dual IT roles. In that capacity, he supported their engineering user base through multiple hardware platform lifecycles and major CAD software upgrades, and served as the technical lead for an internal private-cloud VDI project from planning to POC to production. From these experiences he has a unique and deep perspective on the ever-changing nature of IT. Kyle resides in San Francisco, CA and holds a Bachelor of Science degree in Electrical Engineering from Lafayette College in Easton, PA.

Blog:


More information

Overview. Cisco UCS Manager User Documentation

Overview. Cisco UCS Manager User Documentation Cisco UCS Manager User Documentation, page 1 Infrastructure Management Guide, page 2 Cisco Unified Computing System, page 3 Cisco UCS Building Blocks and Connectivity, page 5 Cisco UCS Manager User Documentation

More information

SAN Virtuosity Fibre Channel over Ethernet

SAN Virtuosity Fibre Channel over Ethernet SAN VIRTUOSITY Series WHITE PAPER SAN Virtuosity Fibre Channel over Ethernet Subscribe to the SAN Virtuosity Series at www.sanvirtuosity.com Table of Contents Introduction...1 VMware and the Next Generation

More information