
NexentaVSA for View Hardware Configuration Reference
Version 1.0
5000-nv4v-v0.0-000003-A

Copyright 2012 Nexenta Systems, ALL RIGHTS RESERVED

Notice: No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose, without the express written permission of Nexenta Systems (hereinafter referred to as "Nexenta"). Nexenta reserves the right to make changes to this document at any time without notice and assumes no responsibility for its use. Nexenta products and services can be ordered only under the terms and conditions of Nexenta Systems' applicable agreements. Not all of the features described in this document may be currently available. Refer to the latest product announcement or contact your local Nexenta Systems sales office for information on feature and product availability. This document includes the latest information available at the time of publication. Nexenta is a registered trademark of Nexenta Systems in the United States and other countries. All other trademarks, service marks, and company names in this document are properties of their respective owners.

Contents

1 NexentaVSA for View
   About NexentaVSA for View
   NexentaVSA for View Components
   NexentaVSA for View Management Appliance
   Advantages of Using NexentaStor VSA
2 Deployment Scenarios
   About Deployment Scenarios
   Floating Desktops
   Dedicated Desktops
3 System Requirements
   VMware VDI Prerequisites
   Server Agent, Desktop Agent, and Management Appliance Requirements
   NexentaVSA for View ESXi Host Requirements
      NexentaStor VSA Requirements
      DVM Requirements
4 DVM Deployment Recommendations
   General DVM Deployment Recommendations
   Recommendations for Allotting Physical Resources for Floating Desktops
   Worksheet for Estimating Resources for Normal Floating Users
   Example of Worksheet for Estimating Resources for Normal Floating Users
5 Example Configurations and Performance
   Sizing the ESXi Host for 100 Normal Floating Desktops
   Example of Physical Server with 100 Floating Normal Desktops
   Performance Results for Example Configuration

1 NexentaVSA for View

This section includes the following topics:

   About NexentaVSA for View
   NexentaVSA for View Components
   NexentaVSA for View Management Appliance
   Advantages of Using NexentaStor VSA

About NexentaVSA for View

NexentaVSA for View is a new approach to simplifying and automating virtual desktop infrastructure (VDI) deployment, management, and calibration using virtual storage and VMware View 5.0. NexentaVSA for View integrates with your standard network, VMware vSphere infrastructure, and VMware View to deploy and manage your desktop virtual machines (DVMs). It also provides performance analytics that you can use to improve your VDI.

NexentaVSA for View Components

NexentaVSA for View is a client/server environment. A Server Agent is installed on the View Connection Server, with a Desktop Agent in each desktop template on each ESXi server that is dedicated to NexentaVSA for View. A Management Appliance provides a GUI and communication with the VMware VDI environment.

NexentaVSA for View consists of the following components:

NexentaVSA for View Management Appliance: provides the NexentaVSA for View management functions. The Management Appliance is installed from an included template and can be located on any ESXi host in the network.

NexentaStor VSA: a virtual storage appliance (VSA) that provides storage management for the NexentaVSA for View DVMs through a NexentaVSA for View vSphere plug-in, which communicates with VMware View and VMware vCenter to perform the actual DVM provisioning and management. Administrators interact with NexentaStor VSA using wizards. NexentaStor VSA is installed from an included template on each dedicated NexentaVSA for View ESXi host.

NexentaVSA for View Server Agent: handles all communication between NexentaVSA for View and the VMware components. The Server Agent is installed on the View Connection Server.

NexentaVSA for View Desktop Agent: provides communication between NexentaVSA for View and the DVMs. The Desktop Agent is installed in the desktop template, which is installed on each NexentaVSA for View ESXi host.

Note: The bundled NexentaStor VSA package can only be installed as internal storage on the same ESXi host that deploys the DVMs. It cannot be installed as external storage.

Figure 1-1 shows the NexentaVSA for View components in a typical VDI.

Figure 1-1: NexentaVSA for View using NexentaStor VSA

NexentaVSA for View Management Appliance

The NexentaVSA for View Management Appliance has a web interface with management wizards that allow administrators to simplify DVM deployments and optimize VDI workloads. It uses standard inter-process communication (IPC) mechanisms to communicate with the VMware VDI environment. Figure 1-2 illustrates the components of the Management Appliance.

Figure 1-2: NexentaVSA for View Management Appliance Components

The Deployment Wizard reduces approximately 150 configuration steps to four. It creates NFS and/or ZFS storage from local ESXi storage, including all clustered ESXi servers, and then creates DVMs based on this automatically configured storage.

The Configuration Wizard helps administrators tune VDI deployments through rapid reconfiguration, during which NexentaVSA for View automatically rebalances DVM and associated storage resources. Based on performance data, calibration settings define the expected ranges for DVMs. The manager identifies the number of DVMs created and added to the pool with each successful iteration, and the collected data iteratively improves the threshold knowledge.

The Benchmark tools provide unprecedented performance testing through NexentaStor VSA, allowing administrators to continually monitor the performance of the deployed pool.

The Calibration capabilities allow administrators to use the benchmark results to fine-tune resource allocations so that performance goals are continually met.

Advantages of Using NexentaStor VSA

There are significant advantages to deploying NexentaVSA for View. These include:

Improved scalability. NexentaVSA for View scales well: as more hypervisors are added to the infrastructure to support more DVMs, more NexentaVSA for View ESXi hosts can be added, each with its own NexentaStor VSA.

Reduced network load. NexentaStor VSA delivers I/O directly to the hypervisor from within the hypervisor, decreasing network traffic and making I/O performance more consistent. Because no additional network ports are needed for the hypervisor to communicate with storage, fewer slots are needed on the server.

Storage hardware independence. Because NexentaVSA for View runs on any supported VMware-compatible hardware, customers can deploy a cost-effective NexentaStor VSA solution without hardware support concerns. Integrating SSD components, either locally or in an attached NFS appliance, enables caching, which optimizes VDI performance.

Use of ZFS technology. Under the direction of NexentaVSA for View, NexentaStor VSA abstracts and pools VMFS volumes as a VSA using ZFS technology. Because VMFS knows which devices are HDDs and which are SSDs, ZFS can construct a hybrid storage pool, applying fast SSDs as caching devices. The ZFS pool is then exported over NFS to the ESXi hypervisor, allowing the DVMs to access the underlying VSA storage.

Automatic reconfiguration and resource balancing. NexentaVSA for View optimizes the compute, memory, and storage parameters and directs VMware vCenter to create DVMs based on these parameters, characterizing the end-to-end performance of each configuration and reconfiguring and automatically rebalancing resources as necessary.

Benchmarking and tuning. NexentaVSA for View offers significant advantages in its ability to benchmark and tune VDI deployments and optimize performance for VDI workloads.

2 Deployment Scenarios

This section includes the following topics:

   About Deployment Scenarios
   Floating Desktops
   Dedicated Desktops

About Deployment Scenarios

NexentaVSA for View is extremely flexible and can be implemented in small to large VMware deployments using floating DVMs, dedicated DVMs, or a combination of both. Multiple storage volumes can be exported to multiple VMware hypervisors; in this way, storage can be shared across hypervisors to enhance availability. The resources required to deploy DVMs vary depending on the type (floating or dedicated) and number of DVMs being deployed. Each NexentaVSA for View cluster in the deployment can have a floating user pool or a dedicated user pool. If a NexentaVSA for View cluster contains multiple NexentaVSA for View ESXi hosts, the pool can be shared across the hosts in that cluster.

Note: NexentaVSA for View 1.0 supports one DVM pool per cluster. Each pool must be for floating users or for dedicated users; the two user types cannot be combined in the same pool.

Floating Desktops

Floating desktops, also known as stateless desktops, are used when the capacity requirement is small and the DVM state is not maintained after a user logs out. A floating desktop environment is usually deployed from a single ESXi host and often uses linked clones. The DVMs are cookie-cutter images that contain no personal settings or user-specific data. Floating desktops are built on an as-needed basis, based on the attributes of the user group. Examples are kiosks, classrooms, and office DVMs. This scenario is ideal for companies interested in quickly implementing a VDI initiative or populating satellite offices.

Dedicated Desktops

Dedicated desktops are persistent, providing each user with his or her own personalized DVM. Any changes made by a user are stored on a network file share or VMware View persistent disk. When users log in again, they are presented with their own unique DVMs. Dedicated desktops are typically assigned to users who need to make changes to their DVM images, such as installing additional applications, customizing settings, and saving data within the desktop image itself rather than to a persistent NFS share or VMware View disk.

3 System Requirements

This section includes the following topics:

   VMware VDI Prerequisites
   Server Agent, Desktop Agent, and Management Appliance Requirements
   NexentaVSA for View ESXi Host Requirements

VMware VDI Prerequisites

Before NexentaVSA for View can be installed, the customer must have a fully installed, configured, and functioning VMware VDI environment that includes the following VMware components, along with all required supporting hardware, software, and networking elements:

   VMware vSphere 5 with VMware vCenter Server 5
   VMware View Manager 5 with Composer

The VMware documentation contains a full description of the required hardware, software, and networking elements for the VDI environment.

Server Agent, Desktop Agent, and Management Appliance Requirements

The NexentaVSA for View Server Agent, Desktop Agent, and Management Appliance are installed as shown below.

Table 3-1: Server Agent, Desktop Agent, and Management Appliance Requirements

NexentaVSA for View Server Agent
   Installation location: View Connection Server
   Additional requirements: VMware View PowerCLI on the View Connection Server

NexentaVSA for View Desktop Agent
   Installation location: Desktop template
   Additional requirements: Microsoft .NET Framework 3.5 or later in the desktop template

NexentaVSA for View Management Appliance
   Installation location: Any ESXi host in the network
   Additional requirements: None

NexentaVSA for View ESXi Host Requirements

NexentaVSA for View requires at least one dedicated physical machine hosting an ESXi server in the VMware VDI environment, called the NexentaVSA for View ESXi host. This physical machine cannot be used to host additional ESXi servers or any other software or network components required by the VMware VDI environment. NexentaVSA for View ESXi hosts cannot be in a cluster with non-NexentaVSA for View ESXi servers. If there is only one NexentaVSA for View ESXi host, it must be in a cluster by itself.

All desktops managed by NexentaVSA for View are deployed on a NexentaVSA for View ESXi host. Each NexentaVSA for View ESXi host contains NexentaStor VSA and the desktop template, and must meet the NexentaStor VSA Requirements and the DVM Requirements.

NexentaStor VSA Requirements

Table 3-2 lists the physical machine resources necessary to run NexentaStor VSA on the ESXi server.

Table 3-2: Physical Machine Requirements to Run NexentaStor VSA

HBA (host bus adapters)
   Requirement: two 1Gb controllers (minimum); one 10Gb Ethernet controller (recommended)
   Notes: The HBAs must support virtualization.

CPU
   Requirement: 4 physical cores minimum per 100 normal floating desktops; 64-bit x86 CPUs, 2.13 GHz or faster, Intel Xeon or AMD Barcelona families
   Notes: In addition to VMware ESXi host and DVM deployment requirements.

Memory
   Requirement: 4GB minimum
   Notes: In addition to VMware ESXi host and DVM deployment requirements.

DVM Requirements

Each deployed DVM requires a certain amount of memory and other resources. The total amount of DVM resources required for a deployment depends on the performance requirements, the number of DVMs, the number of NexentaVSA for View ESXi hosts, the type of deployment (floating or dedicated), and other factors.

Table 3-3 lists the minimum resources required for a single DVM on the NexentaVSA for View ESXi host.

Table 3-3: Virtual Machine Requirements for Each Deployed Desktop

Operating system
   Requirement: Microsoft Windows 7 license

NIC
   Requirement: one virtual NIC
   Notes: For example, a VMXNET3 network adapter.

HBA
   Requirement: LSI Logic SAS

CPU
   Requirement: one vCPU
   Notes: Can be accommodated using hyperthreading.

Memory
   Requirement: 800MB to 2GB
   Notes: Allocated, not physical.

Storage (HDD or SSD)
   Requirement: 16GB minimum

If you already have a list of the resources required for the NexentaVSA for View ESXi host, and the resources meet the minimum requirements shown above for each DVM, you can use your existing list. If you do not know the resources you need, see the DVM Deployment Recommendations section for guidelines on sizing the DVM resources, along with worksheets and examples.

4 DVM Deployment Recommendations

This section includes the following topics:

   General DVM Deployment Recommendations
   Recommendations for Allotting Physical Resources for Floating Desktops
   Worksheet for Estimating Resources for Normal Floating Users
   Example of Worksheet for Estimating Resources for Normal Floating Users

General DVM Deployment Recommendations

Each NexentaVSA for View ESXi host must have enough resources to deploy the required number of desktops on that host. The actual resource requirements depend on the type of DVM (floating or dedicated) and the expected level of use (normal user or power user). In addition, there are several default NexentaVSA for View settings and recommendations that can affect how you deploy NexentaVSA for View in a VDI environment. The following recommendations are used in the formulas and examples in this section; the hyperthreading and shared-pool adjustments are sketched in code after this list.

   The default NexentaStor VSA setting for floating desktops uses RAID10 (mirrored stripes). All formulas for estimating disk size in this document include the extra space required for RAID10.
   Use 15K RPM disks. A normal floating desktop requires 7-20 IOPS with an average read and write latency under 20 ms.
   Install NexentaStor VSA on SSD. Mirroring the cache is recommended.
   For physical resources, always round up the result to the next available commercial size.
   In general, increasing memory, disk, and cache size improves performance.
   When estimating physical resources, you can generally overbook the allocation based on the typical number of concurrent users. If you overbook, you can use fewer resources than calculated.
   When estimating core requirements for DVM usage, you can lower the calculated estimate by up to 50% if hyperthreading is enabled on the ESXi host.
   If you are using shared pools, divide the number of DVMs by the number of ESXi hosts sharing the pool to obtain the number of DVMs per host used in the calculations. For example, if two ESXi hosts share a pool of 100 DVMs, each ESXi host has 50 DVMs.
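The last two adjustments amount to simple arithmetic. The following is a minimal sketch in Python, not part of the product; the function names are illustrative.

```python
import math

def dvms_per_host(total_dvms: int, hosts_sharing_pool: int) -> int:
    """DVMs each ESXi host must support when a pool is shared across hosts."""
    return math.ceil(total_dvms / hosts_sharing_pool)

def cores_with_hyperthreading(cores: float, reduction: float = 0.5) -> int:
    """Lower a calculated core estimate when hyperthreading is enabled.
    The recommendation above allows a reduction of up to 50%."""
    if not 0 <= reduction <= 0.5:
        raise ValueError("reduction must be between 0 and 0.5")
    return math.ceil(cores * (1 - reduction))

print(dvms_per_host(100, 2))          # 50, matching the shared-pool example above
print(cores_with_hyperthreading(13))  # 7: 13 calculated cores with the full 50% reduction
```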

Recommendations for Allotting Physical Resources for Floating Desktops

Table 4-1 provides recommendations for allotting physical resources to normal floating desktops on an ESXi host, based on the DVM requirements shown in Table 3-3. These resources are in addition to the requirements for an ESXi host and for NexentaStor VSA.

Table 4-1: Recommendations for Physical Resources for Floating Normal Desktops

Physical CPUs on ESXi Host
   Recommendation: 8 DVMs per core (recommended); 14 DVMs per core (maximum)
   Notes: A normal floating DVM utilizes 15% to 25% of a physical CPU.

Physical Memory on ESXi Host
   Recommendation: #DVMs x [RAM per DVM] = total RAM to deploy DVMs on this ESXi host. Do not round up this result.
   Notes: 70%-80% of DVM memory is in physical memory. For DVMs requiring less than 1GB RAM, use 1GB; for DVMs requiring more than 1GB RAM, use 2GB.

Preliminary HDD Requirement on ESXi Host
   Recommendation: #DVMs x (template size + 4GB) x [performance multiplier] x 2 = preliminary HDD requirement to deploy DVMs on this ESXi host. Do not round up this result.
   Notes: The 4GB is for user information. The performance multiplier leaves room on the disk for the VMware overhead; the minimum recommended multiplier is 1.25, which gives 20% free space, and you can increase the multiplier to leave more free space for higher performance. Multiply by 2 because of RAID10.

Number of Physical HDDs
   Recommendation: #DVMs / [number of DVMs per HDD] = number of HDDs on this ESXi host. Round up the result to the next whole number.
   Notes: Allocate 10 (recommended) to 13 (maximum) DVMs per HDD. This provides a good level of IOPS for normal users on floating desktops. For example, if you have 100 DVMs and are allocating 13 DVMs per HDD, the result is 7.69; round this up to 8 HDDs on this host.

Size of Each Physical HDD
   Recommendation: [preliminary HDD requirement] / [number of HDDs] = preliminary size of each HDD on this ESXi host. Round up the result to the next commercially available HDD size.
   Notes: For example, if the preliminary HDD requirement is 3.8TB and you need 8 HDDs, the preliminary HDD size is 486.4GB; round this up to 600GB.

Total Physical HDD Requirement
   Recommendation: [rounded-up HDD size] x [rounded-up number of HDDs on this host] = total physical HDD requirement to deploy DVMs on this ESXi host.
   Notes: Using the examples above, the full HDD requirement to deploy DVMs on this host would be 8 HDDs at 600GB each.

Physical SSD Requirement on ESXi Host
   Recommendation: 4GB + [template size] + cache = recommended SSD size, where cache = 4GB + (#DVMs x 1250MB), or 2 x (4GB + (#DVMs x 1250MB)) if the cache is mirrored. Round up the result to the next commercially available SSD size.
   Notes: The 4GB is the installed size of NexentaStor VSA. The formula (#DVMs x 1250MB) is the size of the NexentaStor VSA memory store. If the cache is mirrored, double the cache (4GB plus the memory store) before adding it to the installed size of NexentaStor VSA and the template size.

Worksheet for Estimating Resources for Normal Floating Users

Table 4-2 is a worksheet to assist you in estimating the physical resources needed to deploy a given number of floating desktops for normal users on a NexentaVSA for View ESXi host. The worksheet is based on the recommendations listed in Table 4-1. The resources calculated here must be installed on the NexentaVSA for View ESXi host in addition to any other resources.

You need the following information for this worksheet:

   Number of DVMs on this host: _____
   Memory (RAM) per DVM (recommended 1GB or 2GB): _____ GB
   Desktop template size: _____ GB
   Performance multiplier (recommended 1.25): _____
   Number of DVMs per HDD (recommended 10, maximum 13): _____
   Does the cache on the SSD use mirroring? Yes / No

Table 4-2: Worksheet for Estimating Resources for Normal Floating Users

Physical Cores on this ESXi Host
   [#DVMs] _____ / 14 = _____ cores, rounded up to _____ cores minimum for DVMs
   [#DVMs] _____ / 8 = _____ cores, rounded up to _____ cores recommended for DVMs

Physical Memory on this ESXi Host
   [#DVMs] _____ x [RAM per DVM] _____ GB = _____ GB total RAM for DVMs

Physical HDDs on this ESXi Host
   Preliminary HDD requirement: [template size] _____ GB + 4GB = _____ GB x [#DVMs] _____ = _____ x [perf. multiplier] _____ = _____ GB preliminary HDD size for DVMs (A)
   Number of physical HDDs: [#DVMs] _____ / [DVMs per HDD] _____ = _____, rounded up to _____ HDDs needed for DVMs (B)
   Size of each physical HDD: (A) _____ GB / (B) _____ = _____ GB, rounded up to _____ GB per HDD for DVMs (C)
   Quantity and size of HDDs on this ESXi host: (B) _____ HDDs at (C) _____ GB each

Physical SSD Requirements on this ESXi Host
   [#DVMs] _____ x 1250MB = _____ MB / 1024 = _____ GB + 4GB = _____ GB cache (D)
   If the cache uses mirroring, multiply: (D) _____ GB x 2 = _____ GB (E)
   Size of SSDs on this ESXi host: [template size] _____ GB + 4GB + (D or E) _____ GB = _____ GB, rounded up to _____ GB SSD size for DVMs
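Because the worksheet arithmetic is mechanical, it can also be scripted. The following is a minimal sketch in Python of the Table 4-1 formulas; the function name and the commercial-size lists are illustrative assumptions, not values from this reference. Note that Table 4-1 doubles the preliminary HDD figure for RAID10, while the worksheet and the worked example in Table 4-3 omit that factor, so the doubling is exposed as a parameter.

```python
import math

# Illustrative commercial sizes (GB) used for rounding; not from this document.
HDD_SIZES = [300, 600, 900, 1200]
SSD_SIZES = [80, 160, 300, 400]

def round_up_to(value_gb: float, sizes: list) -> int:
    """Round up to the next commercially available size."""
    return next((s for s in sorted(sizes) if s >= value_gb),
                math.ceil(value_gb))  # fall back to the raw value if oversized

def size_floating_desktops(n_dvms, ram_per_dvm_gb, template_gb,
                           perf_multiplier=1.25, dvms_per_hdd=10,
                           mirror_cache=True, raid10_doubling=True):
    """Estimate per-host physical resources for normal floating desktops,
    following Table 4-1. Results are in addition to the ESXi host and
    NexentaStor VSA requirements."""
    # Physical CPUs: 8 DVMs per core recommended, 14 per core maximum.
    cores_min = math.ceil(n_dvms / 14)
    cores_rec = math.ceil(n_dvms / 8)
    # Physical memory: #DVMs x RAM per DVM; not rounded up.
    ram_gb = n_dvms * ram_per_dvm_gb
    # Preliminary HDD: #DVMs x (template + 4GB user data) x multiplier,
    # optionally doubled for RAID10 (see the note in the lead-in).
    prelim_gb = n_dvms * (template_gb + 4) * perf_multiplier
    if raid10_doubling:
        prelim_gb *= 2
    # Number of HDDs: #DVMs / DVMs-per-HDD, rounded up to a whole disk.
    n_hdds = math.ceil(n_dvms / dvms_per_hdd)
    # Size of each HDD: preliminary / count, rounded up to a commercial size.
    hdd_gb = round_up_to(prelim_gb / n_hdds, HDD_SIZES)
    # SSD: 4GB (NexentaStor VSA) + template + cache, where the cache is
    # 4GB + 1250MB per DVM (the VSA memory store), doubled if mirrored.
    cache_gb = 4 + n_dvms * 1250 / 1024
    if mirror_cache:
        cache_gb *= 2
    ssd_gb = round_up_to(4 + template_gb + cache_gb, SSD_SIZES)
    return {"cores": (cores_min, cores_rec), "ram_gb": ram_gb,
            "hdds": (n_hdds, hdd_gb), "ssd_gb": ssd_gb}
```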

Example of Worksheet for Estimating Resources for Normal Floating Users

Table 4-3 shows an example of worksheet calculations to determine the resources required for 100 normal floating desktops in a single desktop pool on one NexentaVSA for View ESXi host. The resources calculated here would be installed on the NexentaVSA for View ESXi host in addition to any other resources.

The following values were used for this worksheet:

   Number of DVMs on this host: 100
   Memory (RAM) per DVM (recommended 1GB or 2GB): 1GB
   Desktop template size: 30GB
   Performance multiplier (recommended 1.25): 1.25
   Number of DVMs per HDD (recommended 10, maximum 13): 10
   Does the cache on the SSD use mirroring? Yes

Table 4-3: Example of Worksheet for Estimating Resources for 100 Normal Floating Users

Physical Cores on this ESXi Host
   [#DVMs] 100 / 14 = 7.14 cores, rounded up to 8 cores minimum for DVMs
   [#DVMs] 100 / 8 = 12.5 cores, rounded up to 13 cores recommended for DVMs

Physical Memory on this ESXi Host
   [#DVMs] 100 x [RAM per DVM] 1GB = 100GB total RAM for DVMs

Physical HDDs on this ESXi Host
   Preliminary HDD requirement: [template size] 30GB + 4GB = 34GB x [#DVMs] 100 = 3400 x [perf. multiplier] 1.25 = 4250GB preliminary HDD size for DVMs (A)
   Number of physical HDDs: [#DVMs] 100 / [DVMs per HDD] 10 = 10, rounded up to 10 HDDs needed for DVMs (B)
   Size of each physical HDD: (A) 4250GB / (B) 10 = 425GB, rounded up to 600GB per HDD for DVMs (C)
   Quantity and size of HDDs on this ESXi host: (B) 10 HDDs at (C) 600GB each

Physical SSD Requirements on this ESXi Host
   [#DVMs] 100 x 1250MB = 125000MB / 1024 = 122.1GB + 4GB = 126.1GB cache (D)
   The cache uses mirroring, so multiply: (D) 126.1GB x 2 = 252.2GB (E)
   Size of SSDs on this ESXi host: [template size] 30GB + 4GB + (E) 252.2GB = 286.2GB, rounded up to 300GB SSD size for DVMs
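Feeding the worksheet values above into the sketch from the previous section reproduces the Table 4-3 results. The call is illustrative; raid10_doubling=False matches the worked example, which omits the RAID10 factor.

```python
result = size_floating_desktops(n_dvms=100, ram_per_dvm_gb=1, template_gb=30,
                                perf_multiplier=1.25, dvms_per_hdd=10,
                                mirror_cache=True, raid10_doubling=False)
print(result)
# {'cores': (8, 13), 'ram_gb': 100, 'hdds': (10, 600), 'ssd_gb': 300}
```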

5 Example Configurations and Performance

This section includes the following topics:

   Sizing the ESXi Host for 100 Normal Floating Desktops
   Example of Physical Server with 100 Floating Normal Desktops
   Performance Results for Example Configuration

Sizing the ESXi Host for 100 Normal Floating Desktops

To obtain the total resource requirements for a NexentaVSA for View ESXi host, add the requirements for NexentaStor VSA from Table 3-2 to the estimated requirements for deploying DVMs from Table 4-3. These requirements are in addition to the VMware requirements for an ESXi host.

Table 5-1: Example of Total Requirements for an ESXi Host with 100 Normal Floating Desktops

HBA
   NexentaStor VSA requirement (from Table 3-2): two 1Gb controllers (minimum); one 10Gb Ethernet controller (recommended)
   Total requirement: two 1Gb controllers to one 10Gb Ethernet controller

Physical Cores
   NexentaStor VSA requirement (from Table 3-2): 4
   ESXi host requirement (from Table 4-3): 8 (minimum) to 13 (recommended)
   Total requirement: 12 to 17 physical cores

Memory
   NexentaStor VSA requirement (from Table 3-2): 4GB minimum
   ESXi host requirement (from Table 4-3): 100GB
   Total requirement: 104GB minimum

HDDs
   ESXi host requirement (from Table 4-3): 10 HDDs at 600GB each
   Total requirement: 10 HDDs at 600GB each

SSDs
   ESXi host requirement (from Table 4-3): 300GB
   Total requirement: 300GB
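The Table 5-1 totals are simply the NexentaStor VSA minimums from Table 3-2 added to the DVM estimate from Table 4-3. A brief illustrative continuation of the earlier sketch confirms the core and memory totals:

```python
vsa_cores, vsa_ram_gb = 4, 4  # NexentaStor VSA minimums from Table 3-2
dvm_cores_min, dvm_cores_rec = result["cores"]
print(f"{vsa_cores + dvm_cores_min} to {vsa_cores + dvm_cores_rec} physical cores")  # 12 to 17
print(f"{vsa_ram_gb + result['ram_gb']}GB memory minimum")  # 104GB
```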

Example of Physical Server with 100 Floating Normal Desktops

This section contains an example of the requirements for a NexentaVSA for View ESXi host. The example in Table 5-2 below is based on the requirements shown in the following tables:

   Table 3-2, Physical Machine Requirements to Run NexentaStor VSA
   Table 3-3, Virtual Machine Requirements for Each Deployed Desktop
   Table 4-3, Example of Worksheet for Estimating Resources for 100 Normal Floating Users
   Table 5-1, Example of Total Requirements for an ESXi Host with 100 Normal Floating Desktops

The example below assumes the following configuration:

   One physical ESXi server with one desktop pool containing 100 normal floating desktops
   NexentaStor VSA installed on SSD
   A desktop template size of 30GB
   1GB RAM required per DVM
   Hyperthreading enabled, allowing the number of physical cores required for DVMs to be reduced (in this example, by 33% to 8 physical cores)
   DVMs overbooked, allowing a reduction in the total amount of memory needed

Table 5-2: Example of Physical Server: 100 Normal Floating Desktops

Basic server hardware:
   Chassis: 2U rackmount chassis with 720W (1+1) power supply, black (qty 1)
   Motherboard: Intel Platform E-ATX dual LGA1366 socket motherboard (qty 1)
   HBA: LSI SAS 9211-8i 8-port internal SAS/SATA 6.0Gb/s PCI-E host bus adapter card (qty 1)
   NIC: STD dual-port 10G Ethernet with SFP+ and CDR (qty 1)

Physical resources:
   Operating system: Windows 7 Enterprise Edition 32-bit
   CPU: 6-core Intel Xeon E5645 2.4 GHz 12M processor (qty 2; 12 cores total)
   Memory: 16GB 1333MHz DDR3 ECC Reg CL9 kit (qty 6; 96GB total)
   Disks (HDDs): ST3300657SS 600GB 15K SAS (qty 10; 6TB total)
   SSDs: Intel 320 Series 160GB SSD, reseller box (qty 2; 320GB total)

Performance Results for Example Configuration

Table 5-3 lists the performance results obtained with VMware View Planner for the system described above, with 100 concurrent DVMs.

Table 5-3: VMware View Planner Results for Example System

   Iometer benchmark, 100% nonsequential write operations: 45 IOPS
   Iometer benchmark, 100% nonsequential read operations: 135 IOPS
   QoS (Quality of Service) goal, less than 1.5 seconds: 0.862929 seconds

Corporate Headquarters
455 El Camino Real
Santa Clara, CA 95040
U.S.A.

New York City
405 Lexington Avenue, 26th Floor
New York, NY 10174
U.S.A.

Houston
2203 Timberloch Place, Suite 112
The Woodlands, TX 77380
U.S.A.

Nexenta Russia Competency Center
40 Let Pobedy, Building 34, Office 703
Krasnodar, Russia, 350038

5000-nv4v-v0.0-000003-A