Deploying VMware View in the Enterprise
EMC Celerra NS-120

EMC NAS Product Validation

Corporate Headquarters
Hopkinton, MA 01748-9103
1-508-435-1000
www.emc.com
Copyright © 2009 EMC Corporation. All rights reserved. Published August 2009.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number H6488
Contents

About this Document

Chapter 1 Solution Overview
    Business challenge
    Technology solution
    Solution advantages

Chapter 2 Solution Architecture
    Overall architecture
        General characteristics
    Storage architecture
        Array configuration
        File system configurations
    Network architecture
        Switches
        Celerra NS-120
    Celerra and ESX server configuration
        NS-120 configuration
        VMware ESX host configuration
        Creating virtual machines from iSCSI LUNs
    High availability and failover
        Storage layer
        Connectivity layer
        Host layer

Chapter 3 Hardware and Software Resources
    Hardware resources
    Software resources
Tables

Table 1 Solution advantages
Table 2 File system configurations
Table 3 Disk volumes
Table 4 Hardware specifications
Table 5 VMware ESX servers
Table 6 Software specifications
About this Document

This document provides an overview of the Deploying VMware View in the Enterprise EMC Celerra NS-120 solution.

Purpose

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. Other EMC organizations (for example, the technical services or sales organizations) can also use this information as the basis for documentation in a technical services or sales kit.

Audience

This document is intended for internal EMC personnel, EMC partners, and customers.

Scope

This document describes the reference architecture of an EMC solution. Implementation instructions and sizing guidelines are beyond the scope of this document.

Related documents

The following documents, located on EMC Powerlink, provide additional and relevant information. Access to these documents is based on your login credentials. If you do not have access to the content listed below, contact your EMC representative:

- VMware Virtual Desktop Infrastructure Planning for EMC Celerra Best Practices Planning white paper
- EMC Infrastructure for Deploying VDI in the Enterprise EMC Celerra NS20 Reference Architecture
- Configuring iSCSI Targets on Celerra Technical Module
- EMC Infrastructure for Deploying VMware View in the Enterprise EMC Celerra Unified Storage Platforms Solutions Guide
The following VMware documents, located on the VMware website, also provide useful information:

- Introduction to VMware View Manager
- VMware View Manager Administrator Guide
- Storage Deployment Guide for VMware View
- VMware View Windows XP Deployment Guide
- VMware View Guide to Profile Virtualization
- VMware Infrastructure 3 Documentation
- VMware Infrastructure 3 VDI Server Sizing and Scaling (VMware Performance Study)
Chapter 1 Solution Overview

This chapter presents these topics:

- Business challenge
- Technology solution
- Solution advantages
Business challenge

With limited resources and increasing demands, today's businesses must address the following challenges:

- Consolidate desktops scattered throughout an enterprise
- Ensure information access, availability, and continuity
- Maximize server and storage utilization, and deliver high desktop performance
- Manage upgrades and migrations quickly and easily
- Reduce the demands on limited IT resources and budgets
- Reduce the complexity of technology choices

In addition, businesses must manage IT costs and reduce the risk of business disruption.

Technology solution

The Deploying VMware View in the Enterprise EMC Celerra NS-120 solution establishes a configuration of validated hardware and software that permits easy and repeatable deployment of virtual desktops by using the storage provided by a Celerra NS-120 system. This document describes the reference architecture for configuring ESX servers and Celerra NS-120 storage in a manner that provides performance, recoverability, and protection. These guidelines can also be extrapolated to larger-scale Celerra systems.

Solution advantages

Table 1 shows the benefits of the VMware View solution with the Celerra NS-120 IP storage solution.

Table 1 Solution advantages

- Maintains service levels: This solution keeps users' desktops available and running at peak performance.
- Reduces support costs: This solution minimizes the cost to upgrade and maintain users' desktops.
- Reduces risk: This solution offers a reference architecture that includes tested and proven configurations that improve performance and scalability.
- Accelerates implementations: EMC Professional Services and ASN-certified EMC partners provide rapid assessment and efficient implementation.
Chapter 2 Solution Architecture

This chapter presents these topics:

- Overall architecture
- Storage architecture
- Network architecture
- Celerra and ESX server configuration
- High availability and failover
Overall architecture

Figure 1 shows the architecture of the VMware View Celerra NS-120 solution environment.

Figure 1 Solution architecture

General characteristics

The general characteristics of the solution architecture are:

- Virtual desktops are created and deployed by using the Celerra Temporary Writable Snap (TWS) feature and the VMware Clone feature.
- Storage allocation for virtual desktops is based on the 4+1 RAID 5 disk grouping.
- IP iSCSI connections [Gigabit Ethernet with virtual local area networks (VLANs)] are designed to balance and distribute the disk input/output.
- All virtual machine (VM) files (vmdk, vmx, and log) are stored on the storage provided by the EMC Celerra NS-120 storage system. This makes server replacement relatively simple.

Storage architecture

In this solution, the iSCSI storage configuration was tested. Users can set up storage by using either Celerra Manager or the Celerra command line interface. This software provides the ability to view every disk, design file systems, and create iSCSI LUNs to be used as VMware vStorage VMFS datastores.
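Because every VM's files (vmdk, vmx, and log) reside on the shared Celerra-backed datastores, replacing a failed ESX host largely reduces to re-registering the .vmx files found on the datastores. The following is a dry-run sketch, not an EMC procedure: it assumes the ESX 3.5 `/vmfs/volumes` layout and the `vmware-cmd -s register` command shown later in this document, and it only prints the commands rather than executing them.

```shell
# Dry-run sketch: emit one vmware-cmd registration command per .vmx file
# found under a datastore root. Prints the commands instead of running them.
list_register_cmds() {
  root="$1"
  find "$root" -name '*.vmx' 2>/dev/null | sort | while read -r vmx; do
    echo "/usr/bin/vmware-cmd -s register $vmx"
  done
}

# On a real ESX host the root would be /vmfs/volumes.
list_register_cmds /vmfs/volumes
```

Running the printed commands on the replacement host would re-attach every desktop without copying any VM data.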
Figure 2 shows the storage architecture of the tested configuration.

Figure 2 iSCSI configuration overview

The iSCSI configuration is designed to be easy to use and manage, and is ideal for environments with high performance requirements. In this configuration, the VMware guest operating system (OS) resides on an iSCSI LUN that is presented as a vStorage VMFS datastore to the VMware ESX server.

Array configuration

The Celerra system was configured as shown in Figure 2. The Celerra NS-120 used for the validation had five disk-array enclosures (DAEs). Four of the DAEs each contained 15 Fibre Channel 300 GB/15k rpm 2/4 Gb disks. The fifth DAE contained five 400 GB enterprise Flash drives (EFDs). The initial testing used an all RAID 5 disk grouping only. Based on the standard NAS template, two LUNs were created for each RAID group, and each LUN was owned by a different storage processor for load balancing.

File system configurations

The file systems were created by using an Automatic Volume Management (AVM) user-defined storage pool. The file systems and iSCSI LUNs were virtually provisioned with a recommended high water mark of 75 percent. The RAID 5 disk group was the basic building block for a virtual desktop. Table 2 provides the required file system configurations.

Table 2 File system configurations

File system for golden image VM:
- Storage capacity: 24 GB
- iSCSI LUN capacity: 20 GB
- Number of disks used: 5
- Number of disk volumes used: 1

File system for VM clones and TWSs:
- Storage capacity: 1 TB
- iSCSI LUN capacity: 20 GB
- Number of disks used: 5
- Number of disk volumes used: 2
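The per-group capacity behind these figures follows from the 4+1 RAID 5 building block: one disk's worth of each group goes to parity, leaving four data disks. A quick arithmetic sketch, using the 300 GB FC drive size from the tested configuration:

```shell
# Usable capacity of one 4+1 RAID 5 group: (disks - parity) * disk size.
DISKS_PER_GROUP=5
PARITY_DISKS=1
DISK_GB=300
USABLE_GB=$(( (DISKS_PER_GROUP - PARITY_DISKS) * DISK_GB ))
echo "Usable capacity per 4+1 RAID 5 group: ${USABLE_GB} GB"
```

Formatted raw capacity per group is therefore roughly 1200 GB, comfortably covering the 24 GB golden-image file system and contributing to the 1 TB clone file system.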
Figure 3 shows the disk layout of the file system configuration; every group in the layout is a 4+1 RAID 5 group.

Figure 3 Disk layout
Table 3 lists the disk volumes per file system for this configuration.

Table 3 Disk volumes

- Golden image: d8
- Test log files: d9
- VMs (clones and TWSs): d13+d25, d14+d26, d15+d27, d16+d28, and d19+d31 (each pair concatenated)

Network architecture

The networks used in this testing were dedicated 1 Gb/s Ethernet networks. All virtual desktops were assigned an IP address by a dynamic host configuration protocol (DHCP) server. Each ESX server contained five Intel Gigabit Ethernet controllers. Four of these were grouped into two NIC-teamed network devices of two ports each, and each device was placed on a separate subnet for multipathing and load balancing.

Switches

EMC recommends that the switches support Gigabit Ethernet (GbE) connections and that the ports on the switches support copper-based media. In this configuration, the VMware virtual switches are set to directly connect physical network cards to their logical equivalent in the VM. The vCenter network representation is shown in Figure 4:

Figure 4 vCenter network
The vmnics comprising the virtual switch were configured for NIC teaming as shown in Figure 5:

Figure 5 NIC teaming

Celerra NS-120

The Celerra NS-120 contains two blades. These blades can operate either independently or in active/passive mode, with the passive blade serving as a failover device for the active blade. In this solution, the blades operated in active/passive mode. Each NS-120 blade has four Gigabit Ethernet controller ports. These four ports were configured as two 2-port link aggregation devices. Each link aggregation device was placed on a different subnet in order to create multiple paths for the iSCSI objects. Multiple iSCSI targets were created, and iSCSI sessions were distributed across both logical network interfaces.
Figure 6 shows the ports on the rear side of an EMC Celerra NS-120 blade.

Figure 6 EMC Celerra NS-120 blade ports

Ports cge0 and cge1 are set up for link aggregation and support the iSCSI storage traffic. Ports cge2 and cge3 are used for the second link aggregation device. This can be seen in the following output:

# /nas/bin/server_ifconfig server_2 -a
server_2 :
iscsi-net2 protocol=ip device=lnk02
        inet=10.6.119.246 netmask=255.255.255.0 broadcast=10.6.119.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:1f:ac:12
iscsi-net1 protocol=ip device=lnk01
        inet=10.6.116.246 netmask=255.255.255.0 broadcast=10.6.116.255
        UP, ethernet, mtu=1500, vlan=0, macaddr=0:60:16:1f:ac:14

This can also be seen in Celerra Manager as shown in Figure 7:

Figure 7 Celerra Manager
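For the two iSCSI paths to be truly distinct, the two link aggregation interfaces must sit on different subnets. A small sketch that checks the two addresses from the `server_ifconfig` output above, assuming the /24 netmasks shown there:

```shell
# Compare the /24 network portion of the two iSCSI interface addresses
# (netmask 255.255.255.0 in the server_ifconfig output above).
subnet24() { echo "$1" | cut -d. -f1-3; }

NET1=$(subnet24 10.6.116.246)   # iscsi-net1
NET2=$(subnet24 10.6.119.246)   # iscsi-net2

if [ "$NET1" != "$NET2" ]; then
  echo "interfaces are on separate subnets: ${NET1}.0/24 and ${NET2}.0/24"
else
  echo "WARNING: both interfaces share subnet ${NET1}.0/24"
fi
```

If both interfaces landed on the same subnet, ESX multipathing could collapse onto a single route, defeating the redundancy the design intends.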
Note: As a best practice, the blade's network ports connected to the storage network (as shown in Figure 6) should be dedicated to storage traffic. However, if the ports are not heavily used, they can be shared with non-storage network traffic. EMC recommends monitoring the network to avoid bottlenecks.

Celerra and ESX server configuration

NS-120 configuration

The NS-120 can be configured as follows:

To enable sparse TWS support on the blade, use the command:

/nas/bin/server_param <server_name> -facility nbs -modify sparsetws -value 1

Setting a value of 1 causes DART to always create sparse TWSs. This can also be done in Celerra Manager as shown in Figure 8:

Figure 8 Enable sparse TWS support

To return only the list of iSCSI targets for which the host has been explicitly granted access to a LUN, use the command:

/nas/bin/server_param <server_name> -facility iscsi -modify SendTargetsMode -value 1

Only the information about the LUNs to which the initiator has been granted specific access is returned.
This can also be done in Celerra Manager as shown in Figure 9:

Figure 9 Enable return of iSCSI targets

To create a user-defined storage pool, use the command:

/nas/bin/nas_pool -create -name <pool name> -description 'Storage Pool' -volumes <dvol> -default_slice_flag y

This can also be done in Celerra Manager as shown in Figure 10:

Figure 10 Create a user-defined storage pool

To create a file system from the user-defined storage pool and then mount it on a blade, use the commands:

/nas/bin/nas_fs -name <fs name> -type uxfs -create size=<size> pool=<pool name> -option mover=<server_name>,slice=y
/nas/bin/server_mount <server_name> <fs name> <fs pathname>

This can also be done in Celerra Manager as shown in Figure 11:

Figure 11 Create a file system from the user-defined storage pool

To create an iSCSI LUN to serve as a datastore for the ESX server, use the command:

/nas/sbin/server_iscsi <server_name> -lun -number <lun #> -create <target alias name> -size <lun size> -fs <fs name> -vp yes

The -vp yes option creates a virtually provisioned iSCSI LUN instead of a regular (thick) LUN. When using virtual provisioning, closely monitor the file system space that contains virtually provisioned iSCSI LUNs. To determine the available or used space in a file system, use the commands:

/nas/bin/server_df
/nas/bin/nas_fs
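Because virtually provisioned LUNs can oversubscribe the file system, usage should be checked against the 75 percent high water mark recommended earlier. The following is an illustrative sketch only, not a Celerra utility: the percent-used figure is a placeholder that, on a real Control Station, would be parsed from `server_df` output.

```shell
# Compare a file system's reported usage against the 75% high water mark.
# The percent-used values below are placeholders; in practice they would be
# parsed from /nas/bin/server_df output.
HWM=75
check_usage() {
  fs="$1"; pct="$2"
  if [ "$pct" -ge "$HWM" ]; then
    echo "WARNING: $fs at ${pct}% used (high water mark ${HWM}%)"
  else
    echo "OK: $fs at ${pct}% used"
  fi
}

check_usage vdi_clone_fs 80
check_usage golden_fs 40
```

Such a check could run from cron on the Control Station so that a file system approaching the high water mark is flagged before virtually provisioned LUNs run out of backing space.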
This can also be done in Celerra Manager as shown in Figure 12:

Figure 12 Create an iSCSI LUN

VMware ESX host configuration

The ESX server must be configured to allow iSCSI access and to ensure that snapshot LUNs are available to the server. By default, an ESX server is not allowed to access services on a remote host. To provide iSCSI access, connect to each ESX server through the vCenter Server by using the VMware Infrastructure Client (VIC):

1. Click Configuration > Security Profile > Properties. The Firewall Properties dialog box appears as shown in Figure 13.
2. Select Software iSCSI Client.
Figure 13 VMware ESX host configuration

To ensure that the snapshot LUNs are available to the ESX server:

1. Click Configuration > Advanced Settings. The Advanced Settings dialog box appears as shown in Figure 14.
2. Select LVM and do the following:
   - In the LVM.EnableResignature field, type 1.
   - In the LVM.DisallowSnapshotLun field, type 0.
Figure 14 Modify parameters

In addition, the vSwitch used for iSCSI network traffic can be created by using the VIC:

1. Click Configuration > Networking > Add Networking > VMkernel > Create a New Switch. The VMkernel Connection Settings dialog box appears as shown in Figure 15.
2. Type the appropriate information in the following fields:
   - Network Label
   - VLAN ID
   - IP Address
   - Subnet Mask
Figure 15 Create vSwitch using VIC

3. Click Configuration > Storage Adapters. Select the iSCSI adapter and click Properties. The iSCSI Initiator Properties dialog box appears.
4. Click General > Configure. The General Properties dialog box appears as shown in Figure 16.
5. Click Enable, and then click OK.

Figure 16 General properties

6. Click Dynamic Discovery, and then click Add. The Add Send Targets Server dialog box appears.
7. Type the IP address and port for each iSCSI target, and then click OK as shown in Figure 17.
Note: If CHAP authentication is enabled on the iSCSI target, it should be configured on the CHAP Authentication tab.

Figure 17 Add Send Targets Server

Creating virtual machines from iSCSI LUNs

After the golden image of a desktop environment is created on an iSCSI LUN, replicas of this LUN can easily be made by using the Celerra Snap technology introduced in NAS release 5.6. Multiple VMs can be deployed easily by creating TWSs from the Control Station. This is achieved by placing the following lines in a loop within a shell script on the Control Station:

/nas/bin/server_iscsi <server_name> -snap -create -target <iSCSI target> -lun <iSCSI LUN containing golden image>
/nas/bin/server_iscsi <server_name> -snap -promote <name of snap from previous step> -initiator <ESX server initiator name>
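The loop described above can be sketched as a Control Station shell script. This version is a dry run that only echoes the two server_iscsi commands per desktop; the server, target, LUN, and initiator values are placeholders, and the snap name passed to -promote would in practice be captured from the output of the -create step.

```shell
# Dry-run sketch of the TWS deployment loop: print the create/promote
# command pair for each desktop instead of executing it.
emit_tws_cmds() {
  server="$1"; target="$2"; golden_lun="$3"; initiator="$4"; count="$5"
  n=1
  while [ "$n" -le "$count" ]; do
    echo "/nas/bin/server_iscsi $server -snap -create -target $target -lun $golden_lun"
    # Placeholder snap name; a real script would capture it from the -create output.
    echo "/nas/bin/server_iscsi $server -snap -promote snap_$n -initiator $initiator"
    n=$(( n + 1 ))
  done
}

emit_tws_cmds server_2 vdi-target 0 iqn.1998-01.com.vmware:esx1 3
```

Removing the echo wrappers (and parsing the snap names) would turn the sketch into an executable deployment loop for any number of desktops.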
Subsequently, the temporary snapped LUNs (TWSs) can be presented to the ESX server by running the following sequence at the ESX server console:

1. Rescan the iSCSI software adapter in the VI Client by clicking Configuration > Storage Adapters > Rescan as shown in Figure 18.

Figure 18 Rescan the iSCSI software adapter

2. Register the snap and add it as a datastore with the following command:

$ /usr/bin/vmware-cmd -s register <path to *.vmx file beneath /vmfs/volumes>

3. Power on the newly registered snap with the following command:

$ /usr/bin/vmware-cmd <path to *.vmx file beneath /vmfs/volumes> start
This can also be done in vCenter as shown in Figure 19:

Figure 19 Power on option

Note: Virtual desktops from Celerra TWSs can also be created by using the EMC Celerra VMware Virtual Desktop Deployment Plug-in tool available on Powerlink.

High availability and failover

The validated solution provides protection at the storage layer, the connectivity layer, and the host layer.

Storage layer

Celerra can have multiple blades to provide high availability and load balancing. In this solution, primary and standby blades provide seamless failover capabilities for the Celerra storage. This minimizes end-user disruption during routine Celerra maintenance. The RAID disk configuration on the Celerra back end provides protection against hard disk failures.

Connectivity layer

The advanced networking features of Celerra, such as fail-safe networks and link aggregation, provide protection against network connection failures. The solution configuration also includes separate NICs at the source of each I/O path, a separate network infrastructure (such as cables, switches, and routers), and separate target ports. Multiple network paths are created for the iSCSI objects. Multiple iSCSI targets are created, and iSCSI sessions are distributed across multiple logical network interfaces. This provides redundancy if one network path becomes unavailable.
Host layer

The application hosts have redundant power supplies and network connections to reduce the impact of host hardware failure.
Chapter 3 Hardware and Software Resources

This chapter presents these topics:

- Hardware resources
- Software resources
Hardware resources

Table 4 lists the hardware resources required for this solution.

Table 4 Hardware specifications

EMC Celerra NS-120 (one):
- NS-120 with CLARiiON CX4-120 array
- Four DAEs with 15 FC 300 GB/15k/2/4 Gb disks each
- One DAE with five 400 GB EFDs
- Notes: Celerra shared storage for file systems, iSCSI LUNs, and snaps

Dell PowerEdge 1850 (one):
- Memory: 4 GB RAM
- CPU: dual 2.8 GHz dual-core processors
- Storage: one 146 GB and one 36 GB disk
- NIC: dual-port Intel Pro/1000 MT Gb adapters
- Notes: required for the vCenter service

Desktop/VM:
- One vCPU with a 2.8 GHz virtual processor
- vMemory: 1 GB RAM for Windows XP VMs
- vmxnet (connectivity)

Table 5 lists the VMware ESX servers used for hosting virtual desktops. Multiple ESX servers are required for numerous VMs. Table 5 gives an example listing of what was used in the Engineering Lab.

Table 5 VMware ESX servers

Dell PowerEdge 1850 (four):
- Memory: 16 GB RAM
- CPU: dual Intel Xeon 2.8 GHz dual-core processors (four logical processors)
- Storage: two 73 GB local disks
- NICs: five Gb Ethernet adapters; two embedded Intel 82546EB Gb Ethernet controllers and three 8254NXX Gb Ethernet controllers

Dell PowerEdge 1950 (four):
- Memory: 32 GB RAM
- CPU: dual Intel Xeon 3.0 GHz quad-core processors (eight logical processors)
- Storage: 129 GB local disk
- NICs: six Gb Ethernet adapters; two Broadcom NetXtreme II BCM5709 1000Base-T ports and an Intel Pro/1000 PT quad-port Gb Ethernet controller
Dell PowerEdge 6850 (two):
- Memory: 32 GB RAM
- CPU: dual Intel Xeon 3.0 GHz quad-core processors (eight logical processors)
- Storage: 60.5 GB local disk
- NICs: six Gb Ethernet adapters; four 82571EB Gb Ethernet ports and two NetXtreme BCM5705 Gb Ethernet ports

Dell PowerEdge 6950 (two):
- Memory: 64 GB RAM
- CPU: dual-core AMD Opteron 3.0 GHz processors (eight logical processors)
- Storage: 129 GB local disk
- NICs: six Gb Ethernet adapters; two Broadcom NetXtreme II BCM5708 1000Base-T ports and an Intel Gb VT quad-port server adapter

Dell PowerEdge R905 (eight):
- Memory: 64 GB RAM
- CPU: dual-core AMD Opteron 3.0 GHz processors (eight logical processors)
- Storage: 129 GB local disk
- NICs: eight Gb Ethernet adapters; four Broadcom NetXtreme II BCM5708 1000Base-T ports and an Intel Gb VT quad-port server adapter
Software resources

Table 6 lists the software resources required for this solution.

Table 6 Software specifications

NS-120 (Celerra shared storage, file systems, iSCSI LUNs, and snaps):
- NAS/DART: Release 5.6.43.8 (for EFD performance, NAS Release 5.6.45 or later is recommended)
- CLARiiON FLARE: Release 28 (4.28.000.5.504)

ESX servers:
- ESX: ESX 3.5 Update 2 (Build 130756)

vCenter Server:
- OS: MS Windows Server 2003 Enterprise Edition SP2 (32-bit)
- VMware vCenter: 2.5 Build 119518
- View Manager: 3.0.1 Build 142034

Desktops/VMs (note: this software is used for generating the test load):
- OS: MS Windows XP Professional SP3 (32-bit). (MS Windows Vista, which requires more memory, was not used at this time.)
- VMware Tools: 3.5.0000
- AutoIt: Version 3.2.10.0 (http://www.autoitscript.com/autoit3/)
- Microsoft Office: Revision 11
- Internet Explorer: 7.0.5730.13
- Adobe Reader: 8.1.2
- McAfee VirusScan: 8.5.0i Enterprise