Deploying EMC CLARiiON CX4-240 FC with VMware View

Contents

Introduction
Hardware and Software Requirements
    Hardware Resources
    Software Resources
Solution Configuration
    Network Architecture
    EMC CLARiiON CX4-240
    EMC CLARiiON Configuration
    Storage Architecture
Validation Results
    Storage System Utilization
    Application Response Time
Summary
Resources
Appendix: Glossary

Introduction

Built on the industry-leading VMware virtualization platform, VMware View 3 is a universal client solution that lets you manage operating systems, hardware, applications, and users independently of each other, wherever they may reside. VMware View streamlines desktop and application management, reduces costs, and increases data security through centralization, resulting in greater end-user flexibility and IT control. VMware View enables you to extend the value of VMware Infrastructure to encompass not only desktops in the datacenter but also applications and the delivery of these environments securely to remote clients, online or offline, anywhere.

VMware View transforms the way you use and manage desktop operating systems. You can deploy desktop instances rapidly in secure datacenters to facilitate high availability and disaster recovery, protect the integrity of enterprise information, and remove data from local devices that are susceptible to theft or loss. Isolating each desktop instance in its own virtual machine eliminates typical application compatibility issues and delivers a more personal computing environment.

The EMC CLARiiON CX4 series with UltraFlex technology is the fourth-generation CX series and continues EMC's commitment to maximizing customers' investments in CLARiiON technology by ensuring that existing resources and capital assets are optimally utilized as customers adopt new technology. UltraFlex technology enables online expansion via hot-pluggable I/O modules, so you can meet current Fibre Channel and iSCSI needs and accommodate new technologies as well. You have the option to populate these flexible slots with either Fibre Channel or iSCSI interfaces.

This document provides a detailed summary of the design and configuration of an environment incorporating the EMC CLARiiON CX4-240, using the Fibre Channel protocol, for use with VMware View. VMware and EMC used this configuration as part of the VMware View reference architecture validation. This guide provides guidance specific to the use of EMC CLARiiON CX4-240 storage with VMware View. Although this configuration was used specifically with the VMware View reference architecture for large-scale View deployments, the information provided here can be helpful to anyone planning to deploy View using an EMC CLARiiON CX4-240. In addition, you can extrapolate from these guidelines, which apply directly to the CX4 system, to other CLARiiON systems.

Before using this document, you should have a working knowledge of VMware View as well as CLARiiON technologies.

Hardware and Software Requirements

The following sections provide details of the hardware and software used in our validation tests.

Hardware Resources

The configuration described in this paper uses the following equipment:

Description                                   Minimum requirement
CLARiiON CX4-240 storage array                CLARiiON storage, Fibre Channel LUNs and Snaps
CLARiiON write cache                          480MB
Fibre Channel ports for back-end storage      2
10/100/1000 BaseT Ethernet ports              4 (2 per storage processor)
10/100/1000 management ports                  4 (2 per storage processor)
PROM revision                                 Release 1.81.00
Disk drives                                   45 300GB 15K 4Gb Fibre Channel disks (the array supports up to 240 FC or SATA disks in an all-FC, all-SATA, or mixed configuration)
Fibre Channel ports for host connectivity     8 additional

Table 1: Hardware configuration

Software Resources

The configuration described in this paper uses the following software:

Description                               Minimum requirement
CX4-240 CLARiiON release                  Release 28 (04.28.000.005)
CLARiiON Navisphere                       6.28.0.4.31
ESX hosts                                 VMware ESX 3.5 Update 2
vCenter operating system                  Microsoft Windows Server 2003 Enterprise Edition SP2 (32-bit)
vCenter                                   VMware vCenter 2.5 Update 3
Desktop (virtual machine) operating system  Microsoft Windows XP Professional SP3 (32-bit)
VMware Tools                              3.5.0000

Table 2: Software resources

Solution Configuration

The following sections provide details of the configuration of the environment used in our validation tests.

Network Architecture

The networks in the configuration we tested were dedicated 1Gb Ethernet. We used a DHCP server to assign IP addresses to all View desktops. Each ESX host used four 1Gb Ethernet controllers for network connectivity and two embedded Fibre Channel HBAs for storage access. We connected one of the Fibre Channel HBAs to fabric A and the other Fibre Channel HBA to fabric B.

EMC CLARiiON CX4-240

Figure 1 shows the ports on the rear of an EMC CLARiiON CX4-240.

Figure 1: EMC CLARiiON CX4-240 rear view, showing the management, Fibre Channel, and iSCSI ports for storage processors A and B, with additional flexible slots for future expansion

EMC CLARiiON Configuration

The CLARiiON storage array used in this configuration has two 4Gb Fibre Channel ports on each storage processor. ESX handles failover automatically because it identifies multiple paths to the storage array. Each ESX host identifies a total of four paths to the CLARiiON storage array, two into each storage processor.
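The four-path count follows from the fabric layout: each host HBA sits in one fabric, and each storage processor presents one front-end port to each fabric, so every HBA sees both storage processors. A minimal Python sketch of that enumeration (the HBA and port names are illustrative placeholders, not values recorded from the tested environment):

```python
# Each ESX host has two FC HBAs, one connected to each fabric.
hbas = {"vmhba1": "fabric_A", "vmhba2": "fabric_B"}

# Each storage processor (SP) presents one of its two 4Gb FC ports
# to each fabric, so both SPs are visible from both fabrics.
sp_ports = [
    ("SPA", "A0", "fabric_A"), ("SPA", "A1", "fabric_B"),
    ("SPB", "B0", "fabric_A"), ("SPB", "B1", "fabric_B"),
]

# A path exists wherever an HBA and an SP port share a fabric.
paths = [
    (hba, sp, port)
    for hba, fabric in hbas.items()
    for sp, port, port_fabric in sp_ports
    if fabric == port_fabric
]

for p in paths:
    print(p)              # four paths in total, two into each SP
assert len(paths) == 4
```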

Figure 2: Multipath configuration on the ESX host

Storage Architecture

We configured the CLARiiON CX4-240 array as illustrated in Figure 3. This CX4 array had four disk array enclosures, each containing 15 Fibre Channel 300GB 15K disks. We used three of the disk array enclosures for this testing. All testing used a 4+1 RAID 5 disk grouping only.

CLARiiON array objects                      Configuration required
Fibre Channel disk capacity                 300GB
Number of disks used                        35
VMFS datastores for linked clone images     14
Number of Fibre Channel LUNs                14
Fibre Channel LUN capacity                  300GB
Number of RAID groups used                  7 (4+1, RAID 5)

Table 3: Storage layout
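The layout in Table 3 and Figure 4 is self-consistent: seven 4+1 RAID 5 groups account for the 35 disks, each group yields the capacity of its four data disks, and each group carries two of the fourteen 300GB LUNs. A short sketch of that arithmetic, assuming raw (unformatted) disk capacity and ignoring vault and hot-spare overhead:

```python
DISK_GB = 300       # raw capacity per 15K FC disk
RAID_GROUPS = 7     # 4+1 RAID 5 groups (Table 3)
DATA_DISKS = 4      # per group; one disk's worth of capacity goes to parity
LUNS = 14           # two 300GB LUNs per RAID group (Figure 4)
LUN_GB = 300

disks_used = RAID_GROUPS * (DATA_DISKS + 1)
usable_gb = RAID_GROUPS * DATA_DISKS * DISK_GB
consumed_gb = LUNS * LUN_GB

print(disks_used)               # 35 disks, matching Table 3
print(usable_gb)                # 8400 GB usable across the seven groups
print(consumed_gb)              # 4200 GB taken by the 14 x 300GB LUNs
print(usable_gb - consumed_gb)  # headroom left for expanding LUNs later
```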

Figure 3: Configuration of storage groups on CLARiiON

The set of 35 300GB 15K Fibre Channel disk drives was configured with the RAID disk groups and layout shown in Figure 4.

Figure 4: EMC CX4-240 RAID group configuration. Each shelf holds fifteen 300GB 15K RPM FC drives; LUN 0 through LUN 5 reside on shelf 0_1, LUN 6 through LUN 11 on shelf 1_0, and LUN 12 and LUN 13 on shelf 1_1, with no test LUNs on shelf 0_0. The figure's legend distinguishes LUNs exposed to cluster A from LUNs exposed to cluster B.

Of the 14 available LUNs, we presented all the odd-numbered LUNs, formatted as VMFS datastores, to one of the two ESX clusters to hold the linked clones. We exposed the even-numbered LUNs to the other ESX cluster. We used VMware View Manager 3 to create each of the desktop pools. Each desktop pool used the seven available datastores assigned to its respective cluster.

We used the following formula to estimate the size needed for each LUN. In all other calculations we rounded numbers up or down for simplicity.

(Number of clones × 2 × memory) + (Number of patch replicas × virtual machine disk size) = Total amount of usable space needed

(1000 × 1GB) + (4 × 8GB) = 1032GB
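The estimate, together with the even/odd LUN split described above, can be expressed as a small Python sketch. Note that the 512MB per-desktop memory size is inferred from the worked example (2 × 0.5GB = 1GB per clone); the source states only the product:

```python
# Usable space estimate for a 1000-desktop building block, following
# the formula in the text. The 0.5GB memory size is an inferred
# assumption, not stated directly in the source.
clones = 1000
memory_gb = 0.5            # per-desktop VM memory (inferred)
patch_replicas = 4
vm_disk_gb = 8             # replica (parent) virtual disk size

usable_gb = clones * 2 * memory_gb + patch_replicas * vm_disk_gb
print(usable_gb)           # 1032 GB, roughly 1TB

# Even-numbered LUNs back one cluster's datastores, odd-numbered
# LUNs back the other's (see Table 4 below).
luns = [f"LUN{i}" for i in range(14)]
cluster_a = [lun for i, lun in enumerate(luns) if i % 2 == 0]
cluster_b = [lun for i, lun in enumerate(luns) if i % 2 == 1]
print(cluster_a)           # LUN0, LUN2, ... LUN12 (7 datastores)
print(cluster_b)           # LUN1, LUN3, ... LUN13 (7 datastores)
```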

Even though we could create this amount of storage (approximately 1TB) on fewer spindles, for appropriate storage sizing we needed to take performance requirements into consideration. Rather than considering only the performance requirements during the system's steady state, we considered the boot-storm performance requirements when we sized the appropriate number of spindles.

To accommodate the desktop pools (which were based on linked clones), we configured each pool to use aggressive storage overcommitment. Storage overcommitment is a feature of View Composer that allows you to control how aggressively VMware View places virtual machines on each datastore. When you use a more aggressive level of storage overcommitment, you have less free space available for virtual machines to grow over time. For more information about View Composer, see the View Manager Administration Guide or Design Considerations for View Composer (see the Resources section for links).

This formula for estimating the required storage assumes that you are using aggressive storage overcommitment. When you use aggressive storage overcommitment, you have very little additional storage available to provide room for growth, over time, for linked clones. For persistent pools, you should implement a refresh policy that resets the virtual machines to their original size. For nonpersistent pools, implement a policy of deleting the clones after first use. As an alternative, you can add storage to the above formula to provide additional room for growth.

Each of the disk groups in the configuration described in this paper provides more than enough additional space to accommodate increasing the size of each of the Fibre Channel LUNs at any time. By increasing the size of each Fibre Channel LUN, you can provide additional room for each of the linked-clone virtual machines to grow over time. Regardless of the approach you take, it is a best practice to monitor the storage array for space usage.

Table 4 lists disk volumes per file system for the storage configuration described in this paper.

File System                                    LUNs
Virtual machines (linked clones), cluster A    LUN0, LUN2, LUN4, LUN6, LUN8, LUN10, LUN12
Virtual machines (linked clones), cluster B    LUN1, LUN3, LUN5, LUN7, LUN9, LUN11, LUN13

Table 4: LUNs

Validation Results

The configuration we implemented and used during the validation of the VMware View reference architecture is suitable for large-scale View deployments. In this configuration, we used the EMC CLARiiON CX4-240 to validate a 1000-user View building block architecture. For additional information, see VMware View Reference Architecture (see the Resources section for a link).

Storage System Utilization

Figures 5 and 6 show the average CPU utilization of the EMC CLARiiON CX4-240 and the average I/O rate for both of the EMC CLARiiON storage processors. As the graphs show, this configuration provides more than enough capacity to handle the resource needs of each building block. The average I/O rate into both of the storage processors was less than 16MB/s during the steady-state stage of testing.

Figure 5: CLARiiON CX4-240 average CPU utilization
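As a rough back-of-the-envelope check (illustrative arithmetic, not a measurement from the tests), that aggregate rate works out to a very small per-desktop load:

```python
# Per-desktop throughput during steady state, derived from the
# reported aggregate upper bound. Illustrative arithmetic only.
aggregate_mb_s = 16        # combined SP I/O rate, steady state
desktops = 1000            # one View building block

per_desktop_kb_s = aggregate_mb_s * 1024 / desktops
print(f"{per_desktop_kb_s:.1f} KB/s per desktop")   # ~16.4 KB/s
```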

Figure 6: EMC CLARiiON CX4-240 average I/O rate for all disk volumes

The workload consisted of 1000 concurrent Windows desktop users performing the normal operations listed below. These users can be classified as high-end task workers or low-end knowledge workers. The workload includes the following tasks:

- Microsoft Word: open, minimize, close, write random words and numbers, save modifications
- Microsoft Excel: open, minimize, close, write random numbers, insert and delete columns and rows, copy and paste formulas, save modifications
- Microsoft PowerPoint: open, minimize, close, conduct a slide show presentation
- Adobe Acrobat Reader: open, minimize, close, browse pages in a PDF document
- Internet Explorer: open, minimize, close, browse pages
- McAfee Active VirusScan: real-time scanning
- PKZIP: open, close, compress a large file

Application Response Time

Figure 7 shows the average application execution time for all virtual desktops in both cluster A and cluster B. These times represent how long it took to open a document, close a document, or save a document that was created. Figure 7 does not include the time an application spends minimized or being worked on. Because of the random nature of the workload, applications are not simply opened, worked on, and then closed. Rather, the workload might create a Microsoft Word document, work on the document, minimize it, and then use Microsoft Internet Explorer. At some later point, the workload returns to the Microsoft Word document (which had been minimized), reopens it, works on it again, and then closes it.

Operation               Average execution time (sec)
PKZip-Compress          0.53
PDF-Open                1.66
PDF-Close               0.38
MSWord-Doc05-Save_2     0.11
MSWord-Doc05-Save_1     0.16
MSWord-Doc05-Open       1.27
MSWord-Doc05-Close      0.29
MSWord-Doc02-Save_2     0.10
MSWord-Doc02-Save_1     0.11
MSWord-Doc02-Open       1.57
MSWord-Doc02-Close      0.27
MSWord-Doc01-Save_2     0.01
MSWord-Doc01-Save_1     0.10
MSWord-Doc01-Open       1.19
MSWord-Doc01-Close      0.27
MSPowerPT-Open          1.66
MSPowerPT-Close         0.11
MSIntExplorer-Open      1.08
MSIntExplorer-Close     0.41
MSExcel-Save_2          0.08
MSExcel-Save_1          0.33
MSExcel-Open            1.75
MSExcel-Close           0.05

Figure 7: Average application execution time in seconds

Figure 8 shows the corresponding latency at the storage system level during the steady-state stage of testing.

Figure 8: Average storage latency

Summary

The storage configuration used in our validation testing can easily support 1000 virtual desktops used by workers who fit the high-end task worker profile. The application response times were well within acceptable limits. The steady-state storage latencies were well below 5ms, with headroom to absorb an unexpected boot storm surge.

Resources

Design Considerations for View Composer
http://www.vmware.com/resources/techresources/10004

VMware Infrastructure 3 documentation
http://www.vmware.com/support/pubs/vi_pubs.html

View Manager Administration Guide
http://www.vmware.com/pdf/viewmanager3_admin_guide.pdf

VMware View Reference Architecture
http://www.vmware.com/files/pdf/resources/vmware-view-reference-architecture.pdf

Windows XP Deployment Guide
http://www.vmware.com/files/pdf/vdi-xp-guide.pdf

Appendix: Glossary

LUN (logical unit): For Fibre Channel on a CLARiiON storage array, a logical unit is a software feature that processes SCSI commands, such as reading from and writing to storage media. From a Fibre Channel host's perspective, a logical unit appears as a block-based device.

RAID: Redundant array of independent disks, designed for fault tolerance and performance.

VMFS: Virtual Machine File System.

VMware View: A set of software products that provide services and management infrastructure for centralizing desktop operating environments using virtual machine technology.

View Manager: A software product that manages secure access to virtual desktops. It works with VMware vCenter Server to provide advanced management capabilities.