
EMC Business Continuity for Microsoft Exchange 2007
Enabled by EMC CLARiiON CX4, EMC Replication Manager, and VMware vSphere 4 using iSCSI
Proven Solution Guide

Copyright 2009 EMC Corporation. All rights reserved. Published September 2009.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H6480

Table of Contents

Chapter 1: About This Document
    Overview
    Audience and purpose
    Scope
    Business challenge
    Technology solution
    Objectives
    Reference Architecture
    Validated environment profile
    Hardware and software resources
    Prerequisites and supporting documentation

Chapter 2: Storage Design
    Overview
    Storage design layout
    RAID group layout
    Best practices and recommendations

Chapter 3: Server and Network Design
    Overview
    Server design
    Network design
    Best practices for virtual machines
    Best practices for iSCSI

Chapter 4: Installation and Configuration
    Overview
    Navisphere CLI scripts configuration
    Disk management scripts

Chapter 5: Testing and Validation
    Overview
    Tested components
    Section A: Replication Manager with VMware ESX Server 3.5 test results
    Section B: Replication Manager with VMware ESX Server 4.0 test results

Chapter 6: Conclusion

Supporting Information
    Overview
    Navisphere CLI scripts configuration

Chapter 1: About This Document

Overview

Introduction
This document summarizes observations and best practices that were discovered, validated, or otherwise encountered while validating the functionality of Microsoft Exchange 2007 on Windows Server 2008, enabled by EMC CLARiiON CX4-120 and EMC Replication Manager, with both VMware ESX 3.5 Update 2 and VMware vSphere 4.0 using iSCSI. Replication Manager uses Microsoft Volume Shadow Copy Service (VSS) to perform online replication of Exchange Server 2007 storage groups. The primary focus of the testing with vSphere was to validate EMC Replication Manager with that platform. Performance testing was also completed using LoadGen, and the results are documented here.

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

This document provides the specifications for the customer environment (that is, the storage configuration, design, sizing, and backup considerations) that constitute the requirements for this use case. It outlines the detailed components that make up the environment and their relationship to each other. The use case test scenarios are listed, together with the objectives for completing them and the expected results.

Use case definition
A use case reflects a defined set of tests that validates the reference architecture for a customer environment. This validated architecture represents a Proven Solution that can be used for the development and implementation of customer deployments.

Contents
This chapter includes the following topics:
- Audience and purpose
- Scope
- Business challenge
- Technology solution
- Objectives
- Reference Architecture
- Validated environment profile
- Hardware and software resources
- Prerequisites and supporting documentation

Audience and purpose

Audience
The intended audience for this Proven Solution Guide is:
- Internal EMC personnel
- EMC partners
- Customers

Purpose
The purpose of this use case is to provide a consolidated, virtualized solution for Microsoft Exchange Server 2007. The solution includes all the hardware and software components required to run this environment, including Active Directory and the required Exchange Server roles. Backup and recovery, replication, and disaster recovery are applicable use cases for this solution using EMC Replication Manager.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training. It can also be used by other EMC organizations (for example, the technical services or sales organization) as the basis for producing documentation for a technical services or sales kit.

Scope

This document describes the architecture of an EMC solution built at EMC's Global Solutions labs. The scope of this solution includes the following:
- Design of the CLARiiON CX4-120 storage array to support the tested building blocks
- Design of the CLARiiON CX4-120 storage array to support Replication Manager clones for Exchange databases and logs on a 2-day rotation cycle
- Storage loading simulated using Microsoft Load Generator (LoadGen)
- EMC Replication Manager performance testing and analysis

Not in scope
Implementation instructions and sizing guidelines are beyond the scope of this document, as is information on how to install and configure Microsoft Exchange Server 2007 and the required EMC products. However, links are provided on where to find all required software for this solution.

Business challenge

Overview
Managing a company's growing e-mail requirements, while lowering data center costs without compromising valuable data or service level agreements, presents a large challenge for IT departments. It demands a solution that is operationally efficient, affordable, and provides real-time backup, recovery, and data protection. This solution uses a building block designed for 600 users. The challenge was to ensure that, at the building block level, all components worked as expected and stayed within Microsoft Exchange database latency targets at all times while running local data replication.

Technology solution

Overview
A consolidated Microsoft Exchange infrastructure is the first step to meeting the challenges of e-mail management. This solution demonstrates the value of virtualizing a Microsoft Exchange 2007 environment, running on Windows Server 2008, with VMware ESX Server. The solution described in this reference architecture utilizes the EMC CLARiiON CX4-120 with iSCSI, a simple, easy-to-manage iSCSI storage system.

Objectives

Overview
The following are the objectives of this solution:

Objective                        Details
LoadGen validation               Testing and validating 2,000 users at 0.48 IOPS
Replication Manager validation   Validating the creation of Replication Manager replicas
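For context, a back-of-the-envelope translation of the tested I/O profile into host IOPS (this arithmetic is illustrative and not part of the original guide; it ignores the additional back-end disk I/O that RAID 1/0 mirrored writes generate):

    600 users   x 0.48 IOPS/user = 288 host IOPS per 600-user building block
    2,000 users x 0.48 IOPS/user = 960 host IOPS at the validated LoadGen target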

Reference Architecture

Corresponding Reference Architecture
This use case has a corresponding Reference Architecture document that is available on Powerlink and EMC.com. Refer to EMC Business Continuity for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4, EMC Replication Manager, and VMware vSphere 4 using iSCSI - Reference Architecture for details. If you do not have access to this content, contact your EMC representative.

Reference Architecture diagram
The following diagram depicts the overall physical architecture of the use case.

Note: Always ensure that there is redundant architecture in place for the domain controllers by having more than one domain controller server.

Hardware layout diagram
The following diagram describes the hardware layout used in this solution.

Validated environment profile

Profile characteristics
This configuration is based on previous testing that was run on Exchange 2007 SP1 with 300 GB 15k rpm FC drives, and it was tested with virtualization with no errors encountered. More information on this testing can be found on EMC.com and the Microsoft website under the Exchange Solutions Review Program (ESRP). For more information, see ESRP Storage Program - EMC CLARiiON CX3-20c (600 User) iSCSI Storage Solution for Microsoft Exchange Server 2007.

The solution was validated with the following environment profile:

Profile characteristic                                 Value
Number of users                                        600
Exchange 2007 SP1 Mailbox Servers                      1
Number of Exchange 2007 users per server               600
Number of storage groups per server                    4
Number of Exchange 2007 databases per storage group    1
Number of Exchange 2007 mailboxes per mail database    150
Mailbox quota                                          300 MB

Exchange 2007 production data:

Type          Value
RAID          1/0
Size          300 GB
Speed         15k
Connection    FC
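As a rough capacity check (an illustrative calculation, not from the original guide), the profile implies:

    150 mailboxes/database x 300 MB quota = 45,000 MB, or approximately 44 GB per database

which fits comfortably within the 120 GB database LUNs bound for this solution (see the Supporting Information section), leaving headroom for database overhead, growth, and online maintenance.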

Hardware and software resources

Hardware
The hardware used to validate the solution is listed in the following table.

Equipment                Quantity   Configuration
Rack                     1          Dell 42U rack
CLARiiON CX4-120         1          2 storage processors; 2.879 GB mirrored cache
DAE                      1          15 x 300 GB 15k FC HDD
Dell PowerEdge 2850      2          2 x 1 Gb/s NICs; 2 x 1 Gb/s iSCSI NICs; 3 x MSV switch
                                    adapters; 64 GB RAM; 4 x Intel64 Family 15 Model 4
                                    Stepping 8 GenuineIntel ~2793 MHz processors
Dell PowerEdge 1950      1          1 x 1 Gb/s production NIC; 8 GB RAM; 2 x Intel64
                                    Family 15 Model 4 Stepping 8 GenuineIntel ~2793 MHz
                                    processors
VMware vCenter Server    1          1 x 1 Gb/s production NIC; 8 GB RAM; 2 x Intel64
                                    Family 15 Model 4 Stepping 8 GenuineIntel ~2793 MHz
                                    processors
Dell PowerConnect 5324   2          24-port 1 Gigabit Ethernet Layer 2 switch with
                                    Layer 3 awareness and 4 combo ports

Software
The software used to validate the solution is listed in the following table.

Software                                                   Version
Microsoft Windows Server 2008                              RTM
Microsoft Windows Server 2003                              SP2
  (to support VMware vCenter Server)
VMware ESX Server                                          3.5 U2
VMware ESX Server                                          4.0
Microsoft Exchange Server 2007                             SP1
EMC PowerPath                                              5.2 x64
Microsoft iSCSI Initiator                                  Built-in
EMC Replication Manager (server/mount host)                5.2
EMC Solutions Enabler                                      6.5.2.5-891 x64
Navisphere admsnap                                         2.28
Navisphere CLI                                             6.28.0.4.4
VMware vCenter Server                                      2.5
Microsoft LoadGen                                          8.02.0045

Prerequisites and supporting documentation

Technology
It is assumed the reader has a general knowledge of the following products:
- EMC CLARiiON storage arrays
- EMC Replication Manager
- EMC Navisphere Manager
- Microsoft Exchange Server 2007
- Microsoft Exchange LoadGen
- Microsoft Windows Server 2008
- VMware ESX Server 3.5 virtualization software

Supporting documents
The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.
- Replication Manager Version 5.2.1 Product Guide
- EMC Solutions for Microsoft Exchange Server 2007 CLARiiON CX3 Series iSCSI - Best Practices Planning
- ESRP Storage Program - EMC CLARiiON CX3-20c (600 User) iSCSI Storage Solution for Microsoft Exchange Server 2007
- EMC Solutions for Microsoft Exchange 2007 Virtual Exchange 2007 Using Replication Manager Clones - Reference Architecture
- EMC Solutions for Microsoft Exchange 2007 Virtual Exchange 2007 Using RM Clones with Ontrack PowerControls for Email Recovery - Reference Architecture
- EMC Backup and Recovery for Microsoft Exchange 2007 SP1 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX 3.5 using iSCSI - Reference Architecture
- EMC Solutions for Microsoft Exchange 2007 Replication Manager Setup - Build Document
- EMC Solutions for Microsoft Exchange 2007 Standard Setup of an Exchange 2007 Mailbox Role - Build Document
- EMC Solutions for Microsoft Exchange 2007 Standard Setup of an Exchange 2007 HUB/CAS Role - Build Document
- EMC Business Continuity for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4, EMC Replication Manager, and VMware vSphere 4 using iSCSI - Reference Architecture

Third-party documents
The following documents are available on the Microsoft website:
- Exchange Server 2007 Load Generator
- Installing Microsoft Exchange Server 2007

Chapter 2: Storage Design

Overview

Introduction to storage design
Storage design is an important element in ensuring the successful development of this solution: EMC backup and recovery for Microsoft Exchange 2007 enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX 3.5 and 4.0 using iSCSI. The purpose of this use case is to build an Exchange 2007 SP1 environment on VMware ESX Server 3.5 Update 2 and 4.0 on the CLARiiON CX4-120 platform and integrate it with Replication Manager 5.2.1. This use case is not intended to be a comprehensive guide to every aspect of an Exchange 2007 SP1 with VMware ESX Server 3.5 Update 2 and 4.0 solution.

The storage design layout instructions presented in this chapter apply to the specific components used during the development of this solution. The scope is roughly bound by the following parameters and assumptions: the environment is representative of a Replication Manager 5.2.1 integration with Exchange 2007 SP1 on a VMware ESX Server 3.5 Update 2 implementation, and the same parameters and building blocks were used when this solution was tested with VMware ESX Server 4.0. Note, however, that actual implementations will vary from the parameters shown based on testing results.

Contents
This chapter contains the following topics:
- Storage design layout
- RAID group layout
- Best practices and recommendations

Storage design layout

Introduction to storage design layout
The following sections detail the layout principles associated with this solution.

Goal
The following are some of the key objectives when designing a configuration for the CLARiiON CX4-120 storage layout:
- The performance requirements must accommodate the required I/O based on the user profile.
- The design should be easy to understand and build upon (using a building block approach).
- The design should help to minimize the time and complexity of designing the storage layout.

Determining building block requirements
A building block approach was used to determine the number of physical spindles required for this solution. This was a commercial-level building block approach, based on ESRP testing and submissions, which are a proven solution for the desired IOPS per building block. For more information, see ESRP Storage Program - EMC CLARiiON CX3-20c (600 User) iSCSI Storage Solution for Microsoft Exchange Server 2007.

For more information on Microsoft's ESRP program, visit: http://technet.microsoft.com/en-us/exchange/bb412164.aspx
For more information about EMC solutions using the ESRP framework, visit: http://www.emc.com/esrp

RAID group layout

RAID group layout design
This design was tested based on previous, known performance of the building block configurations using metaLUNs. The storage solution was designed around a single DAE so that, once this configuration was proven, linear results could be shown as building blocks are added.

RAID group diagram
The RAID group layout is illustrated in the following diagram.

Best practices and recommendations

CLARiiON RAID group configuration
For CLARiiON RAID group configuration, it is recommended to:
- Keep the Exchange database and log LUNs in separate RAID groups.
- Keep the Replication Manager clone LUNs in a separate RAID group from the Exchange production LUNs.
- Balance the RAID groups across the storage processors.
- Ensure that a hot spare disk is assigned.

Exchange online maintenance
For Exchange online maintenance, ensure that Replication Manager jobs run outside the Exchange online maintenance window, and do not run Exchange online maintenance against an Exchange storage group while its Replication Manager job is running.

Note: Failure to follow this recommendation will cause a large decrease in the clone resync speed and slow online maintenance. A best practice is to run Replication Manager jobs during off-peak hours.

Clone RAID group configuration
For clone RAID group configuration, ensure that a separate RAID group is used for clone LUNs.

Note: Failure to do so will result in degraded performance on the production Exchange server and in the time to complete the Replication Manager jobs. Since the clone is used as a protection methodology, using the same drives for the source and destination defeats the purpose.

Chapter 3: Server and Network Design

Overview

Introduction to server and network design
This chapter focuses on the server and network design of this solution. The server design provides details on the physical and logical (virtual) servers that were used in this solution, including the number of processors, network cards, and amount of RAM used. For more information about the hardware specification of the servers, refer to the corresponding Reference Architecture document EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX 3.5 using iSCSI - Reference Architecture. The network design section provides an overview and details the best practices for using an iSCSI storage array.

Contents
This chapter contains the following topics:
- Server design
- Network design
- Best practices for virtual machines
- Best practices for iSCSI

Server design

Overview of server design
This solution was created with two Active Directory servers: one physical server and one virtual server. A physical server was also used for the VMware vCenter Server on Windows Server 2003. The following describes the configuration of each component used.

Configuration of components
The VMware ESX Servers were configured in a VMware High Availability cluster, each server with 64 GB RAM. Guests were separated onto each server to maximize the usage of these servers.

VMware ESX Server 1:
- Microsoft Active Directory server
- Exchange 2007 Mailbox role
- Exchange Hub/CAS roles

VMware ESX Server 2:
- EMC Replication Manager mount host
- EMC Replication Manager Server

The virtual machine (VM) guests were configured as follows:
- Microsoft Exchange Mailbox Server: 4 processors, 16 GB RAM, 3 network cards
- Microsoft Active Directory Server: 2 processors, 2 GB RAM, 1 network card
- Microsoft Exchange Hub/CAS Server: 2 processors, 8 GB RAM, 1 network card
- EMC Replication Manager mount host: 2 processors, 8 GB RAM, 3 network cards
- EMC Replication Manager Server: 4 processors, 2 GB RAM, 1 network card

Note: Microsoft Windows Server 2003 is required for EMC's Replication Manager Server, which is not currently supported on Microsoft Windows Server 2008. Once Replication Manager Server is supported on Windows Server 2008, it will be possible to have the Replication Manager Server and the mount host as part of the same virtual server. For this solution, the mount host had to be separate, as the mount host must run the same operating system as the production Exchange server.

The solution in total requires four licenses for Microsoft Windows Server:

Quantity   Description
1          Windows Server 2008 Enterprise* (VMware ESX Server, which allows for up to four guest licenses)
2          Microsoft Windows 2008 Standard Edition (physical domain controller; Microsoft System Center Virtual Machine Manager)
1          Microsoft Windows Server 2003 Standard Edition for the virtual Replication Manager Server

* Microsoft's Windows Server 2008 Enterprise license allows the use of up to four instances of the server software in virtual environments (for details, visit: http://www.microsoft.com/windowsserver2008/en/us/licensing-faq.aspx).

When possible, it is recommended to spread the Virtual Machine Disk Format (VMDK) files across the CLARiiON service processors to prevent performance issues if there is a very active VM. VMware Distributed Resource Scheduler (DRS) can be used to monitor the virtual servers and ensure that no single server overwhelms the host machine or uses all the resources, affecting other virtual machines.

A recommended distribution across the service processors on a CLARiiON array is:

Service Processor A:
- Microsoft Active Directory Server
- Exchange 2007 Mailbox role
- Exchange Hub/CAS roles

Service Processor B:
- EMC Replication Manager mount host
- EMC Replication Manager Server

Network design

Overview of network design
In this configuration, the Exchange Mailbox Server and Replication Manager mount host were configured with dedicated guest OS iSCSI network cards. Two physical 1 Gb/s Ethernet network switches were used for this solution: one for production network traffic and the other for iSCSI traffic (through a VLAN) to the array. Dedicated virtual networks were created to match the physical production and iSCSI networks. Guests requiring access to iSCSI were given additional network connections for array connectivity.

Best practices for virtual machines

Overview of best practices for virtual machines
The following best practices are recommended to keep a guest operating system running at top performance by improving network, iSCSI, guest operating system, hard drive space, and memory usage. (A combined command-line sketch of both changes follows the registry note below.)

Disable hibernation on Windows Server 2008
Hibernation is on by default in Windows Server 2008. This feature consumes additional processor power and storage space and is not needed for Exchange and Active Directory servers, which have built-in features to handle power loss. To disable hibernation on Windows Server 2008:
1. Run the following command from the command line on all virtual machine servers: powercfg -Hibernate OFF
2. Run the following command to confirm that hibernation is off: dir /ah
3. Reboot the server.

Registry change
This registry value specifies whether user-mode and kernel-mode drivers and kernel-mode system code can be paged to disk when not in use. Disabling this in a guest OS prevents unnecessary caching in the guest and host. A reboot of the system is needed for the change to take effect. To change the registry, add the following registry key to all virtual machines.

Note: Do not do this on a physical server.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
DisablePagingExecutive=dword:00000001
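A minimal command-line consolidation of the two changes above (illustrative only, assuming an elevated command prompt on each Windows Server 2008 guest; the reg add form is equivalent to importing the registry key shown above):

    rem Disable hibernation (removes hiberfil.sys)
    powercfg -Hibernate OFF

    rem Confirm hiberfil.sys no longer appears among the hidden files on the system drive
    dir c:\ /ah

    rem Prevent user-mode/kernel-mode drivers and kernel-mode system code from paging to disk
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f

    rem Reboot for the registry change to take effect
    shutdown /r /t 0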

Best practices for iSCSI

Best practices
The following are some iSCSI best practices for connectivity to EMC CLARiiON storage arrays. These recommendations are intended to provide a high-performance and stable solution; this section does not include all possible iSCSI best practices.

- Use dedicated switches for the production and iSCSI networks. If this is not possible, ensure that VLANs can be created on the switch used, to guarantee that the iSCSI traffic from the server to the array is separated from all other network traffic.
- Although not used in this solution, the recommended best practice is to have two physical NICs on the ESX server for redundancy.
- Ensure that all ports on the switch are set to auto-negotiate 1000.
  Note: It is not possible to disable auto-negotiation on ports on the Dell PowerConnect 5324 switches. While the GUI/command line allows the change, the port will become disabled and non-functional.
- Disable power management on the NICs on the server.
- Disable each of the following in the advanced settings on the NICs:
  - Jumbo packet
  - IPv4 Checksum Offload
  - TCP Checksum Offload (IPv4)
  - TCP Large Send Offload (IPv4)
  - UDP Checksum Offload (IPv4)
- On the iSCSI NICs on the Exchange servers, clear the checkboxes for Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks.
- Modify the registry with the following changes on the Exchange servers to optimize iSCSI performance (see the command-line sketch after this list):

  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters
  KeepAliveTime=Dword:300000 (Decimal)
  For more details, visit: http://support.microsoft.com/kb/324270

  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters\Interfaces\{1cdf670f-6155-4652-a317-118489577a22}
  TcpAckFrequency=Dword:1 (Decimal)
  For more details, visit: http://support.microsoft.com/kb/328890

For more details on these best practices, see EMC Solutions for Microsoft Exchange Server 2007 CLARiiON CX3 Series iSCSI - Best Practices Planning, available on EMC's Powerlink website.
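The same two values can be applied from the command line. A hedged sketch (the interface GUID below is the one recorded for this solution's environment; replace it with the GUID of the iSCSI interface on the target server, found under the same Interfaces registry key):

    rem TCP keep-alive every 300,000 ms on the iSCSI connections
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 300000 /f

    rem Acknowledge every packet on the iSCSI interface (disable delayed ACK)
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{1cdf670f-6155-4652-a317-118489577a22}" /v TcpAckFrequency /t REG_DWORD /d 1 /f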

Chapter 4: Installation and Configuration

Overview

Introduction to installation and configuration
This chapter provides the procedures and guidelines for configuring the components that make up the validated solution scenario. For more information about the background to this solution, see ESRP Storage Program - EMC CLARiiON CX3-20c (600 User) iSCSI Storage Solution for Microsoft Exchange Server 2007.

The configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components in the solution. Version-specific release notes are especially important.

EMC Replication Manager software is used for the creation of VSS cloned images and snapshots of the Exchange database and log file LUNs. While the software is capable of backup and restore using this technology, this paper focuses on the backup technology and comparative times when running sequential or parallel jobs. With Replication Manager, clone jobs can be configured to run sequentially, where database and log file LUNs are cloned or snapped one at a time, or in parallel, where two jobs run simultaneously. In the parallel case, backing up and checksumming the databases using eseutil /K completes faster (an example invocation follows the contents list below).

Contents
This chapter contains the following topics:
- Navisphere CLI scripts configuration
- Disk management scripts
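For reference, a hypothetical eseutil /K invocation (the database path and file name here are illustrative only; eseutil ships with Exchange 2007 and verifies page checksums against a dismounted database or replica copy):

    "%ProgramFiles%\Microsoft\Exchange Server\Bin\eseutil.exe" /K "S:\sg1db\Mailbox Database.edb"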

Navisphere CLI scripts configuration

Background
Several options are available when configuring a CLARiiON CX4-120 storage array. One option is to use the Navisphere user interface; another is to use the CLARiiON Navisphere Secure Command Line Interface (naviseccli). This solution provides details on using the command line interface. This method is recommended as a best practice because it is faster and provides the ability to document the array configuration while creating the array storage. This information can then be used in a disaster recovery document to quickly re-create the exact configuration.

Overview
The CLI commands used to create this solution are provided in the Supporting Information section. These steps assume that planning has already taken place on how the entire array is configured, including decisions on the RAID group configuration, RAID type, number of LUNs, size of LUNs, and so on.

Note: All lines for each section can be copied and pasted into a Windows command window. Copying and pasting lines of scripts automates the use of the commands without requiring additional user input. All CLI commands provided in this document are specific to this configuration. Details such as RAID types, LUN numbers, names, sizes, and volume labels can be modified to suit the implementation. They are provided here as an example of exactly what was completed to configure this specific solution.

Prerequisites
The following are the prerequisites for a successful configuration:
- Navisphere Agent and Navisphere CLI must be installed prior to configuration.
- Connectivity and registration must be set up prior to running the scripts. The scripts can be run from any server with Navisphere CLI installed.
- Hosts should be set up and configured, but they are not required for LUN creation. Hosts are required for host addition to the storage groups, but not for creating the storage groups themselves.

Steps for naviseccli configuration
The following are the steps for naviseccli configuration (see the verification sketch after this list):
1. Create storage groups
2. Create RAID groups
3. Bind LUNs
4. Create reserved LUN pool LUNs
5. Add LUNs to the reserved LUN pool
6. Add hosts to CLARiiON storage groups
7. Add LUNs to clone private LUNs
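After the scripts have run, the resulting configuration can be reviewed from the same CLI. A minimal verification sketch (illustrative; the SP address matches the examples in the Supporting Information section):

    rem List RAID groups, LUNs, and storage groups to confirm the configuration
    naviseccli -h 200.0.81.100 getrg
    naviseccli -h 200.0.81.100 getlun
    naviseccli -h 200.0.81.100 storagegroup -list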

Disk management scripts

Overview
Once the LUNs and hosts have been assigned to the storage group, they are available on the server. Rescan the disk drives in Device Manager and all of the LUNs will appear. Microsoft Windows Server 2008 places new volumes in an offline state by default, so disks must be brought online before partitions can be created. Diskpart can be used to configure each disk completely. Using scripts here simplifies and speeds up the process of configuring the volumes. Planning is required to ensure the scripts are correct.

Make volumes accessible to the host for data
Before the database and log volumes can be configured, the mount points must first be configured. To do this, type diskpart at the command prompt. The following commands can be selected together, copied, and pasted into diskpart to run automatically without any prompting.

Note: This is the configuration for this solution. Disk numbers may differ during implementation.

The following is an example of the two script files used with diskpart to create the S: drive (disk 10 is the mount point root) and then assign a drive to the mount point directories. The example assumes that disks 10 and 11 are the mount point drives for S: and T:.

Note: With Windows Server 2008, diskpart does not require the alignment of a partition to 64. This is only required when using diskpart; the GUI configuration does this automatically.

-------mountpoint.txt-------
rem assign letters for mount point root drives
select disk 10
online disk
attributes disk clear readonly
create partition primary
select partition 1
assign letter=s
select disk 11
online disk
attributes disk clear readonly
create partition primary
select partition 1
assign letter=t
-------

-------mounts.bat-------
format S:\ /fs:ntfs /v:mount_s /a:64k /q /y
format T:\ /fs:ntfs /v:mount_t /a:64k /q /y
md s:\sg1db
md t:\sg2db
md t:\sg1lg
md s:\sg2lg
-------

-------mountdrives.txt-------
select disk 21
online disk
attributes disk clear readonly
create partition primary
select partition 1
assign mount=s:\sg1db
-------

-------fmtdrives.bat-------
format S:\sg1db /fs:ntfs /v:sg1db /a:64k /q /y
format t:\sg2db /fs:ntfs /v:sg2db /a:64k /q /y
format t:\sg1lg /fs:ntfs /v:sg1lg /a:64k /q /y
format S:\sg2lg /fs:ntfs /v:sg2lg /a:64k /q /y
-------

To use these types of files, confirm the drives for the mount points, and then copy the disk 21 commands for each of the drives required for all storage groups (an illustrative extension follows this section). Run the following commands to quickly create the partitions and mount them to the mount points. From the Windows command prompt, enter:

C:\> diskpart /s mountpoint.txt
C:\> mounts.bat
C:\> diskpart /s mountdrives.txt
C:\> fmtdrives.bat

Using these scripts as an example, it is possible to quickly mount, partition, and format all the presented drives. It is recommended to use a file for recording all the actions.
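The mountdrives.txt block is repeated once per data volume, changing only the disk number and mount path. An illustrative extension (the disk number here is an assumption for this sketch) for the next database drive might look like this:

    rem additional data volume, mounted to a folder rather than a drive letter
    select disk 22
    online disk
    attributes disk clear readonly
    create partition primary
    select partition 1
    assign mount=t:\sg2db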

Chapter 5: Testing and Validation

Overview

Introduction to testing and validation
Storage design is an important element in ensuring the successful development of EMC's backup and recovery solution for Microsoft Exchange 2007 enabled by CLARiiON CX4-120, Replication Manager, and VMware ESX 3.5 and 4.0 using the Microsoft iSCSI Software Initiator. This chapter provides details on the components tested in this solution and a summary of the test results.

Contents
This chapter contains the following topics:
- Tested components
- Section A: Replication Manager with VMware ESX Server 3.5 test results
- Section B: Replication Manager with VMware ESX Server 4.0 test results

Tested components

Overview of the tested components
The following section details the components that were tested for this solution.

Replication Manager
For Replication Manager, the following tests were completed:
- Validate the functionality of Replication Manager with the VMware ESX Server 3.5 Update 2 and 4.0 environments
- Validate the performance of Replication Manager without LoadGen running
- Validate the performance of Replication Manager with clone jobs running in sequential operation
- Validate the performance of Replication Manager with clone jobs running in parallel operation

Section A: Replication Manager with VMware ESX Server 3.5 test results

Overview

Introduction
The following section details the high-level Replication Manager with VMware ESX Server 3.5 test results.

Test results summary

Summary of high-level Replication Manager with VMware ESX Server 3.5 test results
Functionality testing was carried out to determine the ability of Replication Manager to run in different modes, in comparison to baseline testing. Baseline LoadGen testing was done prior to the cloning process to provide comparison data on the impact of cloning on the Exchange server and the CLARiiON CX4 array. Performance testing was completed using Microsoft's LoadGen tool to simulate Exchange I/O. Running Replication Manager jobs while LoadGen is also running is intended to simulate running clone jobs during the peak hours of the production day; these tests indicate the worst-case scenario. Baseline testing and testing with Replication Manager running were both completed. As expected, running Replication Manager clone jobs during the production day has an impact both on the array and on Exchange 2007.

Comparison of baseline LoadGen 95th percentile results
The following LoadGen data details a number of LoadGen tasks during each LoadGen test. Its purpose is to demonstrate that there is a slight increase in latencies during certain tasks while Replication Manager is taking clones; however, in major tasks like login and logout these latencies are negligible.

Note: Perfmon data was not available for these tests; therefore, LoadGen details have been included to show the difference between the baseline and when clones are being taken.

Task Name                95th Pct     95th Pct     % difference   95th Pct       % difference
                         (Baseline)   (Parallel)   baseline to    (Sequential)   baseline to
                                                   parallel                      sequential
BrowseCalendar           53           74           39.62          73             37.74
BrowseContacts           92           123          33.70          113            22.83
BrowsePublicFolder       0            0            0              0              0
BrowseTasks              0            0            0              0              0
CreateContact            155          174          12.26          132            -14.84
CreateFolder             0            0            0              0              0
CreateTask               91           154          69.23          133            46.15
DeleteMail               0            0            0              0              0
DownloadOab              0            0            0              0              0
EditRules                0            0            0              0              0
EditSmartFolders         0            0            0              0              0
ExportMail               0            0            0              0              0
InitializeMailbox        0            0            0              0              0
Logoff                   2            2            0              2              0
Logon                    313657       315505       0.59           319704         1.93
MakeAppointment          665          656          -1.35          712            7.07
ModuleInit               227          131          -42.29         95             -58.15
ModuleTerm               0            0            0              0              0
MoveMail                 0            0            0              0              0
PostFreeBusy             0            0            0              0              0
PublicFolderPost         0            0            0              0              0
ReadAndProcessMessages   184          184          0              184            0
RequestMeeting           1365         1520         11.36          1613           18.17
Search                   0            0            0              0              0
SendMail                 214          214          0              233            8.88
SynchronizeFolders       0            0            0              0              0
UserInit                 0            0            0              0              0
UserTerm                 0            0            0              0              0

Baseline LoadGen results
The following diagram illustrates the Navisphere results with no clone jobs running. The SPs on average are no greater than 5 percent utilization while supporting 600 users.

LoadGen results
The following table provides details of the baseline LoadGen results.

Task Name                Count    Actual    Configured   Average   95th Pct
                                  Dist (%)  Dist (%)     Latency   Latency
BrowseCalendar           7237     14        14           31        124
BrowseContacts           6026     11        12           65        194
BrowsePublicFolder       0        0         0            0         0
BrowseTasks              0        0         0            0         0
CreateContact            1182     2         1            64        223
CreateFolder             0        0         0            0         0
CreateTask               609      1         1            55        202
DeleteMail               0        0         0            0         0
DownloadOab              609      1         1            0         0
EditRules                0        0         0            0         0
EditSmartFolders         0        0         0            0         0
ExportMail               0        0         0            0         0
InitializeMailbox        0        0         0            0         0
Logoff                   1795     3         3            3         13
Logon                    600      1         0            20590     307118
MakeAppointment          569      1         2            326       995
ModuleInit               2        0         0            2307      2704
ModuleTerm               0        0         0            0         0
MoveMail                 0        0         0            0         0
PostFreeBusy             2506     4         4            9         0
PublicFolderPost         0        0         0            0         0
ReadAndProcessMessages   23858    47        48           97        333
RequestMeeting           589      1         1            775       1673
Search                   0        0         0            0         0
SendMail                 4795     9         9            120       374
SynchronizeFolders       0        0         0            0         0
UserInit                 0        0         0            0         0
UserTerm                 0        0         0            0         0

Sequential jobs
The following diagram illustrates the Navisphere results with clone jobs running in sequential operation. As can be seen, each job was run in order, with both jobs completing in 30 minutes and the total job taking approximately one hour to complete. Up to 60k was tested without issues.

The following table provides details of the LoadGen results with sequential clone jobs.

Task Name                Count    Actual    Configured   Average   95th Pct
                                  Dist (%)  Dist (%)     Latency   Latency
BrowseCalendar           7104     14        14           25        74
BrowseContacts           5989     11        12           50        123
BrowsePublicFolder       0        0         0            0         0
BrowseTasks              0        0         0            0         0
CreateContact            1224     2         1            48        174
CreateFolder             0        0         0            0         0
CreateTask               558      1         1            40        154
DeleteMail               0        0         0            0         0
DownloadOab              621      1         1            0         0
EditRules                0        0         0            0         0
EditSmartFolders         0        0         0            0         0
ExportMail               0        0         0            0         0
InitializeMailbox        0        0         0            0         0
Logoff                   1840     3         3            2         2
Logon                    600      1         0            27965     315505
MakeAppointment          670      1         2            252       656
ModuleInit               2        0         0            91        131
ModuleTerm               0        0         0            0         0
MoveMail                 0        0         0            0         0
PostFreeBusy             2410     4         4            11        0
PublicFolderPost         0        0         0            0         0
ReadAndProcessMessages   24003    47        48           49        184
RequestMeeting           578      1         1            1325      1520
Search                   0        0         0            0         0
SendMail                 4775     9         9            78        214
SynchronizeFolders       0        0         0            0         0
UserInit                 0        0         0            0         0
UserTerm                 0        0         0            0         0

Parallel jobs
The following diagram illustrates the Navisphere results with clone jobs running in parallel operation. As can be seen, the jobs ran at the same time and completed in 30 minutes.

The following table provides details of the LoadGen results with parallel clone jobs.

Task Name                Count    Actual    Configured   Average   95th Pct
                                  Dist (%)  Dist (%)     Latency   Latency
BrowseCalendar           7375     14        14           28        73
BrowseContacts           6041     11        12           49        113
BrowsePublicFolder       0        0         0            0         0
BrowseTasks              0        0         0            0         0
CreateContact            1263     2         1            56        132
CreateFolder             0        0         0            0         0
CreateTask               606      1         1            39        133
DeleteMail               0        0         0            0         0
DownloadOab              584      1         1            0         0
EditRules                0        0         0            0         0
EditSmartFolders         0        0         0            0         0
ExportMail               0        0         0            0         0
InitializeMailbox        0        0         0            0         0
Logoff                   1712     3         3            3         2
Logon                    600      1         0            29121     319704
MakeAppointment          568      1         2            270       712
ModuleInit               2        0         0            87        95
ModuleTerm               0        0         0            0         0
MoveMail                 0        0         0            0         0
PostFreeBusy             2425     4         4            12        0
PublicFolderPost         0        0         0            0         0
ReadAndProcessMessages   23854    47        48           50        184
RequestMeeting           572      1         1            781       1613
Search                   0        0         0            0         0
SendMail                 4779     9         9            85        233
SynchronizeFolders       0        0         0            0         0
UserInit                 0        0         0            0         0
UserTerm                 0        0         0            0         0

Section B: Replication Manager with VMware ESX Server 4.0 test results

Overview

Introduction
The following section details the high-level Replication Manager with VMware ESX Server 4.0 test results.

Test results summary

Summary of high-level Replication Manager with VMware ESX Server 4.0 test results
The primary purpose of this testing was to validate the functionality of Replication Manager with VMware ESX Server 4.0. Two sets of Replication Manager tests were completed:
- Running Replication Manager jobs separately
- Running multiple Replication Manager jobs at the same time

Running LoadGen while also running Replication Manager was tested, to simulate how Exchange would respond while running Replication Manager during the production day.

Test results
The following tables and diagrams illustrate the results from performance testing completed using VMware ESX Server 4.0. All other hardware and software components in the solution are the same as those used for testing with ESX Server 3.5. Before running any tests with EMC Replication Manager, LoadGen was first used to ensure a valid baseline could be achieved. A number of performance counters are used when analyzing results to determine whether the specific configuration is valid and whether the required throughputs and response times are achieved. For comparison purposes, all results are placed in one table, as shown below.

Logical disk           Rec. MS value   Avg. Disk sec/Read             Rec. MS value   Avg. Disk sec/Write
                       (disk read)     baseline  mixed    separate    (disk write)    baseline  mixed    separate
Database (S:SG1DB)     <0.020          0.006     0.008    0.009       <0.020          0.005     0.008    0.008
Database (S:SG2DB)     <0.020          0.006     0.007    0.009       <0.020          0.005     0.008    0.009
Database (T:SG3DB)     <0.020          0.005     0.007    0.008       <0.020          0.005     0.008    0.008
Database (T:SG4DB)     <0.020          0.006     0.008    0.009       <0.020          0.005     0.008    0.008
Log (T:SG1Logs)        n/a             0.000     0.002    0.001       <0.010          0.002     0.002    0.002
Log (T:SG2Logs)        n/a             0.000     0.002    0.001       <0.010          0.001     0.002    0.002
Log (S:SG3Logs)        n/a             0.000     0.002    0.001       <0.010          0.002     0.002    0.002
Log (S:SG4Logs)        n/a             0.000     0.002    0.001       <0.010          0.002     0.002    0.002

Navisphere results with no clone jobs running
The following diagram illustrates the Navisphere results with no clone jobs running. On average, SP utilization is no greater than 6 percent while supporting 600 users.

SP utilization on the CLARiiON during separate and parallel RM jobs
The following diagram shows the SP utilization on the CLARiiON during the separate RM jobs.

The following diagram shows the Navisphere results with clone jobs running in parallel operation.

It is also useful to gather performance statistics from the servers to determine the server impact and how the application itself responds. As can be seen in the following table, the Exchange values are all well within the recommended thresholds.

This indicates that the clients will not experience performance problems with this configuration. Latencies increase a little while the Replication Manager jobs are running, but the increase is not enough to cause any performance problems on the servers or the clients.

Disk Transfers/Sec (Microsoft rec. value: n/a)
  Baseline: 55.717 | Mixed clone: 71.100 | Separate clone: 74.604
  The rate of read and write operations on the disk.

Database Page Fault Stalls/Sec (Microsoft rec. value: 0)
  Baseline: 0 | Mixed clone: 0 | Separate clone: 0
  The rate at which database file page requests require the database cache manager to allocate a new page from the database cache.

Log Record Stalls/Sec (Microsoft rec. value: <10, max <100)
  Baseline: 0 | Mixed clone: 0 | Separate clone: 0
  The number of log records per second that cannot be added to the log buffers (because the log buffers are full or a log buffer is a bottleneck).

Log Threads Waiting (Microsoft rec. value: <10 on average)
  Baseline: 0 | Mixed clone: 0 | Separate clone: 0
  Regular spikes concurrent with log record stall spikes indicate the log disks are a bottleneck.

MSExchangeIS Client: RPCs Failed: Server Too Busy (Microsoft rec. value: 0)
  Baseline: 0.000 | Mixed clone: 0.000 | Separate clone: 0.000
  The client-reported rate of failed RPCs since the store was started.

RPC Averaged Latency (Microsoft rec. value: no higher than 25 ms on average)
  Baseline: 5.815 | Mixed clone: 8.073 | Separate clone: 8.217
  Information about how clients are affected when the overall server RPC averaged latencies increase.

RPC Number of Slow Packets (Microsoft rec. value: <1 on average, <3 at all times)
  Baseline: 0.338 | Mixed clone: 0.475 | Separate clone: 0.506
  The number of RPC packets in the past 1,024 packets that have latencies longer than two seconds.

RPC Requests (Microsoft rec. value: <70 at all times)
  Baseline: 0.239 | Mixed clone: 0.351 | Separate clone: 0.428
  The overall requests that are currently executing within the information store process.

MSExchangeIS Mailbox: Messages Delivered/Sec (Microsoft rec. value: n/a)
  Baseline: 9 | Mixed clone: 11 | Separate clone: 12
  The number of messages delivered per second.

Messages Submitted/Sec (Microsoft rec. value: n/a)
  Baseline: 2.81 | Mixed clone: 3.66 | Separate clone: 3.99
  The number of messages submitted per second.

Messages Queued for Submission (Microsoft rec. value: <50 at all times, not sustained for more than 15 minutes)
  Baseline: 3.90 | Mixed clone: 4.69 | Separate clone: 3.84
  The current number of messages that are not yet processed by the transport layer.

Processor: % Processor Time (Microsoft rec. value: n/a)
  Baseline: 8.296 | Mixed clone: 11.090 | Separate clone: 12.162
  The percentage of processor time used.

Memory: Available MBytes (Microsoft rec. value: >100)
  Baseline: 3661.743 | Mixed clone: 3757.527 | Separate clone: 3677.326
  The amount of physical memory, in megabytes (MB), immediately available for allocation to a process or for system use.

Free System Page Table Entries (Microsoft rec. value: n/a)
  Baseline: 33,559,166.941 | Mixed clone: 33,559,210.212 | Separate clone: 33,559,590.023
  The number of page table entries not currently in use by the system.

Pool Nonpaged Bytes (Microsoft rec. value: n/a)
  Baseline: 64,147,289.946 | Mixed clone: 63,121,423.815 | Separate clone: 63,294,277.818
  System virtual addresses that are guaranteed to be resident in physical memory at all times and can thus be accessed from any address space without incurring paging input/output (I/O).

Pool Paged Bytes (Microsoft rec. value: n/a)
  Baseline: 131,982,954.090 | Mixed clone: 133,113,935.073 | Separate clone: 133,837,570.116
  The portion of shared system memory that can be paged to the disk paging file.

Chapter 6: Conclusion

Overview
This solution contains all components required to run a messaging infrastructure, including protection, at a relatively low cost. It provides benefits to customers on a number of levels:
- Virtualization
- Protection
- Simplified mailbox design

Virtualization
With EMC's continued partnership with VMware, this solution illustrates the many benefits of a virtualized platform for Exchange 2007. The main benefits are server consolidation and the ability of Exchange and Replication Manager to run within a virtualized platform. EMC can help accelerate the assessment, design, implementation, and management of a virtualized Exchange 2007 environment while lowering the implementation risk and cost.

Protection
Replication Manager is ideal for backup automation and for accelerating the creation of a gold copy of the production data for instant restore should corruption occur. Replication Manager is an easy-to-use, point-and-click application to manage and automate data protection at the array level.

Simplified mailbox design
By using the building block method, Exchange deployments achieve predictable performance for all mailbox servers. The building block method removes assumptions and guesswork when sizing the servers and storage.

Objectives and results
The following details the objectives of this use case and the results achieved.

Objective: Confirm the functionality of Replication Manager in a virtualized environment using two different methods for running the jobs.
Result: Replication Manager jobs were run both sequentially and in parallel. Two gold clones could be created, and both the sequential and parallel jobs worked without issue, as expected.

Objective: Determine whether running Replication Manager jobs would have an impact if run during the production day.
Result: Results from LoadGen and Navisphere Analyzer confirmed that running Replication Manager jobs during the production day does have an impact on Exchange 2007 performance. While the increase in latencies is low, it is still recommended to run the clone jobs during non-production or off-peak hours.

Conclusion
The powerful combination of EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server, as tested, provides an integrated backup solution that is ideal for the midsize customer. IT infrastructure management is becoming increasingly complex, and so are the needs for constant information availability and detailed management and reporting capabilities. The growth of data is driving longer backup and recovery windows, and core business processes, such as generating copies of data and establishing recovery processes to meet business service level agreements, are taking longer to complete. With the decreasing cost of disk storage, replication technologies are now more cost-effective for enabling real-time backup and recovery in parallel or sequential environments.

With this EMC solution and the advanced features of EMC Replication Manager, discovery of the environment is automated, allowing customers to easily schedule replications in their backup environment. Replication Manager places the Microsoft application in the proper state for application-consistent replicas, which accelerates backups and provides instant restore capabilities for test/development environments or production tiered storage environments. Application images can be transferred instantly between servers or between sites to meet the customer's business needs. This EMC solution, deployed in a virtual environment based on VMware ESX Server, contributes to improving operational efficiency, so customers can deploy fewer resources for completing the operational tasks of detailed management reporting and deploying replication-based backup processes.

Testing verified that using Microsoft Exchange Server 2007 SP1 with local data replication allows for fast data replication in the event of database loss or corruption. The storage layout for this solution is based on a design that utilizes a building block approach, which builds upon itself repeatedly as the customer's requirements grow and additional space and fault tolerance are required. Testing confirmed that, at the smallest building block level, all components worked in a single system as expected, and within Microsoft Exchange database latency targets, while running local data replication.

Supporting Information

Overview

This chapter contains supporting information on configuring Navisphere CLI (naviseccli) scripts.

Navisphere CLI scripts configuration

The following scripts were used for the naviseccli configuration. Replace -h 200.0.81.100, as seen in the following CLI examples, with the management IP address of either SP A or SP B.

Creating storage groups

naviseccli -h 200.0.81.100 storagegroup -create -gname EXCH
naviseccli -h 200.0.81.100 storagegroup -create -gname "ESX Datastor"
naviseccli -h 200.0.81.100 storagegroup -create -gname F81RMMH01
naviseccli -h 200.0.81.100 storagegroup -create -gname "EMC Replication Storage"

Creating RAID groups

Creating RAIDGroup 10 for Logs and ESX (Disks 0,1,2,3):
NaviSecCLI -h 200.0.81.100 CreateRG 10 0_1_0 0_1_1 0_1_2 0_1_3 -rm yes -pri High

Creating RAIDGroup 11 for Databases (Disks 10,11,12,13):
NaviSecCLI -h 200.0.81.100 CreateRG 11 0_1_10 0_1_11 0_1_12 0_1_13 -rm yes -pri High

Creating RAIDGroup 12 for Hotspare (Disk 14):
NaviSecCLI -h 200.0.81.100 CreateRG 12 0_1_14 -rm yes -pri High

Creating RAIDGroup 13 for RMLUNS and RLP (Disks 5,6,7,8,9):
NaviSecCLI -h 200.0.81.100 CreateRG 13 0_1_5 0_1_6 0_1_7 0_1_8 0_1_9 -rm yes -pri High

Binding LUNs

Binding 12GB LUN for Q2RM_SG1LG:
NaviSecCLI -h 200.0.81.100 bind r1_0 104 -rg 10 -sq bc -cap 25165824 -sp b
naviseccli -h 200.0.81.100 chglun -l 104 -name Q2RM_SG1LG
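The -cap values in these bind commands are block counts: -sq bc sets the size qualifier to blocks, and a CLARiiON block is 512 bytes, so 1 GB corresponds to 2,097,152 blocks. As a check on the capacities used throughout this appendix: 12 GB = 12 x 2,097,152 = 25,165,824 blocks, 120 GB = 251,658,240 blocks, and 175 GB = 367,001,600 blocks.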

Binding 12GB LUN for Q2RM_SG2LG:
NaviSecCLI -h 200.0.81.100 bind r1_0 105 -rg 10 -sq bc -cap 25165824 -sp b
naviseccli -h 200.0.81.100 chglun -l 105 -name Q2RM_SG2LG

Binding 12GB LUN for Q2RM_SG3LG:
NaviSecCLI -h 200.0.81.100 bind r1_0 106 -rg 10 -sq bc -cap 25165824 -sp a
naviseccli -h 200.0.81.100 chglun -l 106 -name Q2RM_SG3LG

Binding 12GB LUN for Q2RM_SG4LG:
NaviSecCLI -h 200.0.81.100 bind r1_0 107 -rg 10 -sq bc -cap 25165824 -sp a
naviseccli -h 200.0.81.100 chglun -l 107 -name Q2RM_SG4LG

Binding 1GB LUN for S:
NaviSecCLI -h 200.0.81.100 bind r1_0 115 -rg 10 -sq bc -cap 2097152 -sp a
naviseccli -h 200.0.81.100 chglun -l 115 -name Q2RM_S

Binding 1GB LUN for T:
NaviSecCLI -h 200.0.81.100 bind r1_0 116 -rg 10 -sq bc -cap 2097152 -sp b
naviseccli -h 200.0.81.100 chglun -l 116 -name Q2RM_T

Binding 175GB LUN for shared ESX location:
NaviSecCLI -h 200.0.81.100 bind r1_0 120 -rg 10 -sq bc -cap 367001600 -sp a
naviseccli -h 200.0.81.100 chglun -l 120 -name esx-v01

Binding 175GB LUN for shared ESX location:
NaviSecCLI -h 200.0.81.100 bind r1_0 121 -rg 10 -sq bc -cap 367001600 -sp b
naviseccli -h 200.0.81.100 chglun -l 121 -name esx-v02

Binding 120GB LUN for Q2RM_SG1DB:
NaviSecCLI -h 200.0.81.100 bind r1_0 108 -rg 11 -sq bc -cap 251658240 -sp a
naviseccli -h 200.0.81.100 chglun -l 108 -name Q2RM_SG1DB

Binding 120GB LUN for Q2RM_SG2DB:
NaviSecCLI -h 200.0.81.100 bind r1_0 109 -rg 11 -sq bc -cap 251658240 -sp a
naviseccli -h 200.0.81.100 chglun -l 109 -name Q2RM_SG2DB

Binding 120GB LUN for Q2RM_SG3DB:
NaviSecCLI -h 200.0.81.100 bind r1_0 110 -rg 11 -sq bc -cap 251658240 -sp b
naviseccli -h 200.0.81.100 chglun -l 110 -name Q2RM_SG3DB

Binding 120GB LUN for Q2RM_SG4DB:
NaviSecCLI -h 200.0.81.100 bind r1_0 111 -rg 11 -sq bc -cap 251658240 -sp b
naviseccli -h 200.0.81.100 chglun -l 111 -name Q2RM_SG4DB

Adding Disk 14 as hotspare:
NaviSecCLI -h 200.0.81.100 bind hs 112 -rg 12

Binding 8 x 120GB LUNs for RMDB:
NaviSecCLI -h 200.0.81.100 bind r5 130 -rg 13 -sq bc -cap 251658240 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 131 -rg 13 -sq bc -cap 251658240 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 132 -rg 13 -sq bc -cap 251658240 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 133 -rg 13 -sq bc -cap 251658240 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 134 -rg 13 -sq bc -cap 251658240 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 135 -rg 13 -sq bc -cap 251658240 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 136 -rg 13 -sq bc -cap 251658240 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 137 -rg 13 -sq bc -cap 251658240 -sp b

Binding 8 x 12GB LUNs for RMLG:
NaviSecCLI -h 200.0.81.100 bind r5 138 -rg 13 -sq bc -cap 25165824 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 139 -rg 13 -sq bc -cap 25165824 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 140 -rg 13 -sq bc -cap 25165824 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 141 -rg 13 -sq bc -cap 25165824 -sp a
NaviSecCLI -h 200.0.81.100 bind r5 142 -rg 13 -sq bc -cap 25165824 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 143 -rg 13 -sq bc -cap 25165824 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 144 -rg 13 -sq bc -cap 25165824 -sp b
NaviSecCLI -h 200.0.81.100 bind r5 145 -rg 13 -sq bc -cap 25165824 -sp b

Binding 1GB LUN for S on RMMountHost:
NaviSecCLI -h 200.0.81.100 bind r1_0 146 -rg 10 -sq bc -cap 2097152 -sp a
naviseccli -h 200.0.81.100 chglun -l 146 -name Q2RM_S_RMMH01

Binding 1GB LUN for T on RMMountHost:
NaviSecCLI -h 200.0.81.100 bind r1_0 147 -rg 10 -sq bc -cap 2097152 -sp b
naviseccli -h 200.0.81.100 chglun -l 147 -name Q2RM_T_RMMH01
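Note that the replica LUNs bound above (130 through 145) are not given friendly names, unlike the production LUNs. If labels are wanted in Navisphere, the same chglun pattern used earlier in this appendix applies; a minimal sketch, with the LUN names here purely illustrative:

naviseccli -h 200.0.81.100 chglun -l 130 -name Q2RM_RMDB01
naviseccli -h 200.0.81.100 chglun -l 138 -name Q2RM_RMLG01

Repeat for the remaining replica LUNs (131 through 137 and 139 through 145), incrementing the name suffix.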

Creating reserved LUN pool LUNs

Binding LUN for Reserved LUN 150:
NavisecCLI -h 200.0.81.100 bind r5 150 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 151:
NavisecCLI -h 200.0.81.100 bind r5 151 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 152:
NavisecCLI -h 200.0.81.100 bind r5 152 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 153:
NavisecCLI -h 200.0.81.100 bind r5 153 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 154:
NavisecCLI -h 200.0.81.100 bind r5 154 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 155:
NavisecCLI -h 200.0.81.100 bind r5 155 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 156:
NavisecCLI -h 200.0.81.100 bind r5 156 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 157:
NavisecCLI -h 200.0.81.100 bind r5 157 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 158:
NavisecCLI -h 200.0.81.100 bind r5 158 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 159:
NavisecCLI -h 200.0.81.100 bind r5 159 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 160:
NavisecCLI -h 200.0.81.100 bind r5 160 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 161:
NavisecCLI -h 200.0.81.100 bind r5 161 -rg 13 -sq bc -cap 2097152 -sp b
Binding LUN for Reserved LUN 162:
NavisecCLI -h 200.0.81.100 bind r5 162 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 163:
NavisecCLI -h 200.0.81.100 bind r5 163 -rg 13 -sq bc -cap 2097152 -sp b

Binding LUN for Reserved LUN 164:
NavisecCLI -h 200.0.81.100 bind r5 164 -rg 13 -sq bc -cap 2097152 -sp a
Binding LUN for Reserved LUN 165:
NavisecCLI -h 200.0.81.100 bind r5 165 -rg 13 -sq bc -cap 2097152 -sp b

Adding LUNs to the reserved LUN pool

NaviSecCLI -h 200.0.81.100 reserved -lunpool -addlun 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165

Adding hosts to CLARiiON storage groups

naviseccli -h 200.0.81.100 storagegroup -connecthost -host F81VEX01 -gname EXCH
naviseccli -h 200.0.81.100 storagegroup -connecthost -host F81esx01 -gname "ESX Datastor"
naviseccli -h 200.0.81.100 storagegroup -connecthost -host F81esx02 -gname "ESX Datastor"
naviseccli -h 200.0.81.100 storagegroup -connecthost -host F81VRMMH01 -gname F81RMMH01
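After running the scripts, a few read-only checks can confirm that the reserved LUN pool and storage groups are populated as intended. A minimal sketch, assuming the -list switches behave as documented in the Navisphere CLI reference for this release (verify before use):

naviseccli -h 200.0.81.100 reserved -lunpool -list
naviseccli -h 200.0.81.100 storagegroup -list -gname EXCH
naviseccli -h 200.0.81.100 getlun 104 -name -owner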

Adding LUNs to clone private LUNs

To enable EMC Replication Manager to create clones, two LUNs must be assigned as clone private LUNs on the array. Two 1 GB LUNs are sufficient. To assign these LUNs:

1. Right-click the array, navigate to SnapView, and select Clone Feature Properties. The Clone Feature Properties window is displayed.
2. Select the two 1 GB LUNs to be added and select Allow Protected Restore.
3. Click OK.
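For fully scripted deployments, clone private LUNs can also be assigned from the CLI. A hedged sketch, assuming the snapview -allocatecpl verb is available in this Navisphere CLI release and using two illustrative LUN numbers (170 and 171); verify the exact syntax against the CLI reference before use:

naviseccli -h 200.0.81.100 snapview -allocatecpl -spA 170 -spB 171 -o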