EMC Solutions for Microsoft Exchange 2007: CLARiiON CX3 Series iSCSI


Best Practices Planning

Abstract

This white paper presents the best practices for optimizing performance for a Microsoft Exchange 2007 solution on EMC CLARiiON storage arrays through iSCSI connectivity. The recommendations are targeted toward ensuring availability and disaster recovery along with performance. This paper covers the best practice recommendations in the following categories:

Exchange
Networking
Operating system
Backup and restore
CLARiiON and EMC Replication Manager

July 2007

Copyright 2007 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part Number H2871

Table of Contents

Executive summary
Introduction
  Audience
  Terminology
Exchange best practices
  Exchange storage group database and log files
  Periodic run of Exchange Best Practices Analyzer
  Clustering Exchange servers
  Two DC/GCs per AD site
  One database per storage group
Networking best practices
  1 Gigabit Ethernet switches with VLAN capabilities
  Dedicated switches for creating VLANs for production and iSCSI subnets
  1 GbE NICs on the OS for production and iSCSI traffic
  CAT6 cables for GbE connectivity
  Network speed and duplexing: auto-negotiate for GbE network and switch ports
  Jumbo Frame support set to 9000
Operating system best practices
  Installation of the latest Microsoft iSCSI initiator
  Clear the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks boxes on iSCSI NICs only
  Installation of the latest NIC driver
  iSCSI configuration
  TCP/IP registry changes
  Use high-performance disks for best Exchange performance
  Plan for performance and not capacity
  Use DISKPART to align iSCSI LUNs for best performance
  Use EMC RAID 1/0 for log files
  Use EMC RAID 1/0 for database files
  Use RAID 5 if Replication Manager is used for instant local recovery capabilities
Backup and restore best practices
  Use RM version 5.0 and VSS to implement instant local recoverability
  RM VSS snapshots for iSCSI LUNs supporting Exchange scheduled in off-hours
  Name LUNs for quick identification
  Use NaviCLI for clone creation of RAID groups and storage groups
  Increase NT Backup logical disk buffers size to 64
  Increase NT Backup maximum buffer size to 1024
  Increase NT Backup maximum number of tape buffers to 16
CLARiiON and Replication Manager best practices
  CLARiiON
    Balanced LUN design
  Replication Manager
    Clone LUN creation using NaviCLI
Local continuous replication best practices
  One database per storage group
  Size for additional 20 percent processor overhead
  Additional storage requirements
  Public folders
  Database
  Backup
  Copy isolation
Cluster continuous replication best practices
  Public folders
  Hardware
  Storage
  Database
  Backup
  Microsoft Storage Calculator
  Storage group design
Conclusion
References

Executive summary

While designing a Microsoft Exchange solution, one criterion for the design should be that it ensures a balance between capacity, availability, and performance. While ensuring scalability, the solution should be able to maintain high availability, optimum performance, and an efficient mechanism for disaster recovery. This white paper is a resource for optimizing performance for Exchange 2007 storage configurations on EMC CLARiiON through iSCSI.

Introduction

This white paper outlines best practice recommendations for Microsoft Exchange, networking, operating systems, backup and restore, and continuous replication. A section on best practices for CLARiiON and EMC Replication Manager is also included.

Audience

This white paper is intended for IT administrators and system engineers who have an interest in implementing Microsoft Exchange 2007 using EMC CLARiiON systems. It is assumed that the reader has a general knowledge of Microsoft Exchange, Active Directory, and CLARiiON features and terminology.

Terminology

Active Directory: An advanced directory service introduced in Windows 2000 Server. It stores information about objects in a network and makes this information available to users and network administrators through a protocol such as LDAP.

Disk volume: A physical storage unit in the CLARiiON system that is exported from the storage array. All other volume types are created from disk volumes.

Group: A CLARiiON storage-system term for a disk group.

iSCSI: The Internet SCSI protocol for sending SCSI packets over TCP/IP networks.

iSCSI initiator: An iSCSI endpoint identified by a unique iSCSI-recognized name. It initiates iSCSI sessions by sending commands to the other endpoint (the iSCSI target).

iSCSI target: An iSCSI endpoint identified by a unique iSCSI-recognized name. It executes the commands sent by the iSCSI initiator.

RAID (redundant array of independent disks): A method of storing data on multiple disk drives to increase performance and storage capacity. RAID also provides redundancy and fault tolerance.

RAID 1: A RAID method that provides data integrity by mirroring (copying) data onto another disk. This RAID type ensures no data loss or interruption of service if a disk fails. It supports fast read performance but involves high cost: because half of the disk space is allocated for data protection, only 50 percent of the total disk drive capacity is available for data storage.

RAID 5: A RAID method that supports distributed data guarding, where the data is striped at a block level across several drives and the parity is distributed among the drives; no single disk is devoted to parity. Parity information is stored so that data can be reconstructed if needed. If one disk fails, it is possible to rebuild the complete data set so that no data is lost. Performance is good for reads but slower for writes. Data loss occurs if a second disk fails before data from the first failed disk is rebuilt.

RAID 1/0: A RAID method where the data is striped and then mirrored. Typically this is created using a minimum of four drives, though in the case of CLARiiON it can be created using only two drives, with provision for adding more drives later for additional space. RAID 1/0, when used with four disks or more, supports better disk performance and fault tolerance.

SP A: A generic term for the first storage processor in a CLARiiON storage system.

SP B: A generic term for the second storage processor in a CLARiiON storage system.

Storage processor: A circuit board with memory modules and control logic that manages the storage system I/O between the host's Fibre Channel adapter and the disk modules on a CLARiiON storage system.

Volume Shadow Copy Service (VSS): A Windows service that provides an infrastructure that enables third-party storage management programs, business programs, and hardware providers to create and manage consistent point-in-time copies of data called shadow copies.

Exchange best practices

Listed below are the best practice recommendations for the Microsoft Exchange solution using CLARiiON CX3-20 storage arrays and iSCSI for connectivity.

Exchange storage group database and log files

It is recommended to ensure that database files and log files from the same Exchange storage group do not share the same physical spindles. This prevents the loss of multiple drives from causing the loss of both an Exchange database and its logs, or of an entire storage group.

Periodic run of Exchange Best Practices Analyzer

It is highly recommended to run the Microsoft Exchange Best Practices Analyzer (EXBPA) against the Exchange servers and to follow all of its recommendations. EXBPA can be run from within the Exchange Management Console (EMC). The Exchange Best Practices Analyzer automatically examines an Exchange server deployment and determines whether the configuration is set according to Microsoft best practices; it can be run either through Microsoft Operations Manager (MOM) or through periodic scheduled runs of EXBPA. Ensure that the servers are running the latest recommended definitions.

Clustering Exchange servers

Clustering builds resilient systems that provide excellent protection from server and application failures. For low-cost, highly available clusters, it is recommended to use Microsoft Cluster Service (MSCS) with Exchange servers to increase the fault tolerance of the hardware running the Exchange server. MSCS reduces downtime during hotfix and patch management. Training customer administrative staff in MSCS is also recommended, as it will help them handle the differences in managing clustered Exchange servers.

Two DC/GCs per AD site

Because Exchange relies heavily on Active Directory for DSAccess using the DC/GCs, it is highly recommended to have a minimum of two AD DC/GC servers per site for fault tolerance. The best practice for the number of DC/GCs is a 4:1 physical processor ratio: for every four Exchange server physical processors, there should be one DC/GC processor, with a minimum of two DC/GC servers per site for fault tolerance.

One database per storage group

With the Exchange 2007 architecture change, up to 50 storage groups are possible, with one database per storage group. As with Exchange 2003, it is possible to have multiple databases per storage group, but this is not recommended.

Networking best practices

Listed below are the best practice recommendations for networking using iSCSI for connectivity with the CLARiiON CX3-20 storage arrays.

1 Gigabit Ethernet switches with VLAN capabilities

It is recommended to use 1 Gigabit Ethernet (GbE) switches, as these switches are capable of setting up virtual LANs (VLANs) to segment production and iSCSI traffic. For best iSCSI performance, GbE connectivity is recommended.

Dedicated switches for creating VLANs for production and iSCSI subnets

Ideally, use dedicated switches for production and for iSCSI connectivity. If this is not possible, ensure that the switches are capable of creating VLANs to separate the production and iSCSI subnets.

1 GbE NICs on the OS for production and iSCSI traffic

For best performance, it is recommended to use independent, dedicated 1 GbE NICs on each host: one NIC for production traffic and one or more NICs for iSCSI traffic.

CAT6 cables for GbE connectivity

It is recommended to use CAT6 cables for best performance and reliability. In testing, they showed superior results compared to CAT5E cables when used for 1000 Mb/s connectivity, and the connections worked without any issues.

Network speed and duplexing: auto-negotiate for GbE network and switch ports

Ensure that all ports on the switches are set to auto-negotiate at 1000 Mb/s, and that the corresponding NICs are set to match. Note that NIC manufacturers do not intend AUTO/AUTO as a final setting; the speed and duplex should be manually hard coded once the networking configuration and speed are settled.

Jumbo Frame support set to 9000

It is recommended to use NIC cards that have Jumbo Frame support. FLARE release 24 supports a Jumbo Frames setting of 9000 (or a setting in a similar range, per the NIC vendor). Ensure that the switch has a matching setting, with the frame size enabled either globally or per port, depending on the vendor. A simple end-to-end verification sketch is shown below.
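The jumbo frame path can be verified end to end with a simple ping test. This is a minimal sketch, not part of the original recommendations; the target address 200.0.0.3 is only an example (it reuses the CLARiiON SP address from the NaviCLI examples later in this paper). The -f switch sets the do-not-fragment bit and -l sets the payload size; 8972 bytes of payload plus 28 bytes of IP and ICMP headers fills a 9000-byte frame.

C:\> ping -f -l 8972 200.0.0.3

If the response "Packet needs to be fragmented but DF set." is returned, a device in the path is not passing 9000-byte frames and the jumbo frame configuration should be rechecked on the NIC, the switch, and the array.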

Operating system best practices

Listed below are the operating system best practice recommendations for CLARiiON CX3-20 storage arrays using iSCSI for connectivity.

Installation of the latest Microsoft iSCSI initiator

It is recommended to use the latest Microsoft iSCSI initiator for best performance. At the time of publication, Microsoft iSCSI initiator version 2.04 was the latest available release.

Clear the Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks boxes on iSCSI NICs only

It is recommended to clear the Client for Microsoft Networks checkbox along with the File and Printer Sharing for Microsoft Networks checkbox on the NICs dedicated to iSCSI (Figure 1).

Figure 1. Clear the two checkboxes

Installation of the latest NIC driver

It is recommended to install the latest vendor NIC driver for best performance.

iSCSI configuration

Following are the recommended steps for the iSCSI configuration:

1. Select the iSCSI initiator.

2. Select Automatically restore this connection when the system boots (see Figure 2).

Figure 2. Log On to Target dialog box

3. Select Enable multi-path.

4. Click Advanced. The Advanced Settings dialog box appears, as shown in Figure 3.

Figure 3. Advanced Settings dialog box

5. In the Local adapter field, select Microsoft iSCSI Initiator.

6. In the Source IP field, select the iSCSI NIC designated for SP A or SP B.

7. In the Target Portal field, select the iSCSI port on the CLARiiON (SP A or SP B), A0-A4 or B0-B4.

8. For the interconnect SP setup, ensure that a single NIC does not have multiple connections to the same SP. For a sample connection, refer to Figure 4: NIC 0 to A0 and B3, NIC 1 to A3 and B0.

Figure 4. iSCSI Initiator Properties

TCP/IP registry changes

For optimal iSCSI performance, the following registry change is recommended:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\
KeepAliveTime = DWORD: 300000 (decimal)

For more details, visit http://support.microsoft.com/kb/324270.

It is also recommended to add TcpAckFrequency = 1 under the interface key of each network card used for iSCSI. (The IP addresses listed under each Interfaces\{guid}\IPAddress key identify the correct interface.) For example:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{A8EFAD93-95C3-4E98-AE5D-CE0E6185CA19}
TcpAckFrequency = DWORD: 1

For more details, visit http://support.microsoft.com/kb/328890. A command-line sketch of these registry changes follows.
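The registry changes above can also be applied from a command prompt with the standard reg.exe tool. This is a hedged sketch, not part of the original paper; the interface GUID shown is the example value from above and must be replaced with the GUID of the host's actual iSCSI interface, and a reboot is typically required before the TCP/IP parameters take effect.

C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 300000 /f

C:\> reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{A8EFAD93-95C3-4E98-AE5D-CE0E6185CA19}" /v TcpAckFrequency /t REG_DWORD /d 1 /f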

Use high-performance disks for best Exchange performance

It is recommended to use high-performance (high-rpm) disks with Exchange 2007 to ensure the best user experience.

Plan for performance and not capacity

The most common error made while planning an Exchange server storage solution is designing for capacity and not for performance, or I/Os per second (IOPS). One important storage parameter for performance is disk latency; high disk latency is synonymous with slow performance. Microsoft recommends that read and write latencies not exceed 20 ms. Following are the Microsoft guidelines for good performance:

Average read and write latencies below 20 ms (Average Disk sec/Read, Average Disk sec/Write)
Maximum read and write latencies below 50 ms

With advances in disk technology, the increase in storage capacity of a disk drive has outpaced the increase in IOPS. Hence, IOPS capacity is the standard to be used while planning Exchange storage configurations.

Use DISKPART to align iSCSI LUNs for best performance

It is recommended to align the disk partition using DISKPART before a Windows signature is written to any iSCSI LUN. By default, a Windows partition is created starting at the 64th sector, which misaligns the partition with the physical disk. This can cause I/O operations to straddle cylinder boundaries, resulting in a significant reduction in performance. Performance improvements as high as 40 percent were noted after partitioning and aligning the drive using DISKPART.

NOTE: An in-depth discussion, Using diskpar and diskpart to Align Partitions on Windows Basic and Dynamic Disks, can be found on EMC Powerlink. The following Microsoft TechNet article also covers the topic: http://www.microsoft.com/technet/prodtechnol/exchange/guides/storageperformance/fa839f7d-f876-42c4-a335-338a1eb04d89.mspx?mfr=true

After LUN creation is complete on the production CLARiiON system, the active MSCS node should be able to see the LUN as a raw volume. Partition the LUN using the Microsoft command line utility DISKPART, ensuring that the partition is created with the ALIGN=64 switch. The following example uses DISKPART against disk 4:

C:\> diskpart

Microsoft DiskPart version 5.2.3790.1830
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: JC27Q91X32

DISKPART> list disk

  Disk     Status    Size     Free
  Disk 1   Online    136 GB   112 GB
  Disk 2   Online    267 GB     0 GB
  Disk 3   Online    267 GB     0 GB
  Disk 4   Online    600 GB   112 GB

DISKPART> select disk 4

Disk 4 is now the selected disk.

DISKPART> create partition primary align=64

DiskPart succeeded in creating the specified partition.

1. Using Microsoft Disk Manager, select the drive letter or mount point to be associated with the corresponding database, log file, SMTP, or quorum drive.

2. After selecting this information, format the drive NTFS with the following allocation unit sizes:

64K allocation unit size for databases

64K allocation unit size for logs
4K allocation unit size for SMTP
64K allocation unit size for database backup-to-disk and clone LUNs

Use EMC RAID 1/0 for log files

It is recommended to use EMC RAID 1/0 for log files, starting with two drives and expanding when required, for high fault tolerance and best performance.

Use EMC RAID 1/0 for database files

It is recommended to use RAID 1/0 for best performance and fault tolerance for Microsoft Exchange databases.

Use RAID 5 if Replication Manager is used for instant local recovery capabilities

RM clones using RAID 5 permit clones of the same block size to use fewer disks than are required for best performance of the Exchange production databases.

Backup and restore best practices

Listed below are the best practice recommendations for backup and restore using iSCSI for connectivity with the CLARiiON CX3-20 storage arrays.

Use RM version 5.0 and VSS to implement instant local recoverability

Replication Manager version 5.0 SP1 with VSS allows instant local clones of Exchange databases and log files. Some of the benefits of using RM version 5.0 SP1 to back up Exchange 2007 are as follows:

Quick backup and restore: RM version 5.0 SP1 takes only a few minutes to back up or restore an Exchange storage group.

Ease of use: RM version 5.0 SP1 has a simple interface in which an IT administrator can discover applications, select Exchange storage groups, and execute backup and restore operations.

Integration with Microsoft Volume Shadow Copy Service (VSS): RM is integrated with the Microsoft VSS architecture when running Exchange 2007 on Windows 2003. NOTE: For more on the VSS framework, and how it is used to guarantee database consistency when snapping Microsoft 2003 server applications, visit http://support.microsoft.com/default.aspx?scid=kb;en-us;822896.

Multiple backups: Creating two clones locally allows restoration of data from up to two days back.

Log truncation: RM version 5.0 SP1 full Exchange clones truncate the logs only after the database checksum is completed.

Checksum of database integrity for every backup: RM version 5.0 SP1 checks the Exchange databases and log files using the Microsoft ESEUTIL utility for every backup.

Pre- and post-scripts: RM version 5.0 SP1 allows pre- and post-job scripts to run.

RM mount host: A remote mount host allows the database checksum portion of a VSS clone operation to run on a server other than the production Exchange servers.

RM clones: The clones are required to be the same block size as the source LUNs. To ensure that LUNs of the same block count are created, use of NaviCLI command lines is recommended.

RM VSS snapshots for iSCSI LUNs supporting Exchange scheduled in off-hours

During the VSS procedure, Exchange will be in backup mode, and online maintenance and online defragmentation (OLM/OLD) will not run. Online maintenance and defragmentation are extremely important, so it is important to schedule OLM/OLD and VSS to run in off-hours and not simultaneously, to ensure that the OLM/OLD process is not interrupted before completion. If the OLM/OLD process does not complete because of scheduling conflicts with backup software, databases will begin to grow dramatically. The best practice is to schedule the VSS snapshot before or after the Exchange online maintenance window.

Name LUNs for quick identification

The example below shows how naming LUNs permits quick identification of their size and usage:

C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 getlun 7 | findstr "LUN Capacity(Blocks)"
Name                        LUN 7 - 25GB - SG1LG
LUN Capacity (Megabytes):   25600
LUN Capacity (Blocks):      52428800

Use NaviCLI for clone creation of RAID groups and storage groups

Below are the instructions for clone creation using NaviCLI:

1. Get the block count of the database and log file LUNs to be cloned.

2. Using the Navisphere command line interface, gather the data with the command lines below. RM requires the destination LUN to be exactly the same block count as the source.

NOTE: For more information, refer to ManCLI.txt in the NaviCLI directory after installation.

C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 getlun 0 | findstr "LUN Capacity(Blocks)"
Name                        LUN 0 - 267GB - SG1
LUN Capacity (Megabytes):   273709
LUN Capacity (Blocks):      560557312

C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 getlun 7 | findstr "LUN Capacity(Blocks)"
Name                        LUN 7 - 25GB - SG1LG
LUN Capacity (Megabytes):   25600
LUN Capacity (Blocks):      52428800

3. Create a new RAID group and clone LUNs using NaviCLI. RAID group 200 in this example will hold the Exchange database and log clones (two clones per storage group, for two storage groups):

C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 createrg 200 1_0 1_1 1_2 1_3 1_4 1_5 1_6 1_7 1_8 1_9 -pri high

Create the LUNs within the new RAID group 200 using RAID 5, per the "Use RAID 5 if Replication Manager is used for instant local recovery capabilities" recommendation:

C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 201 -rg 200 -sq bc -cap 52428800
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 202 -rg 200 -sq bc -cap 52428800
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 203 -rg 200 -sq bc -cap 52428800
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 204 -rg 200 -sq bc -cap 52428800
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 205 -rg 200 -sq bc -cap 560557312
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 206 -rg 200 -sq bc -cap 560557312
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 207 -rg 200 -sq bc -cap 560557312
C:\Program Files\EMC\Navisphere CLI> navicli -h 200.0.0.3 bind r5 208 -rg 200 -sq bc -cap 560557312

Increase NT Backup logical disk buffers size to 64

When using NT Backup software for backup to disk, ensure that the value of the Logical Disk Buffers Size entry is increased to 64, as per the Microsoft best practices guide. For more information, visit http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc.

Increase NT Backup maximum buffer size to 1024

When using NT Backup software for backup to disk, ensure that the maximum buffer size is increased to 1024, as per the Microsoft best practices guide. For more information, visit http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc.

Increase NT Backup maximum number of tape buffers to 16

When using NT Backup software for backup to disk, ensure that the maximum number of tape buffers is increased to 16, as per the Microsoft best practices guide. For more information, visit http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc.
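As a hedged illustration of the three NT Backup settings above: to the author's understanding these values live under the NT Backup "Backup Engine" registry key of the account that runs the backup and are stored as string values, but the exact key and value names below are assumptions and should be verified against the Microsoft note referenced above before use.

C:\> reg add "HKCU\Software\Microsoft\Ntbackup\Backup Engine" /v "Logical Disk Buffers Size" /t REG_SZ /d 64 /f
C:\> reg add "HKCU\Software\Microsoft\Ntbackup\Backup Engine" /v "Max Buffer Size" /t REG_SZ /d 1024 /f
C:\> reg add "HKCU\Software\Microsoft\Ntbackup\Backup Engine" /v "Max Num Tape Buffers" /t REG_SZ /d 16 /f

These commands must be run in the context of the account that launches NT Backup, since the settings are stored per user.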

CLARiiON and Replication Manager best practices

The best practice recommendations for CLARiiON using iSCSI connectivity are as follows.

CLARiiON

Balanced LUN design

Because the CLARiiON CX3-20 service processors run in active/active mode, it is recommended to use EMC PowerPath 4.6 software for seamless failover in the event of a service processor (SP) going offline. The CLARiiON CX3-20 has four GbE ports per SP, with the capability of adding two Fibre Channel ports per SP. It is recommended to balance the load across the service processors by not placing all database LUNs on a single SP in multiple storage group (SG) configurations.

Replication Manager

The best practice recommendation for Replication Manager (RM) using iSCSI for connectivity with the CLARiiON CX3-20 storage array is as follows.

Clone LUN creation using NaviCLI

RM requires clone LUNs of equal block count, but not of the same RAID type. Use of the NaviCLI command line interface is recommended for creating the LUNs to ensure an accurate block count.

Example of NaviCLI usage:

1. Open a command prompt (click the Windows Start button, select Run, and enter CMD.exe).

2. Run the following command, which reports the LUN capacity in megabytes along with the block count needed for the clone LUNs:

navicli -h 200.0.0.3 getlun 0 | findstr "LUN"
Name                        LUN 0 - 267GB - SG1
LUN Capacity (Megabytes):   273709
LUN Capacity (Blocks):      560557312

3. After creating the RAID group, use the command in the following example to create the clone LUN in that RAID group.

Example of clone LUN creation. Using RAID group 100, bind the LUN as LUN 200, paying special attention to -sq bc -cap, and use the block count from the command above:

navicli -h 200.0.0.3 bind r5 200 -rg 100 -sq bc -cap 560557312
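As a small hedged addition not in the original paper, the block count of the newly bound clone LUN can be checked against the source with the same getlun/findstr pattern used above, so that a block-count mismatch is caught before RM attempts to use the clone:

navicli -h 200.0.0.3 getlun 200 | findstr "Blocks"

The LUN Capacity (Blocks) value reported for LUN 200 should exactly match the 560557312 blocks reported for source LUN 0.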

Local continuous replication best practices

The best practice recommendations for local continuous replication (LCR) using iSCSI connectivity are as follows.

Note: For more information, visit http://technet.microsoft.com/en-us/library/bb124704.aspx.

One database per storage group

It is recommended to use a single database per storage group.

Size for additional 20 percent processor overhead

The majority of the additional resource consumption comes from log file verification and log file replay on the LCR-enabled mailbox server. This additional processing cost is roughly 20 percent and should be factored in when sizing LCR mailbox servers. The Exchange 2007 replication service will work well on an LCR server based on the memory resources provided, but to ensure that the ESE database cache maintains optimal efficiency under LCR, it is recommended to provision an additional 1 GB of physical RAM for Exchange mailbox and multi-role servers.

Additional storage requirements

LCR maintains a copy of the database and logs from the selected storage group; this requires the same size of disk or LUN, with IOPS capable of sustaining the user load. Additional consideration must be given to reseeding the primary active database.

Public folders

LCR cannot be used for a public folder database if more than one public folder database exists in the organization.

Database

It is recommended to keep databases at a maximum of 200 GB.

Backup

LCR is not a backup. Normal VSS backups should be considered for the passive copy.

Copy isolation

It is recommended to physically isolate the active copy of the database and logs from the passive copies.

Cluster continuous replication best practices

The best practice recommendations for cluster continuous replication (CCR) using iSCSI connectivity are as follows.

Note: For more information, visit http://technet.microsoft.com/en-us/library/bb123996.aspx.

Public folders

CCR cannot be used for a public folder database if more than one public folder database exists in the organization.

Hardware

Twice the hardware is required, and thus twice the amount of storage is required. CCR is not the same as a shared storage cluster (SCC, or an Exchange 2003 cluster) and requires storage on both servers.

Storage

Currently, with Exchange 2007 RTM, two to three times the storage IOPS is required on the target server.

Database

It is recommended to keep databases at a maximum of 200 GB.

Backup

CCR is not a backup. Normal VSS backups should be considered for the passive copy.

Microsoft Storage Calculator

It is recommended to use the Microsoft Storage Calculator for sizing LUNs. For more information, visit http://msexchangeteam.com/files/12/attachments/entry438481.aspx.

Storage group design

It is recommended to have two LUNs per storage group: one for databases and one for logs.

Conclusion

Following the recommended best practices helps to ensure a balance between capacity, availability, and performance for Exchange 2007 storage configurations on EMC CLARiiON through iSCSI. The recommendations are targeted toward enabling high availability, optimum performance, and an efficient mechanism for disaster recovery.

References

The following documents provide additional, relevant information:

- Reference Architecture
- Validation Test Report