
EMC Solutions for Exchange 2003 NS Series iSCSI
Best Practices Planning
H2182.1

EMC Corporation
Corporate Headquarters
Hopkinton, MA 01748-9103
1-508-435-1000
www.emc.com

Copyright 2007 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Contents

Preface

Chapter 1  Exchange 2003 NS Series iSCSI Overview
    Introduction
    Definition of terms

Chapter 2  Best Practices
    Microsoft Exchange servers
    Windows servers
    Networking
    Backup and restore
    Storage
        General recommendations
        Physical disk drive recommendations

Appendix A  Create a Celerra File System
    Creating a Celerra file system
        Step 1: Identify disk volumes in the same RAID group
        Step 2: Concatenate disk volumes on the same RAID group
        Step 3: Create a stripe volume across Exchange database meta volumes
        Step 4: Create a storage pool for the Exchange database and logs
        Step 5: Create file systems for the Exchange database and logs

Appendix B  Create a Link Aggregation Device on the Celerra
    Creating a link aggregation device
        Step 1: Create a new link aggregation device
        Step 2: Create a network interface for the link aggregation device

Appendix C  Create iSCSI LUNs
    Creating iSCSI LUNs using the Celerra Manager

Appendix D  Multiple Connections per Session (MC/S) Configuration
    Configuring multiple connections per session


Figures

Figure 1 Log On to Target dialog box with automatic restore option enabled
Figure 2 File system layout on a shelf
Figure 3 List of disk volumes
Figure 4 Concatenating database volumes of the same RAID groups
Figure 5 Creating Stripe volumes
Figure 6 Creating Storage Pool for database
Figure 7 Creating Storage Pool for logs
Figure 8 Creating File systems for database
Figure 9 Creating File systems for logs
Figure 10 New Network Device window
Figure 11 New Network Interface window
Figure 12 Interfaces tab
Figure 13 Wizards window
Figure 14 Select Data Mover window
Figure 15 Select/Create Target window
Figure 16 Enter Target Name window
Figure 17 Enter Target Portals window
Figure 18 Overview/Results window
Figure 19 Select/Create File System window
Figure 20 Enter LUN Info. window
Figure 21 LUN Masking window
Figure 22 Add New Initiator dialog box
Figure 23 Overview/Results window
Figure 24 Add target portals
Figure 25 Add Target Portals dialog box
Figure 26 Add another target portal
Figure 27 Add the second target portals
Figure 28 List target portals
Figure 29 Select target for logon
Figure 30 Select Automatic restore
Figure 31 Set advanced settings
Figure 32 Connection status
Figure 33 Target Properties
Figure 34 Session Connections
Figure 35 Add Connection
Figure 36 Second connections advanced settings
Figure 37 Second connections advanced settings

Preface

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this guide may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

Audience

The intended audience for this paper is IT administrators and system engineers who have an interest in implementing Microsoft Exchange 2003 using EMC Celerra systems. It is assumed that the reader has a general knowledge of Microsoft Exchange, Active Directory, and EMC Celerra features and terminology.


Chapter 1  Exchange 2003 NS Series iSCSI Overview

This chapter presents these topics:
    Introduction
    Definition of terms

Introduction

It is important to plan an Exchange solution that can grow while maintaining optimum performance, high availability, and disaster recovery. This document is a resource guide for optimizing the performance of an Exchange 2003 storage configuration on EMC Celerra via iSCSI. It also provides step-by-step procedures for creating file systems and iSCSI LUNs, configuring Multiple Connections per Session (MC/S), and setting up the network.

The intended audience for this paper is IT administrators and system engineers who have an interest in implementing Microsoft Exchange 2003 using EMC Celerra systems. It is assumed that the reader has a general knowledge of Microsoft Exchange, Active Directory, and EMC Celerra features and terminology.

Definition of terms

Active Directory: An advanced directory service introduced with Windows 2000 Server. It stores information about objects on a network and makes this information available to users and network administrators through a protocol such as LDAP.

Automatic Volume Management (AVM): A feature of the Celerra NS Series that creates and manages volumes automatically, without manual volume management by an administrator. AVM organizes volumes into pools of storage that can be allocated to file systems.

Data Mover: A Celerra NS Series cabinet component running the Data Access in Real Time (DART) operating system that retrieves files from a storage device and makes the files available to a network client.

Disk Volume: On Celerra systems, a physical storage unit as exported from the storage array. All other volume types are created from disk volumes.

iSCSI (Internet SCSI): A protocol for sending SCSI packets over TCP/IP networks.

iSCSI initiator: An iSCSI endpoint, identified by a unique iSCSI-recognized name, that begins an iSCSI session by issuing a command to the other endpoint (the iSCSI target).

iSCSI target: An iSCSI endpoint, identified by a unique iSCSI-recognized name, that executes commands issued by the iSCSI initiator.

RAID: Redundant array of independent disks. A method for storing information where the data is stored on multiple disk drives to increase performance and storage capacity and to provide redundancy and fault tolerance.

RAID 1: A RAID method that provides data integrity by mirroring (copying) data onto another disk. This RAID type provides the greatest assurance of data integrity at the greatest cost in disk space.

RAID 5: Data is striped across disks in large stripes. Parity information is stored so data can be reconstructed if needed. One disk can fail without data loss. Performance is good for reads, but slower for writes.

RAID group: The CLARiiON storage-system term for a Celerra disk group.

SP: Storage processor on a CLARiiON storage system. A circuit board with memory modules and control logic that manages the storage-system I/O between the host's Fibre Channel adapter and the disk modules.

SP A: Storage processor A. A generic term for the first storage processor in a CLARiiON storage system.

SP B: Storage processor B. A generic term for the second storage processor in a CLARiiON storage system.

VSS (Volume Shadow Copy Service): A Windows service and architecture that coordinates various components to create consistent point-in-time copies of data called shadow copies.


Chapter 2  Best Practices

This chapter presents these topics:
    Microsoft Exchange servers
    Windows servers
    Networking
    Backup and restore
    Storage

Microsoft Exchange servers

This section contains recommendations for optimizing the performance of Exchange servers.

Recommendation #1: Use Microsoft clustering for high availability and to allow non-disruptive server maintenance and software upgrades

Use Microsoft Cluster Service (MSCS) with Exchange servers to increase the fault tolerance of the hardware running Exchange Server. Additionally, the MSCS hotfix and patch management functionality helps reduce downtime. Training of customer administrative staff is highly recommended due to the differences in managing and updating clustered Exchange servers. At the time of release of this document, EMC's Celerra platforms have been qualified by Microsoft to be configured with clusters of two and eight server nodes. For updated information and details on node configurations and hardware that Celerra systems can support, consult the Microsoft support matrix: http://www.microsoft.com/windows/catalog/server/

Recommendation #2: Separate the Exchange storage group database from the log files

Ensure that database files and log files from the same Exchange storage group do not share the same physical spindles. This prevents the possibility of losing an entire storage group in the case of multiple disk failures.

Recommendation #3: Run the Exchange Best Practices Analyzer after installation, and rerun it monthly

Run the Microsoft Exchange Best Practices Analyzer (ExBPA) against the Exchange servers upon completion of the installation and follow all recommendations. Additionally, ExBPA can be scheduled to run at intervals or through Microsoft Operations Manager to ensure the servers are running the latest definitions.

Recommendation #4: Increase msExchESEParamLogBuffers to 9000 on each Exchange storage group to improve performance

The Exchange storage group attribute msExchESEParamLogBuffers governs the number of Extensible Storage Engine (ESE) log buffers used by the Exchange information store. ESE uses a set of log buffers to hold information in RAM before it writes to the transaction logs. Using these buffers efficiently improves transaction log performance. For more information about this parameter, read the following article from Microsoft: http://www.microsoft.com/technet/prodtechnol/exchange/analyzer/ef883688-4a1f-45bd-bd68-065daf834530.mspx?mfr=true
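msExchESEParamLogBuffers is stored on the storage group object in Active Directory and is typically edited with ADSI Edit. As an alternative, the change can be scripted with an LDIF import; the sketch below is an illustration only, and the distinguished name is a hypothetical placeholder that must be replaced with the DN of your own storage group object.

    # set-logbuffers.ldf (hypothetical DN; adjust the server, administrative group, organization, and domain names)
    dn: CN=First Storage Group,CN=InformationStore,CN=EXCH01,CN=Servers,CN=First Administrative Group,CN=Administrative Groups,CN=MyOrg,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=com
    changetype: modify
    replace: msExchESEParamLogBuffers
    msExchESEParamLogBuffers: 9000
    -

Import the file from a management workstation with: ldifde -i -f set-logbuffers.ldf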

Recommendation #5: Decrease msExchESEParamCheckpointDepthMax to 5 MB to improve cluster failover performance

The Exchange storage group attribute msExchESEParamCheckpointDepthMax controls the checkpoint depth. The maximum amount of data that the Extensible Storage Engine can write to logs before it writes to the database is known as the log checkpoint depth. Decreasing this parameter on clustered Exchange servers allows for better failover performance, but disk performance must be monitored to ensure the increase in I/O does not cause thrashing. For more information about this parameter, read the following article from Microsoft: http://support.microsoft.com/?kbid=886298

Recommendation #6: Have at least two Domain Controller/Global Catalog servers per Active Directory site for fault tolerance

Exchange relies heavily upon Active Directory for DSAccess using the DC/GCs. It is highly recommended to have at minimum two Active Directory DC/GC servers per site for fault tolerance. A common rule of thumb for sizing the number of DC/GCs is a 4:1 physical processor ratio: for every four Exchange server physical processors, provide one DC/GC processor, while still maintaining a minimum of two DC/GC servers per site for fault tolerance.

Recommendation #7: Use Gigabit Ethernet for iSCSI network connections between the Exchange Server(s) and the Celerra system

Maintaining optimal network performance is crucial to the deployment of Exchange on iSCSI because considerable network traffic is generated by an Exchange 2003 server. For optimum network performance, use Gigabit Ethernet cabling, switches, and network interface cards for the network connections between the Exchange 2003 server(s) and Celerra systems.

Windows servers

This section contains recommendations for optimizing the performance of Windows servers.

Recommendation #1: Remove the Client for Microsoft Networks and File and Print Sharing for Microsoft Networks on iSCSI NICs

Remove the Client for Microsoft Networks and File and Print Sharing for Microsoft Networks from the iSCSI NICs.

Recommendation #2: Install the latest NIC driver

Install the latest vendor NIC driver on the iSCSI NICs.

Recommendation #3: Change the TCP/IP registry value KeepAliveTime to 300,000 to harden the TCP/IP stack against denial of service attacks

Denial of service (DoS) attacks are network attacks that are aimed at making a computer or a particular service on a computer unavailable to network users. The KeepAliveTime parameter controls how frequently TCP tries to verify that an idle connection is still intact by sending a keep-alive packet. Setting KeepAliveTime to 300,000 (5 minutes) helps harden the TCP/IP stack against denial of service attacks.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\
KeepAliveTime = DWORD: 300000 (decimal)

http://support.microsoft.com/kb/324270

Recommendation #4: Change the TCP/IP registry value TcpAckFrequency to 1 for optimum performance

Changing the TCP/IP acknowledgement frequency optimizes iSCSI performance. Locate the network cards used for iSCSI (the IP addresses are listed under the Interfaces\{GUID}\IPAddress keys). Once you have located the interfaces used for iSCSI, set TcpAckFrequency = 1 under each of them, for example:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{a8efad93-95c3-4e98-ae5d-ce0e6185ca19}

http://support.microsoft.com/kb/328890

Recommendation #5: Modify the BOOT.INI on Windows 2003 servers with more than 1 GB of physical RAM

Following the Microsoft guidance, adding the /3GB and /USERVA=3030 switches to the boot.ini on systems with more than 1 GB of physical RAM allows for more user-mode memory and limits the kernel-mode memory.

http://www.microsoft.com/technet/prodtechnol/exchange/2003/insider/memoryscalability.mspx
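The registry changes in Recommendations #3 and #4 can also be applied from a command prompt (run as an administrator) with reg add. This is a minimal sketch; the interface GUID shown is the example GUID from Recommendation #4 and must be replaced with the GUID of your own iSCSI NIC's interface key, and a reboot is typically required before TCP/IP parameter changes take effect.

    rem Harden the TCP/IP stack (Recommendation #3)
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v KeepAliveTime /t REG_DWORD /d 300000 /f

    rem Set TcpAckFrequency on the iSCSI interface (Recommendation #4); substitute your own interface GUID
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{a8efad93-95c3-4e98-ae5d-ce0e6185ca19}" /v TcpAckFrequency /t REG_DWORD /d 1 /f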

Recommendation #6: Increase the Microsoft iSCSI Initiator time-out value to 600 seconds

By default, the Microsoft iSCSI Initiator time-out is set to 60 seconds. This time-out defines how long the initiator will hold a request before reporting an iSCSI connection error. This value can be increased to accommodate longer outages, such as a Data Mover cluster event. If an iSCSI time-out occurs on an Exchange Server that hosts the Exchange database and transaction logs on iSCSI LUNs, the databases will be dismounted. To change the time-out value, search the Windows Registry for the MaxRequestHoldTime entry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet, and change the value to 600. The following is an example of the registry entry on one of the Exchange Servers (a command-line sketch of the same change appears at the end of this section):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0002\Parameters
MaxRequestHoldTime = 600 (DWORD) (decimal)

Recommendation #7: Select the "Automatically restore this connection when the system boots" checkbox when configuring the iSCSI Initiator on the Exchange Server

When an Exchange Server is rebooted, the iSCSI disks will not be available unless the "Automatically restore this connection when the system boots" checkbox is selected. Select the checkbox from the Log On to Target dialog box in the Microsoft iSCSI Initiator window, as shown in Figure 1.

Figure 1 Log On to Target dialog box with automatic restore option enabled
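Following up on Recommendation #6, the time-out can also be set with reg add. This is a hedged sketch: the \0002 instance index below is taken from the example above and varies from host to host, so first locate the instance of the iSCSI initiator class key that actually contains MaxRequestHoldTime (or search the registry as described) before applying the change.

    rem Increase the Microsoft iSCSI Initiator request hold time to 600 seconds (instance index is host-specific)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0002\Parameters" /v MaxRequestHoldTime /t REG_DWORD /d 600 /f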

Networking

This section contains recommendations for optimizing network performance.

Recommendation #1: Use 1 Gb (GigE) switches with VLAN capabilities

For best performance, use GigE switches that are capable of setting up virtual LANs (VLANs) to segment production and iSCSI traffic.

Recommendation #2: Dedicate switches for production and iSCSI traffic

Use dedicated switches for production and iSCSI traffic. If this is not possible, ensure the switches are capable of creating separate VLANs for production and iSCSI traffic.

Recommendation #3: Use 1 Gb (GigE) NICs on the Exchange Server

For best performance, use independent GigE NICs for production network traffic and iSCSI network traffic. It is recommended to have one NIC for production and one or more NICs for iSCSI.

Recommendation #4: Use CAT6 cables for GigE connectivity

CAT6 cables have shown dramatically better results for gigabit connectivity than CAT5e cables. To ensure the best performance and reliability between the Exchange Server and Celerra iSCSI LUNs, we recommend CAT6 cables.

Recommendation #5: Set the network speed and duplexing to Auto/Auto for GigE network and switch ports

Ensure that all ports on the switches are set to auto-negotiate on GigE networks, and make sure the corresponding NIC cards are set to match this configuration. Auto/Auto is not intended by NIC manufacturers as a permanent setting; once the network configuration and speeds have been established, the settings should be manually hard coded.

Backup and restore

Recommendation #1: Use EMC Replication Manager/SE or EMC RepliStor to replicate your data to a remote standby server for disaster recovery capabilities

EMC Replication Manager/SE 3.1 and EMC RepliStor 6.1 provide reliable remote data replication solutions for Exchange 2003. In both cases there is a source system, which is the production Exchange Server containing the data to be protected, and a target system, which is a standby Exchange Server where the data is replicated. In both cases we recommend Microsoft Cluster Service for local server high availability; however, the prerequisites for geographically dispersed clusters are often difficult and costly to meet. The RepliStor and Replication Manager/SE solutions provide data replication over existing IP WAN or LAN connections. The primary difference between the two solutions is that RepliStor provides host-based replication while Replication Manager/SE provides array-based replication. The following documents provide further details for implementing either solution:

RepliStor:
    RepliStor and Exchange 2003 A High Availability Solution

Replication Manager/SE:
    NAS Celerra Network Server Replication Manager/SE iSCSI Nonclustered Disaster Recovery Solution Implementation Guide
    NAS Celerra Network Server Replication Manager/SE iSCSI Clustered Disaster Recovery Solution Implementation Guide

Recommendation #2: Ensure that RepliStor will not fail to synchronize data during a temporary WAN outage

This recommendation applies only if you are using RepliStor in your environment. Provide the RepliStor data directory with the maximum 1 GB of kernel cache on the Exchange production nodes to ensure that replicated data waiting for transmission will not be lost if the WAN link experiences temporary congestion. You should also move the RepliStor data folder from the local drive to a large iSCSI LUN set up on the Celerra. If the 1 GB cache fills with outstanding I/Os, the data folder is used to store the outstanding I/Os to be replicated. Do not leave the RepliStor data folder on the local drive; otherwise, outstanding I/Os might fill the drive. For a similar reason, increase the RepliStor batch size (the send queue) from its default (0x8000) to its maximum (0x20000).

Recommendation #3: Use Replication Manager/SE (RM/SE) for Celerra and VSS to implement instant local recovery capabilities

Replication Manager/SE (RM/SE) for Celerra has the ability to create point-in-time replicas of databases and file systems residing on Celerra iSCSI virtual LUNs, allowing some recovery scenarios to bypass the need for loading data from tape. Some of the benefits of using RM/SE for Celerra to back up Exchange 2003 are as follows:

Quick backup and restore. RM/SE takes only a few minutes to back up or restore an Exchange storage group.

Ease of use. RM/SE has a simple interface with which an IT administrator can discover applications, select Exchange storage groups, and execute backup or restore operations.

Integration with Microsoft Volume Shadow Copy Service (VSS). RM/SE is integrated with the Microsoft VSS architecture when running Exchange 2003 on Windows Server 2003. For more on the VSS framework, and how it is used to guarantee database consistency when creating snapshots of applications on Windows Server 2003, go to http://support.microsoft.com/default.aspx?scid=kb;en-us;822896

Multiple backups. RM/SE allows for up to 1000 snaps of an Exchange storage group. This can provide the freedom to use tape backups less frequently, while maintaining more point-in-time copies on Celerra systems for immediate restore.

Option to truncate transaction logs. RM/SE has the option of truncating the transaction logs during backup. This allows an IT administrator to have full, incremental, and differential backup operations using RM/SE.

Database integrity check for every backup. RM/SE checks the Exchange database integrity using the Microsoft ESEUTIL tool for every backup.

Integration with your tape backup software. RM/SE integrates with your existing tape backup software so you can instantly snap the Exchange Server and then mount the snapshot on an alternate host to stream to tape. This means that you can maintain your existing tape workflow if you desire offsite copies, but you may never require a restore from tape. In addition, RM/SE has an option to run scripts to execute jobs before and after the replication.

When RM/SE is used for backup, it has the following effects on performance:

Increase in write latency. The database write latency will increase, but remains well below the Microsoft recommended log and database latency values for good performance.

Decrease in user space on the Exchange database file system. Since RM/SE stores backups on the same file system as the production iSCSI LUN, storage requirements for a given number of users will depend on the number of backup snaps an Exchange administrator wants to keep. This concept of space reservation is common to most snapshot implementations and is designed to ensure that snapshots always have sufficient space to complete, and that worst-case restore scenarios can complete successfully.

The formula to calculate the total Celerra file-system size that needs to be created in order to hold the iSCSI LUN and its backup snaps is:

TotalFileSystemSize = (LUN_Size * 2) + (No_Of_Snaps * LUN_Size * Change_Rate) + (N * LUN_Size)

Where:
    LUN_Size is the size of the production iSCSI LUN where the production Exchange database or transaction log files will reside.
    No_Of_Snaps is the total number of replicas of the production iSCSI LUN that will be kept at any time.
    Change_Rate is the amount of change on the production iSCSI LUN between each replica.
    N is the number of mounted replicas.
    TotalFileSystemSize is the size that the file system needs to be to handle the production iSCSI LUN and all of its replicas.

Consider the following example where an Exchange Server has the following environment:
    Total number of mailboxes = 1000
    Mailbox quota = 100 MB
    Number of replicas to be kept = 10
    Rate of data change between replicas = 10 percent
    Number of mounted replicas = 1

LUN_Size = Total number of mailboxes * Mailbox quota = 1000 * 100 MB = 100 GB
No_Of_Snaps = 10
N = 1
Change_Rate = 10 percent = 0.1

TotalFileSystemSize = (100 * 2) + (10 * 100 * 0.1) + (1 * 100) = 400 GB

A 400 GB Celerra file system must be created to host a 100 GB iSCSI LUN in the example environment.

Recommendation #4: Schedule RM/SE VSS snapshots of iSCSI LUNs supporting Exchange during off hours, ideally before or after Exchange online maintenance

Complete verification (via ESEUTIL /K) of the VSS snapshots is enforced by RM/SE, as required by Microsoft. Snapshots of iSCSI LUNs do not place any significant load on the production system in steady state, but do have a momentary (measured in seconds) impact when they are taken. However, ESEUTIL verification of a snapshot can impact the production environment, since the iSCSI LUN snapshot is largely made up of the same physical data as the production iSCSI LUN. This verification load can be expected to last roughly two hours for a 150 GB database. Due to this mandatory verification, it is not recommended to take snapshots during production periods.

Recommendation #5: Create multiple Exchange mailbox stores for quick backup and recovery, and fewer Exchange mailbox stores for easy administration

Having a small number of mailboxes spread across multiple mailbox stores has the advantages of quick backup and recovery as well as minimal mailbox disruption if data corruption occurs. On the other hand, having a large number of mailboxes in fewer mailbox stores is easier to administer since fewer mailbox stores need to be maintained.

Recommendation #6: Increase the NTBackup Logical Disk Buffers setting to 64

When using NTBackup for backup-to-disk, ensure the Logical Disk Buffers setting is increased per the Microsoft tech note: http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc

Recommendation #7: Increase the NTBackup Max Buffer Size setting to 1024

When using NTBackup for backup-to-disk, ensure the Maximum Buffer Size setting is increased per the Microsoft tech note: http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc

Recommendation #8: Increase the NTBackup Max Num Tape Buffers setting to 16

When using NTBackup for backup-to-disk, ensure the Max Num Tape Buffers setting is increased per the Microsoft tech note: http://download.microsoft.com/download/4/3/1/43104b4b-dd07-44d0-90c9-d1cda210f3cd/exchangebackupnote.doc

Storage

General recommendations

Storage design is very important for the Exchange environment because disk subsystem bottlenecks are generally the cause of more performance problems than processor or memory deficiencies.

Recommendation #1: Plan the storage layout for performance, not capacity

The most common error people make when planning an Exchange Server is designing for capacity rather than for performance, or IOPS (I/Os per second). The most important single storage parameter for performance is disk latency: high disk latency is synonymous with slower performance. Microsoft guidelines for good performance are as follows:
    Average read and write latencies below 20 ms.
    Maximum read and write latencies below 50 ms.
In today's disk technology, the increase in storage capacity of a disk drive has outpaced the increase in IOPS. Therefore, IOPS capacity is the standard to use when planning Exchange storage configurations.

Recommendation #2: Use a building block approach to allocate spindles for Exchange database and log files from the Celerra system

The building block approach is defined as the number of disk spindles required to support 500 Exchange users, assuming an Exchange I/O profile of 1 IOPS and a 250 MB mailbox size per Exchange user. It is assumed that each set of 500 Exchange users belongs to a different Exchange storage group. There are two building block configurations:
    Six-disk-spindle building block: two log spindles and four database spindles
    Eight-disk-spindle building block: two log spindles and six database spindles
The six-disk-spindle building block is the more economical configuration in terms of performance/cost. However, this configuration does not provide enough free disk capacity to adequately protect the Exchange logs and databases. To protect the Exchange logs and databases using Celerra iSCSI local or remote replicas, the eight-disk-spindle building block is required. Refer to the EMC Solutions for Exchange 2003 NS40 iSCSI Validation Test Report on EMC Powerlink (EMC's password-protected extranet for customers and partners) for details about this configuration.

Physical disk drive recommendations

Recommendation #1: Use high-rpm disk drives for best Exchange performance

Higher-rpm drives provide higher overall random access throughput and shorter response times than slower-rpm drives. For optimum performance, higher-rpm drives are recommended.

Recommendation #2: Use Fibre Channel disk drives for best Exchange performance

For best performance, Fibre Channel drives are always recommended for Exchange I/O because of their significantly better performance with random I/O, the dominant I/O pattern for Exchange databases.

Recommendation #3: Use diskpar to align iSCSI LUNs for best performance

This is the most critical recommendation of all. When Microsoft Disk Manager formats a Celerra iSCSI LUN, it always creates the partition starting at the 64th sector, misaligning it with the underlying physical disk. Due to this misalignment, Exchange I/O that would have fit evenly on the disks may result in more than one I/O to the physical disk drive. To fix the disk alignment, Microsoft provides a command-line tool, Diskpar.exe. Diskpar.exe comes with the Windows 2000 Resource Kit and can explicitly set the starting offset in the Master Boot Record (MBR). This functionality is merged into diskpart.exe in the Windows Server 2003 Service Pack 1 Support Tools. Exchange Server 2003 writes data in multiples of 4 KB I/O operations (4 KB for the databases and up to 32 KB for streaming files). Since the Celerra file-system block size is 8 KB, which is a multiple of the 4 KB Exchange I/O size, use diskpar to set the offset to 128 sectors, which equals 64 KB. This disk alignment technique increases Exchange I/O performance significantly (up to 65 percent) on Celerra iSCSI LUNs. (A command-line sketch appears at the end of this section.)

Recommendation #4: Use RAID 1 for log files

For the highest fault tolerance and best performance, use RAID 1, starting with two drives and expanding when required.

Recommendation #5: Use RAID 1 for database files

RAID 1 is recommended for best performance and the highest fault tolerance for Microsoft Exchange databases.

Recommendation #6: Keep Jet and streaming databases together on the same iSCSI LUN

All Exchange mailbox databases have a Jet database (.edb file), whose content is generated by MAPI clients, and a streaming database (.stm file), whose content is generated by Internet protocol clients. Since both files compose a single mailbox database, it is advisable to keep them together on the same Celerra iSCSI LUN. In addition, RM/SE for Celerra will not take a replica if both files are not on the same Celerra iSCSI LUN.
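Following up on Recommendation #3 (disk alignment), the sketch below shows how the 64 KB offset could be set with diskpart from the Windows Server 2003 SP1 Support Tools. The disk number is a placeholder, and the availability and behavior of the align parameter should be verified against the Support Tools documentation for your service pack level. Note that creating a partition is destructive, so alignment must be set before any data is placed on the LUN.

    diskpart
    DISKPART> select disk 2
    DISKPART> create partition primary align=64
    DISKPART> exit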


Appendix A  Create a Celerra File System

This appendix describes the steps performed to create the file systems required for a 500-mailbox Exchange configuration on a Celerra system using RAID 1. Topics include:
    Creating a Celerra file system

Creating a Celerra file system

Testing has shown that the Celerra can support 500 Exchange users on six physical disk spindles (refer to the EMC Solutions for Exchange 2003 NS40 iSCSI Validation Test Report for details). These spindles are configured in RAID 1 pairs. Two file systems are created on these spindles. The first file system spans two disk spindles and is dedicated to the Exchange log files. The second file system spans the other four spindles and is dedicated to the Exchange database, as shown in Figure 2.

Figure 2 File system layout on a shelf

In Figure 2, the two Celerra dvolumes in RG 8 are placed in a Celerra user-defined storage pool named Log. The Celerra file system used for storing the Exchange logs is created from the Log storage pool. If the desired file system size is larger than the capacity of a single dvolume, Celerra will automatically concatenate the two dvolumes.

For the Exchange database, the two Celerra dvolumes in RG 9 are concatenated into a Celerra metavolume. Similarly, the two Celerra dvolumes in RG 10 are concatenated into a Celerra metavolume. A Celerra stripe volume with a stripe size of 32 KB is then created across these two metavolumes. This stripe volume is placed in a Celerra user-defined storage pool named DB1. The Celerra file system used to store the Exchange database for the First Storage Group is created from the DB1 storage pool.

This section contains the step-by-step procedure for creating the Exchange log and database file systems.

Step 1: Identify disk volumes in the same RAID group

Identify the disk volumes that belong to the same RAID group and verify their SP ownership.

1. From the Celerra Manager, select Storage Volumes.
2. Select disk from the Show Volume of Type drop-down menu. This lists all disk volumes on the Celerra system. To identify the disk volumes that belong to the same RAID group, sort the list by clicking the Disk Group column heading. Figure 3 shows the disk volumes in the Celerra Manager.
3. Review the Directors column to identify SP ownership.

Figure 3 List of disk volumes

In this example, d160 and d175 will be used for the Exchange logs. d161, d176, d162, and d177 will be used for the Exchange database.

Step 2: Concatenate disk volumes on the same RAID group

This step applies to the disk volumes that will be used for the Exchange database. By concatenating disk volumes that belong to the same RAID group, you increase the usable capacity of the RAID group. To concatenate disk volumes, create a Celerra meta volume on top of the two disk volumes. When concatenating these disk volumes, be mindful of the SP ownership of the disk volumes. This is extremely important in the next step, when a stripe volume is created across the meta volumes.

Meta volume creation must be performed from the CLI, as the Celerra Manager does not support specifying the order of the disk volumes. Figure 4 shows the meta volume creation from the CLI for the Exchange database file system. The first meta volume, Meta1_db1, is created from d161 and d176 (specified in that order on the command line); d161 is owned by SP A while d176 is owned by SP B. The second meta volume, Meta2_db1, is created from d177 and d162 (specified in that order on the command line); note that d177 is owned by SP B while d162 is owned by SP A.

Figure 4 Concatenating database volumes of the same RAID groups
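A hedged sketch of the nas_volume commands this step describes is shown below, using the volume names and SP A/SP B ordering given above. The syntax is an approximation and should be verified against the nas_volume man page for your DART release before use.

    # create the two meta volumes for the Exchange database, preserving the SP A/SP B ordering
    nas_volume -name Meta1_db1 -create -Meta d161,d176
    nas_volume -name Meta2_db1 -create -Meta d177,d162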

Step 3: Create a stripe volume across Exchange database meta volumes

To create a stripe volume:

1. From the Celerra Manager, select Storage Volumes.
2. Select the New button.
3. Select the two meta volumes (Meta1_db1 and Meta2_db1) created in the previous step. This stripe volume will be used for the Exchange database. The SP balancing from Step 2 is important when striping across multiple meta volumes.

Figure 5 shows the stripe volume creation.

Figure 5 Creating Stripe volumes
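The equivalent stripe volume can also be created from the Control Station with nas_volume. This is a sketch only: it assumes the 32 KB (32768-byte) stripe size described earlier and the name Stripe_db1 that is used in Step 4, and the option syntax should be confirmed against the nas_volume man page.

    # stripe across the two meta volumes with a 32 KB stripe element size
    nas_volume -name Stripe_db1 -create -Stripe 32768 Meta1_db1,Meta2_db1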

Step 4: Create a storage pool for the Exchange database and logs

In this step, you will create two user-defined storage pools: one for the Exchange database and one for the Exchange logs.

1. Use the Stripe_db1 stripe volume created in Step 3 to create the Exchange database storage pool, as shown in Figure 6.

Figure 6 Creating Storage Pool for database

2. Use the two disk volumes identified for the Exchange logs in Step 1 (d160 and d175) to create the Exchange logs storage pool. There is no need to concatenate these two disk volumes, as Celerra will automatically concatenate them if necessary. This is shown in Figure 7.

Figure 7 Creating Storage Pool for logs

Step 5: Create file systems for the Exchange database and logs

In this step, you will create two file systems, one for the Exchange database and the other for the Exchange logs, from the user-defined storage pools created in Step 4.

1. From the Celerra Manager, select File Systems.
2. Select the New button.

Make sure the entire storage pool is used to create the file system by specifying all the available space in the Storage Capacity field. Figure 8 and Figure 9 show the file system creation for the Exchange database and the Exchange logs.

Figure 8 Creating File systems for database

Figure 9 Creating File systems for logs
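The same file systems can also be created from the Control Station with nas_fs. The sketch below is illustrative only: the file system names are hypothetical, the database size reuses the 400 GB figure from the sizing example in the Backup and restore section, the log size is a placeholder, and the option syntax should be checked against the nas_fs man page for your DART release.

    # create the Exchange database and log file systems from the user-defined pools (names and sizes are examples)
    nas_fs -name exch_db1_fs -create size=400G pool=DB1
    nas_fs -name exch_log1_fs -create size=100G pool=Log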

Appendix B  Create a Link Aggregation Device on the Celerra

This appendix presents this topic:
    Creating a link aggregation device

Creating a link aggregation device

Step 1: Create a new link aggregation device

Make sure free cge ports are available to create the link aggregation device, and then list all the cge ports.

1. From the Celerra Manager, select Network Devices.
2. Select the New button.
3. From the New Network Device window, use the Data Mover drop-down menu to select the Data Mover for which to create the aggregation device.
4. Select Link Aggregation as the Type.
5. Specify the device name and select the number of cge ports.
6. Click OK. By default, the statistical load balancing is set to ip.

Figure 10 New Network Device window

Step 2: Create a network interface for the link aggregation device

To create a new interface:

1. From the Celerra Manager, select Network Interfaces > New.
2. Select the link aggregation device from the Device Name drop-down list. Multiple network interfaces can be created for the same link aggregation network device.

Figure 11 New Network Interface window

The LACP network interfaces are listed in the Network window with the link aggregation device as their device name. Now that link aggregation is enabled on the Celerra, make sure that the network switch supports LACP.

Figure 12 Interfaces tab

Appendix C  Create iSCSI LUNs

This appendix presents this topic:
    Creating iSCSI LUNs using the Celerra Manager

Creating iSCSI LUNs using the Celerra Manager

1. Open the Celerra Manager and click Wizards on the left pane. The Select a Wizard window appears, as shown in Figure 13.

Figure 13 Wizards window

2. Click Create an iSCSI LUN in the right pane. The Select Data Mover window appears, as shown in Figure 14.

Figure 14 Select Data Mover window

3. Select the Data Mover associated with the iSCSI LUN you want to create.
4. Click Next. The Select/Create Target window appears, as follows.

Figure 15 Select/Create Target window

5. A target must be created (or a preexisting target specified) before creating an iSCSI LUN. Click Create Target. The Enter Target Name window appears, as follows.

Figure 16 Enter Target Name window

6. In the Enter Target Alias Name field, type a target name. Then, either specify the target qualified name in the Enter Target Qualified Name field, or select the Auto Generate Target Qualified Name checkbox to generate the target qualified name automatically. Then, click Next. The Enter Target Portals window appears, as shown next.

Figure 17 Enter Target Portals window

7. Add one or more interfaces as the target portals and click Next. The Overview/Results window appears, as follows.

Figure 18 Overview/Results window

8. Review all the parameters and click Submit to create an iSCSI target. The target will be created successfully if all the parameters are valid.
9. After the target is created, click Create File System from the Select/Create File System window to select a file system. The Select/Create File System window appears as follows.

Figure 19 Select/Create File System window

10. Select the file system from the file system list and click Next. The Enter LUN Info. window appears, as follows.

Figure 20 Enter LUN Info. window

11. Create iSCSI LUNs for the file system:
    a. Select Create Multiple LUNs.
    b. In the Enter the number of LUNs to create field, type the number of iSCSI LUNs to be created.
    c. In the Enter size of the new LUN field, specify the size of each LUN. For the database iSCSI LUN, be sure to account for any snaps you may create when sizing the LUN. Refer to Recommendation #3 in the Backup and restore section for iSCSI LUN sizing with snaps. Size the log iSCSI LUN according to your needs.
    d. Click Next. The LUN Masking window appears.

Figure 21 LUN Masking window

12. Click Add New to add the particular initiators that are allowed to see the iSCSI LUN(s) you created in step 11 of this procedure. The Add New Initiator dialog box appears.

Figure 22 Add New Initiator dialog box

13. Specify the initiator name and click OK.
14. Select CHAP authentication if it is needed and click Next to display the iSNS Server Settings window. If an iSNS server is used, add the server details and click Next. Review all the iSCSI LUN parameters and click Finish to create the iSCSI LUNs.

Figure 23 Overview/Results window
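The same objects can also be created from the Control Station with the server_iscsi command instead of the wizard. The sketch below is an illustration only: the Data Mover name, target alias, portal group tag, portal IP address, LUN number, size, and file system name are all placeholders, and the exact option syntax should be confirmed against the server_iscsi man page for your DART release.

    # create an iSCSI target with one network portal (alias and IP address are examples)
    server_iscsi server_2 -target -alias exch_tgt1 -create 1:np=192.168.1.10
    # create a 100 GB iSCSI LUN for Exchange on an existing file system (names and size are examples)
    server_iscsi server_2 -lun -number 0 -create exch_tgt1 -size 100G -fs exch_db1_fs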


Appendix D  Multiple Connections per Session (MC/S) Configuration

This appendix presents this topic:
    Configuring multiple connections per session

Configuring multiple connections per session

Microsoft iSCSI Initiator 2.x supports multiple connections per session (MC/S), allowing an iSCSI initiator and target to establish redundant I/O paths. Setting up redundant paths properly is important to ensure high availability of the Celerra iSCSI LUNs. The Exchange server should have separate NIC cards and/or iSCSI HBAs and separate network infrastructure (cables, switches, routers, and so on), while the Celerra iSCSI target should have multiple target portals. The following are the steps to set up MC/S with the Microsoft iSCSI Initiator on the Exchange server.

1. Open the Microsoft iSCSI Initiator from Start > Settings > Control Panel > iSCSI Initiator. To add a target portal address, select the Discovery tab and click the Add button.

Figure 24 Add target portals

2. Enter the IP address of the target portal selected for MC/S; the default port is 3260. Click OK.

Figure 25 Add Target Portals dialog box

3. Click the Add button again on the Discovery tab to add the second iSCSI portal.

Figure 26 Add another target portal

4. Enter the second target portal IP address of the Celerra iSCSI target and click OK.

Figure 27 Add the second target portals

5. Verify that both target portals appear, as shown below.

Figure 28 List target portals
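Steps 1 through 5 can also be performed from the command line with iscsicli, which is installed with the Microsoft iSCSI Initiator. This is a minimal sketch; the two portal addresses below are placeholders for your own Celerra target portal IPs, and the remaining logon and MC/S steps are easier to complete in the GUI as described next.

    rem add both Celerra target portals (IP addresses are examples)
    iscsicli QAddTargetPortal 192.168.1.10
    iscsicli QAddTargetPortal 192.168.2.10
    rem list the discovered portals to confirm both were added
    iscsicli ListTargetPortals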

6. To log on to the iSCSI target, select the IQN name of the iSCSI target from the list and click Log On.

Figure 29 Select target for logon

7. Enable Automatically restore this connection when the system boots; this will restore the connection to the iSCSI drives after the system reboots. Click Advanced.

Figure 30 Select Automatic restore

8. Select the Source IP and its corresponding Target Portal in the same subnet from the drop-down menus, as indicated below, and click OK.

Figure 31 Set advanced settings

9. The Status now shows Connected, indicating that the connection between the initiator and the target has been established. This has added only one connection to the target. To add a second connection to the session, click the Details button as shown below.

Figure 32 Connection status

10. Click on the Connections button to add the second connection.

Figure 33 Target Properties

11. Click Add on the Session Connections tab to add a second connection to the iSCSI session.

Figure 34 Session Connections

12. The target name is the same because we are adding a second connection to the same target. Click Advanced as shown below.

Figure 35 Add Connection

13. Select the next source IP and its corresponding Target Portal in the same subnet (as added on the Discovery tab in step 2) from the drop-down menus, as shown below. Click OK.

Figure 36 Second connections advanced settings

14. MC/S is now set up, with two connections established to the same iSCSI target, as indicated below. Click OK to complete the MC/S setup.

Figure 37 Second connections advanced settings
