PS SERIES STORAGE ARRAYS 90,000-USER STORAGE SOLUTION FOR MICROSOFT EXCHANGE SERVER 2007

PS Series Firmware Version: 4.0.1
Tested with: ESRP Storage Version 2.1
Test Date: November 11, 2008
Document Version: 1.1

Copyright 2009 Dell Inc. All Rights Reserved. Dell EqualLogic is a trademark of Dell Inc. All trademarks and registered trademarks mentioned herein are the property of their respective owners. Possession, use, or copying of the documentation or the software described in this publication is authorized only under the license agreement specified herein. Dell Inc. will not be held liable for technical or editorial errors or omissions contained herein. The information in this document is subject to change. December 2008 WWW.DELL.COM/PSseries

PREFACE

Thank you for your interest in Dell EqualLogic PS Series storage products. We hope you will find the PS Series products intuitive and simple to configure and manage. PS Series arrays optimize resources by automating volume and network load balancing. Additionally, PS Series arrays offer all-inclusive array management software, host software, and free firmware updates. The following value-add features and products integrate with PS Series arrays and are available at no additional cost:

PS Series Array Software
- Firmware: Installed on each array, this software allows you to manage your storage environment and provides capabilities such as volume snapshots, clones, and replicas to ensure data hosted on the arrays can be protected in the event of an error or disaster.
  - Group Manager GUI: Provides a graphical user interface for managing your array.
  - Group Manager CLI: Provides a command line interface for managing your array.
- Manual Transfer Utility (MTU): Runs on Windows and Linux host systems and enables secure transfer of large amounts of data to a replication partner site when configuring disaster tolerance. You use portable media to eliminate network congestion, minimize downtime, and quick-start replication.

Host Software for Windows
- Host Integration Tools:
  - Remote Setup Wizard (RSW): Initializes new PS Series arrays, configures host connections to PS Series SANs, and configures and manages multipathing.
  - Multipath I/O Device Specific Module (MPIO DSM): Includes a connection-awareness module that understands PS Series network load balancing and facilitates host connections to PS Series volumes.
  - VSS and VDS Provider Services: Allow third-party backup software vendors to perform off-host backups.
  - Auto-Snapshot Manager/Microsoft Edition (ASM/ME): Provides point-in-time SAN protection of critical application data using PS Series snapshots, clones, and replicas of supported applications such as SQL Server, Exchange Server, Hyper-V, and NTFS file shares.
- SAN HeadQuarters (SAN HQ): Provides centralized monitoring, historical performance trending, and event reporting for multiple PS Series groups.

Host Software for VMware
- Storage Adapter for Site Recovery Manager (SRM): Allows SRM to understand and recognize PS Series replication for full SRM integration.
- Auto-Snapshot Manager/VMware Edition (ASM/VE): Integrates with VMware Virtual Center and PS Series snapshots to allow administrators to enable Smart Copy protection of Virtual Center folders, datastores, and virtual machines.
- MPIO Plug-In for VMware ESX: Provides enhancements to existing VMware multipathing functionality.

Current Customers Please Note: You may not be running the latest versions of the tools and software listed above. If you are under valid warranty or support agreements for your PS Series array, you are entitled to obtain the latest updates and new releases as they become available. To learn more about any of these products, contact your local sales representative or visit the Dell EqualLogic site at http://www.equallogic.com. To set up a Dell EqualLogic support account to download the latest available PS Series firmware and software kits, visit: https://www.equallogic.com/secure/login.aspx?returnurl=%2fsupport%2fdefault.aspx

TABLE OF CONTENTS

Preface
Overview
Disclaimer
PS Series Storage Array Features
Solution Description
  Hardware and Software
  Exchange Storage Group Layout
  Targeted Customer Profile
  Tested Deployment
Best Practices
Additional Information
Test Result Summary
  Reliability Results
  Performance Results
  Streaming Backup/Recovery Performance
Conclusion
Appendix A
  Microsoft Exchange Server Jetstress Results for BLIZZARD
  Microsoft Exchange Server Jetstress Results for CYCLONE
  Microsoft Exchange Server Jetstress Results for DOWNPOUR
  Microsoft Exchange Server Jetstress Results for FOG
  Microsoft Exchange Server Jetstress Results for GALE
  Microsoft Exchange Server Jetstress Results for HAIL
  Microsoft Exchange Server Jetstress Results for HURRICANE
  Microsoft Exchange Server Jetstress Results for ICE
  Microsoft Exchange Server Jetstress Results for MIST
  Microsoft Exchange Server Jetstress Results for RAIN
  Microsoft Exchange Server Jetstress Results for SNOW
  Microsoft Exchange Server Jetstress Results for TYPHOON

OVERVIEW

This document provides information on Dell's storage solution for Microsoft Exchange Server, based on the Microsoft Exchange Solution Reviewed Program (ESRP) Storage program.[1] This solution supports 90,000 simulated Exchange users using 14 Dell EqualLogic PS5000XV arrays with Seagate Cheetah 15K.6 450GB 15,000 rpm SAS drives. For any questions or comments regarding the contents of this document, contact Dell.

[1] The ESRP Storage program was developed by Microsoft Corporation to provide a common storage testing framework for vendors to provide information on their storage solutions for Microsoft Exchange Server software. For more details on the Microsoft ESRP Storage program, see the following URL: http://www.microsoft.com/technet/prodtechnol/exchange/2007/esrp.mspx

DISCLAIMER

This document has been produced independently of Microsoft Corporation. Microsoft Corporation expressly disclaims responsibility for, and makes no warranty, express or implied, with respect to the accuracy of the contents of this document. The information contained in this document represents the current view of Dell Inc. on the issues discussed as of the date of publication. Due to changing market conditions, it should not be interpreted to be a commitment on the part of Dell, and Dell cannot guarantee the accuracy of any information presented after the date of publication.

PS SERIES STORAGE ARRAY FEATURES

Using PS Series storage arrays, businesses can leverage their existing Ethernet infrastructure and deploy a complete, easy-to-manage iSCSI SAN, with enterprise-level features included at no extra cost. The revolutionary PS Series architecture was specifically designed to decrease the storage management burden on IT administrators and to alleviate CIO planning and budgetary concerns. Dell chose the iSCSI protocol, with its networking and connectivity advantages, as the basis of the storage solution, and then built intelligence, automation, and redundancy into each PS Series storage array.

PS Series storage arrays use storage virtualization technology to mask the underlying complexity of the storage configuration. This virtualization occurs within and across multiple arrays that are logically grouped together, making management simple and efficient. Reliable hardware, intuitive graphical and command line user interfaces, and automated operations improve productivity and service levels, while RAID configuration, storage expansion, disk sparing, and performance optimization occur automatically. An IP network is used to connect hosts and applications to storage volumes, and also to connect arrays to each other, providing a communication mechanism that the arrays use to share configuration data and collaborate during data provisioning and load balancing. With the self-managing capabilities of PS Series storage arrays, administrators can provision data on demand and make configuration changes quickly and easily without disrupting running applications.

The PS Series storage array is a true modular storage system. Each array contains redundant hot-swappable components for high availability, exceeding 99.999% reliability. An array does not act individually, but as part of a group of one or more arrays, accessed through a single IP address. Each array is configured with the RAID level of your choice. Performance load balancing enables volume data to be stored where the RAID configuration is optimal. When more capacity is needed,

you simply add another member to the group; capacity and performance scale automatically and linearly. Whether you have one array or many, the group provides a single management view, and the administrative effort remains the same. Using a PS Series group for disk storage, diverse operating systems and a wide range of applications can share a reliable and high-performance storage system that can scale from hundreds of gigabytes to more than 100 terabytes.

Administrators can access the group through a web browser, network connection, or serial connection. The graphical and command line user interfaces present a unified view of the storage that makes provisioning quick and easy. You can instantly create, expand, and delete volumes. Group storage space can be organized into a single pool or multiple pools for increased control and optimal flexibility. In addition, volume snapshots and replicas can be created on demand or through a schedule, providing online backup and restore capabilities with unmatched performance. Event notification mechanisms, including e-mail, syslog, and SNMP, ensure that any problems in the SAN can be quickly identified and resolved. Automatic controller failover and disk sparing mean that failures are handled without requiring user intervention. Servicing the system (including replacing disks, controllers, fans, and power supplies) is all done online.

For a complete storage solution, Dell also provides host-based utilities at no extra cost. The Host Integration Tools enable easy point-and-click array initialization and host configuration. In addition, multipath I/O support enables you to create a reliable and high-performance I/O path between servers and PS Series group data, while Auto-Snapshot Manager (VSS provider) enables you to create snapshots that are coordinated with Windows applications.

In order to provide a truly complete system, Dell includes numerous advanced features as standard functionality in every PS Series storage array, so there are no hardware add-ons or software licenses to deal with:

- Modular hardware. A PS Series group can easily grow or shrink to accommodate workload changes, so administrators can purchase only the storage they need when they need it. Future products will fully interoperate with first-generation arrays, protecting your initial investment.

- Fully-redundant, fault-tolerant storage array. Each array includes redundant, hot-swappable components (disks, control modules, fans, and power supplies) for a no-single-point-of-failure configuration. Components fail over automatically, without user intervention or disrupting data availability. In addition, data in each array is protected with RAID technology.

- Support for RAID 10, RAID 5, and RAID 50. You can choose to configure arrays with RAID 10, RAID 5, or RAID 50, depending on your capacity and application needs.

- Support for a variety of disk drives. Serial ATA (SATA) and Serial-Attached SCSI (SAS) drives provide flexibility in capacity and performance to meet all needs.

- Automatic spare configuration and utilization. Disk spares are automatically configured and used to replace failed disks; no user intervention is required.

- Auto-Stat Disk Monitoring System (ADMS). By continually monitoring disk drive health within a PS Series storage array or across a PS Series group, ADMS ensures optimal data availability. ADMS automatically scans drives in the background to proactively detect and correct media anomalies.

- High-performance control modules. Dual control modules provide support for network interface and control module failover. Nonvolatile write-back caches are mirrored across the control modules to protect data. Each control module has three Gigabit Ethernet interfaces and copper-based network connectors.

- Simple hardware installation. Only a single network connection on an array is required for operation. Additional network connections can be added at any time (up to a maximum of three on each array) for increased bandwidth and reliability.

- Support for standard Ethernet networks. Because PS Series storage arrays use standard Ethernet connections to provide access to storage, there is no need to train administrators in unfamiliar and complex technologies like Fibre Channel. Also, costs are reduced due to the high volumes and intense competition among Ethernet hardware vendors.

- Easy setup and management. A simple setup utility lets you quickly configure an array on the network and create a PS Series group; in minutes, you have a functioning iSCSI SAN. By automating complex operations like RAID configuration, disk sparing, data provisioning, and load balancing, even novices can effectively manage the SAN.

- Graphical and command line user interfaces. Password-protected management interfaces provide a single-system view of the storage. Administrators do not need multiple consoles to perform storage management tasks. Using the Group Manager GUI, creating and managing volumes and configuring security, networking, and event notification are point-and-click operations. Also available is an equivalent command line interface (CLI) that can be accessed through telnet, SSH, or a serial connection, or used in scripts.

- Automatic data provisioning. There is no need for administrators to manually create RAID sets or map data onto disks or individual controllers. Arrays in a group contribute space to a shared pool of storage, from which you create volumes. Each volume has a specific size and access controls. To increase a volume, just specify a new size. The group handles storage allocation and capacity balancing across the disks and arrays.

- Dynamic load balancing. As the workload changes, data and network I/O are automatically load balanced within and across arrays in the group, with no impact on applications and no user intervention. Thus, hot spots can be quickly detected and eliminated.

- Online and seamless scalability. Increasing array capacity is as easy as installing additional drives or adding more network connections. You can seamlessly expand overall group capacity by adding another array to a PS Series group. In all cases, performance scales automatically as disk data and network I/O are load balanced across the added resources. Processing power also increases due to the additional controllers and caches. Meanwhile, volumes remain available with no impact on hosts and applications. There is no need to open a server cabinet or reconfigure an operating system. The additional storage space and network bandwidth are immediately available for use. More than 100 TB of storage can be configured in a single group. Notably, as the group expands, the management effort remains constant: a group with one array is as easy to manage as a multi-member group. Future Dell products featuring the latest technology will fully interoperate with existing arrays, protecting your initial investment. In a PS Series group, there is no need for arrays to be the same model; the peer storage architecture ensures that all devices are utilized optimally and automatically.

- Robust security for both data and management access. Security between an iSCSI initiator (host) and iSCSI target (volume) can be based on IP address, iSCSI initiator name, or CHAP user name, eliminating the need to understand complicated security technologies (such as Fibre Channel switch zoning or LUN masking). CHAP authentication can be provided through the PS Series group itself or an external RADIUS server. In addition, access to the group for management purposes requires an administrative account and password. Accounts can have either read-write or read-only privileges.

- Advanced features are standard in all arrays. A key PS Series design principle is to include advanced functionality in all arrays. The result is a complete solution with built-in intelligence and advanced features. All the features described below are standard on each array and require no additional software, licenses, or cost.

- Cloning. A clone is an image copy of a volume. Cloning is commonly used in multiple-server deployments; for example, a master image of a system can be created and then cloned for each server. Cloning can dramatically reduce overhead when deploying replicated servers, such as blade servers and web servers.

- Snapshots. A snapshot quickly captures a volume's contents at a specific point in time and can be used for backups, testing, and upgrades. Both instant and scheduled snapshots are supported. Snapshots greatly simplify and improve the performance of backup and recovery operations. Consistency groups can be created for simultaneous snapshots, maintaining application synchronization across multiple data volumes.

- Volume Shadow Copy Service (VSS). EqualLogic arrays are integrated with Microsoft's VSS framework, which is included with Windows Server 2003. This feature enables turnkey snapshot backups that can offload the backup process from application servers.

- Virtual Disk Service (VDS). The EqualLogic VDS provider enables you to use Microsoft Storage Manager for SANs to create and manage volumes in a PS Series group.

- Replication. Using two PS Series groups, you can replicate volumes across unlimited distances to protect your data. Replication enables you to set up a simple, yet robust disaster recovery plan that guards against catastrophic events. A replica represents the contents of a volume at a specific point in time and is similar to a snapshot, except that it must be stored separately from the original volume. If the original volume is destroyed, you can recover data by cloning a replica. This creates a new volume containing the volume data that existed at the time the replica was created.

- Multipath I/O. A redundant network path eliminates any point of failure between hosts and disk storage and improves availability. For high performance, you can load balance I/O across multiple ports (HBAs or NICs).

- SAN Boot. Booting servers directly from the SAN is operationally identical to a traditional boot process, but can be accomplished easily and efficiently across hundreds of servers.

- Storage Pools. With PS Series storage, you can divide SAN space into multiple storage pools in a single PS Series group to build an efficient, flexible, easy-to-manage networked storage environment. Pools can be used for segregation or tiering of data online.

- Widespread interoperability. PS Series storage arrays are ideal for heterogeneous environments, with support for most major operating systems and cluster software.
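Many of these operations are scriptable through the Group Manager CLI mentioned above. The following is a minimal sketch, assuming a PS Series group running 4.x-era firmware; the volume name, size, and initiator IP address are hypothetical, and exact syntax should be confirmed against the CLI reference for your firmware version:

    > volume create exch-db01 1600GB
      (creates a 1,600GB volume in the default storage pool)
    > volume select exch-db01 access create ipaddress 10.10.5.21
      (restricts access to a single iSCSI initiator by IP address)
    > volume select exch-db01 snapshot create-now
      (takes an on-demand snapshot of the volume)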

SOLUTION DESCRIPTION

The following sections outline the hardware and software environment that was used to run the ESRP tests.

Hardware and Software

The hardware environment is described in the following table.

Storage: 14 PS5000XV storage arrays,[2] configured into two PS Series groups, with four data storage pools and one log storage pool (to allow physical separation of data and logs)
Drives: 224 15K-RPM 450GB Serial Attached SCSI (SAS) disk drives
Servers: Six Dell PowerEdge 2950 servers, each with one Intel Xeon 3.00GHz dual-core CPU and 4GB of memory, running Microsoft Windows Server 2008 SP1 Standard x64 Edition; six Dell PowerEdge 1950 servers, each with one Intel Xeon 3.00GHz quad-core CPU and 8GB of memory, running Microsoft Windows Server 2008 SP1 Standard x64 Edition
Ethernet connections: Four Gigabit Ethernet connections per server, using two dual-port Intel Gigabit Ethernet adapters (model: PRO/1000 PT, driver v9.9.12.0). All servers and arrays were connected to three Cisco Catalyst 3750-E Gigabit Ethernet switches.

[2] The PS5000XV has been qualified for use with Microsoft products. See the following URL for more information: http://www.microsoft.com/windowsserversystem/storage/partnerlist.mspx
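For reference, these figures imply 12 servers x 4 Gigabit Ethernet connections = 48 host-side ports and, as detailed under Tested Deployment below, 14 arrays x 3 Gigabit Ethernet interfaces = 42 array-side ports, all distributed across the three Cisco 3750-E switches.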

Figure: PS Series Solution for Exchange Server Test Configuration

Exchange Storage Group Layout

To perform the tests, two PS Series groups were created. Twelve Dell EqualLogic PS5000XV storage arrays were configured into a single PS Series group consisting of four storage pools, all of which were used for data. Two Dell EqualLogic PS5000XV arrays were configured into another PS Series group consisting of a single storage pool used for the logs.

There are twelve simulated Microsoft Exchange servers in the test. Each Exchange server uses two data volumes in a single data storage pool. Each Exchange server also has two log volumes in the log storage pool, matching the number of data volumes as required by the test.

The ESRP-Storage program focuses on storage solution testing to address performance and reliability issues with storage design. However, storage is not the only factor to take into consideration when designing a scalable Exchange solution. Other factors that affect server scalability are:

- Server processor utilization
- Server physical and virtual memory limitations
- Resource requirements for other applications
- Directory and network service latencies
- Network infrastructure limitations
- Replication and recovery requirements
- Client usage profiles

All these factors are beyond the scope of the ESRP-Storage test. Therefore, the number of mailboxes hosted per server as part of the tested configuration may not necessarily be viable for some customer deployments. For more information on identifying and addressing performance bottlenecks in an Exchange system, refer to Microsoft's Troubleshooting Microsoft Exchange Server Performance, available at the following URL: http://go.microsoft.com/fwlink/?linkid=23454

Targeted Customer Profile

The PS Series storage solution is intended for medium-sized Microsoft Exchange 2007 Server customers who want reliable, high-performance, and easy-to-manage disk storage. The tested configuration can support the following:

- 90,000 users
- 0.32 I/Os per second per user (tested at the MS Exchange 2007 heavy profile rate of 0.40 IOPS per user, which includes 20% headroom; see the worked figures below)
- 300MB mailbox size
- Eight storage groups per server, with five databases each
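Worked out against the tested deployment, these targets are: 7,500 mailboxes per host x 0.40 IOPS per mailbox = 3,000 IOPS per server (the per-server target used in the Appendix A Jetstress runs), and 90,000 mailboxes x 0.40 IOPS = 36,000 IOPS across the full solution (compare the aggregate of 36,589.88 Database Disk Transfers/sec achieved in the Test Result Summary).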

Tested Deployment

The following tables summarize the testing environment.

Simulated Exchange Configuration

Number of Exchange mailboxes simulated: 90,000
Number of hosts: 12
Number of mailboxes/host: 7,500
Number of storage groups/host: 8
Number of mailbox stores/storage group: 5
Number of mailboxes/mailbox store: 188
Number of mailbox store LUNs/storage group: 1
Simulated profile, I/Os per second per mailbox: 0.32 (0.40 for tests including 20% headroom)
Database LUN size: 1,600GB
Log LUN size: 20GB
Backup LUN size/storage group: N/A
Total database size for performance testing: 26,768GB
Percent storage capacity used by Exchange database: 70%[3]

Primary Storage Hardware

Storage connectivity: iSCSI
Storage model and OS/firmware revision: Dell EqualLogic PS5000XV, Firmware Rev: V4.0.1 (http://www.dell.com/products/view.aspx?id=2509)
Storage cache: 14GB (1GB per array)
Number of storage controllers: 28 (2 per array)
Number of storage ports: 42 (3 per array)
Maximum bandwidth of storage connectivity to host: 10Gb Ethernet
Switch type/model/firmware revision: Cisco Catalyst 3750-E Gigabit Ethernet Switch, Firmware Rev: V12.2(46)SE
HBA model and firmware: Intel PRO/1000 PT Dual Port Gigabit Ethernet Adapters
Number of HBAs/host: 2

[3] Storage performance characteristics change based on the percentage utilization of the individual disks. Tests that use a small percentage of the storage (~25%) may exhibit reduced throughput if the storage capacity utilization is significantly increased beyond what is tested in this paper.

Host server type: Dell PowerEdge 2950 (Intel Xeon dual-core 3.00GHz, 4GB RAM) and Dell PowerEdge 1950 (Intel Xeon quad-core 3.00GHz, 8GB RAM)
Total number of disks tested in solution: 224
Maximum number of spindles: 224

Primary Storage Software

NIC driver: Intel, Driver Version: 9.9.30.0
Multipathing: Dell EqualLogic HIT (Host Integration Toolkit) 3.11
Host OS: Windows Server 2008, Standard x64 Edition, Service Pack 1
ESE.dll file version: 08.01.0240.005
Replication solution name/version: N/A

Primary Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed, and firmware revision: Seagate Cheetah 15K.6 450GB 15,000 rpm SAS drives, Model: ST3450856SS, Firmware Revision: XQH6
Raw capacity per disk: 450GB
Number of physical disks in test: 192
Total raw storage capacity: 86,400GB
Number of disks per LUN: 48
RAID level: RAID10
Total formatted capacity: 38,502GB
Storage capacity utilization: 45%
Database capacity utilization: 31%

Primary Storage Disk Configuration (Transaction Log Disks)

Disk type, speed, and firmware revision: Seagate Cheetah 15K.6 450GB 15,000 rpm SAS drives, Model: ST3450856SS, Firmware Revision: XQH6
Raw capacity per disk: 450GB
Number of physical disks in test: 32
Total raw storage capacity: 14,400GB
Number of disks per LUN: 32
RAID level: RAID10
Total formatted capacity: 6,420GB
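These capacity figures are internally consistent: 192 data disks x 450GB = 86,400GB raw; RAID10 mirroring plus spare and formatting overhead reduces this to 38,502GB formatted, so storage capacity utilization is 38,502 / 86,400 = 45%. The 26,768GB of test databases fill 26,768 / 38,502 = 70% of formatted capacity and 26,768 / 86,400 = 31% of raw capacity. On the log side, 32 disks x 450GB = 14,400GB raw, yielding 6,420GB formatted.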

BEST PRACTICES

Microsoft Exchange Server is a disk-intensive application. Based on the tests using the ESRP framework, Dell recommends the following best practices to improve storage performance:

- Use diskpart (in Windows Server 2008) to align all Exchange-related disks on a 64KB boundary. For Dell EqualLogic PS Series arrays, use an alignment value of 64. (A sketch follows this list.)
- Allow the PS Series group to automatically balance the load across arrays, caches, and network links. Automatic load balancing reduces administrator effort as Exchange workloads change over time.
- In large Exchange deployments, isolate the Exchange workload from other application workloads by creating separate storage pools for Exchange-related volumes in a PS Series group and setting up separate servers for Exchange and other applications.
- Depending on the desired level of availability, you can configure Exchange using multiple storage pools in a PS Series group to provide complete resource and hardware isolation between logs and databases. You can also deploy Exchange using a single pool, which provides a high level of availability and makes provisioning simple. Performance and reliability are similar in either a single-pool group or a multiple-pool group.
- Size and configure first for I/O performance, then for storage capacity.
- Enable Dell EqualLogic Host Integration Tools V3.1 on Exchange servers to ensure highly available SAN connections with Microsoft's MPIO.
- Use Microsoft iSCSI software initiators in Exchange configurations. In these tests, the Microsoft iSCSI software initiator was used.
- Use separate volumes for Exchange databases and transaction logs to improve backup and recovery operations.
- Place SAN infrastructure on VLANs or subnets that differ from other production network traffic.
- Use non-blocking Gigabit Ethernet switches.

For Exchange 2007 best practices on storage design, visit the following URL: http://technet.microsoft.com/en-us/library/bb124518.aspx

ADDITIONAL INFORMATION

For more information, see the Dell website (www.dell.com). In addition, Dell EqualLogic technical documents may be useful: http://support.dell.com/equallogic
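For example, creating an aligned partition with diskpart looks like the following (a minimal sketch; the disk number and drive letter are illustrative, and the commands should be repeated for each Exchange database and log LUN):

    DISKPART> select disk 4
    DISKPART> create partition primary align=64
    DISKPART> assign letter=R
    DISKPART> exit

The align parameter is specified in kilobytes, so align=64 places the partition on a 64KB boundary as recommended above.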

TEST RESULT SUMMARY

This section provides a high-level summary of the test data from ESRP Storage. See Appendix A for detailed information about the test results.

Reliability Results

A number of tests in the framework check reliability. The goal is to verify that the storage can handle high I/O load for a long period of time. Both log and database files are analyzed for integrity after the stress test to ensure no database or log corruption. The following list provides an overview:

- There were errors in the saved event log file, but they were unrelated to the benchmark (for example, printer-driver-unavailable errors were recorded when Remote Desktop connections were established to the server from a remote client during the benchmark run).
- No errors were reported by the database and log checksum process.

Performance Results

The primary storage performance testing is designed to exercise the storage with the maximum sustainable Exchange I/O for over two hours. The test shows how long it takes for the storage to respond to an I/O under load. The data below is the sum of all of the logical disk I/Os and the average of all the logical disks' I/O latency during the test (which was run for six hours). Each server is listed separately, and the aggregate numbers across all servers are also presented.

Individual Server Metrics

The server metrics include the sum of I/Os across storage groups and the average latency across all storage groups on a per-server basis.

Table 1: BLIZZARD Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,001.92
  Average Database Disk Reads/sec: 1,576.84
  Average Database Disk Writes/sec: 1,425.08
  Average Database Disk Read Latency (ms): 19
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 592.46
  Average Log Disk Write Latency (ms): 7

Table 2: CYCLONE Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,056.04
  Average Database Disk Reads/sec: 1,659.56
  Average Database Disk Writes/sec: 1,396.49
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 572.53
  Average Log Disk Write Latency (ms): 7

Table 3: DOWNPOUR Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,105.65
  Average Database Disk Reads/sec: 1,687.30
  Average Database Disk Writes/sec: 1,418.36
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 586.51
  Average Log Disk Write Latency (ms): 7

Table 4: FOG Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,103.11
  Average Database Disk Reads/sec: 1,685.21
  Average Database Disk Writes/sec: 1,417.90
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 586.27
  Average Log Disk Write Latency (ms): 7

Table 5: GALE Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,062.18
  Average Database Disk Reads/sec: 1,662.58
  Average Database Disk Writes/sec: 1,399.59
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 574.08
  Average Log Disk Write Latency (ms): 7

Table 6: HAIL Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,025.98
  Average Database Disk Reads/sec: 1,585.96
  Average Database Disk Writes/sec: 1,440.02
  Average Database Disk Read Latency (ms): 19
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 594.23
  Average Log Disk Write Latency (ms): 7

Table 7: HURRICANE Performance Results

Database I/O
  Average Database Disk Transfers/sec: 2,915.74
  Average Database Disk Reads/sec: 1,527.95
  Average Database Disk Writes/sec: 1,387.78
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 581.72
  Average Log Disk Write Latency (ms): 7

Table 8: ICE Performance Results

Database I/O
  Average Database Disk Transfers/sec: 2,926.25
  Average Database Disk Reads/sec: 1,535.81
  Average Database Disk Writes/sec: 1,390.45
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 574.89
  Average Log Disk Write Latency (ms): 7

Table 9: MIST Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,256.17
  Average Database Disk Reads/sec: 1,765.89
  Average Database Disk Writes/sec: 1,490.28
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 604.34
  Average Log Disk Write Latency (ms): 7

Table 10: RAIN Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,110.63
  Average Database Disk Reads/sec: 1,632.06
  Average Database Disk Writes/sec: 1,478.57
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 610.22
  Average Log Disk Write Latency (ms): 7

Table 11: SNOW Performance Results

Database I/O
  Average Database Disk Transfers/sec: 2,925.86
  Average Database Disk Reads/sec: 1,535.51
  Average Database Disk Writes/sec: 1,390.34
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 571.34
  Average Log Disk Write Latency (ms): 7

Table 12: TYPHOON Performance Results

Database I/O
  Average Database Disk Transfers/sec: 3,100.36
  Average Database Disk Reads/sec: 1,685.18
  Average Database Disk Writes/sec: 1,415.19
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 578.90
  Average Log Disk Write Latency (ms): 7

Aggregate Performance Across All Servers

Aggregate performance is the sum of I/Os across all servers and the average latency across all servers.

Database I/O
  Average Database Disk Transfers/sec: 36,589.88
  Average Database Disk Reads/sec: 19,539.84
  Average Database Disk Writes/sec: 17,050.04
  Average Database Disk Read Latency (ms): 18
  Average Database Disk Write Latency (ms): 7
Transaction Log I/O
  Average Log Disk Writes/sec: 7,027.48
  Average Log Disk Write Latency (ms): 7
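As a consistency check, the aggregate transfer rate is the sum of the twelve per-server averages: 3,001.92 + 3,056.04 + 3,105.65 + 3,103.11 + 3,062.18 + 3,025.98 + 2,915.74 + 2,926.25 + 3,256.17 + 3,110.63 + 2,925.86 + 3,100.36 = 36,589.88 Database Disk Transfers/sec, an average of roughly 3,049 IOPS per server against the 3,000 IOPS per-server target.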

Streaming Backup/Recovery Performance

The test measures the maximum rate at which multiple databases can be backed up in series.

Database Read-Only Performance

The following table shows the average rate for a single database file.

MB read/sec (average per storage group): 25.92
MB read/sec (average per system): 207.36
MB read/sec (total): 2,695.73

(The per-system rate follows from the per-storage-group rate: 25.92 MB/sec x 8 storage groups = 207.36 MB/sec.)

Log Read-Only Performance

The test measures the maximum rate at which the log files can be played against the databases. The following table shows the average rate for 500 log files played in a single storage group. Each log file is 1 MB.

Average time to play one log file (sec): 0.11

CONCLUSION

This document was developed by Dell, Inc., and reviewed by the Microsoft Exchange Product team. The test results and data presented in this document are based on the tests introduced in the ESRP v2.0 test framework. Customers should not quote the data directly for their pre-deployment verification; it is still necessary to validate the storage design for a specific customer environment.

The ESRP Storage program is not designed to be a benchmarking program, and its tests are not designed to achieve the maximum throughput for a given solution. Rather, they are focused on producing recommendations from vendors for the Exchange application. Therefore, the data presented in this document should not be used for direct comparisons among the solutions.

APPENDIX A

Microsoft Exchange Server Jetstress Results for BLIZZARD

Performance Test Result Report

Test Summary

Overall Test Result: Pass
Machine Name: BLIZZARD
Test Description: 90,000 User ESRP Configuration
Test Start Time: 11/11/2008 5:31:19 PM
Test End Time: 11/12/2008 12:54:07 AM
Jetstress Version: 08.02.0060.000
Ese Version: 08.01.0240.005
Operating System: Windows Server (R) 2008 Standard Service Pack 1 (6.0.6001.65536)
Performance Log: C:\Program Files\Exchange Jetstress\ESRP Results\Performance_2008_11_11_17_32_43.blg; C:\Program Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_12_0_54_7.blg

Database Sizing and Throughput

Achieved I/O per Second: 3001.915
Target I/O per Second: 3000
Initial database size (bytes): 2359569285120
Final database size (bytes): 2396470771712
Database files (count): 40

Jetstress System Parameters

Thread count: 16 (per storage group)
Log buffers: 9000
Minimum database cache: 256.0 MB
Maximum database cache: 2048.0 MB
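For scale, the final database size of 2,396,470,771,712 bytes across 40 database files works out to about 59.9 billion bytes (roughly 57,136 MBytes) per file, matching the per-file lengths reported in the Database Checksum Report below.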

Insert operations: 40%
Delete operations: 30%
Replace operations: 5%
Read operations: 25%
Lazy commits: 55%

Disk Subsystem Performance

LogicalDisk     sec/read  sec/write  Disk Reads/sec  Disk Writes/sec  Bytes/Write
Database (R:)   0.019     0.007      788.556         713.748          (n/a)
Database (S:)   0.018     0.007      788.279         711.332          (n/a)
Log (L:)        0.000     0.007      0.000           296.690          6658.435
Log (M:)        0.000     0.007      0.000           295.770          6670.642

Host System Performance

Counter                              Average          Minimum          Maximum
% Processor Time                     8.340            5.551            14.610
Available MBytes                     1280.414         1274.000         1485.000
Free System Page Table Entries       33558103.034     33557859.000     33558522.000
Transition Pages RePurposed/sec      0.000            0.000            0.000
Pool Nonpaged Bytes                  128108546.844    127959040.000    129085440.000
Pool Paged Bytes                     99643995.022     99627008.000     99684352.000
Database Page Fault Stalls/sec       0.000            0.000            0.000

Test Log

11/11/2008 5:31:19 PM -- Jetstress testing begins...
11/11/2008 5:31:19 PM -- Prepare testing begins...
11/11/2008 5:32:01 PM -- Attaching databases...
11/11/2008 5:32:01 PM -- Prepare testing ends.
11/11/2008 5:32:01 PM -- Dispatching transactions begins...
11/11/2008 5:32:01 PM -- Database cache settings: (minimum: 256.0 MB, maximum: 2.0 GB)
11/11/2008 5:32:01 PM -- Database flush thresholds: (start: 20.5 MB, stop: 41.0 MB)
11/11/2008 5:32:43 PM -- Database read latency thresholds: (average: 0.02 sec/read, maximum: 0.05 sec/read).
11/11/2008 5:32:43 PM -- Log write latency thresholds: (average: 0.01 sec/write, maximum: 0.05 sec/write).
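Note that the achieved 3,001.915 I/Os per second equals the sum of the database volumes' read and write rates in the table above: 788.556 + 713.748 + 788.279 + 711.332 = 3,001.915; log writes are counted separately.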

11/11/2008 5:32:44 PM -- Operation mix: Sessions 16, Inserts 40%, Deletes 30%, Replaces 5%, Reads 25%, Lazy Commits 55%.
11/11/2008 5:32:44 PM -- Performance logging begins (interval: 15000 ms).
11/11/2008 5:32:44 PM -- Attaining prerequisites:
11/11/2008 5:42:11 PM -- \MSExchange Database(JetstressWin)\Database Cache Size, Last: 1936278000.0 (lower bound: 1932735000.0, upper bound: none)
11/11/2008 11:42:11 PM -- Performance logging ends.
11/12/2008 12:54:04 AM -- JetInterop batch transaction stats: 152655, 153674, 152648, 152569, 152870, 152592, 152959, and 153158.
11/12/2008 12:54:04 AM -- Dispatching transactions ends.
11/12/2008 12:54:04 AM -- Shutting down databases...
11/12/2008 12:54:07 AM -- Instance304.1 (complete), Instance304.2 (complete), Instance304.3 (complete), Instance304.4 (complete), Instance304.5 (complete), Instance304.6 (complete), Instance304.7 (complete), and Instance304.8 (complete)
11/12/2008 12:54:08 AM -- Performance logging begins (interval: 30000 ms).
11/12/2008 12:54:08 AM -- Verifying database checksums...
11/12/2008 3:39:09 AM -- R: (100% processed), and S: (100% processed)
11/12/2008 3:39:09 AM -- Performance logging ends.
11/12/2008 3:39:09 AM -- C:\Program Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_12_0_54_7.blg has 329 samples.
11/12/2008 3:39:11 AM -- C:\Program Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_12_0_54_7.html is saved.
11/12/2008 3:39:11 AM -- Verifying log checksums...
11/12/2008 3:39:14 AM -- L:\log01 (2 logs passed), L:\log02 (2 logs passed), L:\log03 (3 logs passed), L:\log04 (2 logs passed), M:\log05 (3 logs passed), M:\log06 (2 logs passed), M:\log07 (3 logs passed), and M:\log08 (2 logs passed)
11/12/2008 3:39:14 AM -- C:\Program Files\Exchange Jetstress\ESRP Results\Performance_2008_11_11_17_32_43.blg has 1477 samples.
11/12/2008 3:39:14 AM -- Creating test report...
11/12/2008 3:39:19 AM -- Volume R: has 0.0188 for sec/read.
11/12/2008 3:39:19 AM -- Volume S: has 0.0182 for sec/read.
11/12/2008 3:39:19 AM -- Volume L: has 0.0066 for sec/write.
11/12/2008 3:39:19 AM -- Volume L: has 0.0000 for sec/read.
11/12/2008 3:39:19 AM -- Volume M: has 0.0066 for sec/write.
11/12/2008 3:39:19 AM -- Volume M: has 0.0000 for sec/read.
11/12/2008 3:39:19 AM -- Test has 0 Maximum Database Page Fault Stalls/sec.
11/12/2008 3:39:19 AM -- Test has 0 Database Page Fault Stalls/sec samples higher than 0.
11/12/2008 3:39:19 AM -- C:\Program Files\Exchange Jetstress\ESRP Results\Performance_2008_11_11_17_32_43.xml has 1439 samples queried.

Database Checksum Report

Checksum Statistics - All

Database                     Seen pages  Bad pages  Correctable pages  Wrong page-no pages  File length / seconds taken
R:\esrp01\Jetstress1.edb     7312194     0          0                  0                    57126 MBytes / 513
R:\esrp01\Jetstress2.edb     7312450     0          0                  0                    57128 MBytes / 466

R:\esrp01\Jetstress3.edb     7314498     0          0                  0                    57144 MBytes / 487
R:\esrp01\Jetstress4.edb     7313474     0          0                  0                    57136 MBytes / 484
R:\esrp01\Jetstress5.edb     7311938     0          0                  0                    57124 MBytes / 482
R:\esrp02\Jetstress1.edb     7313474     0          0                  0                    57136 MBytes / 493
R:\esrp02\Jetstress2.edb     7315010     0          0                  0                    57148 MBytes / 487
R:\esrp02\Jetstress3.edb     7313474     0          0                  0                    57136 MBytes / 473
R:\esrp02\Jetstress4.edb     7313986     0          0                  0                    57140 MBytes / 469
R:\esrp02\Jetstress5.edb     7315266     0          0                  0                    57150 MBytes / 470
R:\esrp03\Jetstress1.edb     7311682     0          0                  0                    57122 MBytes / 474
R:\esrp03\Jetstress2.edb     7314498     0          0                  0                    57144 MBytes / 471
R:\esrp03\Jetstress3.edb     7312450     0          0                  0                    57128 MBytes / 476
R:\esrp03\Jetstress4.edb     7315266     0          0                  0                    57150 MBytes / 482
R:\esrp03\Jetstress5.edb     7313218     0          0                  0                    57134 MBytes / 472
R:\esrp04\Jetstress1.edb     7313474     0          0                  0                    57136 MBytes / 484
R:\esrp04\Jetstress2.edb     7313218     0          0                  0                    57134 MBytes / 484
R:\esrp04\Jetstress3.edb     7312450     0          0                  0                    57128 MBytes / 484
R:\esrp04\Jetstress4.edb     7314498     0          0                  0                    57144 MBytes / 465
R:\esrp04\Jetstress5.edb     7313218     0          0                  0                    57134 MBytes / 457

S:\esrp05\Jetstress1.edb     7312962     0          0                  0                    57132 MBytes / 600
S:\esrp05\Jetstress2.edb     7315010     0          0                  0                    57148 MBytes / 492
S:\esrp05\Jetstress3.edb     7312450     0          0                  0                    57128 MBytes / 495
S:\esrp05\Jetstress4.edb     7312194     0          0                  0                    57126 MBytes / 496
S:\esrp05\Jetstress5.edb     7314498     0          0                  0                    57144 MBytes / 485
S:\esrp06\Jetstress1.edb     7311938     0          0                  0                    57124 MBytes / 493
S:\esrp06\Jetstress2.edb     7315010     0          0                  0                    57148 MBytes / 484
S:\esrp06\Jetstress3.edb     7311938     0          0                  0                    57124 MBytes / 490
S:\esrp06\Jetstress4.edb     7315010     0          0                  0                    57148 MBytes / 487
S:\esrp06\Jetstress5.edb     7312194     0          0                  0                    57126 MBytes / 503
S:\esrp07\Jetstress1.edb     7312450     0          0                  0                    57128 MBytes / 487
S:\esrp07\Jetstress2.edb     7313474     0          0                  0                    57136 MBytes / 492
S:\esrp07\Jetstress3.edb     7314242     0          0                  0                    57142 MBytes / 494
S:\esrp07\Jetstress4.edb     7314754     0          0                  0                    57146 MBytes / 497
S:\esrp07\Jetstress5.edb     7314754     0          0                  0                    57146 MBytes / 492
S:\esrp08\Jetstress1.edb     7313474     0          0                  0                    57136 MBytes / 492
S:\esrp08\Jetstress2.edb     7313730     0          0                  0                    57138 MBytes / 512
S:\esrp08\Jetstress3.edb     7311682     0          0                  0                    57122 MBytes / 494
S:\esrp08\Jetstress4.edb     7313218     0          0                  0                    57134 MBytes / 516

S:\esrp08\Jetstress5.edb     7313218     0          0                  0                    57134 MBytes / 389
(Sum)                        292537936   0          0                  0                    2285452 MBytes / 9900

Disk Subsystem Performance (of checksum)

LogicalDisk     sec/read  sec/write  Disk Reads/sec  Disk Writes/sec
R:              0.083     0.000      1908.291        0.000
S:              0.085     0.000      1846.595        0.000

Memory System Performance (of checksum)

Counter                              Average          Minimum          Maximum
% Processor Time                     27.736           15.074           30.742
Available MBytes                     3387.687         3371.000         3399.000
Free System Page Table Entries       33559697.340     33558313.000     33560079.000
Transition Pages RePurposed/sec      0.000            0.000            0.000
Pool Nonpaged Bytes                  129178290.967    127995904.000    134225920.000
Pool Paged Bytes                     99134566.711     98684928.000     99143680.000

11/12/2008 12:54:04 AM -- Shutting down databases... 11/12/2008 12:54:07 AM -- Instance304.1 (complete), Instance304.2 (complete), Instance304.3 (complete), Instance304.4 (complete), Instance304.5 (complete), Instance304.6 (complete), Ins tance304. 7 (complete), and Ins tance304. 8 (complete) 11/12/2008 12:54:08 AM -- Performance logging begins (interval: 30000 ms). 11/12/2008 12:54:08 AM -- Verifying database checksums... 11/12/2008 3:39:09 AM -- R: (100% processed), and S: (100% processed) 11/12/2008 3:39:09 AM -- Performance logging ends. 11/12/2008 3:39:09 AM -- C:\Progr am Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_12_0_54_7.blg has 329 samples. Stress Test Result Report Test Summary Overall Test Result Pass Machine Name BLIZZARD Test Description 90,000 User ESRP Configuration Test Start Time 11/12/2008 8:59:31 AM Test End Time 11/13/2008 9:41:23 AM Jetstress Version 08.02.0060.000 Ese Version 08.01.0240.005 Operating Sys tem Windows Server (R) 2008 Standard Service Pack 1 (6.0.6001.65536) Performance Log C:\Progr am Files\Exchange Jetstress\ESRP Results\Stress_2008_11_12_9_0_55.blg C:\Progr am Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_13_9_41_23.blg Database Sizing and Throughput Achieved I/O per Second 2927.739 Target I/O per Second 3000 Initial database size 2396470771712 Final database size 2504058863616 Database files (count) 40 Jetstress System Parameters Thread count 16 (per-storage group)

Log buffers: 9000
Minimum database cache: 256.0 MB
Maximum database cache: 2048.0 MB
Insert operations: 40%
Delete operations: 30%
Replace operations: 5%
Read operations: 25%
Lazy commits: 55%

Disk Subsystem Performance

LogicalDisk     sec/read  sec/write  Disk Reads/sec  Disk Writes/sec  Bytes/Write
Database (R:)   0.020     0.007      795.566         670.622          (n/a)
Database (S:)   0.020     0.007      794.803         666.749          (n/a)
Log (L:)        0.000     0.006      0.000           300.347          6042.642
Log (M:)        0.000     0.006      0.000           301.158          6029.536

Host System Performance

Counter                              Average          Minimum          Maximum
% Processor Time                     8.211            4.840            17.151
Available MBytes                     1264.900         1255.000         1467.000
Free System Page Table Entries       33559034.347     33558448.000     33559246.000
Transition Pages RePurposed/sec      0.000            0.000            0.000
Pool Nonpaged Bytes                  128248436.594    127934464.000    130244608.000
Pool Paged Bytes                     101560190.510    101376000.000    101941248.000
Database Page Fault Stalls/sec       0.000            0.000            0.000

Test Log

11/12/2008 8:59:31 AM -- Jetstress testing begins...
11/12/2008 8:59:31 AM -- Prepare testing begins...
11/12/2008 9:00:13 AM -- Attaching databases...
11/12/2008 9:00:13 AM -- Prepare testing ends.
11/12/2008 9:00:13 AM -- Dispatching transactions begins...
11/12/2008 9:00:13 AM -- Database cache settings: (minimum: 256.0 MB, maximum: 2.0 GB)

11/12/2008 9:00:13 AM -- Database flush thresholds: (start: 20.5 MB, stop: 41.0 MB)
11/12/2008 9:00:55 AM -- Database read latency thresholds: (average: 0.02 sec/read, maximum: 0.1 sec/read).
11/12/2008 9:00:55 AM -- Log write latency thresholds: (average: 0.01 sec/write, maximum: 0.1 sec/write).
11/12/2008 9:00:55 AM -- Operation mix: Sessions 16, Inserts 40%, Deletes 30%, Replaces 5%, Reads 25%, Lazy Commits 55%.
11/12/2008 9:00:55 AM -- Performance logging begins (interval: 15000 ms).
11/12/2008 9:00:56 AM -- Attaining prerequisites:
11/12/2008 9:09:05 AM -- \MSExchange Database(JetstressWin)\Database Cache Size, Last: 1935237000.0 (lower bound: 1932735000.0, upper bound: none)
11/13/2008 9:09:06 AM -- Performance logging ends.
11/13/2008 9:41:19 AM -- JetInterop batch transaction stats: 483203, 483525, 484170, 483507, 483006, 482239, 483633, and 483574.
11/13/2008 9:41:20 AM -- Dispatching transactions ends.
11/13/2008 9:41:20 AM -- Shutting down databases...
11/13/2008 9:41:23 AM -- Instance1168.1 (complete), Instance1168.2 (complete), Instance1168.3 (complete), Instance1168.4 (complete), Instance1168.5 (complete), Instance1168.6 (complete), Instance1168.7 (complete), and Instance1168.8 (complete)
11/13/2008 9:41:23 AM -- Performance logging begins (interval: 30000 ms).
11/13/2008 9:41:23 AM -- Verifying database checksums...
11/13/2008 12:34:07 PM -- R: (100% processed), and S: (100% processed)
11/13/2008 12:34:07 PM -- Performance logging ends.
11/13/2008 12:34:07 PM -- C:\Program Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_13_9_41_23.blg has 345 samples.
11/13/2008 12:34:08 PM -- C:\Program Files\Exchange Jetstress\ESRP Results\DBChecksum_2008_11_13_9_41_23.html is saved.
11/13/2008 12:34:08 PM -- Verifying log checksums...
11/13/2008 12:34:11 PM -- L:\log01 (2 logs passed), L:\log02 (3 logs passed), L:\log03 (3 logs passed), L:\log04 (3 logs passed), M:\log05 (2 logs passed), M:\log06 (2 logs passed), M:\log07 (3 logs passed), and M:\log08 (2 logs passed)
11/13/2008 12:34:11 PM -- C:\Program Files\Exchange Jetstress\ESRP Results\Stress_2008_11_12_9_0_55.blg has 5789 samples.
11/13/2008 12:34:11 PM -- Creating test report...
11/13/2008 12:34:34 PM -- Volume R: has 0.0200 for sec/read.
11/13/2008 12:34:34 PM -- Volume S: has 0.0195 for sec/read.
11/13/2008 12:34:34 PM -- Volume L: has 0.0057 for sec/write.
11/13/2008 12:34:34 PM -- Volume L: has 0.0000 for sec/read.
11/13/2008 12:34:34 PM -- Volume M: has 0.0057 for sec/write.
11/13/2008 12:34:34 PM -- Volume M: has 0.0000 for sec/read.
11/13/2008 12:34:34 PM -- Test has 0 Maximum Database Page Fault Stalls/sec.
11/13/2008 12:34:34 PM -- Test has 0 Database Page Fault Stalls/sec samples higher than 0.
11/13/2008 12:34:34 PM -- C:\Program Files\Exchange Jetstress\ESRP Results\Stress_2008_11_12_9_0_55.xml has 5756 samples queried.

Streaming Backup Test Result Report

Streaming Backup Statistics - All

Instance     Database Size (MBytes)     Elapsed Backup Time     MBytes Transferred/sec