Hitachi Unified Storage VM Dynamically Provisioned 21,600 Mailbox Exchange 2013 Mailbox Resiliency Storage Solution


Tested with: ESRP Storage Version 4.0
Test Date: February - March 2014

Notices and Disclaimer

Copyright 2014 Hitachi Data Systems Corporation. All rights reserved.

The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While Hitachi Data Systems Corporation has reviewed each item for accuracy in a specific situation, there is no guarantee that the same results can be obtained elsewhere.

All designs, specifications, statements, information and recommendations (collectively, "designs") in this manual are presented "AS IS," with all faults. Hitachi Data Systems Corporation and its suppliers disclaim all warranties, including without limitation the warranty of merchantability, fitness for a particular purpose and non-infringement, or arising from a course of dealing, usage or trade practice. In no event shall Hitachi Data Systems Corporation or its suppliers be liable for any indirect, special, consequential or incidental damages, including without limitation lost profit or loss of or damage to data arising out of the use or inability to use the designs, even if Hitachi Data Systems Corporation or its suppliers have been advised of the possibility of such damages.

This document has been reviewed for accuracy as of the date of initial publication. Hitachi Data Systems Corporation may make improvements and/or changes in products and/or programs at any time without notice.

Table of Contents

Overview
Disclaimer
Features
Solution Description
Targeted Customer Profile
Test Deployment
Replication Configuration
Best Practices
    Core Storage
    Storage-based Replication
    Backup Strategy
Test Results Summary
    Reliability
    Storage Performance Results
    Database Backup and Recovery Performance
Conclusion
Appendix A: Test Reports
    Performance Test Result: CB34
    Performance Test Database Checksums Result: CB34
    Stress Test Result: CB34
    Stress Test Database Checksums Result: CB34
    Backup Test Result: CB34
    Soft Recovery Test Result: CB34
    Soft Recovery Test Performance Result: CB34

Overview

This document provides information on a Microsoft Exchange Server 2013 mailbox resiliency storage solution that uses Hitachi Unified Storage VM storage systems with Hitachi Dynamic Provisioning. This solution is based on the Microsoft Exchange Solution Reviewed Program (ESRP) Storage program. For more information about the contents of this document or Hitachi Data Systems best practice recommendations for Microsoft Exchange Server 2013 storage design, see the Hitachi Data Systems Microsoft Exchange Solutions web page.

The ESRP Storage program was developed by Microsoft Corporation to provide a common storage testing framework for vendors to provide information on their storage solutions for Microsoft Exchange Server software. For more information about the Microsoft ESRP Storage program, see TechNet's overview of the program.

Disclaimer

This document has been produced independently of Microsoft Corporation. Microsoft Corporation expressly disclaims responsibility for, and makes no warranty, express or implied, with respect to the accuracy of the contents of this document.

The information contained in this document represents the current view of Hitachi Data Systems on the issues discussed as of the date of publication. Due to changing market conditions, it should not be interpreted to be a commitment on the part of Hitachi Data Systems, and Hitachi Data Systems cannot guarantee the accuracy of any information presented after the date of publication.

Features

The purpose of this testing was to measure ESRP 4.0 results for a Microsoft Exchange 2013 environment with 21,600 users and eight servers. This testing used Hitachi Unified Storage VM with Hitachi Dynamic Provisioning in a two-pool RAID-6 (6D+2P) resiliency configuration: one pool for databases and one for logs. These results help answer questions about the kind of performance capabilities to expect with a large-scale Exchange deployment on Hitachi Unified Storage VM.

Testing used eight Hitachi Compute Blade 2000 server blades in two chassis, each with the following:

- 64 GB of RAM
- Two quad-core Intel Xeon X5690 3.46 GHz CPUs
- Two dual-port 8 Gb/sec Fibre Channel PCIe HBAs (Emulex LPe1205-HI, using two ports per HBA) located in the chassis I/O expansion tray
- Microsoft Windows Server 2008 R2 Enterprise

This solution includes Exchange 2013 Mailbox Resiliency by using the database availability group (DAG) feature. The tested configuration uses eight DAGs, each containing thirty-four database copies and two servers (one simulated). The test configuration was capable of supporting 21,600 users with a 0.3 IOPS per user profile and a user mailbox size of 5 GB.

Hitachi Unified Storage VM with the following was used for these tests:

- 288 3 TB 7.2K RPM SAS disks
- 128 GB of cache
- 32 8 Gb/sec paths

Hitachi Unified Storage VM is an entry-level enterprise storage platform. It combines storage virtualization services with unified block, file, and object data management. This versatile, scalable platform offers a storage virtualization system to provide central storage services to existing storage assets. Unified management delivers end-to-end central storage management of all virtualized internal and external storage on Unified Storage VM. A unique, hardware-accelerated, object-based file system supports intelligent file tiering and migration, as well as virtual NAS functionality, without compromising performance or scalability.
The benefits of Unified Storage VM are the following:

- Enables the move to a new storage platform with less effort and cost when compared to the industry average
- Increases performance and lowers operating cost with automated data placement
- Supports scalable management for growing and complex storage environments while using fewer resources
- Achieves better power efficiency with more storage capacity for more sustainable data centers
- Lowers operational risk and data-loss exposure with data resilience solutions
- Consolidates management with end-to-end virtualization to prevent virtual server sprawl
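The headline sizing figures above (21,600 users, 5 GB mailboxes, 0.3 IOPS per user across eight active servers) can be cross-checked with simple arithmetic. The following Python sketch is ours, not part of the tested tooling; the constants are taken from this report:

```python
# Cross-check the tested configuration's headline sizing figures.
# Constants come from this report; the helper functions are illustrative.

USERS = 21_600
MAILBOX_GB = 5          # user mailbox size
IOPS_PER_USER = 0.3     # tested profile (includes 20% headroom)
SERVERS = 8             # active server blades tested

def total_database_gb(users: int, mailbox_gb: int) -> int:
    """Database capacity implied by the user count and mailbox quota."""
    return users * mailbox_gb

def per_server_iops(users: int, servers: int, iops_per_user: float) -> float:
    """Transactional IOPS each active server must sustain."""
    return users / servers * iops_per_user

print(total_database_gb(USERS, MAILBOX_GB))                   # 108000, matching the report
print(round(per_server_iops(USERS, SERVERS, IOPS_PER_USER)))  # 810
```

The 108,000 GB result matches the total database size reported for the performance tests, and 810 IOPS per server corresponds to 2,700 active mailboxes at 0.3 IOPS each.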

Hitachi Unified Storage VM is highly suitable for a variety of applications and host platforms that support the most demanding workloads. With internal and external storage virtualization capabilities, advanced replication technologies, tiered storage features and a tightly integrated management suite, Hitachi Unified Storage VM is fully capable of serving as the core underlying storage platform of high performance Exchange Server 2013 architectures, while maintaining the ability to support additional workloads of an organization such as SQL Server and SharePoint Server.

Solution Description

Deploying Microsoft Exchange Server 2013 requires careful consideration of all aspects of the solution architecture. Host servers need to be configured so that they are robust enough to handle the required Exchange load. The storage solution must be designed to provide the necessary performance while also being reliable and easy to administer. Of course, an effective backup and recovery plan should be incorporated into the solution as well. The aim of this solution report is to provide a tested configuration that uses Hitachi Unified Storage VM to meet the needs of a large Exchange Server deployment.

This solution uses Hitachi Dynamic Provisioning, which is enabled on Hitachi Unified Storage VM via a license key. In the most basic sense, Hitachi Dynamic Provisioning is similar to the use of a host-based logical volume manager (LVM), but with several additional features available within Hitachi Unified Storage VM and without the need to install software on the host or incur host processing overhead. Hitachi Dynamic Provisioning provides a superior solution by supporting one or more pools of wide striping across many RAID groups within Hitachi Unified Storage VM. One or more Hitachi Dynamic Provisioning virtual volumes (DPVols) of a user-specified logical size (with no initial physical space allocated) are created and associated with a single pool.

Primarily, Hitachi Dynamic Provisioning is deployed to avoid the routine issue of hot spots that occur on logical units (LUs) from individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. By using many RAID groups as members of a striped Hitachi Dynamic Provisioning pool underneath the virtual or logical volumes seen by the hosts, a host workload is distributed across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots and results in fewer mailbox moves for the Exchange administrator.
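The smoothing effect can be illustrated with a toy allocator. This is a deliberately simplified model of page-based wide striping; the round-robin placement policy is our assumption for illustration, not Hitachi's actual placement algorithm:

```python
# Toy model of page-based wide striping: each new fixed-size pool page
# (42 MB on this platform) is placed on the next RAID group in turn, so a
# volume's workload spreads across every group instead of concentrating on one.
# Round-robin placement is an illustrative simplification of the real allocator.

PAGE_MB = 42

def allocate_pages(raid_groups: int, pages_needed: int) -> list[int]:
    """Return how many pages land on each RAID group under round-robin placement."""
    counts = [0] * raid_groups
    for page in range(pages_needed):
        counts[page % raid_groups] += 1
    return counts

# Spreading a 66 GB allocation (1610 pages) over a 32-group pool:
pages = -(-66 * 1024 // PAGE_MB)        # ceiling division
layout = allocate_pages(32, pages)
print(max(layout) - min(layout))        # 1 -- near-perfect balance across groups
```

However the hosts address their volumes, no single RAID group ends up holding more than one extra page, which is the hot-spot smoothing described above.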
Hitachi Dynamic Provisioning also carries the side benefit of thin provisioning, where physical space is only assigned from the pool to the DPVol as needed, in 42 MB pool pages mapped to that DPVol's logical block address range. A pool can also be dynamically expanded by adding more RAID groups without disruption or downtime. Upon expansion, a pool can be rebalanced easily so that the data and workload are wide striped evenly across the current and newly added RAID groups that make up the pool.

High availability is also a part of this solution with the use of database availability groups (DAGs), the base component of the high availability and site resilience framework built into Microsoft Exchange Server 2013. A DAG is a group of up to 16 mailbox servers that host a set of databases and logs and use continuous replication to provide automatic database-level recovery from failures that affect individual servers or databases. Any server in a DAG can host a copy of a mailbox database from any other server in the DAG. When a server is added to a DAG, it monitors and works with the other servers in the DAG to provide automatic recovery, delivering a robust, highly available Exchange solution without the administrative complexities of traditional failover clustering. For more information about the DAG feature in Exchange Server 2013, see http://technet.microsoft.com/en-us/library/dd979799.aspx

This solution includes two copies of each Exchange database using eight DAGs, with each DAG configured with two server blades (one simulated) that host active mailboxes in thirty-four databases. To target the 21,600-user resiliency solution, a Hitachi Unified Storage VM storage system was configured with 288 disks (maximum 1152). Eight servers (one per DAG) were used, with each server configured with 2,700 mailboxes. There were 272 active databases plus their simulated copies for the tests, for a total database size of 108,000 GB.

Each DAG contained two copies of the databases hosted by that DAG:

- A local, active copy on a server connected to the primary Hitachi Unified Storage VM
- A passive copy (simulated) on another server connected to a second Hitachi Unified Storage VM (simulated)

This recommended configuration can support both high-availability and disaster-recovery scenarios when the active and passive database copies are allocated among both DAG members and dispersed across both storage systems. Each simulated DAG server node in this solution maintains a mirrored configuration and possesses adequate capacity and performance capabilities to support the second set of replicated databases. For more information, see the Hitachi Data Systems Storage Systems web page.

This solution enables organizations to consolidate Exchange Server 2013 DAG deployments on two Hitachi Unified Storage VM storage systems. Using identical hardware and software configurations guarantees that an active database and its replicated copy do not share storage paths, disk spindles, or storage controllers, making this a reliable, high-performing, highly available Exchange Server 2013 solution that is cost effective and easy to manage. It also helps ensure that storage-related performance and service levels are maintained regardless of which server is hosting the active database. If further protection is needed in a production environment, additional Exchange Server 2013 mailbox servers can easily be added to support these failover scenarios.

The disks in Hitachi Unified Storage VM were organized into parity groups for use by databases or logs. The 288 3 TB 7.2K RPM SAS disks used in these tests were configured as 36 RAID-6 (6D+2P) parity groups for the Exchange databases and logs. Each parity group had 8 LDEVs of 1.99 TB configured. The 256 LDEVs from parity groups 1-1 to 1-32 were added to HDP Pool-0 (Database Pool), and the 32 LDEVs from parity groups 2-1 to 2-4 were added to HDP Pool-1 (Log Pool).
There were 272 DPVols (each specified to be 1940 GB) created from HDP Pool-0 (Database Pool). Similarly, 272 DPVols (each specified to be 194 GB) were created from HDP Pool-1 (Log Pool). The database DPVols and log DPVols were then assigned to the hosts as LUNs.
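The layout arithmetic above can be reproduced directly. The figures below come from this report; the variable names are ours:

```python
# Reproduce the parity-group, LDEV, and DPVol counts described above.

DISKS = 288
DISKS_PER_PG = 8        # RAID-6 (6D+2P): 8 disks per parity group
LDEVS_PER_PG = 8        # 1.99 TB LDEVs carved from each parity group
DB_PGS, LOG_PGS = 32, 4 # parity groups feeding HDP Pool-0 and Pool-1

parity_groups = DISKS // DISKS_PER_PG
db_ldevs = DB_PGS * LDEVS_PER_PG
log_ldevs = LOG_PGS * LDEVS_PER_PG

print(parity_groups)        # 36 parity groups in total
print(db_ldevs, log_ldevs)  # 256 database-pool LDEVs, 32 log-pool LDEVs

# One 1940 GB database DPVol and one 194 GB log DPVol per Exchange database:
DATABASES = 272
print(DATABASES * 1940)     # 527680 GB of provisioned database volume space
```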

Table 1 outlines the port layout for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 1. Hitachi Unified Storage VM Ports to Server Mapping Configuration

Server  Primary path  Secondary path
CB34    1A 1B         2A 2B
CB35    1C 1D         2C 2D
CB36    3A 3B         4A 4B
CB37    3C 3D         4C 4D
CB38    5A 5B         6A 6B
CB39    5C 5D         6C 6D
CB40    7A 7B         8A 8B
CB41    7C 7D         8C 8D

Table 2 outlines the port layout with the database DPVol assignments for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 2. Hitachi Unified Storage VM Ports to Database DPVol Layout

Port  Databases          DB DPVols
1A    Databases 1-17     10:00-10:10
2A    Databases 18-34    10:11-10:21
1C    Databases 35-51    10:22-10:32
2C    Databases 52-68    10:33-10:43
3A    Databases 69-85    10:44-10:54
4A    Databases 86-102   10:55-10:65
3C    Databases 103-119  10:66-10:76
4C    Databases 120-136  10:77-10:87
5A    Databases 137-153  10:88-10:98
6A    Databases 154-170  10:99-10:A9
5C    Databases 171-187  10:AA-10:BA
6C    Databases 188-204  10:BB-10:CB
7A    Databases 205-221  10:CC-10:DC
8A    Databases 222-238  10:DD-10:ED
7C    Databases 239-255  10:EE-10:FE
8C    Databases 256-272  10:FF-11:0F

Table 3 outlines the port layout with the log DPVol assignments for the primary storage and servers. An identical configuration would be deployed on the replicated storage and servers for this solution.

Table 3. Hitachi Unified Storage VM Ports to Log DPVol Layout

Port  Logs         Log DPVols
1A    Log 1-17     11:10-11:20
2A    Log 18-34    11:21-11:31
1C    Log 35-51    11:32-11:42
2C    Log 52-68    11:43-11:53
3A    Log 69-85    11:54-11:64
4A    Log 86-102   11:65-11:75
3C    Log 103-119  11:76-11:86
4C    Log 120-136  11:87-11:97
5A    Log 137-153  11:98-11:A8
6A    Log 154-170  11:A9-11:B9
5C    Log 171-187  11:BA-11:CA
6C    Log 188-204  11:CB-11:DB
7A    Log 205-221  11:DC-11:EC
8A    Log 222-238  11:ED-11:FD
7C    Log 239-255  11:FE-12:0E
8C    Log 256-272  12:0F-12:1F

Table 4 provides the detailed specifications for the storage configuration, which uses RAID-6 (6D+2P) groups and 3 TB 7.2K disks. Dynamic Provisioning Pool 0 is dedicated to the databases and Dynamic Provisioning Pool 1 is dedicated to the logs.

Table 4. Hitachi Unified Storage VM Configuration Details

Host  Pool  Port   DPVols       Size (GB)  RAID Level  Description
cb34  0     1A/1B  10:00-10:10  1940       RAID-6      Databases 1-17
cb34  0     2A/2B  10:11-10:21  1940       RAID-6      Databases 18-34
cb35  0     1C/1D  10:22-10:32  1940       RAID-6      Databases 35-51
cb35  0     2C/2D  10:33-10:43  1940       RAID-6      Databases 52-68
cb36  0     3A/3B  10:44-10:54  1940       RAID-6      Databases 69-85
cb36  0     4A/4B  10:55-10:65  1940       RAID-6      Databases 86-102
cb37  0     3C/3D  10:66-10:76  1940       RAID-6      Databases 103-119
cb37  0     4C/4D  10:77-10:87  1940       RAID-6      Databases 120-136
cb38  0     5A/5B  10:88-10:98  1940       RAID-6      Databases 137-153
cb38  0     6A/6B  10:99-10:A9  1940       RAID-6      Databases 154-170
cb39  0     5C/5D  10:AA-10:BA  1940       RAID-6      Databases 171-187
cb39  0     6C/6D  10:BB-10:CB  1940       RAID-6      Databases 188-204
cb40  0     7A/7B  10:CC-10:DC  1940       RAID-6      Databases 205-221
cb40  0     8A/8B  10:DD-10:ED  1940       RAID-6      Databases 222-238
cb41  0     7C/7D  10:EE-10:FE  1940       RAID-6      Databases 239-255
cb41  0     8C/8D  10:FF-11:0F  1940       RAID-6      Databases 256-272
cb34  1     1A/1B  11:10-11:20  194        RAID-6      Log 1-17
cb34  1     2A/2B  11:21-11:31  194        RAID-6      Log 18-34
cb35  1     1C/1D  11:32-11:42  194        RAID-6      Log 35-51
cb35  1     2C/2D  11:43-11:53  194        RAID-6      Log 52-68
cb36  1     3A/3B  11:54-11:64  194        RAID-6      Log 69-85
cb36  1     4A/4B  11:65-11:75  194        RAID-6      Log 86-102
cb37  1     3C/3D  11:76-11:86  194        RAID-6      Log 103-119
cb37  1     4C/4D  11:87-11:97  194        RAID-6      Log 120-136
cb38  1     5A/5B  11:98-11:A8  194        RAID-6      Log 137-153
cb38  1     6A/6B  11:A9-11:B9  194        RAID-6      Log 154-170
cb39  1     5C/5D  11:BA-11:CA  194        RAID-6      Log 171-187
cb39  1     6C/6D  11:CB-11:DB  194        RAID-6      Log 188-204
cb40  1     7A/7B  11:DC-11:EC  194        RAID-6      Log 205-221
cb40  1     8A/8B  11:ED-11:FD  194        RAID-6      Log 222-238
cb41  1     7C/7D  11:FE-12:0E  194        RAID-6      Log 239-255
cb41  1     8C/8D  12:0F-12:1F  194        RAID-6      Log 256-272

The ESRP Storage program focuses on storage solution testing to address performance and reliability issues with storage design. However, storage is not the only factor to take into consideration when designing a scale-up Exchange solution. These factors also affect server scalability:

- Server processor utilization
- Server physical and virtual memory limitations
- Resource requirements for other applications
- Directory and network service latencies
- Network infrastructure limitations
- Replication and recovery requirements
- Client usage profiles

These factors are all beyond the scope of the ESRP Storage program. Therefore, the number of mailboxes hosted per server as part of the tested configuration might not necessarily be viable for some customer deployments. For more information about identifying and addressing performance bottlenecks in an Exchange system, see Microsoft's Troubleshooting Microsoft Exchange Server Performance.

Targeted Customer Profile

This solution is designed for medium to large organizations that plan to consolidate their Exchange Server 2013 storage on high-performance, high-reliability storage systems. This configuration is designed to support 21,600 Exchange users with the following specifications:

- Sixteen Exchange servers (eight tested, eight simulated for the database copies)
- Eight database availability groups (DAGs), each with two servers (one simulated) and two copies per database
- Two Hitachi Unified Storage VM storage systems (one tested)
- 0.25 IOPS per user (0.3 tested, allowing for 20 percent growth)
- 5 GB mailbox size
- Mailbox resiliency provides high availability and serves as the primary data protection mechanism
- Hitachi Unified Storage VM RAID protects against physical failure or loss
- 24x7 background database maintenance enabled

Test Deployment

The following tables summarize the testing environment.

Table 5. Simulated Exchange Configuration

Number of Exchange mailboxes simulated: 21,600
Number of database availability groups (DAGs): 8
Number of servers per DAG: 2 (1 simulated)
Number of active mailboxes per server: 2,700
Number of databases per host: 34
Number of copies per database: 2
Number of mailboxes per database: 79.4
Simulated profile: I/Os per second per mailbox (IOPS, includes 20% headroom): 0.3
Database LU size: 1940 GB
Log LU size: 194 GB
Total database size for performance testing: 108,000 GB
% storage capacity used by Exchange database**: 20.8%

**Storage performance characteristics change based on the percentage utilization of the individual disks. Tests that use a small percentage of the storage (~25%) might exhibit reduced throughput if the storage capacity utilization is significantly increased beyond what was tested for this paper.
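The derived rows in Table 5 follow from the other entries. A quick check in Python (the 518,400 GB formatted figure is taken from Table 8; the variable names are ours):

```python
# Derive the per-database and utilization rows of Table 5.

users = 21_600
databases = 272                    # active databases across the tested servers
mailbox_gb = 5
formatted_db_gb = 518_400          # formatted database-pool capacity (Table 8)

mailboxes_per_db = users / databases
total_db_gb = users * mailbox_gb
pct_capacity_used = total_db_gb / formatted_db_gb * 100

print(round(mailboxes_per_db, 1))    # 79.4
print(total_db_gb)                   # 108000
print(round(pct_capacity_used, 1))   # 20.8
```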

Table 6. Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 Hitachi Unified Storage VM, Firmware 73-03-01-00/00; WHQL listing: Hitachi Unified Storage VM
Storage cache: 128 GB
Number of storage controllers: 1
Number of storage ports: 32
Maximum bandwidth of storage connectivity to host: 256 Gb/sec (32 8 Gb/sec ports)
Switch type/model/firmware revision: NA
HBA model and firmware: Emulex LPe1205-HI, FW 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, 2 8 Gb/sec ports used per HBA
Host server type: Hitachi Compute Blade E55A2, 2 3.46 GHz Intel Xeon processors, 64 GB memory
Total number of disks tested in solution: 288
Maximum number of spindles that can be hosted in the storage: 1152

Table 7. Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.4.0-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 15.00.0516.026
Replication solution name/version: N/A

Table 8. Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS Disk, 3 TB, 7.2K, 6F-AD
Raw capacity per disk (GB): 3,000 (3 TB)
Number of physical disks in test: 256 (dynamic provisioning pool)
Total raw storage capacity (GB): 768,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-6 (6D+2P) at storage level
Total formatted capacity: 518,400 GB
Storage capacity utilization: 67.5%
Database capacity utilization: 68.7%

Table 9. Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS Disk, 3 TB, 7.2K, 6F-AD
Raw capacity per disk (GB): 3,000 (3 TB)
Number of spindles in test: 32 (dynamic provisioning pool)
Total raw storage capacity (GB): 96,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-6 (6D+2P) at storage level
Total formatted capacity: 64,800 GB
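The formatted-capacity and utilization rows in Tables 8 and 9 are consistent with the RAID-6 (6D+2P) geometry if each 3 TB disk formats to roughly 2,700 GB. That per-disk figure is inferred from the published totals rather than quoted in the report, so treat this as a consistency check, not a specification:

```python
# Consistency check for Tables 8 and 9. The ~2,700 GB formatted size per
# 3 TB disk is inferred from the published totals (an assumption, not a spec).

FORMATTED_GB_PER_DISK = 2_700
DATA_DISKS_PER_GROUP = 6            # RAID-6 (6D+2P): 6 data + 2 parity disks

def formatted_capacity_gb(parity_groups: int) -> int:
    """Formatted capacity contributed by a set of 6D+2P parity groups."""
    return parity_groups * DATA_DISKS_PER_GROUP * FORMATTED_GB_PER_DISK

print(formatted_capacity_gb(32))    # 518400 -- database pool (Table 8)
print(formatted_capacity_gb(4))     # 64800  -- log pool (Table 9)

# Utilization figures, read as formatted (or provisioned) space over raw space:
raw_db_gb = 256 * 3_000
print(round(formatted_capacity_gb(32) / raw_db_gb * 100, 1))  # 67.5
print(round(272 * 1_940 / raw_db_gb * 100, 1))                # 68.7
```

The 68.7% "database capacity utilization" row matches the 272 provisioned 1940 GB DPVols divided by the raw pool capacity, which is our reading of that metric.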

Replication Configuration

Table 10. Replication Configuration

Replication mechanism: Exchange Server 2013 Database Availability Group (DAG)
Number of links: 2
Simulated link distance: N/A
Link type: IP
Link bandwidth: GigE (1 Gb/sec)

Table 11. Replicated Storage Hardware

Storage connectivity (Fibre Channel, SAS, SATA, iSCSI): Fibre Channel
Storage model and OS/firmware revision: 1 Hitachi Unified Storage VM, Firmware 73-03-01-00/00; WHQL listing: Hitachi Unified Storage VM
Storage cache: 128 GB
Number of storage controllers: 1
Number of storage ports: 32
Maximum bandwidth of storage connectivity to host: 256 Gb/sec (32 8 Gb/sec ports)
Switch type/model/firmware revision: NA
HBA model and firmware: Emulex LPe1205-HI, FW 1.11X14
Number of HBAs per host: 2 dual-ported HBAs per host, 2 8 Gb/sec ports used per HBA
Host server type: Hitachi Compute Blade E55A2, 2 3.46 GHz Intel Xeon processors, 64 GB memory
Total number of disks tested in solution: 288
Maximum number of spindles that can be hosted in the storage: 1152

Table 12. Replicated Storage Software

HBA driver: Storport Miniport 7.2.20.006
HBA QueueTarget setting: 0
HBA QueueDepth setting: 32
Multipathing: Hitachi Dynamic Link Manager v7.4.0-00
Host OS: Microsoft Windows Server 2008 R2 Enterprise
ESE.dll file version: 15.00.0516.026
Replication solution name/version: N/A

Table 13. Replicated Storage Disk Configuration (Mailbox Store Disks)

Disk type, speed and firmware revision: SAS Disk, 3 TB, 7.2K, 6F-AD
Raw capacity per disk (GB): 3,000 (3 TB)
Number of physical disks in test: 256 (dynamic provisioning pool)
Total raw storage capacity (GB): 768,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-6 (6D+2P) at storage level
Total formatted capacity: 518,400 GB
Storage capacity utilization: 67.5%
Database capacity utilization: 68.7%

Table 14. Replicated Storage Disk Configuration (Transaction Log Disks)

Disk type, speed and firmware revision: SAS Disk, 3 TB, 7.2K, 6F-AD
Raw capacity per disk (GB): 3,000 (3 TB)
Number of spindles in test: 32 (dynamic provisioning pool)
Total raw storage capacity (GB): 96,000
Disk slice size (GB): N/A
Number of slices per LU or number of disks per LU: N/A
RAID level: RAID-6 (6D+2P) at storage level
Total formatted capacity: 64,800 GB

Best Practices

Microsoft Exchange Server 2013 is a disk-intensive application. It presents two distinct workload patterns to the storage: 32 KB random read/write operations to the databases, and sequential write operations of varying size (from 512 bytes up to the log buffer size) to the transaction logs. For this reason, designing an optimal storage configuration can prove challenging in practice. Based on testing run using the ESRP framework, Hitachi Data Systems recommends these best practices to improve the performance of Hitachi Unified Storage VM running Exchange 2013. For more information about Exchange 2013 storage design best practices, see the Microsoft TechNet article Mailbox Server Storage Design.

Core Storage

1. When formatting a newly partitioned LU, Hitachi Data Systems recommends setting the allocation unit size to 64 KB for the database files and 4 KB for the log files.
2. Disk alignment is no longer required when using Microsoft Windows Server 2008 or later.
3. Keep the Exchange workload isolated from other applications. Mixing in another I/O-intensive application whose workload differs from Exchange can degrade performance for both applications.
4. Use Hitachi Dynamic Link Manager multipathing software to provide fault tolerance and high availability for host connectivity.
5. Use Hitachi Dynamic Provisioning to simplify storage management of the Exchange database and log volumes.
6. Because of the difference in I/O patterns, isolate the Exchange databases from the logs. Create a dedicated Hitachi Dynamic Provisioning pool for the databases and a separate pool for the logs.
7. Size the log LUs to at least 10 percent of the size of the database LUs.
8. Hitachi Data Systems does not recommend using LU concatenation.
9. Hitachi Data Systems recommends implementing Mailbox Resiliency using the Exchange Server 2013 Database Availability Group feature.
10. Ensure that each DAG maintains at least two database copies to provide high availability.
11. Isolate active databases and their replicated copies in separate dynamic provisioning pools, or ensure that they are located on separate Hitachi Unified Storage VM systems.
12. Use fewer, larger LUs (up to 2 TB) for Exchange 2013 databases, with Background Database Maintenance (24x7) enabled.
13. Size storage solutions for Exchange based primarily on performance criteria. The number of disks, the RAID level, and the percent utilization of each disk directly affect the achievable performance. Factor in capacity requirements only after performance is addressed.
14. Disk size is unrelated to performance in terms of IOPS or throughput. Disk size determines the usable capacity of the LUs in a RAID group, which is a choice users make.

15. The number of spindles, coupled with the RAID level, determines the physical IOPS capacity of the RAID group and all of its LUs. If a RAID group has too few spindles, response times grow very quickly.

Storage-based Replication

N/A

Backup Strategy

N/A
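Best practices 13 through 15 can be illustrated with a back-of-the-envelope spindle calculation. This is a hypothetical sketch: the ~75 random IOPS per 7.2K SAS spindle and the RAID-6 write penalty of 6 are common industry rules of thumb, not figures published by Hitachi or measured in this test.

```python
import math

# Rough spindle estimate for a random-I/O workload on RAID-6.
# Assumptions (rules of thumb, not Hitachi-published figures):
# ~75 random IOPS per 7.2K SAS spindle, RAID-6 write penalty of 6.

def required_spindles(host_iops, read_ratio, disk_iops=75, write_penalty=6):
    reads = host_iops * read_ratio
    writes = host_iops - reads
    backend_iops = reads + writes * write_penalty  # each host write costs ~6 disk I/Os
    return math.ceil(backend_iops / disk_iops)

# 21,600 mailboxes at 0.3 IOPS each, ~70% reads (roughly the measured mix).
print(required_spindles(21_600 * 0.3, read_ratio=0.70))  # 216
```

Under these assumptions the workload needs on the order of 216 spindles for performance, which is consistent with the 256-disk database pool actually tested; capacity then comes second, as best practice 13 recommends.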

Test Results Summary

This section provides a high-level summary of the test data from ESRP and a link to the detailed HTML reports generated by the ESRP testing framework.

Reliability

A number of tests in the framework check reliability over a 24-hour window. The goal is to verify that the storage can handle a high I/O load for a long period of time. Following these stress tests, both the log and database files are analyzed for integrity to ensure that no database or log corruption occurred.

No errors were reported in the event log file for the storage reliability testing.
No errors were reported by the database and log checksum process.
Where performed, no errors were reported during the backup-to-disk test process.
No errors were reported by the database checksum on the remote storage database.

Storage Performance Results

Primary storage performance testing exercises the storage with the maximum sustainable Exchange-type I/O for two hours. The test shows how long the storage takes to respond to an I/O under load. The following data is the sum of all logical disk I/Os and the average of all logical disk I/O latencies over the two-hour test duration.

Individual Server Metrics

These individual server metrics show the sum of the I/O across the storage groups and the average latency across all storage groups on a per-server basis.

Table 15. Individual Server Metrics for Exchange Server (CB34)
Database I/O
Database Disk Transfers Per Second: 889
Database Disk Reads Per Second: 625
Database Disk Writes Per Second: 263
Database Disk Read Latency (ms): 18.3
Database Disk Write Latency (ms): 3.6
Transaction Log I/O
Log Disk Writes Per Second: 199
Log Disk Write Latency (ms): 0.3

Table 16. Individual Server Metrics for Exchange Server (CB35)
Database I/O
Database Disk Transfers Per Second: 862
Database Disk Reads Per Second: 606
Database Disk Writes Per Second: 255
Database Disk Read Latency (ms): 18.4
Database Disk Write Latency (ms): 3.5
Transaction Log I/O
Log Disk Writes Per Second: 193
Log Disk Write Latency (ms): 0.3

Table 17. Individual Server Metrics for Exchange Server (CB36)
Database I/O
Database Disk Transfers Per Second: 878
Database Disk Reads Per Second: 618
Database Disk Writes Per Second: 260
Database Disk Read Latency (ms): 18.4
Database Disk Write Latency (ms): 3.5
Transaction Log I/O
Log Disk Writes Per Second: 197
Log Disk Write Latency (ms): 0.3

Table 18. Individual Server Metrics for Exchange Server (CB37)
Database I/O
Database Disk Transfers Per Second: 872
Database Disk Reads Per Second: 613
Database Disk Writes Per Second: 259
Database Disk Read Latency (ms): 18.3
Database Disk Write Latency (ms): 3.4
Transaction Log I/O
Log Disk Writes Per Second: 196
Log Disk Write Latency (ms): 0.3

Table 19. Individual Server Metrics for Exchange Server (CB38)
Database I/O
Database Disk Transfers Per Second: 878
Database Disk Reads Per Second: 614
Database Disk Writes Per Second: 259
Database Disk Read Latency (ms): 18.3
Database Disk Write Latency (ms): 3.6
Transaction Log I/O
Log Disk Writes Per Second: 195
Log Disk Write Latency (ms): 0.4

Table 20. Individual Server Metrics for Exchange Server (CB39)
Database I/O
Database Disk Transfers Per Second: 887
Database Disk Reads Per Second: 624
Database Disk Writes Per Second: 263
Database Disk Read Latency (ms): 18.4
Database Disk Write Latency (ms): 3.6
Transaction Log I/O
Log Disk Writes Per Second: 199
Log Disk Write Latency (ms): 0.4

Table 21. Individual Server Metrics for Exchange Server (CB40)
Database I/O
Database Disk Transfers Per Second: 863
Database Disk Reads Per Second: 607
Database Disk Writes Per Second: 256
Database Disk Read Latency (ms): 18.3
Database Disk Write Latency (ms): 3.5
Transaction Log I/O
Log Disk Writes Per Second: 193
Log Disk Write Latency (ms): 0.4

Table 22. Individual Server Metrics for Exchange Server (CB41)
Database I/O
Database Disk Transfers Per Second: 863
Database Disk Reads Per Second: 607
Database Disk Writes Per Second: 256
Database Disk Read Latency (ms): 18.2
Database Disk Write Latency (ms): 3.5
Transaction Log I/O
Log Disk Writes Per Second: 194
Log Disk Write Latency (ms): 0.4

Aggregate Performance Across All Servers

The aggregate performance metrics show the sum of the I/O across all servers in the solution and the average latency across all servers in the solution.

Table 23. Aggregate Performance for Exchange Server 2013
Database I/O
Database Disk Transfers Per Second: 6986.87
Database Disk Reads Per Second: 4914.99
Database Disk Writes Per Second: 2071.88
Database Disk Read Latency (ms): 18.31
Database Disk Write Latency (ms): 3.53
Transaction Log I/O
Log Disk Writes Per Second: 1566.28
Log Disk Write Latency (ms): 0.35

Database Backup and Recovery Performance

This section covers two tests: the first measures the sequential read rate of the database files, and the second measures recovery/replay performance (playing transaction logs into the database).

Database Read-only Performance

This test measures the maximum rate at which databases can be backed up via VSS. The following table shows the average rate for a single database file.

Table 24. Database Read-only Performance
MB Read Per Second Per Database: 37.08
MB Read Per Second Total Per Server: 1107.37
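As a quick sanity check, the Table 23 aggregates can be approximated from the per-server values in Tables 15 through 22. The small discrepancies are expected, since the per-server figures are rounded while the aggregate counters are not.

```python
# Cross-check Table 23 against the rounded per-server values (Tables 15-22).
db_transfers_per_sec = [889, 862, 878, 872, 878, 887, 863, 863]  # CB34..CB41
db_read_latency_ms = [18.3, 18.4, 18.4, 18.3, 18.3, 18.4, 18.3, 18.2]

total_transfers = sum(db_transfers_per_sec)      # reported aggregate: 6986.87
mean_read_latency = sum(db_read_latency_ms) / len(db_read_latency_ms)  # reported: 18.31
print(total_transfers, mean_read_latency)
```

The sum of the rounded per-server transfer rates (6992) lands within a fraction of a percent of the reported 6986.87, and the mean read latency agrees with the reported 18.31 ms to within rounding.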

Transaction Log Recovery/Replay Performance

This test measures the maximum rate at which log files can be played against the databases. The following table shows the average rate for 500 log files played into a single storage group. Each log file is 1 MB in size.

Table 25. Transaction Log Recovery/Replay Performance
Time to Play One Log File (sec): 5.63426
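Because each log file is 1 MB, the Table 25 figure can be restated as replay throughput, which is often the more intuitive number for estimating how long a given log backlog will take to replay:

```python
# Derive replay throughput from Table 25 (each Exchange 2013 log file is 1 MB).
time_per_log_s = 5.63426
mb_per_second = 1.0 / time_per_log_s
logs_per_minute = 60.0 / time_per_log_s
print(round(mb_per_second, 3), round(logs_per_minute, 1))  # 0.177 10.6
```

At roughly 10.6 log files per minute, the 500-file test set replays in about 47 minutes per storage group.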

Conclusion

This document details a tested and robust Exchange Server 2013 Mailbox Resiliency solution capable of supporting 21,600 users with a 0.3 IOPS per-user profile and a 5 GB user mailbox size, using eight DAGs, each configured with two server nodes (one simulated). A Hitachi Unified Storage VM storage system, with 128 GB of cache and thirty-two 8 Gb/sec Fibre Channel host paths, using Hitachi Dynamic Provisioning (with two pools) and 288 3 TB 7.2K RPM SAS disks in a RAID-6 (6D+2P) configuration, was used for these tests. Testing confirmed that Hitachi Unified Storage VM is more than capable of delivering the IOPS and capacity required to support the active and replicated databases for 21,600 Exchange mailboxes configured with the specified user profile, while maintaining additional headroom to support peak throughput. The solution outlined in this document does not include data protection components, such as VSS snapshot or clone backups; it relies on the built-in Mailbox Resiliency features of Exchange Server 2013, coupled with Hitachi Unified Storage VM RAID technology, to provide high availability and protection from logical and physical failures. Adding protection requirements may affect the performance and capacity requirements of the underlying storage configuration, and as such they need to be factored into the storage design accordingly. For more information about planning Exchange Server 2013 storage architectures for Hitachi Unified Storage VM, see http://www.hds.com/ This document was developed by Hitachi Data Systems and reviewed by the Microsoft Exchange product team. The test results and data presented in this document are based on the tests defined in the ESRP test framework. Do not quote the data directly for pre-deployment verification; it is still necessary to validate the storage design for a specific customer environment.
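The per-server Jetstress target shown in Appendix A follows directly from the tested profile; a quick arithmetic check:

```python
# The Appendix A target of 810 transactional IOPS per server follows from
# the tested profile: 21,600 mailboxes x 0.3 IOPS, spread over 8 server
# nodes under test (one per DAG).
mailboxes, iops_per_mailbox, server_nodes = 21_600, 0.3, 8
total_iops = mailboxes * iops_per_mailbox
per_server_target = total_iops / server_nodes
print(total_iops, per_server_target)  # 6480.0 810.0
```

The servers achieved 862 to 889 transactional IOPS each (Tables 15 through 22), comfortably above the 810 target, which is the basis for the headroom claim above.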
The ESRP program is not designed to be a benchmarking program; its tests do not generate the maximum throughput for a given solution. Rather, it is focused on producing storage recommendations from vendors for the Exchange application. Therefore, do not use the data presented in this document for direct comparisons between solutions.

Appendix A: Test Reports

This appendix contains Jetstress test results for one of the servers used in testing this storage solution. These test results are representative of the results obtained for all of the servers tested.

Performance Test Result: CB34

Test Summary
Overall Test Result: Pass
Machine Name: CB34
Test Description: (blank)
Test Start Time: 2/26/2014 1:05:40 AM
Test End Time: 2/26/2014 3:54:35 AM
Collection Start Time: 2/26/2014 1:17:07 AM
Collection End Time: 2/26/2014 3:17:03 AM
Jetstress Version: 15.00.0658.004
ESE Version: 15.00.0516.026
Operating System: Windows Server 2008 R2 Enterprise Service Pack 1 (6.1.7601.65536)
Performance Log: C:\ESRP_PE183_R6(6+2)_5GB_mailbox_users\Performance Test\Performance_2014_2_26_1_6_52.blg

Database Sizing and Throughput
Achieved Transactional I/O per Second: 888.607
Target Transactional I/O per Second: 810
Initial Database Size (bytes): 53206164045824
Final Database Size (bytes): 53209259442176
Database Files (Count): 34

Jetstress System Parameters
Thread Count: 30
Minimum Database Cache: 1088.0 MB
Maximum Database Cache: 8704.0 MB
Insert Operations: 40%
Delete Operations: 20%
Replace Operations: 5%
Read Operations: 35%
Lazy Commits: 70%
Run Background Database Maintenance: True
Number of Copies per Database: 2

Database Configuration
Instance5240.1: Log path: C:\logluns\log1; Database: C:\dbluns\db1\Jetstress001001.edb
Instance5240.2: Log path: C:\logluns\log2; Database: C:\dbluns\db2\Jetstress002001.edb
Instance5240.3: Log path: C:\logluns\log3; Database: C:\dbluns\db3\Jetstress003001.edb
Instance5240.4: Log path: C:\logluns\log4; Database: C:\dbluns\db4\Jetstress004001.edb
Instance5240.5: Log path: C:\logluns\log5; Database: C:\dbluns\db5\Jetstress005001.edb
Instance5240.6: Log path: C:\logluns\log6; Database: C:\dbluns\db6\Jetstress006001.edb
Instance5240.7: Log path: C:\logluns\log7; Database: C:\dbluns\db7\Jetstress007001.edb
Instance5240.8: Log path: C:\logluns\log8; Database: C:\dbluns\db8\Jetstress008001.edb
Instance5240.9: Log path: C:\logluns\log9; Database: C:\dbluns\db9\Jetstress009001.edb

Instance5240.10: Log path: C:\logluns\log10; Database: C:\dbluns\db10\Jetstress010001.edb
Instance5240.11: Log path: C:\logluns\log11; Database: C:\dbluns\db11\Jetstress011001.edb
Instance5240.12: Log path: C:\logluns\log12; Database: C:\dbluns\db12\Jetstress012001.edb
Instance5240.13: Log path: C:\logluns\log13; Database: C:\dbluns\db13\Jetstress013001.edb
Instance5240.14: Log path: C:\logluns\log14; Database: C:\dbluns\db14\Jetstress014001.edb
Instance5240.15: Log path: C:\logluns\log15; Database: C:\dbluns\db15\Jetstress015001.edb
Instance5240.16: Log path: C:\logluns\log16; Database: C:\dbluns\db16\Jetstress016001.edb
Instance5240.17: Log path: C:\logluns\log17; Database: C:\dbluns\db17\Jetstress017001.edb
Instance5240.18: Log path: C:\logluns\log18; Database: C:\dbluns\db18\Jetstress018001.edb
Instance5240.19: Log path: C:\logluns\log19; Database: C:\dbluns\db19\Jetstress019001.edb
Instance5240.20: Log path: C:\logluns\log20; Database: C:\dbluns\db20\Jetstress020001.edb
Instance5240.21: Log path: C:\logluns\log21; Database: C:\dbluns\db21\Jetstress021001.edb
Instance5240.22: Log path: C:\logluns\log22; Database: C:\dbluns\db22\Jetstress022001.edb
Instance5240.23: Log path: C:\logluns\log23; Database: C:\dbluns\db23\Jetstress023001.edb
Instance5240.24: Log path: C:\logluns\log24; Database: C:\dbluns\db24\Jetstress024001.edb
Instance5240.25: Log path: C:\logluns\log25; Database: C:\dbluns\db25\Jetstress025001.edb
Instance5240.26: Log path: C:\logluns\log26; Database: C:\dbluns\db26\Jetstress026001.edb
Instance5240.27: Log path: C:\logluns\log27; Database: C:\dbluns\db27\Jetstress027001.edb

Instance5240.28: Log path: C:\logluns\log28; Database: C:\dbluns\db28\Jetstress028001.edb
Instance5240.29: Log path: C:\logluns\log29; Database: C:\dbluns\db29\Jetstress029001.edb
Instance5240.30: Log path: C:\logluns\log30; Database: C:\dbluns\db30\Jetstress030001.edb
Instance5240.31: Log path: C:\logluns\log31; Database: C:\dbluns\db31\Jetstress031001.edb
Instance5240.32: Log path: C:\logluns\log32; Database: C:\dbluns\db32\Jetstress032001.edb
Instance5240.33: Log path: C:\logluns\log33; Database: C:\dbluns\db33\Jetstress033001.edb
Instance5240.34: Log path: C:\logluns\log34; Database: C:\dbluns\db34\Jetstress034001.edb

Transactional I/O Performance

Columns per instance (MSExchange Database ==> Instances): I/O Database Reads Latency (msec), I/O Database Writes Latency (msec), I/O Database Reads/sec, I/O Database Writes/sec, I/O Database Reads Bytes, I/O Database Writes Bytes, I/O Log Reads Latency (msec), I/O Log Writes Latency (msec), I/O Log Reads/sec, I/O Log Writes/sec, I/O Log Reads Bytes, I/O Log Writes Bytes.

Instance5240.1: 18.213 1.242 18.233 7.620 33826.180 37501.541 0.000 0.344 0.000 5.745 0.000 8161.784
Instance5240.2: 18.182 1.379 18.377 7.909 33841.722 37201.298 0.000 0.344 0.000 5.966 0.000 8025.708
Instance5240.3: 18.331 1.507 18.369 7.673 33865.978 37265.127 0.000 0.348 0.000 5.910 0.000 8006.307
Instance5240.4: 18.350 1.652 18.404 7.706 33916.196 37108.702 0.000 0.349 0.000 5.780 0.000 8017.682
Instance5240.5: 18.277 1.808 18.359 7.736 33860.910 37111.490 0.000 0.357 0.000 5.883 0.000 8118.782
Instance5240.6: 18.263 1.957 18.362 7.684 33834.169 37163.725 0.000 0.343 0.000 5.739 0.000 8127.971
Instance5240.7: 18.371 2.101 18.341 7.714 33870.980 37293.624 0.000 0.350 0.000 5.888 0.000 8067.483
Instance5240.8: 18.429 2.273 18.523 7.832 33839.400 37133.826 0.000 0.341 0.000 5.782 0.000 8109.671
Instance5240.9: 18.302 2.416 18.149 7.646 33809.794 37392.366 0.000 0.351 0.000 5.879 0.000 8153.448
Instance5240.10: 18.338 2.528 18.194 7.506 33855.330 37296.031 0.000 0.344 0.000 5.780 0.000 8121.724
Instance5240.11: 18.430 2.702 18.521 7.850 33906.901 37187.891 0.000 0.347 0.000 5.782 0.000 8037.321
Instance5240.12: 18.510 2.858 18.573 7.788 33906.774 37210.561 0.000 0.342 0.000 5.793 0.000 7984.233
Instance5240.13: 18.408 2.993 18.457 7.728 33896.949 37313.352 0.000 0.351 0.000 5.793 0.000 8072.849
Instance5240.14: 18.393 3.167 18.561 7.802 33861.951 37129.460 0.000 0.341 0.000 5.801 0.000 8138.070
Instance5240.15: 18.376 3.288 18.197 7.663 33762.843 37492.286 0.000 0.344 0.000 5.981 0.000 8049.900
Instance5240.16: 18.339 3.480 18.497 7.884 33818.360 37068.677 0.000 0.340 0.000 5.867 0.000 8124.392
Instance5240.17: 18.327 3.610 18.389 7.828 33838.926 37345.381 0.000 0.340 0.000 5.960 0.000 8044.895
Instance5240.18: 18.221 3.761 18.333 7.649 33967.008 37211.147 0.000 0.338 0.000 5.803 0.000 8070.391
Instance5240.19: 18.351 3.937 18.403 7.742 33911.694 37221.285 0.000 0.335 0.000 5.741 0.000 8109.435
Instance5240.20: 18.343 4.039 18.459 7.799 33957.274 37290.857 0.000 0.344 0.000 5.825 0.000 8122.893
Instance5240.21: 18.384 4.206 18.557 7.875 33855.496 37052.323 0.000 0.333 0.000 5.928 0.000 7981.437
Instance5240.22: 18.238 4.281 18.343 7.589 33882.086 37269.628 0.000 0.345 0.000 5.779 0.000 8038.837
Instance5240.23: 18.282 4.435 18.347 7.665 33904.587 37150.047 0.000 0.338 0.000 5.850 0.000 8043.720
Instance5240.24: 18.245 4.526 18.173 7.564 33889.330 37435.148 0.000 0.341 0.000 5.831 0.000 8074.026

Background Database Maintenance I/O Performance

Columns per instance (MSExchange Database ==> Instances): Database Maintenance IO Reads/sec, Database Maintenance IO Reads Bytes.

Instance5240.1: 9.159 261953.895
Instance5240.2: 9.162 261817.332
Instance5240.3: 9.159 261933.775
Instance5240.4: 9.160 261866.941
Instance5240.5: 9.160 261828.340
Instance5240.6: 9.159 261908.367
Instance5240.7: 9.162 261818.715
Instance5240.8: 9.163 261857.123
Instance5240.9: 9.162 261818.701
Instance5240.10: 9.159 261916.512
Instance5240.11: 9.157 261952.655
Instance5240.12: 9.162 261854.310
Instance5240.13: 9.161 261843.501
Instance5240.14: 9.163 261836.866
Instance5240.15: 9.160 261924.698
Instance5240.16: 9.160 261912.391
Instance5240.17: 9.162 261861.403
Instance5240.18: 9.158 261942.506
Instance5240.19: 9.162 261851.755
Instance5240.20: 9.160 261894.452
Instance5240.21: 9.160 261901.083
Instance5240.22: 9.162 261838.416
Instance5240.23: 9.162 261847.003
Instance5240.24: 9.160 261867.663
Instance5240.25: 9.161 261909.349
Instance5240.26: 9.162 261799.999
Instance5240.27: 9.159 261943.032

Instance5240.28: 9.161 261859.934
Instance5240.29: 9.162 261885.255
Instance5240.30: 9.160 261900.208
Instance5240.31: 9.161 261897.508
Instance5240.32: 9.159 261929.008
Instance5240.33: 9.161 261889.140
Instance5240.34: 9.162 261831.715

Log Replication I/O Performance

Columns per instance (MSExchange Database ==> Instances): I/O Log Reads/sec, I/O Log Reads Bytes.

Instance5240.1: 0.134 52498.123
Instance5240.2: 0.135 53057.650
Instance5240.3: 0.134 52498.123
Instance5240.4: 0.130 51366.998
Instance5240.5: 0.133 52007.486
Instance5240.6: 0.134 53521.211
Instance5240.7: 0.137 53479.396
Instance5240.8: 0.134 52498.123
Instance5240.9: 0.138 53970.033
Instance5240.10: 0.133 52007.486
Instance5240.11: 0.133 52007.486
Instance5240.12: 0.130 51026.213
Instance5240.13: 0.134 52929.230
Instance5240.14: 0.133 52007.486
Instance5240.15: 0.139 54460.669
Instance5240.16: 0.135 52988.759
Instance5240.17: 0.137 53820.181

Instance5240.18: 0.133 52007.486
Instance5240.19: 0.133 52007.486
Instance5240.20: 0.133 52007.486
Instance5240.21: 0.134 52498.123
Instance5240.22: 0.133 52007.486
Instance5240.23: 0.134 52567.014
Instance5240.24: 0.134 52498.123
Instance5240.25: 0.134 52498.123
Instance5240.26: 0.132 51516.850
Instance5240.27: 0.137 53479.396
Instance5240.28: 0.137 53479.396
Instance5240.29: 0.134 52498.123
Instance5240.30: 0.133 52007.486
Instance5240.31: 0.135 52988.759
Instance5240.32: 0.137 53479.396
Instance5240.33: 0.137 53479.396
Instance5240.34: 0.134 52498.123

Total I/O Performance

Columns per instance (MSExchange Database ==> Instances): I/O Database Reads Latency (msec), I/O Database Writes Latency (msec), I/O Database Reads/sec, I/O Database Writes/sec, I/O Database Reads Bytes, I/O Database Writes Bytes, I/O Log Reads Latency (msec), I/O Log Writes Latency (msec), I/O Log Reads/sec, I/O Log Writes/sec, I/O Log Reads Bytes, I/O Log Writes Bytes.

Instance5240.1: 18.213 1.242 27.392 7.620 110101.405 37501.541 1.031 0.344 0.134 5.745 52498.123 8161.784
Instance5240.2: 18.182 1.379 27.539 7.909 109687.729 37201.298 1.003 0.344 0.135 5.966 53057.650 8025.708
Instance5240.3: 18.331 1.507 27.528 7.673 109750.827 37265.127 1.337 0.348 0.134 5.910 52498.123 8006.307
Instance5240.4: 18.350 1.652 27.565 7.706 109668.317 37108.702 1.173 0.349 0.130 5.780 51366.998 8017.682
Instance5240.5: 18.277 1.808 27.519 7.736 109744.400 37111.490 1.336 0.357 0.133 5.883 52007.486 8118.782

Instance5240.6: 18.263 1.957 27.521 7.684 109737.491 37163.725 1.132 0.343 0.134 5.739 53521.211 8127.971
Instance5240.7: 18.371 2.101 27.503 7.714 109810.280 37293.624 1.356 0.350 0.137 5.888 53479.396 8067.483
Instance5240.8: 18.429 2.273 27.686 7.832 109301.571 37133.826 0.851 0.341 0.134 5.782 52498.123 8109.671
Instance5240.9: 18.302 2.416 27.312 7.646 110301.477 37392.366 1.009 0.351 0.138 5.879 53970.033 8153.448
Instance5240.10: 18.338 2.528 27.353 7.506 110222.685 37296.031 1.248 0.344 0.133 5.780 52007.486 8121.724
Instance5240.11: 18.430 2.702 27.678 7.850 109353.154 37187.891 1.418 0.347 0.133 5.782 52007.486 8037.321
Instance5240.12: 18.510 2.858 27.735 7.788 109209.843 37210.561 1.145 0.342 0.130 5.793 51026.213 7984.233
Instance5240.13: 18.408 2.993 27.618 7.728 109506.939 37313.352 1.305 0.351 0.134 5.793 52929.230 8072.849
Instance5240.14: 18.393 3.167 27.723 7.802 109208.413 37129.460 1.221 0.341 0.133 5.801 52007.486 8138.070
Instance5240.15: 18.376 3.288 27.357 7.663 110159.626 37492.286 0.734 0.344 0.139 5.981 54460.669 8049.900
Instance5240.16: 18.339 3.480 27.657 7.884 109364.311 37068.677 0.992 0.340 0.135 5.867 52988.759 8124.392
Instance5240.17: 18.327 3.610 27.551 7.828 109664.023 37345.381 1.313 0.340 0.137 5.960 53820.181 8044.895
Instance5240.18: 18.221 3.761 27.491 7.649 109911.794 37211.147 0.863 0.338 0.133 5.803 52007.486 8070.391
Instance5240.19: 18.351 3.937 27.565 7.742 109674.010 37221.285 0.854 0.335 0.133 5.741 52007.486 8109.435
Instance5240.20: 18.343 4.039 27.619 7.799 109554.414 37290.857 1.156 0.344 0.133 5.825 52007.486 8122.893
Instance5240.21: 18.384 4.206 27.718 7.875 109222.586 37052.323 1.056 0.333 0.134 5.928 52498.123 7981.437
Instance5240.22: 18.238 4.281 27.505 7.589 109811.958 37269.628 1.263 0.345 0.133 5.779 52007.486 8038.837
Instance5240.23: 18.282 4.435 27.509 7.665 109822.666 37150.047 1.362 0.338 0.134 5.850 52567.014 8043.720
Instance5240.24: 18.245 4.526 27.333 7.564 110289.055 37435.148 1.155 0.341 0.134 5.831 52498.123 8074.026
Instance5240.25: 18.256 4.691 27.548 7.772 109717.115 37392.589 1.404 0.337 0.134 5.877 52498.123 8053.384
Instance5240.26: 18.296 4.822 27.783 7.874 109123.294 36994.470 0.934 0.338 0.132 5.919 51516.850 7898.185
Instance5240.27: 18.277 4.958 27.468 7.650 109928.858 37350.841 0.982 0.342 0.137 5.895 53479.396 8138.389
Instance5240.28: 18.245 5.103 27.539 7.796 109689.059 37227.933 1.086 0.343 0.137 5.898 53479.396 8123.838
Instance5240.29: 18.183 5.248 27.502 7.730 109816.302 37323.016 1.036 0.348 0.134 5.943 52498.123 7938.732
Instance5240.30: 18.238 5.373 27.499 7.700 109872.312 37337.967 1.213 0.347 0.133 5.743 52007.486 8195.721
Instance5240.31: 18.331 5.509 27.534 7.695 109723.402 37340.114 1.106 0.344 0.135 5.918 52988.759 8105.432
Instance5240.32: 18.265 5.696 27.711 7.944 109221.779 37160.015 1.106 0.334 0.137 5.949 53479.396 8095.022
Instance5240.33: 18.080 5.785 27.453 7.734 109961.041 37396.417 1.382 0.342 0.137 5.861 53479.396 8151.744
Instance5240.34: 18.326 5.982 27.724 7.987 109215.364 37066.816 0.901 0.338 0.134 5.905 52498.123 8121.877

Host System Performance

Counter: Average / Minimum / Maximum
% Processor Time: 0.415 / 0.150 / 2.712
Available MBytes: 52133.775 / 52115.000 / 52698.000
Free System Page Table Entries: 33557986.387 / 33557775.000 / 33557993.000
Transition Pages RePurposed/sec: 0.000 / 0.000 / 0.000
Pool Nonpaged Bytes: 103393870.686 / 103325696.000 / 103419904.000
Pool Paged Bytes: 166121075.335 / 166064128.000 / 167968768.000
Database Page Fault Stalls/sec: 0.000 / 0.000 / 0.000

Test Log
2/26/2014 1:05:40 AM -- Preparing for testing...
2/26/2014 1:06:15 AM -- Attaching databases...
2/26/2014 1:06:15 AM -- Preparations for testing are complete.
2/26/2014 1:06:15 AM -- Starting transaction dispatch..
2/26/2014 1:06:15 AM -- Database cache settings: (minimum: 1.1 GB, maximum: 8.5 GB)
2/26/2014 1:06:15 AM -- Database flush thresholds: (start: 87.0 MB, stop: 174.1 MB)
2/26/2014 1:06:52 AM -- Database read latency thresholds: (average: 20 msec/read, maximum: 100 msec/read).
2/26/2014 1:06:52 AM -- Log write latency thresholds: (average: 10 msec/write, maximum: 100 msec/write).
2/26/2014 1:07:30 AM -- Operation mix: Sessions 30, Inserts 40%, Deletes 20%, Replaces 5%, Reads 35%, Lazy Commits 70%.
2/26/2014 1:07:30 AM -- Performance logging started (interval: 15000 ms).
2/26/2014 1:07:30 AM -- Attaining prerequisites:
2/26/2014 1:17:07 AM -- \MSExchange Database(JetstressWin)\Database Cache Size, Last: 8222589000.0 (lower bound: 8214125000.0, upper bound: none)
2/26/2014 3:17:07 AM -- Performance logging has ended.
2/26/2014 3:54:28 AM -- JetInterop batch transaction stats: 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465, 6465 and 6464.
2/26/2014 3:54:28 AM -- Dispatching transactions ends.
2/26/2014 3:54:28 AM -- Shutting down databases...
2/26/2014 3:54:35 AM -- Instance5240.1 (complete), Instance5240.2 (complete), Instance5240.3 (complete), Instance5240.4 (complete),