
Exchange 2010 Tested Solutions: 9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions

Rob Simpson, Program Manager, Microsoft Exchange Server; Akshai Parthasarathy, Systems Engineer, Dell; Casey Birch, Product Marketing Manager for Exchange Solutions, Dell

December 2010

In Exchange 2010 Tested Solutions, Microsoft and participating server, storage, and network partners examine common customer scenarios and key design decision points facing customers who plan to deploy Microsoft Exchange Server 2010. Through this series of white papers, we provide examples of well-designed, cost-effective Exchange 2010 solutions deployed on hardware offered by some of our server, storage, and network partners. You can download this document from the Microsoft Download Center.

Applies to:
Microsoft Exchange Server 2010 release to manufacturing (RTM)
Microsoft Exchange Server 2010 with Service Pack 1 (SP1)
Windows Server 2008 R2
Windows Server 2008 R2 Hyper-V

Table of Contents

Solution Requirements
Customer Requirements
Mailbox Profile Requirements
Geographic Location Requirements
Server and Data Protection Requirements
Design Assumptions
Server Configuration Assumptions
Storage Configuration Assumptions
Solution Design
Determine High Availability Strategy

Estimate Mailbox Storage Capacity Requirements
Estimate Mailbox I/O Requirements
Determine Storage Type
Choose Storage Solution
Determine Number of EqualLogic Arrays Required
Estimate Mailbox Memory Requirements
Estimate Mailbox CPU Requirements
Determine Whether Server Virtualization Will Be Used
Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines
Determine Server Model for Hyper-V Root Server
Determine the CPU Capacity of the Virtual Machines
Determine Number of Mailbox Server Virtual Machines Required
Determine Number of Mailboxes per Mailbox Server
Determine Memory Required Per Mailbox Server
Determine Number of Client Access and Hub Transport Server Combo Virtual Machines Required
Determine Memory Required per Combined Client Access and Hub Transport Virtual Machines
Determine Virtual Machine Distribution
Determine Memory Required per Root Server
Determine Minimum Number of Databases Required
Identify Failure Domains Impacting Database Copy Layout
Design Database Copy Layout
Determine Storage Design
Determine Placement of the File Share Witness
Plan Namespaces
Determine Client Access Server Array and Load Balancing Strategy
Determine Hardware Load Balancing Solution
Determine Hardware Load Balancing Device Resiliency Strategy
Determine Hardware Load Balancing Methods
Solution Overview
Logical Solution Diagram
Physical Solution Diagram
Server Hardware Summary
Client Access and Hub Transport Server Configuration
Mailbox Server Configuration

Database Layout
Storage Hardware Summary
Storage Configuration
Network Switch Hardware Summary
Load Balancer Hardware Summary
Solution Validation Methodology
Storage Design Validation Methodology
Server Design Validation
Functional Validation Tests
Datacenter Switchover Validation
Primary Datacenter Service Restoration Validation
Storage Design Validation Results
Server Design Validation Results

This document provides an example of how to design, test, and validate an Exchange Server 2010 solution for environments with 9,000 mailboxes deployed on Dell server and storage solutions and F5 load balancing solutions. One of the key challenges with designing Exchange 2010 environments is examining the current server and storage options available and making the right hardware choices that provide the best value over the anticipated life of the solution. Following the step-by-step methodology in this document, we walk through the important design decision points that help address these key challenges while ensuring that the customer's core business requirements are met. After we have determined the optimal solution for this customer, the solution undergoes a standard validation process to ensure that it holds up under simulated production workloads for normal operating, maintenance, and failure scenarios.

Solution Requirements

The following tables summarize the key Exchange and hardware components of this solution.

Exchange components
Target mailbox count: 9,000
Target mailbox size: 750 megabytes (MB)
Target message profile: 103 messages per day
Database copy count: 3
Volume Shadow Copy Service (VSS) backup: None
Site resiliency: Yes
Virtualization: Hyper-V
Exchange server count: 18 virtual machines (VMs)
Physical server count: 9

Hardware components
Server partner: Dell
Server model: PowerEdge M610
Server type: Blade
Processor: Intel Xeon X5550
Storage partner: Dell EqualLogic
Storage type: Internet SCSI (iSCSI) storage area network (SAN)
Disk type: 1 terabyte 7,200 rpm Serial ATA (SATA) 3.5"

Customer Requirements

One of the most important first steps in Exchange solution design is to accurately summarize the business and technical requirements that are critical to making the correct design decisions. The following sections outline the customer requirements for this solution.

Mailbox Profile Requirements

Determine mailbox profile requirements as accurately as possible because these requirements may impact all other components of the design. If Exchange is new to you, you may have to make some educated guesses. If you have an existing Exchange environment, you can use the Microsoft Exchange Server Profile Analyzer tool to assist with gathering most of this information. The following tables summarize the mailbox profile requirements for this solution.

Mailbox count requirements
Mailbox count (total number of mailboxes including resource mailboxes): 9,000
Projected growth percent (%) in mailbox count (projected increase in mailbox count over the life of the solution): 0%
Expected mailbox concurrency % (maximum number of active mailboxes at any time): 100%

Mailbox size requirements
Average mailbox size in MB: 750 MB (742 MB weighted average)
Tiered mailbox size: Yes — 4 gigabytes (GB) for 450 mailboxes, 1 GB for 900 mailboxes, 512 MB for 7,650 mailboxes
Average mailbox archive size in MB: 0
Projected growth (%) in mailbox size in MB (projected increase in mailbox size over the life of the solution): included in the 750 MB target average mailbox size

Mailbox profile requirements
Target message profile (average total number of messages sent plus received per user per day): 103 messages per day
Tiered message profile: Yes — 150 messages per day for 450 mailboxes, 100 messages per day for 8,550 mailboxes
Target average message size in KB: 75
% in MAPI cached mode: 100
% in MAPI online mode: 0
% in Outlook Anywhere cached mode: 0
% in Outlook Web Access: 0
% in Exchange ActiveSync: 0

Geographic Location Requirements

Understanding the distribution of mailbox users and datacenters is important when making design decisions about high availability and site resiliency. The following table outlines the geographic distribution of people who will be using the Exchange system.

Geographic distribution of people
Number of major sites containing mailbox users: 1
Number of mailbox users in site 1: 9,000
Number of mailbox users in site 2: 0

The following table outlines the geographic distribution of datacenters that could potentially support the Exchange infrastructure.

Geographic distribution of datacenters
Total number of datacenters: 2
Number of active mailboxes in proximity to datacenter 1: 9,000
Number of active mailboxes in proximity to datacenter 2: 0
Requirement for Exchange to reside in more than one datacenter: Yes

Server and Data Protection Requirements

It's also important to define server and data protection requirements for the environment because these requirements will support design decisions about high availability and site resiliency. The following table identifies server protection requirements.

Server protection requirements
Number of simultaneous server or VM failures within site: 1
Number of simultaneous server or VM failures during site failure: 0

The following table identifies data protection requirements.

Data protection requirements
Requirement to maintain a backup of the Exchange databases outside of the Exchange environment (for example, third-party backup solution): No
Requirement to maintain copies of the Exchange databases within the Exchange environment (for example, Exchange native data protection): Yes
Requirement to maintain multiple copies of mailbox data in the primary datacenter: Yes
Requirement to maintain multiple copies of mailbox data in a secondary datacenter: Yes
Requirement to maintain a lagged copy of any Exchange databases: No
Lagged copy period in days: Not applicable
Target number of database copies: 3
Deleted Items folder retention window in days: 14 days

Design Assumptions

This section includes information that isn't typically collected as part of customer requirements, but is critical to both the design and the approach to validating the design.

Server Configuration Assumptions

The following table describes the CPU utilization targets for normal operating conditions, and for site server failure or server maintenance conditions.

Server utilization targets
Normal operating for Mailbox servers: <70%
Normal operating for Client Access servers: <70%
Normal operating for Hub Transport servers: <70%
Normal operating for multiple server roles (Client Access, Hub Transport, and Mailbox servers): <70%
Normal operating for multiple server roles (Client Access and Hub Transport servers): <70%
Node failure for Mailbox servers: <80%
Node failure for Client Access servers: <80%
Node failure for Hub Transport servers: <80%
Node failure for multiple server roles (Client Access, Hub Transport, and Mailbox servers): <80%
Node failure for multiple server roles (Client Access and Hub Transport servers): <80%

Storage Configuration Assumptions

The following tables summarize some data configuration and input/output (I/O) assumptions made when designing the storage configuration.

Data configuration assumptions
Data overhead factor: 20%
Mailbox moves per week: 1%
Dedicated maintenance or restore logical unit number (LUN): No
LUN free space: 20%
Log shipping compression enabled: Yes
Log shipping encryption enabled: Yes

I/O configuration assumptions
I/O overhead factor: 20%
Additional I/O requirements: None

Solution Design

The following section provides a step-by-step methodology used to design this solution. This methodology takes customer requirements and design assumptions and walks through the key design decision points that need to be made when designing an Exchange 2010 environment.

Determine High Availability Strategy

When designing an Exchange 2010 environment, many design decision points for high availability strategies impact other design components. We recommend that you determine your high availability strategy as the first step in the design process. We highly recommend that you review the following information prior to starting this step:

Understanding High Availability Factors
Planning for High Availability and Site Resilience
Understanding Backup, Restore and Disaster Recovery

Step 1: Determine whether site resiliency is required

If you have more than one datacenter, you must decide whether to deploy Exchange infrastructure in a single datacenter or distribute it across two or more datacenters. The organization's recovery service level agreements (SLAs) should define what level of service is required following a primary datacenter failure. This information should form the basis for this decision.

*Design Decision Point*

In this example, there is a service level agreement that requires the ability to restore the messaging service within four hours in the event of a primary datacenter failure. Therefore, the customer must deploy Exchange infrastructure in a secondary datacenter for disaster recovery purposes.

Step 2: Determine relationship between mailbox user locations and datacenter locations

In this step, we look at whether all mailbox users are located primarily in one site or if they're distributed across many sites, and whether those sites are associated with datacenters. If they're distributed across many sites and there are datacenters associated with those sites, you need to determine if there's a requirement to maintain affinity between mailbox users and the datacenter associated with that site.

*Design Decision Point*

In this example, all of the active users are located in one primary location. The primary location is in geographic proximity to the primary datacenter, and therefore there's a desire for all active mailboxes to reside in the primary datacenter during normal operating conditions.

Step 3: Determine database distribution model

Because the customer has decided to deploy Exchange infrastructure in more than one physical location, the customer needs to determine which database distribution model best meets the needs of the organization. There are three database distribution models:

Active/Passive distribution: Active mailbox database copies are deployed in the primary datacenter and only passive database copies are deployed in a secondary datacenter. The secondary datacenter serves as a standby datacenter, and no active mailboxes are hosted in the datacenter under normal operating conditions. In the event of an outage impacting the primary datacenter, a manual switchover to the secondary datacenter is performed and active databases are hosted there until the primary datacenter returns online.

Active/Passive distribution (diagram)

Active/Active distribution (single DAG): Active mailbox databases are deployed in the primary and secondary datacenters. A corresponding passive copy is located in the alternate datacenter. All Mailbox servers are members of a single database availability group (DAG). In this model, the wide area network (WAN) connection between the two datacenters is potentially a single point of failure. Loss of the WAN connection results in Mailbox servers in one of the datacenters going into a failed state due to loss of quorum.

Active/Active distribution (single DAG) (diagram)

Active/Active distribution (multiple DAGs): This model leverages multiple DAGs to remove WAN connectivity as a single point of failure. One DAG has active database copies in the first datacenter and its corresponding passive database copies in the second datacenter. The second DAG has active database copies in the second datacenter and its corresponding passive database copies in the first datacenter. In the event of loss of WAN connectivity, the active copies in each site continue to provide database availability to local mailbox users.

Active/Active distribution (multiple DAGs) (diagram)

*Design Decision Point*

In this example, active mailbox users are only in a single location, and only the secondary datacenter will be used in the event that the primary datacenter fails. Therefore, an Active/Passive distribution model is the obvious choice.

Step 4: Determine backup and database resiliency strategy

Exchange 2010 includes several new features and core changes that, when deployed and configured correctly, can provide native data protection that eliminates the need to make traditional data backups. Backups are traditionally used for disaster recovery, recovery of accidentally deleted items, long-term data storage, and point-in-time database recovery. Exchange 2010 can address all of these scenarios without the need for traditional backups:

Disaster recovery: In the event of a hardware or software failure, multiple database copies in a DAG enable high availability with fast failover and no data loss. DAGs can be extended to multiple sites and can provide resilience against datacenter failures.

Recovery of accidentally deleted items: With the new Recoverable Items folder in Exchange 2010 and the hold policy that can be applied to it, it's possible to retain all deleted and modified data for a specified period of time, so recovery of these items is easier and faster. For more information, see Messaging Policy and Compliance, Understanding Recoverable Items, and Understanding Retention Tags and Retention Policies.

Long-term data storage: Sometimes, backups also serve an archival purpose. Typically, tape is used to preserve point-in-time snapshots of data for extended periods of time as governed by compliance requirements. The new archiving, multi-mailbox search, and message retention features in Exchange 2010 provide a mechanism to efficiently preserve data in an end-user accessible manner for extended periods of time. For more information, see Understanding Personal Archives, Understanding Multi-Mailbox Search, and Understanding Retention Tags and Retention Policies.

Point-in-time database snapshot: If a past point-in-time copy of mailbox data is a requirement for your organization, Exchange provides the ability to create a lagged copy in a DAG environment. This can be useful in the rare event that there's a logical corruption that replicates across the databases in the DAG, resulting in a need to return to a previous point in time. It may also be useful if an administrator accidentally deletes mailboxes or user data.

There are technical reasons and several issues that you should consider before using the features built into Exchange 2010 as a replacement for traditional backups. Prior to making this decision, see Understanding Backup, Restore and Disaster Recovery.

*Design Decision Point*

In this example, maintaining tape backups has been difficult, and testing and validating restore procedures hasn't occurred on a regular basis. Therefore, using Exchange native data protection in place of traditional backups as the database resiliency strategy is preferred.

Step 5: Determine number of database copies required

There are a number of factors to consider when determining the number of database copies that you'll deploy. The first is whether you're using a third-party backup solution. In the previous step, this decision was made. We strongly recommend deploying a minimum of three copies of a mailbox database before eliminating traditional forms of protection for the database, such as Redundant Array of Independent Disks (RAID) or traditional VSS-based backups. Prior to making this decision, see Understanding Mailbox Database Copies.

*Design Decision Point*

In the previous step, it was decided not to deploy a third-party backup solution. As a result, the design should have a minimum of three copies of each database. This ensures that both the recovery time objective and recovery point objective requirements are met.

Step 6: Determine database copy type

There are two types of database copies:

High availability database copy: This database copy is configured with a replay lag time of zero. As the name implies, high availability database copies are kept up-to-date by the system, can be automatically activated by the system, and are used to provide high availability for mailbox service and data.

Lagged database copy: This database copy is configured to delay transaction log replay for a period of time. Lagged database copies are designed to provide point-in-time protection, which can be used to recover from store logical corruptions, administrative errors (for example, deleting or purging a disconnected mailbox), and automation errors (for example, bulk purging of disconnected mailboxes).

*Design Decision Point*

In this example, all three mailbox database copies will be deployed as high availability database copies. The primary need for a lagged copy is to provide the ability to recover single deleted items. This requirement can be met using the deleted items retention feature.

Step 7: Determine number of database availability groups

A DAG is the base component of the high availability and site resilience framework built into Exchange 2010. A DAG is a group of up to 16 Mailbox servers that hosts a set of replicated databases and provides automatic database-level recovery from failures that affect individual servers or databases. A DAG is a boundary for mailbox database replication, database and server switchovers and failovers, and for an internal component called Active Manager. Active Manager is an Exchange 2010 component that manages switchovers and failovers, and it runs on every server in a DAG.

From a planning perspective, you should try to minimize the number of DAGs deployed. You should consider going with more than one DAG if:

You deploy more than 16 Mailbox servers.
You have active mailbox users in multiple sites (active/active site configuration).
You require separate DAG-level administrative boundaries.
You have Mailbox servers in separate domains. (DAG is domain bound.)

*Design Decision Point*

In a previous step, it was decided that the database distribution model was going to be active/passive. This model doesn't require multiple DAGs to be deployed. This example isn't likely to require more than 16 Mailbox servers for 9,000 mailboxes, and there is no requirement for separate DAG-level administrative boundaries. Therefore, a single DAG will be used in this design.

Step 8: Determine Mailbox server resiliency strategy

Exchange 2010 has been re-engineered for mailbox resiliency. Automatic failover protection is now provided at the mailbox database level instead of at the server level. You can strategically distribute active and passive database copies to Mailbox servers within a DAG. Determining how many database copies you plan to activate on a per-server basis is a key aspect of Exchange 2010 capacity planning. There are different database distribution models that you can deploy, but generally we recommend one of the following:

Design for all copies activated: In this model, the Mailbox server role is sized to accommodate the activation of all database copies on the server. For example, a Mailbox server may host four database copies. During normal operating conditions, the server may have two active database copies and two passive database copies. During a failure or maintenance event, all four database copies would become active on the Mailbox server.

This solution is usually deployed in pairs. For example, if deploying four servers, the first pair is servers MBX1 and MBX2, and the second pair is servers MBX3 and MBX4. In addition, when designing for this model, you will size each Mailbox server for no more than 40 percent of available resources during normal operating conditions. In a site resilient deployment with three database copies and six servers, this model can be deployed in sets of three servers, with the third server residing in the secondary datacenter. This model provides a three-server building block for solutions using an active/passive site resiliency model.

This model can be used in the following scenarios:

Active/Passive multisite configuration where failure domains (for example, racks, blade enclosures, and storage arrays) require easy isolation of database copies in the primary datacenter
Active/Passive multisite configuration where anticipated growth may warrant easy addition of logical units of scale
Configurations that aren't required to survive the simultaneous loss of any two Mailbox servers in the DAG

This model requires servers to be deployed in pairs for single site deployments and sets of three for multisite deployments. The following table illustrates a sample database layout for this model.

Design for all copies activated (sample database layout table)

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations
C2 = passive copy (activation preference value of 2) during normal operations
C3 = passive copy (activation preference value of 3) during site failure event

Design for targeted failure scenarios: In this model, the Mailbox server role is designed to accommodate the activation of a subset of the database copies on the server.

The number of database copies in the subset will depend on the specific failure scenario that you're designing for. The main goal of this design is to evenly distribute active database load across the remaining Mailbox servers in the DAG.

This model should be used in the following scenarios:

All single site configurations with three or more database copies
Configurations required to survive the simultaneous loss of any two Mailbox servers in the DAG

The DAG design for this model requires between 3 and 16 Mailbox servers. The following table illustrates a sample database layout for this model.

Design for targeted failure scenarios (sample database layout table)

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations
C2 = passive copy (activation preference value of 2) during normal operations
C3 = passive copy (activation preference value of 3) during normal operations

*Design Decision Point*

In a previous step, it was decided to deploy an Active/Passive database distribution model with two high availability database copies in the primary datacenter and one high availability copy in the secondary datacenter. Because the two high availability copies in the primary datacenter are usually deployed in separate hardware failure domains, this model usually results in a Mailbox server resiliency strategy that designs for all copies being activated.

Step 9: Determine number of Mailbox servers and DAGs

The number of Mailbox servers required to support the workload and the minimum number of Mailbox servers required to support the DAG design may be different. In this step, a preliminary result is obtained. The final number of Mailbox servers will be determined in a later step.

*Design Decision Point*

This example uses three high availability database copies. To support three copies, a minimum of three Mailbox servers in the DAG is required. In an active/passive configuration, two of the servers will reside in the primary datacenter, and the third server will reside in the secondary datacenter. In this model, the number of servers in the DAG should be deployed in multiples of three. The following table outlines the possible configurations.

Number of Mailbox servers and DAGs
Primary datacenter / Secondary datacenter / Total Mailbox server count
2 / 1 / 3
4 / 2 / 6
6 / 3 / 9
8 / 4 / 12

Estimate Mailbox Storage Capacity Requirements

Many factors influence the storage capacity requirements for the Mailbox server role. For additional information, we recommend that you review Understanding Mailbox Database and Log Capacity Factors.

The following steps outline how to calculate mailbox capacity requirements. These requirements will then be used to make decisions about which storage solution options meet the capacity requirements. A later section covers additional calculations required to properly design the storage layout on the chosen storage platform.

Microsoft has created a Mailbox Server Role Requirements Calculator that will do most of this work for you. To download the calculator, see E2010 Mailbox Server Role Requirements Calculator. For additional information about using the calculator, see Exchange 2010 Mailbox Server Role Requirements Calculator.

Step 1: Calculate mailbox size on disk

Before attempting to determine what your total storage requirements are, you should know what the mailbox size on disk will be. A full mailbox with a 1-GB quota requires more than 1 GB of disk space because you have to account for the prohibit send/receive limit, the number of messages the user sends or receives per day, the Deleted Items folder retention window (with or without calendar version logging and single item recovery enabled), and the average database daily variations per mailbox. The Mailbox Server Role Requirements Calculator does these calculations for you. You can also use the following information to do the calculations manually.

The following calculations are used to determine the mailbox size on disk for the three mailbox tiers in this solution:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Whitespace = 100 messages per day × 0.073 MB = 7.3 MB
Dumpster = (100 messages per day × 0.073 MB × 14 days) + (512 MB × 0.012) + (512 MB × 0.058) = 138 MB
Mailbox size on disk = mailbox limit + whitespace + dumpster = 512 MB + 7.3 MB + 138 MB = 657 MB

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Whitespace = 100 messages per day × 0.073 MB = 7.3 MB
Dumpster = (100 messages per day × 0.073 MB × 14 days) + (1024 MB × 0.012) + (1024 MB × 0.058) = 174 MB
Mailbox size on disk = mailbox limit + whitespace + dumpster = 1024 MB + 7.3 MB + 174 MB = 1205 MB

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Whitespace = 150 messages per day × 0.073 MB = 11 MB
Dumpster = (150 messages per day × 0.073 MB × 14 days) + (4096 MB × 0.012) + (4096 MB × 0.058) = 441 MB
Mailbox size on disk = mailbox limit + whitespace + dumpster = 4096 MB + 11 MB + 441 MB = 4548 MB

Average mailbox size on disk = [(657 MB × 7650) + (1205 MB × 900) + (4548 MB × 450)] ÷ 9000 = 907 MB
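For readers who want to script these checks, here is a minimal sketch of the Step 1 math in Python. The quotas, message profiles, and per-tier mailbox counts (7,650 / 900 / 450) come from this solution's requirements; the 0.012 and 0.058 dumpster factors are the deleted item retention and calendar version logging/single item recovery overheads used in the calculations above.

```python
# Mailbox size on disk per the Step 1 formulas above.
AVG_MSG_MB = 75 / 1024  # 75 KB average message size, expressed in MB

def mailbox_size_on_disk(quota_mb, msgs_per_day, retention_days=14):
    whitespace = msgs_per_day * AVG_MSG_MB
    dumpster = (msgs_per_day * AVG_MSG_MB * retention_days  # retained items
                + quota_mb * 0.012   # deleted item retention overhead
                + quota_mb * 0.058)  # calendar logging + single item recovery
    return quota_mb + whitespace + dumpster

tiers = [  # (quota MB, messages per day, mailbox count)
    (512, 100, 7650),   # Tier 1
    (1024, 100, 900),   # Tier 2
    (4096, 150, 450),   # Tier 3
]

total_mb = sum(mailbox_size_on_disk(q, m) * n for q, m, n in tiers)
mailboxes = sum(n for _, _, n in tiers)
print(f"average mailbox size on disk: {total_mb / mailboxes:.0f} MB")  # ~907 MB
```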

Step 2: Calculate database storage capacity requirements

In this step, the high level storage capacity required for all mailbox databases is determined. The calculated capacity includes database size, catalog index size, and 20 percent free space. To determine the storage capacity required for all databases, use the following formulas:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead) = (7650 × 657 MB) × 1.2 = 6,031,260 MB = 5890 GB
Database index size = 10% of database size = 589 GB
Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space = (5890 + 589) ÷ 0.80 = 8099 GB

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead) = (900 × 1205 MB) × 1.2 = 1,301,400 MB = 1271 GB
Database index size = 10% of database size = 127 GB
Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space = (1271 + 127) ÷ 0.80 = 1747 GB

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Database size = (number of mailboxes × mailbox size on disk × database overhead growth factor) × (20% data overhead) = (450 × 4548 MB) × 1.2 = 2,455,920 MB = 2400 GB
Database index size = 10% of database size = 240 GB
Total database capacity = (database size + index size) ÷ 0.80 to add 20% volume free space = (2400 + 240) ÷ 0.80 = 3301 GB

Total database capacity (all tiers) = 8099 + 1747 + 3301 = 13,147 GB = approximately 12.8 terabytes
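Written compactly, each tier's calculation above collapses to a single multiplier on the raw database size (add 10 percent for the index, then divide by 0.8 for the free space):

```latex
\text{DatabaseSize} = N_{\text{mailboxes}} \times S_{\text{disk}} \times 1.2,
\qquad
\text{Capacity} = \frac{1.1 \times \text{DatabaseSize}}{0.8} = 1.375 \times \text{DatabaseSize}
```

For Tier 1, 1.375 × 5890 GB = 8099 GB, matching the figure above.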

Step 3: Calculate transaction log storage capacity requirements

To ensure that the Mailbox server doesn't sustain any outages as a result of space allocation issues, the transaction logs also need to be sized to accommodate all of the logs that will be generated during the backup set. Provided that this architecture is leveraging the mailbox resiliency and single item recovery features as the backup architecture, the log capacity should allocate for three times the daily log generation rate in the event that a failed copy isn't repaired for three days. (Any failed copy prevents log truncation from occurring.) In the event that the server isn't back online within three days, you would want to temporarily remove the copy to allow truncation to occur. To determine the storage capacity required for all transaction logs, use the following formulas:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead) = (1 MB × 20 × 3 × 7650) + (0.01 × 7650 × 512 MB) = 498,168 MB = 487 GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space = 487 ÷ 0.80 = 608 GB

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead) = (1 MB × 20 × 3 × 900) + (0.01 × 900 × 1024 MB) = 63,216 MB = 62 GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space = 62 ÷ 0.80 = 77 GB

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Log files size = (log file size × number of logs per mailbox per day × number of days required to replace failed infrastructure × number of mailbox users) + (1% mailbox move overhead) = (1 MB × 30 × 3 × 450) + (0.01 × 450 × 4096 MB) = 58,932 MB = 58 GB
Total log capacity = log files size ÷ 0.80 to add 20% volume free space = 58 ÷ 0.80 = 72 GB

Total log capacity (all tiers) = 608 + 77 + 72 = 757 GB
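The same math as a runnable sketch, assuming the standard Exchange 2010 log generation rates for these profiles (20 logs per mailbox per day at 100 messages per day, 30 at 150) and charging the 1 percent weekly mailbox move overhead against the mailbox quota, as the figures above imply:

```python
# Transaction log capacity per the Step 3 formulas above.
tiers = [  # (logs per mailbox per day, mailbox count, quota MB)
    (20, 7650, 512),   # Tier 1
    (20, 900, 1024),   # Tier 2
    (30, 450, 4096),   # Tier 3
]
LOG_MB = 1.0      # size of one transaction log file
DAYS = 3          # days tolerated before removing a failed copy
MOVE_RATE = 0.01  # 1% of mailboxes moved per week

total_gb = 0.0
for logs_per_day, mailboxes, quota_mb in tiers:
    size_mb = LOG_MB * logs_per_day * DAYS * mailboxes + MOVE_RATE * mailboxes * quota_mb
    total_gb += (size_mb / 1024) / 0.80  # add 20% volume free space
print(f"total log capacity: {total_gb:.0f} GB")  # ~757 GB
```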

Step 4: Determine total storage capacity requirements

The following table summarizes the high level storage capacity requirements for this solution. In a later step, you will use this information to make decisions about which storage solution to deploy. You will then take a closer look at specific storage requirements in later steps.

Summary of storage capacity requirements
Average mailbox size on disk (MB): 907
Database space required (GB): 13,147
Log space required (GB): 757
Total space required (GB): 13,904
Total space required for three database copies (GB): 41,712
Total space required for three database copies (terabytes): 40.7

Estimate Mailbox I/O Requirements

When designing an Exchange environment, you need an understanding of database and log performance factors. We recommend that you review Understanding Database and Log Performance Factors.

Calculate mailbox I/O requirements

Because it's one of the key transactional I/O metrics needed for adequately sizing storage, you should understand the amount of database I/O per second (IOPS) consumed by each mailbox user. Pure sequential I/O operations aren't factored in the IOPS per Mailbox server calculation because storage subsystems can handle sequential I/O much more efficiently than random I/O. These operations include background database maintenance, log transactional I/O, and log replication I/O. In this step, you calculate the total IOPS required to support all mailbox users, using the following:

Note: To determine the IOPS profile for a different message profile, see the table "Database cache and estimated IOPS per mailbox based on message activity" in Understanding Database and Log Performance Factors.

Total required IOPS = IOPS per mailbox user × number of mailboxes × I/O overhead factor

Tier 1 (100 messages per day) = 0.10 × 7650 × 1.2 = 918
Tier 2 (100 messages per day) = 0.10 × 900 × 1.2 = 108
Tier 3 (150 messages per day) = 0.15 × 450 × 1.2 = 81

Total required IOPS (all tiers) = 1107
Average IOPS per mailbox = 1107 ÷ 9000 = 0.12

The high level storage IOPS requirement is approximately 1,107. When choosing a storage solution, ensure that the solution meets this requirement.
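The same arithmetic as a short sketch; the per-profile IOPS figures (0.10 and 0.15) are the table values that the totals above imply:

```python
# Transactional IOPS estimate per the calculation above.
tiers = [  # (IOPS per mailbox, mailbox count)
    (0.10, 7650),  # Tier 1: 100 messages per day
    (0.10, 900),   # Tier 2: 100 messages per day
    (0.15, 450),   # Tier 3: 150 messages per day
]
OVERHEAD = 1.2  # 20% I/O overhead factor

total = sum(iops * n * OVERHEAD for iops, n in tiers)
print(f"total required IOPS: {total:.0f}")              # ~1107
print(f"average IOPS per mailbox: {total / 9000:.2f}")  # ~0.12
```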

Determine Storage Type

Exchange 2010 includes improvements in performance, reliability, and high availability that enable organizations to run Exchange on a wide range of storage options. When examining the storage options available, being able to balance the performance, capacity, manageability, and cost requirements is essential to achieving a successful storage solution for Exchange. For more information about choosing a storage solution for Exchange 2010, see Mailbox Server Storage Design.

Determine whether you prefer an internal or external storage solution

A number of server models on the market today support from 8 through 16 internal disks. These servers are a fit for some Exchange deployments and provide a solid solution at a low price point. If your storage capacity and I/O requirements are met with internal storage and you don't have a specific requirement to use external storage, you should consider using server models with internal disks for Exchange deployments. If your storage and I/O requirements are higher or your organization has an existing investment in SANs, you should examine larger external direct-attached storage (DAS) or SAN solutions.

*Design Decision Point*

In this example, the external storage solution is selected.

Choose Storage Solution

Use the following steps to choose a storage solution.

Step 1: Identify preferred storage vendor

In this solution, the preferred storage vendor is Dell. Dell, Inc. is a leading IT infrastructure and services company with a broad portfolio of servers, storage, networking products, and comprehensive service offerings. Dell also provides testing, best practices, and architecture guidance specifically for Exchange 2010 and other Microsoft-based solutions in the unified communications and collaboration stack, such as Microsoft Office SharePoint Server and Office Communications Server. Dell offers a wide variety of storage solutions from Dell EqualLogic, Dell PowerVault, and Dell/EMC. Dell storage technologies help you minimize cost and complexity, increase performance and reliability, simplify storage management, and plan for future growth.

Step 2: Review available options from preferred vendor

There are a number of storage options that would be a good fit for this solution. The following options were considered:

Option 1: Dell EqualLogic PS6000 Series iSCSI SAN Array

The Dell EqualLogic PS Series is fundamentally changing the way enterprises think about purchasing and managing storage. Built on breakthrough virtualized peer storage architecture, the EqualLogic PS Series simplifies the deployment and administration of consolidated storage environments. Its all-inclusive, intelligent feature set streamlines purchasing and delivers rapid SAN deployment, easy storage management, comprehensive data protection, enterprise-class performance and reliability, and seamless pay-as-you-grow expansion. The PS6000 is a 3U chassis that contains sixteen 3.5-inch hard disk drives with two iSCSI controllers and four 1 gigabit Ethernet (GbE) ports per controller. Up to 16 arrays can be included in a single managed unit known as a group.

Option 2: Dell EqualLogic PS6500 Series iSCSI SAN Array

The Dell EqualLogic PS6500 Series arrays also provide the same ease of use and intelligence features. However, this array was built with maximum density in mind. This 4U chassis holds up to 48 3.5-inch hard disk drives, making it incredibly space efficient. It also contains four 1 GbE ports per controller. The PS6500 can be mixed with other PS Series arrays in the same group.

Dell EqualLogic PS Series array comparison (PS6000E, X, XV, and XVS versus PS6500E and X)
Storage controllers: Dual controllers with a total of 4 GB battery-backed memory on both models; battery-backed memory provides up to 72 hours of data protection.
Hard disk drives: PS6000 — 16x SATA, SAS, or SSD. PS6500 — 48x SATA or SAS.
RAID support: RAID-5, RAID-6, RAID-10, and RAID-50 on both models.
Network interfaces: 4 copper GbE ports per controller on both models.
Reliability: Redundant, hot-swappable controllers, power supplies, cooling fans, and disks; individual disk drive slot power control on both models.

Option 3: Dell PowerVault MD3200i iSCSI SAN Array

The PowerVault MD3200i is a high-performance iSCSI SAN designed to deliver storage consolidation and data management capabilities in an easy to use, cost-effective solution. Shared storage is required to enable VM mobility, which is a key benefit of a virtual environment. The PowerVault MD3200i is a networked shared storage solution, providing the high availability, expandability, and ease of management desired in virtual environments. It leverages existing IP networks and offers small and medium businesses an easy to use iSCSI SAN without the need for extensive training or expensive new infrastructure.

Step 3: Select an array

The EqualLogic arrays considered were the PS6000E and the PS6500E. PS6500E enclosures can accommodate a total of 48 drives (including hot spares) and are the most dense storage solution offered. Therefore, the cost per gigabyte of deploying a PS6500E solution would be lower than that for a PS6000E solution. The PS6500E array is also an intelligent solution that offers SAN configuration and monitoring features, auto-build of RAID sets, network sensing mechanisms, and continuous health monitoring. The MD3200i is a less expensive solution but lacks some of the management and deployment features of the PS Series arrays. In this example, the PS6500 Series is selected because this storage enclosure addresses a comprehensive datacenter consolidation solution spread across multiple sites, as opposed to a small and medium business or branch-office storage need.

Step 4: Select a disk type

The Exchange 2010 solution is optimized to use more sequential I/O and less random I/O with larger mailboxes. This implies less disk-intensive activity, even during peak usage hours, when compared to Exchange 2007. Therefore, high capacity SATA disks are used to save cost.

For a list of supported disk types, see "Physical Disk Types" in Understanding Storage Configuration. To help determine which disk type to choose, see "Factors to Consider When Choosing Disk Types" in Understanding Storage Configuration.

Determine Number of EqualLogic Arrays Required

In a previous step, it was determined to deploy three copies of each database. One of the three copies will be located in the secondary datacenter. Therefore, to meet the site resiliency requirements, a minimum of one PS6500E in the primary datacenter and one PS6500E in the secondary datacenter is needed.

Consider IOPS requirements. In a previous step, it was determined that 1,107 IOPS were required to support the 9,000 mailboxes. For a RAID-10 configuration of SATA disks, this IOPS requirement can be met in a single PS6500 array. In a failure event, a single PS6500E would have to support 100 percent of the IOPS requirement. Therefore, to meet the IOPS requirements, a minimum of one PS6500E in the primary datacenter and one PS6500E in the secondary datacenter is needed.

Consider storage capacity requirements. In a previous step, it was determined that approximately 26 terabytes were required to support two copies of each database in the primary datacenter, and approximately 13 terabytes to support one copy of each database in the secondary datacenter. A single PS6500E configured with two spares and the remaining 46 disks in a RAID-10 disk group provides approximately 20 terabytes. Therefore, two PS6500Es in the primary datacenter and one PS6500E in the secondary datacenter are required to support the capacity requirements.

Three PS6500Es will be deployed to support the capacity requirements of this solution.
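The array count decision above is ceiling arithmetic over whichever constraint dominates; here capacity dominates. A short sketch, using the approximately 20 terabyte usable figure for a 46-disk RAID-10 group from the text:

```python
import math

USABLE_TB_PER_ARRAY = 20  # PS6500E: 46 x 1 TB in RAID-10, 2 hot spares (from the text)

primary_tb = 26    # two database copies in the primary datacenter
secondary_tb = 13  # one database copy in the secondary datacenter

primary_arrays = math.ceil(primary_tb / USABLE_TB_PER_ARRAY)      # 2
secondary_arrays = math.ceil(secondary_tb / USABLE_TB_PER_ARRAY)  # 1
print(primary_arrays + secondary_arrays)  # 3 arrays total
```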

Estimate Mailbox Memory Requirements

Sizing memory correctly is an important step in designing a healthy Exchange environment. We recommend that you review Understanding Memory Configurations and Exchange Performance and Understanding the Mailbox Database Cache.

Calculate required database cache

The Extensible Storage Engine (ESE) uses database cache to reduce I/O operations. In general, the more database cache available, the less I/O generated on an Exchange 2010 Mailbox server. However, there's a point where adding additional database cache no longer results in a significant reduction in IOPS. Therefore, adding large amounts of physical memory to your Exchange server without determining the optimal amount of database cache required may result in higher costs with minimal performance benefit.

The IOPS estimates that you completed in a previous step assume a minimum amount of database cache per mailbox. These minimum amounts are summarized in the table "Estimated IOPS per mailbox based on message activity and mailbox database cache" in Understanding the Mailbox Database Cache. The following table outlines the database cache per user for various message profiles.

Database cache per user
Messages sent or received per mailbox per day (about 75 KB average message size) / Database cache per user (MB)
50 / 3 MB
100 / 6 MB
150 / 9 MB
200 / 12 MB

In this step, you determine high level memory requirements for the entire environment. In a later step, you use this result to determine the amount of physical memory needed for each Mailbox server. Use the following information:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Database cache = profile-specific database cache × number of mailbox users = 6 MB × 7650 = 45,900 MB = 45 GB

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Database cache = profile-specific database cache × number of mailbox users = 6 MB × 900 = 5400 MB = 6 GB

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Database cache = profile-specific database cache × number of mailbox users = 9 MB × 450 = 4050 MB = 4 GB

Total database cache (all tiers) = 55 GB
Average per active mailbox = 55 GB × 1024 ÷ 9000 = 6.2 MB

The total database cache requirement for the environment is 55 GB, or 6.2 MB per mailbox user.
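A sketch of the database cache estimate, with per-profile cache sizes taken from the table above (6 MB at 100 messages per day, 9 MB at 150):

```python
# Total ESE database cache required across the three tiers.
tiers = [  # (cache MB per mailbox, mailbox count)
    (6, 7650),  # Tier 1
    (6, 900),   # Tier 2
    (9, 450),   # Tier 3
]
total_mb = sum(mb * n for mb, n in tiers)
print(f"total cache: {total_mb} MB (~{total_mb / 1024:.0f} GB)")
# 55,350 MB; the text rounds each tier up to whole GB, giving 55 GB
print(f"per active mailbox: {total_mb / 9000:.2f} MB")
# 6.15 MB (~6.2 MB as rounded in the text)
```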

Estimate Mailbox CPU Requirements

Mailbox server capacity planning has changed significantly from previous versions of Exchange due to the new mailbox database resiliency model provided in Exchange 2010. For additional information, see Mailbox Server Processor Capacity Planning.

In the following steps, you calculate the high level megacycle requirements for active and passive database copies. These requirements will be used in a later step to determine the number of Mailbox servers needed to support the workload. Note that the number of Mailbox servers required also depends on the Mailbox server resiliency model and database copy layout.

Using megacycle requirements to determine the number of mailbox users that an Exchange Mailbox server can support isn't an exact science. A number of factors can result in unexpected megacycle results in test and production environments. Megacycles should only be used to approximate the number of mailbox users that an Exchange Mailbox server can support. It's always better to be conservative rather than aggressive during the capacity planning portion of the design process.

The following calculations are based on published megacycle estimates as summarized in the following table.

Megacycle estimates
Messages sent or received per mailbox per day / Megacycles per mailbox for active mailbox database / Megacycles per mailbox for remote passive mailbox database / Megacycles per mailbox for local passive mailbox
50 / 1 / 0.1 / 0.15
100 / 2 / 0.2 / 0.3
150 / 3 / 0.3 / 0.45

Step 1: Calculate active mailbox CPU requirements

In this step, you calculate the megacycles required to support the active database copies, using the following:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Active mailbox megacycles required = profile-specific megacycles × number of mailbox users = 2 × 7650 = 15,300

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Active mailbox megacycles required = profile-specific megacycles × number of mailbox users = 2 × 900 = 1800

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Active mailbox megacycles required = profile-specific megacycles × number of mailbox users = 3 × 450 = 1350

Total active mailbox megacycles required (all tiers) = 18,450 megacycles

Step 2: Calculate active mailbox remote database copy CPU requirements

In a design with three copies of each database, there is processor overhead associated with shipping the logs required to maintain database copies on the remote servers. This overhead is typically 10 percent of the active mailbox megacycles for each remote copy being serviced. In this step, you calculate the active mailbox remote database copy CPU requirements, using the following:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Remote copy megacycles required = profile-specific megacycles × number of mailbox users × number of remote copies = 0.2 × 7650 × 2 = 3060

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Remote copy megacycles required = profile-specific megacycles × number of mailbox users × number of remote copies = 0.2 × 900 × 2 = 360

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Remote copy megacycles required = profile-specific megacycles × number of mailbox users × number of remote copies = 0.3 × 450 × 2 = 270

Total remote copy megacycles required (all tiers) = 3690

Step 3: Calculate local passive mailbox CPU requirements

In a design with three copies of each database, there is processor overhead associated with maintaining the local passive copies of each database. In this step, the high level megacycles required to support local passive database copies are calculated. These numbers will be refined in a later step so that they match the server resiliency strategy and database copy layout. Calculate the requirements, using the following:

Tier 1 (512 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Passive mailbox megacycles required = profile-specific megacycles × number of mailbox users × number of passive copies = 0.3 × 7650 × 2 = 4590

Tier 2 (1024 MB mailbox quota, 100 messages per day message profile, 75 KB average message size)
Passive mailbox megacycles required = profile-specific megacycles × number of mailbox users × number of passive copies = 0.3 × 900 × 2 = 540

Tier 3 (4096 MB mailbox quota, 150 messages per day message profile, 75 KB average message size)
Passive mailbox megacycles required = profile-specific megacycles × number of mailbox users × number of passive copies = 0.45 × 450 × 2 = 405

Total passive mailbox megacycles required (all tiers) = 5535

Step 4: Calculate total CPU requirements

Calculate the total requirements, using the following:

Total megacycles required = active mailbox + remote passive copies + local passive copies = 18,450 + 3690 + 5535 = 27,675

Total megacycles per mailbox = 27,675 ÷ 9000 = 3.08
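A combined sketch of Steps 1 through 4; the per-profile megacycle costs come from the estimates table above, and this design services two remote copies and hosts two local passive copies per database:

```python
# Megacycle requirements per the Steps 1-4 formulas above.
tiers = [  # (active, remote, local passive megacycles per mailbox, mailbox count)
    (2, 0.2, 0.3, 7650),   # Tier 1: 100 messages per day
    (2, 0.2, 0.3, 900),    # Tier 2: 100 messages per day
    (3, 0.3, 0.45, 450),   # Tier 3: 150 messages per day
]
REMOTE_COPIES = 2
PASSIVE_COPIES = 2

active = sum(a * n for a, _, _, n in tiers)                    # 18450
remote = sum(r * n * REMOTE_COPIES for _, r, _, n in tiers)    # 3690
passive = sum(p * n * PASSIVE_COPIES for _, _, p, n in tiers)  # 5535
total = active + remote + passive
print(total)         # 27675 megacycles
print(total / 9000)  # 3.075 (~3.08 per mailbox, as rounded above)
```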

Determine Whether Server Virtualization Will Be Used

Several factors are important when considering server virtualization for Exchange. For more information about supported configurations for virtualization, see Exchange 2010 System Requirements. The main reasons customers use virtualization with Exchange are as follows:

If you expect server capacity to be underutilized and anticipate better utilization, you may purchase fewer servers as a result of virtualization.
You may want to use Windows Network Load Balancing when deploying Client Access, Hub Transport, and Mailbox server roles on the same physical server.
If your organization is using virtualization in all server infrastructure, you may want to use virtualization with Exchange, to be in alignment with corporate standard policy.

*Design Decision Point*

In this solution, deploying additional physical hardware for Client Access servers and Hub Transport servers isn't wanted. The active/passive site resiliency design would require several Mailbox servers to support the DAG design and database copy layout, which may result in unused capacity on the Mailbox servers. Virtualization will be used to better utilize capacity across server roles.

Determine Whether Client Access and Hub Transport Server Roles Will Be Deployed in Separate Virtual Machines

When using virtualization for the Client Access and Hub Transport server roles, you may consider deploying both roles on the same VM. This approach reduces the number of VMs to manage, the number of server operating systems to update, and the number of Windows and Exchange licenses you need to purchase. Another benefit of combining the Client Access and Hub Transport server roles is to simplify the design process. When deploying roles in isolation, we recommend that you deploy one Hub Transport server logical processor for every four Mailbox server logical processors, and that you deploy three Client Access server logical processors for every four Mailbox server logical processors. This can be confusing, especially when you have to provide sufficient Client Access and Hub Transport servers during multiple VM or physical server failures or maintenance scenarios. When deploying Client Access, Hub Transport, and Mailbox servers on like physical servers or like VMs, you can deploy one server with the Client Access and Hub Transport server roles for every one Mailbox server in the site. A sketch of this ratio math follows this section.

*Design Decision Point*

In this solution, co-locating the Hub Transport and Client Access server roles in the same VM is wanted. The Mailbox server role is deployed separately in a second VM. This will reduce the number of VMs and operating systems to manage, as well as simplify planning for server resiliency.
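A quick sketch of the processor ratio guidance above. The Mailbox logical processor count is a hypothetical input chosen for illustration:

```python
import math

mailbox_cores = 16  # assumed Mailbox server logical processors (illustrative)

# Role ratios recommended above when roles are deployed in isolation:
hub_cores = math.ceil(mailbox_cores * 1 / 4)  # 1 Hub Transport core per 4 Mailbox cores
cas_cores = math.ceil(mailbox_cores * 3 / 4)  # 3 Client Access cores per 4 Mailbox cores
print(hub_cores, cas_cores)  # 4 Hub Transport cores, 12 Client Access cores

# With combined Client Access/Hub Transport servers on like-sized VMs, the
# guidance simplifies to one combined server per Mailbox server in the site.
```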

Determine Server Model for Hyper-V Root Server

Step 1: Identify preferred server vendor

In this solution, the preferred server vendor is Dell. The Dell eleventh-generation PowerEdge servers offer industry-leading performance and efficiency. Innovations include increased memory capacity and faster I/O rates, which help deliver the performance required by today's most demanding applications.

Step 2: Review available options from preferred vendor

Dell's server portfolio includes several models that were considered for this implementation.

Option 1: Dell PowerEdge M610 Blade Server

The decision to use iSCSI-attached storage provides the potential for taking advantage of Dell blades, based on the M1000e chassis. The M610 combines two sockets and twelve DIMMs in a half-height blade for a dense and power-efficient server.

Dell PowerEdge M1000e blade chassis
Chassis/enclosure: Form factor: 10U modular enclosure holds up to sixteen half-height blade servers; 44.0 cm (17.3") height, 44.7 cm (17.6") width, 75.4 cm (29.7") depth. Weight: empty chassis 98 pounds; chassis with all rear modules (IOMs, PSUs, CMCs, KVM) 176 pounds; maximum fully loaded with blades and rear modules 394 pounds.
Power supplies: 3 (non-redundant) or 6 (redundant) 2,360 watt hot-plug power supplies.
Cooling fans: The M1000e chassis comes standard with 9 hot-pluggable, redundant fan modules.
Input device: Front control panel with interactive graphical LCD: supports initial configuration wizard; local server blade, enclosure, and module information and troubleshooting. Two USB keyboard/mouse connections and one video connection (requires the optional Avocent iKVM switch to enable these ports) for local front crash cart console connections that can be switched between blades.
Enclosure I/O modules: Up to six total I/O modules for three fully redundant fabrics, featuring Ethernet FlexIO technology providing on-demand stacking and uplink scalability. Dell FlexIO technology delivers a level of I/O flexibility, bandwidth, investment protection, and capabilities unrivaled in the blade server market. FlexIO technologies include: a completely passive, highly available midplane that can deliver greater than 5 terabytes per second (TBps) of total I/O bandwidth; support for up to two ports of up to 40 gigabits per second (Gbps) from each I/O mezzanine card on the blade server.
Management: 1 (standard) or optional second (redundant) Chassis Management Controller (CMC); optional integrated Avocent keyboard, video, and mouse (iKVM) switch; Dell OpenManage systems management.
External storage options: Dell EqualLogic PS Series, Dell/EMC AX Series, Dell/EMC CX Series, Dell/EMC NS Series, Dell PowerVault MD Series, Dell PowerVault NX Series.

Dell PowerEdge M610 server
Processors (x2): Latest quad-core or six-core Intel Xeon 5500 and 5600 series processors.
Form factor: Blade/modular half-height slot in an M1000e blade chassis.
Memory: 12 DIMM slots; 1 GB/2 GB/4 GB/8 GB/16 GB ECC DDR3; support for up to 192 GB using 16 GB DIMMs.
Drives: Internal hot-swappable drives: 2.5" SAS (10,000 rpm): 36 GB, 73 GB, 146 GB, 300 GB, 600 GB; 2.5" SAS (15,000 rpm): 36 GB, 73 GB, 146 GB; solid-state drives (SSD): 25 GB, 50 GB, 100 GB, 150 GB. Maximum internal storage: up to 1.2 terabytes via two 600 GB SAS hard disk drives. For external storage options, see the previous M1000e blade chassis information.
I/O slots: For details, see the previous M1000e blade chassis information.

Option 2: Dell PowerEdge M710 Blade Server

The M710 provides two sockets in a blade form factor but extends the number of DIMMs to eighteen, greatly expanding memory capacity. However, the M710 is also a full-height blade. The extra RAM can make the M710 an attractive virtualization server.

Dell PowerEdge M710 server
Processors (x2): Latest quad-core or six-core Intel Xeon 5500 and 5600 series processors.
Form factor: Blade/modular full-height slot in an M1000e blade chassis.
Memory: 18 DIMM slots; 1 GB/2 GB/4 GB/8 GB/16 GB ECC DDR3; support for up to 192 GB.
Drives: Internal hot-swappable drives: 2.5" SAS (10,000 rpm): 36 GB, 73 GB, 146 GB, 300 GB, 600 GB; 2.5" SAS (15,000 rpm): 36 GB, 73 GB, 146 GB; SSD: 25 GB, 50 GB, 100 GB, 150 GB. Maximum internal storage: up to 1.2 terabytes via two 600 GB SAS hard disk drives. For external storage options, see the previous M1000e blade chassis information.
I/O slots: For details, see the previous M1000e blade chassis information.

hard disk drives. For external storage options, see the previous M1000e blade chassis information.
I/O slots: For details, see the previous M1000e blade chassis information.

Option 3: Dell PowerEdge R710 rack mounted server

Another choice for this implementation could be the Dell PowerEdge R710. This Intel-based platform is a 2U rack mounted server containing two sockets, eighteen DIMM slots, and the option of either eight 2.5" or six 3.5" internal hard disk drives. Although limited in internal disk capacity compared to the other server models presented, it scales beyond the R510 in memory (eighteen DIMMs compared to eight) and provides more I/O options. Storage capabilities may be expanded by using Dell PowerVault MD1200 or MD1220 direct attached storage arrays. The MD1200 provides twelve 3.5" hard disk drives in a 2U rack mounted form factor, while the MD1220 provides twenty-four 2.5" hard disk drives in the same 2U rack mounted form factor. These 6 Gbps SAS connected arrays can be daisy chained, up to four arrays per RAID controller, and also support redundant connections from the server. This storage option satisfies requirements for lower cost storage and simplicity while giving each node the ability to scale in the number of supported mailboxes.

Dell PowerEdge R710 server

Processors (x2): Latest quad-core or six-core Intel Xeon 5500 and 5600 series processors.
Form factor: 2U rack.
Memory: Up to 192 GB (18 DIMM slots); 1 GB/2 GB/4 GB/8 GB/16 GB DDR3 at 800 megahertz (MHz), 1066 MHz, or 1333 MHz.
Drives: Up to six 3.5" drives with optional flex bay, or up to eight 2.5" SAS or SATA drives with optional flex bay; the flex bay expansion can support a half-height tape backup unit (TBU). Peripheral bay options include a slim optical drive bay with choice of DVD-ROM, combo CD-RW/DVD-ROM, or DVD+RW.
I/O slots: 2 PCIe x8 plus 2 PCIe x4 (Gen2), or 1 PCIe x16 plus 2 PCIe x4 (Gen2).

Option 4: Dell PowerEdge R810 rack mounted server

The R810 is a two- or four-socket platform in a 2U form factor. It contains Dell patented FlexMem Bridge technology, which allows the server to take advantage of all thirty-two DIMM slots even with only two processors installed. This enables the R810 to be a virtualization platform providing great compute power in a dense package.

Dell PowerEdge R810 server

Processors (x4): Up to eight-core Intel Xeon 7500 and 6500 series processors.
Form factor: 2U rack.
Memory: Up to 512 GB (32 DIMM slots); 1 GB/2 GB/4 GB/8 GB/16 GB DDR3 1066 MHz.
Drives: Hot-swap option available with up to six 2.5" SAS or SATA drives, including SATA SSD.
I/O slots: 6 PCIe G2 slots: five x8 slots and one x4 slot, plus one dedicated storage x4 slot.

Step 3: Select a server model

For this solution, the Dell PowerEdge M610 blade is selected. The customer wants to standardize on blades in the datacenter to take advantage of their density and power efficiency. Although the M710 may be able to support more VMs per server than the M610, the half-height M610 still yields more overall capacity per enclosure in this deployment than the full-height M710.

In previous steps, the megacycles required to support the number of active mailbox users were calculated. In the following steps, the number of available megacycles the selected server model and processor can support is determined, so that the number of active mailboxes each server can support can then be derived.

Step 4: Determine benchmark value for server and processor

Because the megacycle requirements are based on a baseline server and processor model, you need to adjust the available megacycles for the server against the baseline. To do this, independent performance benchmarks maintained by Standard Performance Evaluation Corporation (SPEC) are used. SPEC is a non-profit corporation formed to establish, maintain, and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers.

To obtain the benchmark value for a server and processor, see the Standard Performance Evaluation Corporation website, search for the processor, find the server model you have chosen under SPECint_rate2006, and record the result. Use the following calculation:

Processor and server platform = Intel X5550 2.67 gigahertz (GHz) in a Dell M610
SPECint_rate2006 value = 234
SPECint_rate2006 value per processor core = 234 ÷ 8 = 29.25

Step 5: Calculate adjusted megacycles

In previous steps, you calculated the required megacycles for the entire environment based on megacycle per mailbox estimates. Those estimates were measured on a baseline system (HP DL380 G5 x5470 3.33 GHz, 8 cores) that has a SPECint_rate2006 value of 150 (for an 8 core server), or 18.75 per core. In this step, you need to adjust the available megacycles for the chosen server and processor against the baseline processor so that the required megacycles can be used for capacity planning.

To determine the megacycles of the Dell M610 Intel X5550 2.67 GHz platform, use the following formula:

Adjusted megacycles per core = (new platform per core value × hertz per core of baseline platform) ÷ (baseline per core value)
= (29.25 × 3330) ÷ 18.75
= 5195

Adjusted megacycles per server = adjusted megacycles per core × number of cores
= 5195 × 8
= 41,560

Step 6: Adjust available megacycles for virtualization overhead

When deploying VMs on the root server, megacycles required to support the hypervisor and virtualization stack must be accounted for. This overhead varies from server to server and under different workloads. A conservative estimate of 10 percent of available megacycles will be used. Use the following calculation:

Adjusted available megacycles = usable megacycles × 0.90
= 41,560 × 0.90
= 37,404

So each server has a usable capacity for VMs of 37,404 megacycles. The usable capacity per logical processor is 4675 megacycles.
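The adjustment arithmetic in steps 4 through 6 can be captured in a few lines. The following Python sketch reproduces the calculation with this solution's values; the variable names are illustrative, and this is not part of any Microsoft or Dell sizing tool.

# Sketch of the megacycle adjustment in steps 4 through 6 (values from this solution).
BASELINE_SPEC_PER_CORE = 150 / 8   # HP DL380 G5 baseline: SPECint_rate2006 = 150, 8 cores
BASELINE_MHZ_PER_CORE = 3330       # baseline x5470 clock in megahertz

spec_rate = 234                    # Dell M610 SPECint_rate2006 result
cores = 8
hypervisor_overhead = 0.10         # conservative estimate for the Hyper-V root and stack

per_core = spec_rate / cores                                                   # 29.25
adjusted_per_core = per_core * BASELINE_MHZ_PER_CORE / BASELINE_SPEC_PER_CORE  # ~5195
adjusted_per_server = round(adjusted_per_core) * cores                         # 41,560
available_for_vms = adjusted_per_server * (1 - hypervisor_overhead)            # 37,404

print(round(adjusted_per_core), adjusted_per_server, round(available_for_vms))
# -> 5195 41560 37404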

Determine the CPU Capacity of the Virtual Machines

Now that we know the megacycles of the root server, we can calculate the megacycles of each VM. These values will be used to determine how many VMs are required and how many mailboxes will be hosted by each VM.

Step 1: Calculate available megacycles per virtual machine

In this step, you determine how many megacycles are available for each VM deployed on the root server. Because the server has eight logical processors, plan to deploy two VMs per server, each with four virtual processors. Use the following calculation:

Available megacycles per VM = adjusted available megacycles per server ÷ number of VMs
= 37,404 ÷ 2
= 18,702

Step 2: Determine the target available megacycles per virtual machine

Because the design assumptions state not to exceed 70 percent processor utilization, in this step, you adjust the available megacycles to reflect the 70 percent target. Use the following calculation:

Target available megacycles = available megacycles × target maximum processor utilization
= 18,702 × 0.70
= 13,091

Determine Number of Mailbox Server Virtual Machines Required

You can use the following steps to determine the number of Mailbox server VMs required.

Step 1: Determine the maximum number of mailboxes supported by the MBX virtual machine

To determine the maximum number of mailboxes supported by the MBX VM, use the following calculation:

Number of active mailboxes = available megacycles ÷ megacycles per mailbox
= 13,091 ÷ 3.08
= 4250

Step 2: Determine the minimum number of mailbox virtual machines required in the primary site

To determine the minimum number of mailbox VMs required in the primary site, use the following calculation:

Number of VMs required = total mailbox count in site ÷ active mailboxes per VM

= 9000 ÷ 4250
= 2.12

Based on processor capacity, a minimum of three Mailbox server VMs is required to support the anticipated peak workload during normal operating conditions.

Step 3: Determine number of Mailbox server virtual machines required to support the mailbox resiliency strategy

In the previous step, you determined that a minimum of three Mailbox server VMs is needed to support the target workload. To survive the failure of a server or an entire blade enclosure in the primary site without forcing a datacenter switchover, that count is doubled to six Mailbox server VMs in the primary site. In an active/passive database distribution model, you also need a minimum of three Mailbox server VMs in the secondary datacenter to support the workload during a site failure event. The DAG design will therefore have nine Mailbox server VMs, with six in the primary site and three in the secondary site.

Datacenter vs. Mailbox server count

Primary datacenter: 6
Secondary datacenter: 3
Total Mailbox server count: 9

Determine Number of Mailboxes per Mailbox Server

You can use the following steps to determine the number of mailboxes per Mailbox server.

Step 1: Determine number of active mailboxes per server during normal operation

To determine the number of active mailboxes per server during normal operation, use the following calculation:

Number of active mailboxes per server = total mailbox count ÷ server count
= 9000 ÷ 6
= 1500

Step 2: Determine number of active mailboxes per server in the worst case failure event

To determine the number of active mailboxes per server in the worst case failure event, use the following calculation:

Number of active mailboxes per server = total mailbox count ÷ server count

= 9000 ÷ 3
= 3000

Determine Memory Required per Mailbox Server

You can use the following steps to determine the memory required per Mailbox server.

Step 1: Determine database cache requirements per server for the worst case failure scenario

In a previous step, you determined that the database cache requirements for all mailboxes were 55 GB and that the average cache required per active mailbox was 6.2 MB. To design for the worst case failure scenario, you calculate based on all active mailboxes residing on three of the six Mailbox servers. Use the following calculation:

Memory required for database cache = number of active mailboxes × average cache per mailbox
= 3000 × 6.2 MB
= 18,600 MB
= 18.2 GB

Step 2: Determine total memory requirements per Mailbox virtual machine for the worst case failure scenario

In this step, reference the following table to determine the recommended memory configuration.

Memory requirements (server physical memory / database cache size, Mailbox role only)

24 GB / 17.6 GB
32 GB / 24.4 GB
48 GB / 39.2 GB

The recommended memory configuration to support 18.2 GB of database cache for a Mailbox role server is 32 GB.
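The cache and memory sizing logic above reduces to a small lookup. Here is a minimal Python sketch assuming the memory table reproduced in this section; the helper function is hypothetical and is not part of the Exchange Mailbox Role Calculator.

def mailbox_server_memory(active_mailboxes, cache_per_mailbox_mb=6.2):
    """Return the smallest listed RAM configuration whose Mailbox-role-only
    database cache covers the worst case active mailbox count."""
    required_cache_gb = active_mailboxes * cache_per_mailbox_mb / 1024
    # (physical RAM in GB, database cache in GB for a Mailbox-role-only server)
    configurations = [(24, 17.6), (32, 24.4), (48, 39.2)]
    for ram_gb, cache_gb in configurations:
        if cache_gb >= required_cache_gb:
            return ram_gb, round(required_cache_gb, 1)
    raise ValueError("no listed configuration is large enough")

print(mailbox_server_memory(3000))   # worst case failure -> (32, 18.2)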

Determine Number of Client Access and Hub Transport Server Combo Virtual Machines Required

In a previous step, it was determined that nine Mailbox server VMs are required. We recommend that you deploy one Client Access and Hub Transport server combo VM for every Mailbox server VM, as shown in the following table. Therefore, the design will have nine Client Access and Hub Transport server combo VMs.

Number of Client Access and Hub Transport server combo VMs required

Server role configuration: Mailbox server role : Client Access and Hub Transport combined server role
Recommended processor core ratio: 1:1

Determine Memory Required per Combined Client Access and Hub Transport Virtual Machines

Step 1: Determine memory requirements for Client Access and Hub Transport server combo virtual machines

Based on the following table, each Client Access and Hub Transport server combo VM requires a minimum of 8 GB of memory.

Memory configurations for Exchange 2010 servers based on installed server roles

Hub Transport: minimum supported 4 GB; recommended maximum 1 GB per core
Client Access: minimum supported 4 GB; recommended maximum 2 GB per core
Client Access and Hub Transport combined role (Client Access and Hub Transport server roles running on the same physical server): minimum supported 4 GB; recommended maximum 2 GB per core

Determine Virtual Machine Distribution

When deciding which VMs to host on which root server, your main goal should be to eliminate single points of failure. Don't locate two Client Access and Hub Transport server role VMs on the same root server, and don't locate two Mailbox server role VMs on the same root server.

Virtual machine distribution (incorrect)

The correct distribution is one Client Access and Hub Transport server role VM and one Mailbox server role VM on each of the physical host servers. So in this solution there will be nine Hyper-V root servers, each supporting one Client Access and Hub Transport server role VM and one Mailbox server role VM.

Virtual machine distribution (correct)

Determine Memory Required per Root Server

To determine the memory required for each root server, use the following calculation:

Root server memory = Client Access and Hub Transport server role VM memory + Mailbox server role VM memory
= 8 GB + 32 GB
= 40 GB

The Hyper-V root server will require a minimum of 40 GB of memory.

Determine Minimum Number of Databases Required

To determine the optimal number of Exchange databases to deploy, use the Exchange 2010 Mailbox Role Calculator. Enter the appropriate information on the input tab, and select Yes for Automatically Calculate Number of Unique Databases / DAG.

Database configuration

On the Role Requirements tab, the recommended number of databases appears.

Recommended number of databases

In this solution, a minimum of 12 databases will be used. The exact number of databases may be adjusted in future steps to accommodate the database copy layout.

Identify Failure Domains Impacting Database Copy Layout

Use the following steps to identify failure domains impacting the database copy layout.

Step 1: Identify failure domains associated with storage

In a previous step, it was decided to deploy three Dell EqualLogic PS6500E arrays and to deploy three copies of each database. To provide maximum protection for each of those database copies, we recommend that no more than one copy of a single database be located on the same physical array. In this scenario, each PS6500E represents a failure domain that will impact the layout of database copies in the DAG.

Dell EqualLogic PS6500E arrays

Step 2: Identify failure domains associated with servers

In a previous step, it was determined that nine physical blade servers will be deployed. Six of those servers will be deployed in the primary datacenter and three in the secondary datacenter. Blades are associated with blade enclosures, so to support the site resiliency requirements, a minimum of two blade enclosures is required.

Failure domains associated with servers

In the previous step, it was determined that the three PS6500Es represent three failure domains. Consider what happens if all six primary datacenter blades are placed in a single enclosure connected to the two PS6500Es in the primary datacenter. If an issue impacts that enclosure, there are no other servers remaining in the primary datacenter, and you're forced to conduct a manual switchover to the secondary datacenter.

A better design is to deploy three blade enclosures, each holding three of the nine server blades. Pair the servers in the first enclosure with the first PS6500E, the servers in the second enclosure with the second PS6500E, and the three servers in the secondary site with the PS6500E in the secondary site. By aligning the server and storage failure domains, the database copies are laid out in a manner that protects against issues with either a storage array or an entire blade enclosure.

Failure domains associated with servers in two sites

Design Database Copy Layout

Use the following steps to design the database copy layout.

Step 1: Determine number of database copies per Mailbox server

In a previous step, it was determined that the minimum number of unique databases that should be deployed is 12. In an active/passive configuration with three copies, we recommend that the number of databases equal the total number of Mailbox servers in the primary site multiplied by the number of Mailbox servers in a single failure domain, and that it be greater than the minimum number of recommended databases. Use the following calculation:

Unique database count = total number of Mailbox servers in primary datacenter × number of Mailbox servers in failure domain
= 6 × 3

= 18

Step 2: Determine database layout during normal operating conditions

Consider equally distributing the C1 database copies (the copies with an activation preference value of 1) to the servers in the primary datacenter. These are the copies that will be active during normal operating conditions.

Database copy layout during normal operating conditions

DB1, DB2, DB3: C1 on MBX1
DB4, DB5, DB6: C1 on MBX2
DB7, DB8, DB9: C1 on MBX3
DB10, DB11, DB12: C1 on MBX4
DB13, DB14, DB15: C1 on MBX5
DB16, DB17, DB18: C1 on MBX6

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations

Next, distribute the C2 database copies (the copies with an activation preference value of 2) to the servers in the second failure domain. During the distribution, spread the C2 copies across as many servers in the alternate failure domain as possible to ensure that a single server failure has a minimal impact on the servers in the alternate failure domain.

Database copy layout with C2 database copies distributed

DB1: C1 on MBX1, C2 on MBX4
DB2: C1 on MBX1, C2 on MBX5
DB3: C1 on MBX1, C2 on MBX6
DB4: C1 on MBX2, C2 on MBX4
DB5: C1 on MBX2, C2 on MBX5
DB6: C1 on MBX2, C2 on MBX6
DB7: C1 on MBX3, C2 on MBX4
DB8: C1 on MBX3, C2 on MBX5
DB9: C1 on MBX3, C2 on MBX6

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations
C2 = passive copy (activation preference value of 2) during normal operations

Consider the opposite configuration for the other failure domain. Again, you distribute the C2 copies across as many servers in the alternate failure domain as possible to ensure that a single server failure has a minimal impact on the servers in the alternate failure domain.

Database copy layout with C2 database copies distributed in the opposite configuration

DB10: C2 on MBX1, C1 on MBX4
DB11: C2 on MBX2, C1 on MBX4
DB12: C2 on MBX3, C1 on MBX4
DB13: C2 on MBX1, C1 on MBX5
DB14: C2 on MBX2, C1 on MBX5
DB15: C2 on MBX3, C1 on MBX5
DB16: C2 on MBX1, C1 on MBX6
DB17: C2 on MBX2, C1 on MBX6
DB18: C2 on MBX3, C1 on MBX6

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations

C2 = passive copy (activation preference value of 2) during normal operations

Step 3: Determine database layout during server failure and maintenance conditions

Before considering the secondary datacenter and distributing the C3 copies, examine the following server failure scenario. In the following example, if server MBX1 fails, the active database copies automatically move to servers MBX4, MBX5, and MBX6. Notice that each of the three servers in the alternate failure domain is now running with four active databases, and that the active databases are equally distributed across all three servers.

Database copy layout during server maintenance or failure

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations
C2 = passive copy (activation preference value of 2) during normal operations

In a maintenance scenario, you could move the active mailbox databases from the servers in the first failure domain (MBX1, MBX2, MBX3) to the servers in the second failure domain (MBX4, MBX5, MBX6), complete maintenance activities, and then move the active databases back to the C1 copies on the servers in the first failure domain. This configuration allows you to conduct maintenance activities on all servers in the primary datacenter in two easy passes.

Database copy layout during server maintenance

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations
C2 = passive copy (activation preference value of 2) during normal operations
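The distribution rules in steps 1 through 3 amount to a round-robin across the alternate failure domain. The following Python sketch generates the C1/C2 layout for this design; it illustrates the distribution logic described above and is not output from the Exchange Mailbox Role Calculator.

def copy_layout(servers=("MBX1", "MBX2", "MBX3", "MBX4", "MBX5", "MBX6"), per_server=3):
    """Place per_server C1 copies on each primary server and round-robin
    the C2 copies across the servers in the other failure domain."""
    half = len(servers) // 2
    domains = (servers[:half], servers[half:])
    layout, db = {}, 0
    for home, alternate in (domains, domains[::-1]):
        for server in home:
            for i in range(per_server):
                db += 1
                layout["DB%d" % db] = {"C1": server, "C2": alternate[i % len(alternate)]}
    return layout

for name, copies in sorted(copy_layout().items(), key=lambda kv: int(kv[0][2:])):
    print(name, copies["C1"], copies["C2"])

If MBX1 fails under this layout, DB1 through DB3 activate one each on MBX4, MBX5, and MBX6, which matches the four-active-databases-per-server failure result described above.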

Step 4: Add database copies to secondary datacenter to support site resiliency

The last step in the database copy layout is to add the C3 copies (the copies with an activation preference value of 3) to the servers in the secondary datacenter to provide site resiliency. As with the C2 copies, distribute the C3 copies across as many servers as possible to ensure that any issue impacting multiple Mailbox servers in the primary datacenter has a minimal impact on the servers in the secondary datacenter. In a full site failure scenario, all C3 copies in the secondary datacenter are activated using Datacenter Activation Coordination (DAC) procedures, so the distribution of database copies in relation to servers in the primary datacenter is less important.

Database copy layout to support site resiliency

DB1: C1 on MBX1, C2 on MBX4, C3 on MBX7
DB2: C1 on MBX1, C2 on MBX5, C3 on MBX8
DB3: C1 on MBX1, C2 on MBX6, C3 on MBX9
DB4: C1 on MBX2, C2 on MBX4, C3 on MBX7
DB5: C1 on MBX2, C2 on MBX5, C3 on MBX8
DB6: C1 on MBX2, C2 on MBX6, C3 on MBX9
DB7: C1 on MBX3, C2 on MBX4, C3 on MBX7
DB8: C1 on MBX3, C2 on MBX5, C3 on MBX8
DB9: C1 on MBX3, C2 on MBX6, C3 on MBX9
DB10: C2 on MBX1, C1 on MBX4, C3 on MBX7
DB11: C2 on MBX2, C1 on MBX4, C3 on MBX8
DB12: C2 on MBX3, C1 on MBX4, C3 on MBX9
DB13: C2 on MBX1, C1 on MBX5, C3 on MBX7
DB14: C2 on MBX2, C1 on MBX5, C3 on MBX8
DB15: C2 on MBX3, C1 on MBX5, C3 on MBX9
DB16: C2 on MBX1, C1 on MBX6, C3 on MBX7
DB17: C2 on MBX2, C1 on MBX6, C3 on MBX8
DB18: C2 on MBX3, C1 on MBX6, C3 on MBX9

In the preceding table, the following applies:

C1 = active copy (activation preference value of 1) during normal operations

C2 = passive copy (activation preference value of 2) during normal operations
C3 = remote passive copy (activation preference value of 3) during normal operations

Determine Storage Design

A well designed storage solution is a critical aspect of a successful Exchange 2010 Mailbox server role deployment. For more information, see Mailbox Server Storage Design.

Step 1: Summarize storage requirements

The following table summarizes the storage requirements that have been calculated or determined in previous design steps.

Summary of disk space requirements

Average mailbox size on disk (MB): 907
Database space required (GB): 10,530
Log space required (GB): 757
Total space required (GB): 11,287
Total space required for three database copies (GB): 33,861
Total space required for three database copies (terabytes): 33.1

Step 2: Determine whether logs and databases will be co-located on the same LUN

In previous Exchange releases, it was a recommended best practice to separate the database file and log files from the same mailbox database onto different volumes backed by different physical disks, for recoverability purposes. This is still a recommended best practice for stand-alone architectures and architectures using VSS-based backups. If you're using Exchange native data protection and have deployed a minimum of three database copies, isolation of logs and databases isn't necessary.

*Design Decision Point*

With the EqualLogic array, the RAID-10 set spans all 46 disks. Because this architecture doesn't offer spindle isolation, there is no reason to create separate LUNs for database and log files. Subsequent design decisions will therefore be based on a single LUN for each database and log set.

Step 3: Determine number of LUNs required per array

In a previous step, it was identified that each primary Mailbox server would support three active databases, three passive database copies, and three lagged database copies. Therefore, there will be a total of nine LUNs for each primary datacenter Mailbox server.

Number of LUNs required per array

Active databases: 3 LUNs per server, 9 LUNs per array
Passive databases: 3 LUNs per server, 9 LUNs per array
Lagged databases: 3 LUNs per server, 9 LUNs per array
Total LUNs: 9 per server, 27 per array

Step 4: Determine required LUN size

This step determines the size of the LUN required to support both the database and log capacity requirements. Use the following calculations:

Database capacity = [(number of mailbox users × average mailbox size on disk) + (20% data overhead factor)] + (10% content indexing overhead)
= [(500 × 907 MB) + (90,700 MB)] × 1.10
= 598,620 MB
= 585 GB

Log capacity = (log size × number of logs per mailbox per day × number of days required to replace hardware × number of mailbox users) + (mailbox move percent overhead)
= (1 MB × 20.5 × 3 × 500) + (4,535 MB)
= 35,285 MB
= 35 GB

LUN size = [(database capacity) + (log capacity)] + 20% volume free space
= [(585) + (35)] ÷ 0.8
= 775 GB

The required LUN size is 775 GB.

Step 5: Calculate actual LUN size

In a previous step, it was determined that the EqualLogic PS6500E has a usable capacity of 20.8 terabytes, or 21,299 GB, when using RAID-10 with two spares configured. Each array needs to have 27 LUNs:

21,299 GB ÷ 27 = 789 GB

The actual LUN size will be 789 GB, which will support the required LUN size of 775 GB.
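The LUN sizing formulas in steps 4 and 5 are easy to script. This Python sketch uses this solution's inputs (500 users per database, 907 MB per mailbox on disk, 20.5 logs per mailbox per day, 3 days to replace hardware, 1 percent mailbox move overhead); the function is illustrative only.

def required_lun_size_gb(users=500, mailbox_mb=907, logs_per_day=20.5,
                         replace_days=3, move_overhead=0.01):
    # Database: mailbox data + 20% data overhead + 10% content indexing overhead
    database_mb = users * mailbox_mb * 1.20 * 1.10                 # ~598,620 MB
    # Logs: 1 MB logs for the replacement window + mailbox move overhead
    log_mb = (1 * logs_per_day * replace_days * users) + (move_overhead * users * mailbox_mb)
    # Keep 20% of the volume free
    return (database_mb + log_mb) / 1024 / 0.80

print(round(required_lun_size_gb()))   # ~774 (the document rounds to 585 GB + 35 GB = 775 GB)

usable_gb = 20.8 * 1024                # PS6500E usable capacity (RAID-10, two spares)
print(round(usable_gb / 27))           # actual LUN size with 27 LUNs per array: 789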

Actual LUN size

Usable capacity: 21,299 GB
Number of LUNs required: 27
Required LUN size: 775 GB
Actual LUN size: 789 GB

Step 6: Determine volume layout on PS6500Es

The following table illustrates how the database copies are positioned on the PS6500E storage arrays.

Volume layout on PS6500Es

Array1: DB1-DB9 (C1), DB10-DB18 (C2)
Array2: DB1-DB9 (C2), DB10-DB18 (C1)
Array3: DB1-DB18 (C3)
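A quick consistency check over this volume layout confirms that no array holds more than one copy of any database, so the loss of one array (or its paired blade enclosure) can never remove two copies at once. This is a hypothetical Python validation helper, not Dell or Microsoft tooling.

# Copies per array, as laid out in the preceding table.
arrays = {
    "Array1": [("DB%d" % n, "C1") for n in range(1, 10)] + [("DB%d" % n, "C2") for n in range(10, 19)],
    "Array2": [("DB%d" % n, "C2") for n in range(1, 10)] + [("DB%d" % n, "C1") for n in range(10, 19)],
    "Array3": [("DB%d" % n, "C3") for n in range(1, 19)],
}
for name, copies in arrays.items():
    databases = [db for db, _ in copies]
    assert len(databases) == len(set(databases)), name + " holds two copies of one database"
print("each database has at most one copy per failure domain")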

Determine Placement of the File Share Witness

In Exchange 2010, the DAG uses a minimal set of components from Windows failover clustering. One of those components is the quorum resource, which provides a means for arbitration when determining cluster state and making membership decisions. It's critical that each DAG member have a consistent view of how the DAG's underlying cluster is configured. The quorum acts as the definitive repository for all configuration information relating to the cluster. The quorum is also used as a tiebreaker to avoid split brain syndrome. Split brain syndrome is a condition that occurs when DAG members can't communicate with each other but are available and running. Split brain syndrome is prevented by always requiring a majority of the DAG members (and in the case of DAGs with an even number of members, the DAG witness server) to be available and interacting for the DAG to be operational.

A witness server is a server outside of a DAG that hosts the file share witness, which is used to achieve and maintain quorum when the DAG has an even number of members. DAGs with an odd number of members don't use a witness server. Upon creation of a DAG, the file share witness is added by default to a Hub Transport server (that doesn't have the Mailbox server role installed) in the same site as the first member of the DAG. If your Hub Transport server is running in a VM that resides on the same root server as VMs running the Mailbox server role, we recommend that you move the location of the file share witness to another highly available server. You can move the file share witness to a domain controller, but because of security implications, do this only as a last resort.

*Design Decision Point*

An existing file and print server is reasonably stable and is managed by the same administrator who supports the Exchange servers, so it's a good choice for the location of the file share witness.

Plan Namespaces

When you plan your Exchange 2010 organization, one of the most important decisions is how to arrange your organization's external namespace. A namespace is a logical structure usually represented by a domain name in Domain Name System (DNS). When you define your namespace, you must consider the different locations of your clients and the servers that house their mailboxes. In addition to the physical locations of clients, you must evaluate how they connect to Exchange 2010. The answers to these questions will determine how many namespaces you must have. Your namespaces will typically align with your DNS configuration. We recommend that each Active Directory site in a region that has one or more Internet-facing Client Access servers have a unique namespace. This is usually represented in DNS by an A record, for example, mail.contoso.com or mail.europe.contoso.com. For more information, see Understanding Client Access Server Namespaces.

There are a number of different ways to arrange your external namespaces, but usually your requirements can be met with one of the following namespace models:

Consolidated datacenter model: This model consists of a single physical site. All servers are located within the site, and there is a single namespace, for example, mail.contoso.com.

Single namespace with proxy sites: This model consists of multiple physical sites. Only one site contains an Internet-facing Client Access server. The other sites aren't exposed to the Internet. There is only one namespace for the sites in this model, for example, mail.contoso.com.

Single namespace and multiple sites: This model consists of multiple physical sites. Each site can have an Internet-facing Client Access server. Alternatively, there may be only a single site that contains Internet-facing Client Access servers. There is only one namespace for the sites in this model, for example, mail.contoso.com.

Regional namespaces: This model consists of multiple physical sites and multiple namespaces. For example, a site located in New York City would have the namespace mail.usa.contoso.com, a site located in Toronto would have the namespace mail.canada.contoso.com, and a site located in London would have the namespace mail.europe.contoso.com.

Multiple forests: This model consists of multiple forests that have multiple namespaces. An organization that uses this model could be made up of two partner companies, for example, Contoso and Fabrikam. Namespaces might include mail.usa.contoso.com, mail.europe.contoso.com, mail.asia.fabrikam.com, and mail.europe.fabrikam.com.

*Design Decision Point*

Because this solution deploys an active/passive site resiliency model and doesn't have any active mailbox users in the secondary site, the best option is the single namespace with multiple sites model.

Determine Client Access Server Array and Load Balancing Strategy

In Exchange 2010, the RPC Client Access service and the Exchange Address Book service were introduced on the Client Access server role to improve the mailbox user's experience when the active mailbox database copy is moved to another Mailbox server (for example, during mailbox database failures and maintenance events). The connection endpoints for mailbox access from Microsoft Outlook and other MAPI clients have been moved from the Mailbox server role to the Client Access server role. Therefore, both internal and external Outlook connections must now be load balanced across all Client Access servers in the site to achieve fault tolerance. To associate the MAPI endpoint with a group of Client Access servers rather than a specific Client Access server, you can define a Client Access server array. You can only configure one array per Active Directory site, and an array can't span more than one Active Directory site. For more information, see Understanding RPC Client Access and Understanding Load Balancing in Exchange 2010.

*Design Decision Point*

In a previous step, it was determined that Client Access servers would be deployed in two physical locations in two Active Directory sites. Therefore, you need to deploy two Client Access

server arrays. A single namespace will be load balanced across the Client Access servers in the primary active Client Access server array using redundant hardware load balancers. In a site failure, the namespace will be load balanced across the Client Access servers in the secondary Client Access server array.

Determine Hardware Load Balancing Solution

Use the following steps to determine a hardware load balancing solution.

Step 1: Identify preferred load balancing vendor

The preferred vendor for application load balancing is F5. The F5 comprehensive Application Ready infrastructure for Exchange Server allows organizations to easily provide additional performance, security, and availability to ensure maximum return on investment for Exchange deployments.

Step 2: Review available options from preferred vendor

F5 offers a suite of appliance-based networking technologies designed to optimize networks for applications such as Exchange 2010:

BIG-IP Local Traffic Manager (LTM): BIG-IP LTM is designed to monitor and manage traffic to Client Access, Hub Transport, Edge Transport, and Unified Messaging servers, while ensuring that users are always sent to the best performing resource. Whether your users are connecting via MAPI, Outlook Web Access, ActiveSync, or Outlook Anywhere, BIG-IP LTM will load balance the connections appropriately, allowing you to seamlessly scale to any size deployment. BIG-IP LTM also offers several modules that provide significant value in an Exchange environment, including:

Access Policy Manager (APM): Designed to secure access to Exchange resources, APM can authenticate users before they attach to your Exchange Client Access servers, providing strong perimeter security.

BIG-IP WebAccelerator: Targeting customers with large Outlook Web Access constituencies, WebAccelerator can drive down bandwidth usage and server utilization while accelerating content to end users.

WAN Optimization Module (WOM): Focused on network optimization for WANs, WOM has proven capable of accelerating DAG replication by more than five times between datacenters.

BIG-IP Global Traffic Manager (GTM): BIG-IP GTM can provide wide area resiliency, providing disaster recovery and load balancing for those with multiple datacenter Exchange deployments.

BIG-IP Application Security Manager (ASM): A fully featured Layer 7 firewall, ASM thwarts HTTP, XML, and SMTP based attacks. By combining a negative and positive security model, ASM provides protection against all Layer 7 attacks, both known and unknown.

For more information about these technologies, see F5 Solutions for Exchange Server.

Sizing the appropriate F5 hardware model for your Exchange 2010 deployment is an exercise best done with the guidance of your local F5 team. F5 offers production hardware-based and software-based BIG-IP platforms that range from supporting up to 200 megabits per second (Mbps) all the way up to 80 Gbps. To learn more about the specifications for each of the F5 BIG-IP LTM hardware platforms, see the BIG-IP System Hardware Datasheet.

Option 1: BIG-IP 1600 series

The BIG-IP 1600 offers all the functionality of TMOS in a cost-effective, entry-level platform for intelligent application delivery.

BIG-IP 1600 appliance-based networking technologies

Traffic throughput: 1 Gbps
Hardware Secure Sockets Layer (SSL): Included: 500 transactions per second; Maximum: 5,000 transactions per second; 1 Gbps bulk encryption
Software compression: Included: 50 Mbps; Maximum: 1 Gbps
Processor: Dual core CPU
Memory: 4 GB
Gigabit Ethernet CU ports: 4
Gigabit fiber ports (small form-factor pluggable transceiver): 2 optional LX, SX, or copper
Power supply: One 300 watt included, with a dual power option
Typical consumption: 150 watt (110 volt input)

Option 2: BIG-IP 3900 series

With a quad-core processor that enables support for multiple BIG-IP modules, the BIG-IP 3900 unifies application delivery in a 1U, cost-effective platform.

BIG-IP 3900 appliance-based networking technologies

Traffic throughput: 4 Gbps
Hardware SSL: Included: 500 transactions per second; Maximum: 15,000 transactions per second; 2.4 Gbps bulk encryption
Software compression: Included: 50 Mbps

Maximum: 3.8 Gbps
Processor: Quad core CPU
Memory: 8 GB
Gigabit Ethernet CU ports: 8
Gigabit fiber ports (small form-factor pluggable transceiver): 4 optional LX, SX, or copper
Power supply: One 300 watt included, with a dual power option
Typical consumption: 175 watt (110 volt input)

Option 3: BIG-IP 6900 series

With two dual-core processors as well as hardware SSL and compression, the BIG-IP 6900 has the performance to provide an integrated platform for application delivery. The BIG-IP 6900 can process up to 6 Gbps of throughput to handle the most demanding applications.

BIG-IP 6900 appliance-based networking technologies

Traffic throughput: 6 Gbps
Hardware SSL: Included: 500 transactions per second; Maximum: 25,000 transactions per second; 4 Gbps bulk encryption
FIPS SSL: FIPS Level 2 (option); 20,000 transactions per second
Software compression: Included: 50 Mbps; Maximum: 5 Gbps
Processor: Dual core CPUs (2 processors)
Memory: 8 GB
Gigabit Ethernet CU ports: 16
Gigabit fiber ports (small form-factor pluggable transceiver): 8 optional LX, SX, or copper
Power supply: Dual 850 watt included
Typical consumption: 300 watt (110 volt input)

Step 3: Select a hardware load balancing solution model

When it comes time to determine which application delivery controller is suitable, consider the following:

Purpose of the application delivery controllers: simple load balancing, security, acceleration

How users are connecting: IMAP, Outlook Web Access, Outlook Anywhere

Hardware benefits of an appliance-based BIG-IP vs. flexibility of a software-based BIG-IP

Desired scale and number of concurrent users

Percentage of local users vs. remote users

Average user expectations, such as number of messages per day and average message size

This information can be used to ensure that the right BIG-IP LTM platform is selected.

*Design Decision Point*

The BIG-IP 3900 is selected for this solution. The throughput capacity and connection count limits are enough to cover normal usage as well as unexpected traffic spikes for this solution's 9,000 active mailboxes with a 103 message per day profile. The quad core CPU is also capable enough to handle the processing associated with connection and persistence handling.

Determine Hardware Load Balancing Device Resiliency Strategy

Whenever deploying BIG-IP LTM, it's important that all efforts are made to implement with fault tolerance in mind. BIG-IP LTM is designed to ensure that application server outages never affect end users, and the technology helps ensure that BIG-IP LTM failures are recovered from in a controlled and seamless manner.

Customers typically deploy BIG-IP LTMs in redundant pairs. Connected by a dedicated network and serial channel, the two BIG-IP LTMs coordinate network responsibilities, ensuring that the failure of one device is automatically detected and recovered from by its peer. BIG-IP LTM excels in this area by offering unique functionality such as:

Connection mirroring: This ensures the connection table in each BIG-IP LTM is mirrored to its peer. In case of a BIG-IP LTM failure, no connections are dropped, because the BIG-IP LTM failover partner is already aware of the previously established connections and assumes responsibility for the network.

Network-based outage detection: This ensures that a network outage is treated as critically as a server outage by the BIG-IP LTM, and that proper remediation steps are taken to remedy the situation.

Software-based and hardware-based watchdog functionality: This ensures proper failover when a BIG-IP LTM isn't functioning properly.

Besides deploying BIG-IP LTMs in redundant pairs, customers often build redundancy into the architecture by building a multiple datacenter environment. BIG-IP GTM is designed to add datacenter load balancing so that wide area resiliency is also achieved. For more information about GTM, see Global Load Balancing Solutions.

Determine Hardware Load Balancing Methods

Exchange protocols and client access services have different load balancing requirements. Some Exchange protocols and client access services require client to Client Access server affinity. Others work without it, but display performance improvements from such affinity. Still other Exchange protocols don't require client to Client Access server affinity, and performance doesn't decrease without affinity. For additional information, see Load Balancing Requirements of Exchange Protocols and Understanding Load Balancing in Exchange 2010. For more information about configuring F5 BIG-IP LTMs, see Deploying F5 with Microsoft Exchange Server 2010.

Solution Overview

The previous section provided information about the design decisions that were made when considering an Exchange 2010 solution. The following section provides an overview of the solution.

Logical Solution Diagram

This solution consists of a total of 18 Exchange 2010 servers deployed in a multisite topology. Nine of the 18 servers are running both the Client Access and Hub Transport server roles. The other nine servers are running the Mailbox server role. The primary namespace is load balanced across the six Client Access and Hub Transport servers in a Client Access server array in the primary site. There are three Client Access and Hub Transport servers in a second Client Access server array located in the secondary site. All nine Mailbox servers are members of a single DAG. There are six Mailbox servers located in the primary site and three Mailbox servers in the secondary site. The site resiliency model is active/passive.

Logical solution

Physical Solution Diagram

This solution consists of nine Dell PowerEdge M610 blade servers in three PowerEdge M1000e modular blade enclosures, attached to three EqualLogic PS6500E iSCSI storage arrays via six redundant modular PowerConnect M6220 switches (two per enclosure). The hardware in this solution has been provisioned such that there are three failure domains. A failure domain represents a single point of failure and is used to ensure that database copy layouts in the DAG protect against the loss of any component in a failure domain. Each failure domain consists of one blade enclosure holding three blade servers and two modular switches connected to a single PS6500E storage array.

Physical solution

Server Hardware Summary

The following table summarizes the physical server hardware used in this solution.

Server hardware summary

Server vendor: Dell
Server model: PowerEdge M610 blade server
Processor: 2 x Intel Xeon X5550 2.67 GHz
Chipset: Intel 5520/5500/X58
Memory: 48 GB
Operating system: Microsoft Windows Server 2008 R2
Virtualization: Microsoft Hyper-V
Internal disk: 2 x 300 GB SAS 15k
Operating system disk configuration: RAID-1
RAID controller: Dell SAS 6/iR integrated blade controller
Network interface: Broadcom NetXtreme II C-NIC GigE

For more information, see PowerEdge M610 Blade Server.

Client Access and Hub Transport Server Configuration

The following table summarizes the Client Access and Hub Transport server configuration used in this solution.

Client Access and Hub Transport server configuration

Physical or virtual: Hyper-V VM
Virtual processors: 4
Memory: 8 GB
Storage: Virtual hard disk on root server operating system volume
Operating system: Microsoft Windows Server 2008 R2
Exchange version: Microsoft Exchange Server 2010 Standard Edition
Exchange patch level: Exchange 2010 Update Rollup 3

Mailbox Server Configuration

The following table summarizes the Mailbox server configuration used in this solution.

Mailbox server configuration

Physical or virtual: Hyper-V VM
Virtual processors: 4
Memory: 32 GB
Storage: Virtual hard disk on root server operating system volume
Pass-through storage: 9 x 789 GB volumes
Operating system: Microsoft Windows Server 2008 R2
Exchange version: Microsoft Exchange Server 2010 Enterprise Edition
Exchange patch level: Exchange 2010 Update Rollup 2
Third-party software: None

Database Layout

The following diagram illustrates the database layout across the primary and secondary datacenters.

Database layout

Storage Hardware Summary

The following table summarizes the storage hardware used in this solution.

Storage hardware summary

Storage vendor: Dell
Storage model: EqualLogic PS6500E

Category: iSCSI
Disks: 48 x 1 terabyte 7,200 rpm SATA
Active disks: 46
Spares: 2
RAID level: 10
Usable capacity: 20.8 terabytes

For more information, see Dell EqualLogic PS6500E iSCSI SAN.

Storage Configuration

Each of the Dell EqualLogic PS6500E storage arrays used in the solution was configured as illustrated in the following table.

Storage configuration

Storage enclosures: 3
LUNs per enclosure: 27
LUNs per server: 9
LUN size: 789 GB
RAID level: RAID-10

The following table illustrates how the available storage was designed and allocated between the three PS6500E storage arrays.

PS6500E storage array design and allocation

Array1: DB1-DB9 (C1), DB10-DB18 (C2)
Array2: DB1-DB9 (C2), DB10-DB18 (C1)
Array3: DB1-DB18 (C3)

Network Switch Hardware Summary

The following table summarizes the network switch hardware used in this solution.

Network switch hardware summary

Vendor: Dell
Model: PowerConnect M6220 Ethernet switch
Ports: 20 (16 internal, 4 external)
Port bandwidth: 10/100/1000BASE-T auto-sensing
Switch fabric capacity: 128 Gbps
Number per blade enclosure: 2

For more information, download the PowerConnect M6220 Ethernet Switch datasheet (.pdf).

Load Balancer Hardware Summary

The following table summarizes the load balancer hardware used in this solution.

Load balancer hardware summary

Vendor: F5
Model: BIG-IP 3900
Traffic throughput: 4 Gbps
Hardware SSL: Included: 500 transactions per second; Maximum: 15,000 transactions per second; 2.4 Gbps bulk encryption
Software compression: Included: 50 Mbps; Maximum: 3.8 Gbps
Processor: Quad core CPU
Memory: 8 GB
Gigabit Ethernet CU ports: 8
Gigabit fiber ports (small form-factor pluggable transceiver): 4 optional LX, SX, or copper
Power supply: One 300 watt included, with a dual power option
Typical consumption: 175 watt (110 volt input)

Solution Validation Methodology

Prior to deploying an Exchange solution in a production environment, validate that the solution was designed, sized, and configured properly. This validation must include functional testing to ensure that the system is operating as desired, as well as performance testing to ensure that the system can handle the desired user load. This section describes the approach and test methodology used to validate the server and storage design for this solution. In particular, the following tests are defined in detail:

Performance tests:
Storage performance validation (Jetstress)
Server performance validation (Loadgen)

Functional tests:
Database switchover validation
Server switchover validation
Server failover validation
Datacenter switchover validation

Storage Design Validation Methodology

The level of performance and reliability of the storage subsystem connected to the Exchange Mailbox server role has a significant impact on the overall health of the Exchange deployment. Additionally, poor storage performance will result in high transaction latency, primarily reflected in a poor client experience when accessing the Exchange system. To ensure the best possible client experience, validate storage sizing and configuration via the method described in this section.

Tool Set

For validating Exchange storage sizing and configuration, we recommend the Microsoft Exchange Server Jetstress tool. The Jetstress tool is designed to simulate an Exchange I/O workload at the database level by interacting directly with the ESE, which is also known as Jet. The ESE is the database technology that Exchange uses to store messaging data on the Mailbox server role. Jetstress can be configured to test the maximum I/O throughput available to your storage subsystem within the required performance constraints of Exchange. Or, Jetstress can accept a target profile of user count and per-user IOPS, and validate that the storage subsystem is capable of maintaining an acceptable level of performance with the target profile. Test duration is adjustable and can be run for a minimal period of time to validate adequate performance, or for an extended period of time to additionally validate storage subsystem reliability.

The Jetstress tool can be obtained from the Microsoft Download Center at the following locations:

Microsoft Exchange Server Jetstress 2010 (64 bit)
Microsoft Exchange Server Jetstress 2010 (32 bit)

The documentation included with the Jetstress installer describes how to configure and execute a Jetstress validation test on your server hardware.

Approach to Storage Validation

There are two main types of storage configurations:

Direct-attached storage (DAS) or internal disk scenarios
Storage area network (SAN) scenarios

With DAS or internal disk scenarios, there's only one server accessing the disk subsystem, so the performance capabilities of the storage subsystem can be validated in isolation. In SAN scenarios, the storage utilized by the solution may be shared by many servers, and the infrastructure that connects the servers to the storage may also be a shared dependency. This requires additional testing, as the impact of other servers on the shared infrastructure must be adequately simulated to validate performance and functionality.

Test Cases for Storage Validation

The following storage validation test cases were executed against the solution and should be considered as a starting point for storage validation. Specific deployments may have other

validation requirements that can be met with additional testing, so this list isn't intended to be exhaustive:

Validation of worst case database switchover scenario. In this test case, the level of I/O expected in the worst case switchover scenario (the largest possible number of active copies on the fewest servers) is generated against the storage subsystem. Depending on whether the storage subsystem is DAS or SAN, this test may be required to run on multiple hosts to ensure that the end-to-end solution load on the storage subsystem can be sustained.

Validation of storage performance under a storage failure and recovery scenario (for example, failed disk replacement and rebuild). In this test case, the performance of the storage subsystem during a failure and rebuild scenario is evaluated to ensure that the necessary level of performance is maintained for optimal Exchange client experience. The same caveat applies for a DAS vs. SAN deployment: if multiple hosts are dependent on a shared storage subsystem, the test must include load from these hosts to simulate the entire effect of the failure and rebuild.

Analyzing the Results

The Jetstress tool produces a report file after each test is completed. To help you analyze the report, use the guidelines in Jetstress 2010 Test Summary Reports. Specifically, use the guidelines in the following table when you examine data in the Test Results table of the report.

Jetstress results analysis

I/O Database Reads Average Latency (msec): The average value should be less than 20 milliseconds (msec) (0.020 seconds), and the maximum values should be less than 50 msec.

I/O Log Writes Average Latency (msec): Log disk writes are sequential, so average write latencies should be less than 10 msec, with a maximum of no more than 50 msec.

%Processor Time: The average should be less than 80%, and the maximum should be less than 90%.

Transition Pages Repurposed/sec (Windows Server 2003, Windows Server 2008, Windows Server 2008 R2): The average should be less than 100.

The report file shows various categories of I/O performed by the Exchange system:

Transactional I/O Performance: This table reports I/O that represents user activity against the database (for example, Outlook generated I/O). This data is generated by subtracting background maintenance I/O and log replication I/O from the total I/O measured during the test. This data provides the actual database IOPS generated, along with the I/O latency measurements required to determine whether a Jetstress performance test passed or failed.

Background Database Maintenance I/O Performance: This table reports the I/O generated due to ongoing ESE database background maintenance.

Log Replication I/O Performance: This table reports the I/O generated from simulated log replication.

Total I/O Performance: This table reports the total I/O generated during the Jetstress test.

Server Design Validation

After the performance and reliability of the storage subsystem is validated, ensure that all of the components in the messaging system are validated together for functionality, performance, and scalability. This means moving up in the stack to validate client software interaction with the Exchange product, as well as any server-side products that interact with Exchange. To ensure that the end-to-end client experience is acceptable and that the entire solution can sustain the desired user load, the method described in this section can be applied for server design validation.

Tool Set

For validation of end-to-end solution performance and scalability, we recommend the Microsoft Exchange Server Load Generator tool (Loadgen). Loadgen is designed to produce a simulated client workload against an Exchange deployment. This workload can be used to evaluate the performance of the Exchange system, and can also be used to evaluate the effect of various configuration changes on the overall solution while the system is under load. Loadgen is capable of simulating Microsoft Office Outlook 2007 (online and cached), Office Outlook 2003 (online and cached), POP3, IMAP4, SMTP, ActiveSync, and Outlook Web App (known in Exchange 2007 and earlier versions as Outlook Web Access) client activity. It can be used to generate a single protocol workload, or these client protocols can be combined to generate a multiple protocol workload.

You can get the Loadgen tool from the Microsoft Download Center at the following locations:

Exchange Load Generator 2010 (64 bit)
Exchange Load Generator 2010 (32 bit)

The documentation included with the Loadgen installer describes how to configure and execute a Loadgen test against an Exchange deployment.

Approach to Server Validation

When validating your server design, test the worst case scenario under the anticipated peak workload. Based on a number of data sets from Microsoft IT and other customers, peak load is generally equal to 2x the average workload throughout the remainder of the work day. This is referred to as the peak-to-average workload ratio.

Peak load

In this Performance Monitor snapshot, which displays various counters that represent the amount of Exchange work being performed over time on a production Mailbox server, the average value for RPC operations per second (the highlighted line) is about 2,386 when averaged across the entire day. The average for this counter during the peak period from 10:00 through 11:00 is about 4,971, giving a peak-to-average ratio of approximately 2.

To ensure that the Exchange solution is capable of sustaining the workload generated during the peak average, modify the Loadgen settings to generate a constant amount of load at the peak average level, rather than spreading the workload out over the entire simulated work day. Loadgen task-based simulation modules (like the Outlook simulation modules) use a task profile that defines the number of times each task will occur for an average user within a simulated day. The total number of tasks that need to run during a simulated day is calculated as the number of users multiplied by the sum of task counts in the configured task profile. Loadgen then determines the rate at which it should run tasks for the configured set of users by dividing the total number of tasks to run in the simulated day by the simulated day length. For example, if Loadgen needs to run 1,000,000 tasks in a simulated day, and a simulated day is equal to 8 hours (28,800 seconds), Loadgen must run 1,000,000 ÷ 28,800 = 34.72 tasks per second to meet the required workload definition. To increase the amount of load to the desired peak average, divide the default simulated day length (8 hours) by the peak-to-average ratio (2) and use this as the new simulated day length. Using the task rate example again, 1,000,000 ÷ 14,400 = 69.44 tasks per second. This reduces the simulated day length by half, which results in doubling the actual workload run against the server and achieves the goal of a peak average workload. You don't adjust the run length duration of the test in the Loadgen configuration. The run length duration specifies the duration of the test and doesn't affect the rate at which tasks will be run against the Exchange server.
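The simulated-day compression described above is simple arithmetic, but it's the step most often misconfigured, so here it is as a Python sketch; the function name and defaults are illustrative, and Loadgen performs this derivation internally.

def loadgen_task_rate(total_tasks, day_seconds=8 * 3600, peak_to_average=1.0):
    """Task execution rate for a simulated day, optionally compressed
    by the peak-to-average ratio to drive peak-average load."""
    return total_tasks / (day_seconds / peak_to_average)

print(round(loadgen_task_rate(1_000_000), 2))                     # 34.72 tasks/sec over 8 hours
print(round(loadgen_task_rate(1_000_000, peak_to_average=2), 2))  # 69.44 tasks/sec over 4 hours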


More information

White Paper. A System for Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft

White Paper. A System for  Archiving, Recovery, and Storage Optimization. Mimosa NearPoint for Microsoft White Paper Mimosa Systems, Inc. November 2007 A System for Email Archiving, Recovery, and Storage Optimization Mimosa NearPoint for Microsoft Exchange Server and EqualLogic PS Series Storage Arrays CONTENTS

More information

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage

Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Dell Reference Configuration for Large Oracle Database Deployments on Dell EqualLogic Storage Database Solutions Engineering By Raghunatha M, Ravi Ramappa Dell Product Group October 2009 Executive Summary

More information

Microsoft E xchange 2010 on VMware

Microsoft E xchange 2010 on VMware : Microsoft E xchange 2010 on VMware Availability and R ecovery Options This product is protected by U.S. and international copyright and intellectual property laws. This product is covered by one or more

More information

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family Dell MD Family Modular storage The Dell MD storage family Dell MD Family Simplifying IT The Dell MD Family simplifies IT by optimizing your data storage architecture and ensuring the availability of your

More information

Dell EMC Microsoft Exchange 2016 Solution

Dell EMC Microsoft Exchange 2016 Solution Dell EMC Microsoft Exchange 2016 Solution Design Guide for implementing Microsoft Exchange Server 2016 on Dell EMC R740xd servers and storage Dell Engineering October 2017 Design Guide Revisions Date October

More information

EqualLogic PS Series Storage

EqualLogic PS Series Storage EqualLogic PS Series Storage Recognized virtualization leadership EqualLogic wins best storage system award for the second year in a row this design brought improved performance that matched particularly

More information

EMC Backup and Recovery for Microsoft Exchange 2007

EMC Backup and Recovery for Microsoft Exchange 2007 EMC Backup and Recovery for Microsoft Exchange 2007 Enabled by EMC CLARiiON CX4-120, Replication Manager, and Hyper-V on Windows Server 2008 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

EMC Integrated Infrastructure for VMware. Business Continuity

EMC Integrated Infrastructure for VMware. Business Continuity EMC Integrated Infrastructure for VMware Business Continuity Enabled by EMC Celerra and VMware vcenter Site Recovery Manager Reference Architecture Copyright 2009 EMC Corporation. All rights reserved.

More information

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware

Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Virtualizing Microsoft Exchange Server 2010 with NetApp and VMware Deploying Microsoft Exchange Server 2010 in a virtualized environment that leverages VMware virtualization and NetApp unified storage

More information

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE EXCHANGE SERVER 2016 Design Guide ABSTRACT This Design Guide describes the design principles and solution components for Dell EMC Ready Bundle for Microsoft

More information

Dell Storage Center 6.6 SCv2000 SAS Front-end Arrays and 2,500 Mailbox Exchange 2013 Resiliency Storage Solution

Dell Storage Center 6.6 SCv2000 SAS Front-end Arrays and 2,500 Mailbox Exchange 2013 Resiliency Storage Solution Dell Storage Center 6.6 SCv2000 SAS Front-end Arrays and 2,500 Mailbox Exchange 2013 Resiliency Storage Solution Microsoft ESRP 4.0 Dell Storage Engineering October 2015 A Dell Technical White Paper Revisions

More information

Microsoft Exchange Server 2010 Implementation on Dell Active System 800v

Microsoft Exchange Server 2010 Implementation on Dell Active System 800v Microsoft Exchange Server 2010 Implementation on Dell Active System 800v A Design and Implementation Guide for Exchange Server 2010 on Active System 800 with VMware vsphere Dell Global Solutions Engineering

More information

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510

A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 A Comparative Study of Microsoft Exchange 2010 on Dell PowerEdge R720xd with Exchange 2007 on Dell PowerEdge R510 Incentives for migrating to Exchange 2010 on Dell PowerEdge R720xd Global Solutions Engineering

More information

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S

EMC CLARiiON CX3-40. Reference Architecture. Enterprise Solutions for Microsoft Exchange Enabled by MirrorView/S Enterprise Solutions for Microsoft Exchange 2007 EMC CLARiiON CX3-40 Metropolitan Exchange Recovery (MER) for Exchange in a VMware Environment Enabled by MirrorView/S Reference Architecture EMC Global

More information

Many organizations rely on Microsoft Exchange for

Many organizations rely on Microsoft Exchange for Feature section: Microsoft Exchange server 007 A Blueprint for Implementing Microsoft Exchange Server 007 Storage Infrastructures By Derrick Baxter Suresh Jasrasaria Designing a consolidated storage infrastructure

More information

Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage

Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage Storage Consolidation with the Dell PowerVault MD3000i iscsi Storage By Dave Jaffe Dell Enterprise Technology Center and Kendra Matthews Dell Storage Marketing Group Dell Enterprise Technology Center delltechcenter.com

More information

EMC Virtual Infrastructure for Microsoft Exchange 2007

EMC Virtual Infrastructure for Microsoft Exchange 2007 EMC Virtual Infrastructure for Microsoft Exchange 2007 Enabled by EMC Replication Manager, EMC CLARiiON AX4-5, and iscsi Reference Architecture EMC Global Solutions 42 South Street Hopkinton, MA 01748-9103

More information

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture EMC Solutions for Microsoft Exchange 2007 EMC Celerra NS20 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008 EMC Corporation. All rights

More information

Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays

Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays TECHNICAL REPORT: Performance Study Benefits of Automatic Data Tiering in OLTP Database Environments with Dell EqualLogic Hybrid Arrays ABSTRACT The Dell EqualLogic hybrid arrays PS6010XVS and PS6000XVS

More information

Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays

Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays Microsoft SQL Server 2012 Fast Track Reference Configuration Using PowerEdge R720 and EqualLogic PS6110XV Arrays This whitepaper describes Dell Microsoft SQL Server Fast Track reference architecture configurations

More information

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.

EMC Backup and Recovery for Microsoft Exchange 2007 SP1. Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3. EMC Backup and Recovery for Microsoft Exchange 2007 SP1 Enabled by EMC CLARiiON CX4-120, Replication Manager, and VMware ESX Server 3.5 using iscsi Reference Architecture Copyright 2009 EMC Corporation.

More information

HCI: Hyper-Converged Infrastructure

HCI: Hyper-Converged Infrastructure Key Benefits: Innovative IT solution for high performance, simplicity and low cost Complete solution for IT workloads: compute, storage and networking in a single appliance High performance enabled by

More information

BUSINESS CONTINUITY: THE PROFIT SCENARIO

BUSINESS CONTINUITY: THE PROFIT SCENARIO WHITE PAPER BUSINESS CONTINUITY: THE PROFIT SCENARIO THE BENEFITS OF A COMPREHENSIVE BUSINESS CONTINUITY STRATEGY FOR INCREASED OPPORTUNITY Organizational data is the DNA of a business it makes your operation

More information

Overview of HP tiered solutions program for Microsoft Exchange Server 2010

Overview of HP tiered solutions program for Microsoft Exchange Server 2010 Overview of HP tiered solutions program for Microsoft Exchange Server 2010 Table of contents Executive summary... 2 Introduction... 3 Exchange 2010 changes that impact tiered solutions... 3 Hardware platforms...

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini June 2016 2016 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini February 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

DATA PROTECTION IN A ROBO ENVIRONMENT

DATA PROTECTION IN A ROBO ENVIRONMENT Reference Architecture DATA PROTECTION IN A ROBO ENVIRONMENT EMC VNX Series EMC VNXe Series EMC Solutions Group April 2012 Copyright 2012 EMC Corporation. All Rights Reserved. EMC believes the information

More information

Surveillance Dell EMC Storage with Milestone XProtect Corporate

Surveillance Dell EMC Storage with Milestone XProtect Corporate Surveillance Dell EMC Storage with Milestone XProtect Corporate Sizing Guide H14502 REV 1.5 Copyright 2014-2018 Dell Inc. or its subsidiaries. All rights reserved. Published January 2018 Dell believes

More information

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage

Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage Microsoft SQL Server in a VMware Environment on Dell PowerEdge R810 Servers and Dell EqualLogic Storage A Dell Technical White Paper Dell Database Engineering Solutions Anthony Fernandez April 2010 THIS

More information

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump

ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS. By George Crump ECONOMICAL, STORAGE PURPOSE-BUILT FOR THE EMERGING DATA CENTERS By George Crump Economical, Storage Purpose-Built for the Emerging Data Centers Most small, growing businesses start as a collection of laptops

More information

Top Reasons to Upgrade to Microsoft SharePoint 2010

Top Reasons to Upgrade to Microsoft SharePoint 2010 Top Reasons to Upgrade to Microsoft SharePoint 2010 Contents Abstract. 1 SharePoint s Role in Productive Business Environments. 2 Microsoft SharePoint 2010 Upgrade Advantages. 2 The Added Advantages of

More information

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution

EMC Virtual Infrastructure for Microsoft Applications Data Center Solution EMC Virtual Infrastructure for Microsoft Applications Data Center Solution Enabled by EMC Symmetrix V-Max and Reference Architecture EMC Global Solutions Copyright and Trademark Information Copyright 2009

More information

HP recommended configuration for Microsoft Exchange Server 2010 and HP ProLiant SL4540 Gen8 Servers (3 node)

HP recommended configuration for Microsoft Exchange Server 2010 and HP ProLiant SL4540 Gen8 Servers (3 node) Technical white paper HP recommended configuration for Microsoft Exchange Server 2010 and HP ProLiant SL4540 Gen8 Servers (3 node) Building blocks for 1500 mailboxes with 3-copy high availability design

More information

Storage Management for Exchange. August 2008

Storage Management for Exchange. August 2008 Storage Management for Exchange August 2008 LeftHand Networks, Inc. Leader in iscsi SANs Pioneer in the IP SAN market, founded in 1999 Highly available, simple to manage, and grow as needed architecture

More information

Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System

Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System Microsoft Exchange Server 2010 workload optimization on the new IBM PureFlex System Best practices Roland Mueller IBM Systems and Technology Group ISV Enablement April 2012 Copyright IBM Corporation, 2012

More information

Database Solutions Engineering. Best Practices for running Microsoft SQL Server and Microsoft Hyper-V on Dell PowerEdge Servers and Storage

Database Solutions Engineering. Best Practices for running Microsoft SQL Server and Microsoft Hyper-V on Dell PowerEdge Servers and Storage Best Practices for running Microsoft SQL Server and Microsoft Hyper-V on Dell PowerEdge Servers and Storage A Dell Technical White Paper Database Solutions Engineering By Anthony Fernandez Dell Product

More information

IBM System Storage DS5020 Express

IBM System Storage DS5020 Express IBM DS5020 Express Manage growth, complexity, and risk with scalable, high-performance storage Highlights Mixed host interfaces support (FC/iSCSI) enables SAN tiering Balanced performance well-suited for

More information

Dell PowerEdge R730xd 2,500 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution. Tested with ESRP Storage Version 4.0 Tested Date: June 2015

Dell PowerEdge R730xd 2,500 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution. Tested with ESRP Storage Version 4.0 Tested Date: June 2015 Dell PowerEdge R730xd 2,500 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution Tested with ESRP Storage Version 4.0 Tested Date: June 2015 Copyright 2015 Dell Inc. All rights reserved. This product

More information

StorageCraft OneXafe and Veeam 9.5

StorageCraft OneXafe and Veeam 9.5 TECHNICAL DEPLOYMENT GUIDE NOV 2018 StorageCraft OneXafe and Veeam 9.5 Expert Deployment Guide Overview StorageCraft, with its scale-out storage solution OneXafe, compliments Veeam to create a differentiated

More information

Enterprise power with everyday simplicity

Enterprise power with everyday simplicity Enterprise power with everyday simplicity QUALIT Y AWARDS STO R A G E M A G A Z I N E EqualLogic Storage The Dell difference Ease of use Integrated tools for centralized monitoring and management Scale-out

More information

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays

Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays Dell EqualLogic Best Practices Series Best Practices for Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell EqualLogic Storage Arrays A Dell Technical Whitepaper Jerry Daugherty Storage Infrastructure

More information

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure

Nutanix Tech Note. Virtualizing Microsoft Applications on Web-Scale Infrastructure Nutanix Tech Note Virtualizing Microsoft Applications on Web-Scale Infrastructure The increase in virtualization of critical applications has brought significant attention to compute and storage infrastructure.

More information

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager

EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager EMC Virtual Infrastructure for Microsoft Exchange 2010 Enabled by EMC Symmetrix VMAX, VMware vsphere 4, and Replication Manager Reference Architecture Copyright 2010 EMC Corporation. All rights reserved.

More information

Enterprise power with everyday simplicity

Enterprise power with everyday simplicity Enterprise power with everyday simplicity QUALIT Y AWARDS STO R A G E M A G A Z I N E EqualLogic Storage The Dell difference Ease of use Integrated tools for centralized monitoring and management Scale-out

More information

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018

White Paper. EonStor GS Family Best Practices Guide. Version: 1.1 Updated: Apr., 2018 EonStor GS Family Best Practices Guide White Paper Version: 1.1 Updated: Apr., 2018 Abstract: This guide provides recommendations of best practices for installation and configuration to meet customer performance

More information

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family

Dell PowerVault MD Family. Modular storage. The Dell PowerVault MD storage family Dell PowerVault MD Family Modular storage The Dell PowerVault MD storage family Dell PowerVault MD Family The affordable choice The Dell PowerVault MD family is an affordable choice for reliable storage.

More information

Dell PowerEdge R720xd 6,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution. Tested with ESRP Storage Version 4.0 Tested Date: Feb 2014

Dell PowerEdge R720xd 6,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution. Tested with ESRP Storage Version 4.0 Tested Date: Feb 2014 Dell PowerEdge R720xd 6,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution Tested with ESRP Storage Version 4.0 Tested Date: Feb 2014 2014 Dell Inc. All Rights Reserved. Dell, the Dell logo,

More information

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public

Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments. Solution Overview Cisco Public Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments Veeam Availability Solution for Cisco UCS: Designed for Virtualized Environments 1 2017 2017 Cisco Cisco and/or and/or its

More information

Storage management is at the center

Storage management is at the center Special section: equallogic iscsi Peer storage Inside the EqualLogic PS Series iscsi Storage Arrays By John Joseph Eric Schott Kevin Wittmer Built on a patented peer storage architecture, the EqualLogic

More information

EMC Business Continuity for Microsoft Exchange 2010

EMC Business Continuity for Microsoft Exchange 2010 EMC Business Continuity for Microsoft Exchange 2010 Enabled by EMC Unified Storage and Microsoft Database Availability Groups Proven Solution Guide Copyright 2011 EMC Corporation. All rights reserved.

More information

PeerStorage Arrays Unequalled Storage Solutions

PeerStorage Arrays Unequalled Storage Solutions Simplifying Networked Storage PeerStorage Arrays Unequalled Storage Solutions John Joseph, VP of Marketing EqualLogic,, 9 Townsend West, Nashua NH 03063 Phone: +1-603 603-249-7772, FAX: +1-603 603-579-6910

More information

<Insert Picture Here> Introducing Oracle WebLogic Server on Oracle Database Appliance

<Insert Picture Here> Introducing Oracle WebLogic Server on Oracle Database Appliance Introducing Oracle WebLogic Server on Oracle Database Appliance Oracle Database Appliance with WebLogic Server Simple. Reliable. Affordable. 2 Virtualization on Oracle Database Appliance

More information

vstart 50 VMware vsphere Solution Specification

vstart 50 VMware vsphere Solution Specification vstart 50 VMware vsphere Solution Specification Release 1.3 for 12 th Generation Servers Dell Virtualization Solutions Engineering Revision: A00 March 2012 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

Thinking Different: Simple, Efficient, Affordable, Unified Storage

Thinking Different: Simple, Efficient, Affordable, Unified Storage Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 1 IT Challenges: Tougher than Ever Four central themes facing every decision maker today Overcome flat

More information

Data center requirements

Data center requirements Prerequisites, page 1 Data center workflow, page 2 Determine data center requirements, page 2 Gather data for initial data center planning, page 2 Determine the data center deployment model, page 3 Determine

More information

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results

Dell Fluid Data solutions. Powerful self-optimized enterprise storage. Dell Compellent Storage Center: Designed for business results Dell Fluid Data solutions Powerful self-optimized enterprise storage Dell Compellent Storage Center: Designed for business results The Dell difference: Efficiency designed to drive down your total cost

More information

Microsoft SQL Server 2012 Fast Track Reference Architecture Using PowerEdge R720 and Compellent SC8000

Microsoft SQL Server 2012 Fast Track Reference Architecture Using PowerEdge R720 and Compellent SC8000 Microsoft SQL Server 2012 Fast Track Reference Architecture Using PowerEdge R720 and Compellent SC8000 This whitepaper describes the Dell Microsoft SQL Server Fast Track reference architecture configuration

More information

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES

SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES Jan - Mar 2009 SMART SERVER AND STORAGE SOLUTIONS FOR GROWING BUSINESSES For more details visit: http://www-07preview.ibm.com/smb/in/expressadvantage/xoffers/index.html IBM Servers & Storage Configured

More information

Kunal Mahajan Microsoft Corporation

Kunal Mahajan Microsoft Corporation Kunal Mahajan Microsoft Corporation 65+ Million Customer hosted Mailboxes 30+ Million Partner hosted Mailboxes 1,800 Partners Strategic Business Challenges Our Sales teams need to connect with the right

More information

50 TB. Traditional Storage + Data Protection Architecture. StorSimple Cloud-integrated Storage. Traditional CapEx: $375K Support: $75K per Year

50 TB. Traditional Storage + Data Protection Architecture. StorSimple Cloud-integrated Storage. Traditional CapEx: $375K Support: $75K per Year Compelling Economics: Traditional Storage vs. StorSimple Traditional Storage + Data Protection Architecture StorSimple Cloud-integrated Storage Servers Servers Primary Volume Disk Array ($100K; Double

More information

Assessing performance in HP LeftHand SANs

Assessing performance in HP LeftHand SANs Assessing performance in HP LeftHand SANs HP LeftHand Starter, Virtualization, and Multi-Site SANs deliver reliable, scalable, and predictable performance White paper Introduction... 2 The advantages of

More information

SGI Origin 400. The Integrated Workgroup Blade System Optimized for SME Workflows

SGI Origin 400. The Integrated Workgroup Blade System Optimized for SME Workflows W H I T E P A P E R SGI Origin 400 The Integrated Workgroup Blade System Optimized for SME Workflows Executive Summary SGI Origin 400 is a highly integrated business-in-a-box blade system with seamless

More information

Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives

Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives Dell EMC SCv3020 7,000 Mailbox Exchange 2016 Resiliency Storage Solution using 7.2K drives Microsoft ESRP 4.0 Abstract This document describes the Dell EMC SCv3020 storage solution for Microsoft Exchange

More information

Storage s Pivotal Role in Microsoft Exchange Environments: The Important Benefits of SANs

Storage s Pivotal Role in Microsoft Exchange Environments: The Important Benefits of SANs Solution Profile Storage s Pivotal Role in Microsoft Exchange Environments: The Important Benefits of SANs Hitachi Data Systems Making the Optimal Storage Choice for Performance, Resiliency in Microsoft

More information

Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630

Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630 Reference Architecture - Microsoft SharePoint Server 2013 on Dell PowerEdge R630 A Dell reference architecture for 5000 Users Dell Global Solutions Engineering June 2015 A Dell Reference Architecture THIS

More information

StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide

StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide TECHNICAL DEPLOYMENT GUIDE StorageCraft OneBlox and Veeam 9.5 Expert Deployment Guide Overview StorageCraft, with its scale-out storage solution OneBlox, compliments Veeam to create a differentiated diskbased

More information

LEVERAGING A PERSISTENT HARDWARE ARCHITECTURE

LEVERAGING A PERSISTENT HARDWARE ARCHITECTURE WHITE PAPER I JUNE 2010 LEVERAGING A PERSISTENT HARDWARE ARCHITECTURE How an Open, Modular Storage Platform Gives Enterprises the Agility to Scale On Demand and Adapt to Constant Change. LEVERAGING A PERSISTENT

More information

Solution Brief. IBM eserver BladeCenter & VERITAS Solutions for Microsoft Exchange

Solution Brief. IBM eserver BladeCenter & VERITAS Solutions for Microsoft Exchange Solution Brief IBM e BladeCenter & VERITAS Solutions for Microsoft IBM e BladeCenter and VERITAS: Working Together to Deliver High Availability For Microsoft August 2003 1 Table of Contents Executive Summary...3

More information

Protect enterprise data, achieve long-term data retention

Protect enterprise data, achieve long-term data retention Technical white paper Protect enterprise data, achieve long-term data retention HP StoreOnce Catalyst and Symantec NetBackup OpenStorage Table of contents Introduction 2 Technology overview 3 HP StoreOnce

More information

Free up rack space by replacing old servers and storage

Free up rack space by replacing old servers and storage A Principled Technologies report: Hands-on testing. Real-world results. Free up rack space by replacing old servers and storage A 2U Dell PowerEdge FX2s and all-flash VMware vsan solution powered by Intel

More information

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V

EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V IMPLEMENTATION GUIDE EMC VSPEX FOR VIRTUALIZED MICROSOFT EXCHANGE 2013 WITH HYPER-V EMC VSPEX Abstract This describes, at a high level, the steps required to deploy a Microsoft Exchange 2013 organization

More information

Dell PowerVault MD Family. Modular Storage. The Dell PowerVault MD Storage Family

Dell PowerVault MD Family. Modular Storage. The Dell PowerVault MD Storage Family Modular Storage The Dell PowerVault MD Storage Family The affordable choice The Dell PowerVault MD family is an affordable choice for reliable storage. The new MD3 models improve connectivity and performance

More information

Nimble Storage Adaptive Flash

Nimble Storage Adaptive Flash Nimble Storage Adaptive Flash Read more Nimble solutions Contact Us 800-544-8877 solutions@microage.com MicroAge.com TECHNOLOGY OVERVIEW Nimble Storage Adaptive Flash Nimble Storage s Adaptive Flash platform

More information

Controlling Costs and Driving Agility in the Datacenter

Controlling Costs and Driving Agility in the Datacenter Controlling Costs and Driving Agility in the Datacenter Optimizing Server Infrastructure with Microsoft System Center Microsoft Corporation Published: November 2007 Executive Summary To help control costs,

More information

Mostafa Magdy Senior Technology Consultant Saudi Arabia. Copyright 2011 EMC Corporation. All rights reserved.

Mostafa Magdy Senior Technology Consultant Saudi Arabia. Copyright 2011 EMC Corporation. All rights reserved. Mostafa Magdy Senior Technology Consultant Saudi Arabia 1 Thinking Different: Simple, Efficient, Affordable, Unified Storage EMC VNX Family Easy yet Powerful 2 IT Challenges: Tougher than Ever Four central

More information

Hitachi Adaptable Modular Storage and Workgroup Modular Storage

Hitachi Adaptable Modular Storage and Workgroup Modular Storage O V E R V I E W Hitachi Adaptable Modular Storage and Workgroup Modular Storage Modular Hitachi Storage Delivers Enterprise-level Benefits Hitachi Data Systems Hitachi Adaptable Modular Storage and Workgroup

More information

Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage

Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage O V E R V I E W Hitachi Adaptable Modular Storage and Hitachi Workgroup Modular Storage Modular Hitachi Storage Delivers Enterprise-level Benefits Hitachi Adaptable Modular Storage and Hitachi Workgroup

More information

Building a Dynamic and Flexible Exchange Architecture. B S Nagarajan Senior Technology Consultant 6 th November, 2008

Building a Dynamic and Flexible Exchange Architecture. B S Nagarajan Senior Technology Consultant 6 th November, 2008 Building a Dynamic and Flexible Exchange Architecture B S Nagarajan Senior Technology Consultant 6 th November, 2008 Agenda What is new in Exchange 2007? Why Virtualize Exchange? Sizing guidelines Eat

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

Dell PowerEdge R720xd 12,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution

Dell PowerEdge R720xd 12,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution Dell PowerEdge R720xd 12,000 Mailbox Resiliency Microsoft Exchange 2013 Storage Solution Tested with ESRP Storage Version 4.0 Tested Date: 03/25/2014 1 2014 Dell Inc. All Rights Reserved. Dell, the Dell

More information

Системы хранения IBM. Новые возможности

Системы хранения IBM. Новые возможности Системы хранения IBM Новые возможности Introducing: A New Member of the Storwize Family Easy to use, affordable and efficient storage for Small and Medium Businesses New standard for midrange storage IBM

More information

Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits. White Paper

Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits. White Paper Storageflex HA3969 High-Density Storage: Key Design Features and Hybrid Connectivity Benefits White Paper Abstract This white paper introduces the key design features and hybrid FC/iSCSI connectivity benefits

More information

Four-Socket Server Consolidation Using SQL Server 2008

Four-Socket Server Consolidation Using SQL Server 2008 Four-Socket Server Consolidation Using SQL Server 28 A Dell Technical White Paper Authors Raghunatha M Leena Basanthi K Executive Summary Businesses of all sizes often face challenges with legacy hardware

More information

Surveillance Dell EMC Storage with Verint Nextiva

Surveillance Dell EMC Storage with Verint Nextiva Surveillance Dell EMC Storage with Verint Nextiva Sizing Guide H14897 REV 1.3 Copyright 2016-2017 Dell Inc. or its subsidiaries. All rights reserved. Published September 2017 Dell believes the information

More information

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public

Data Protection for Cisco HyperFlex with Veeam Availability Suite. Solution Overview Cisco Public Data Protection for Cisco HyperFlex with Veeam Availability Suite 1 2017 2017 Cisco Cisco and/or and/or its affiliates. its affiliates. All rights All rights reserved. reserved. Highlights Is Cisco compatible

More information

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper

SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Dell EqualLogic Best Practices Series SAN Design Best Practices for the Dell PowerEdge M1000e Blade Enclosure and EqualLogic PS Series Storage (1GbE) A Dell Technical Whitepaper Storage Infrastructure

More information

How Cisco IT Deployed Enterprise Messaging on Cisco UCS

How Cisco IT Deployed Enterprise Messaging on Cisco UCS Cisco IT Case Study July 2012 Enterprise Messaging on Cisco UCS How Cisco IT Deployed Enterprise Messaging on Cisco UCS Messaging platform upgrade and new servers reduce costs and improve management, availability,

More information

High Availability Without the Cluster (or the SAN) Josh Sekel IT Manager, Faculty of Business Brock University

High Availability Without the Cluster (or the SAN) Josh Sekel IT Manager, Faculty of Business Brock University High Availability Without the Cluster (or the SAN) Josh Sekel IT Manager, Faculty of Business Brock University File Services: Embarked on quest; after paying for too many data recoveries, to make saving

More information

Vendor must indicate at what level its proposed solution will meet the College s requirements as delineated in the referenced sections of the RFP:

Vendor must indicate at what level its proposed solution will meet the College s requirements as delineated in the referenced sections of the RFP: Vendor must indicate at what level its proposed solution will the College s requirements as delineated in the referenced sections of the RFP: 2.3 Solution Vision Requirement 2.3 Solution Vision CCAC will

More information