A Comparative Analysis of Exchange 2007 SP1 Replication using Cluster Continuous Replication vs. EMC SRDF/CE (Cluster Enabler)


A Detailed Review

Abstract

This white paper presents a functionality comparison between two Exchange 2007 replication technologies: Microsoft's Cluster Continuous Replication (CCR) and EMC's SRDF/CE (Cluster Enabler).

January 2009

Copyright 2008, 2009 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part Number H

SRDF/CE (Cluster Enabler)

Table of Contents

- Executive summary
- Introduction
- Microsoft Exchange CCR
- EMC Cluster Enabler (CE)
- Audience
- Terminology
- Configuration
- Exchange 2007 SP1 server configuration
- CCR requirements and considerations
- Mailbox server configuration
- Storage/SAN configuration
- SRDF/CE requirements and considerations
- Network configuration
- Storage subsystem validation with JetStress
- Build and maintenance of the CCR environment
- Build and maintenance of the CE environment
- Performance validation of the Exchange environment
- Full synchronization of cluster (active to standby)
- Storage metrics for CCR and CE environments
- Test scenario: full user load with no replication
- Test scenario: full user load with replication
- Test scenario: full user load, concurrent replication (CCR Copy or CE Full Establish)
- Important notes on database and log resync on the CE cluster
- Test scenario: full user load/failover to standby node
- Test scenario: full user load with WAN link failure
- Conclusion
- References
- Appendix A: Additional screens to validate test results

Executive summary

This white paper summarizes key test results and recommendations based on one environment's implementation of Cluster Continuous Replication (CCR) and a second environment's implementation of EMC SRDF/CE. Both environments were exercised with Microsoft testing tools so that the two replication technologies could be compared directly. Noted highlights include:

- Automated failover solution: SRDF/CE combines Microsoft failover clusters with SRDF to automate the failover.
- Multiple application support: Unlike CCR, which is specific to Exchange, SRDF/CE can be used with any application that is clustered using Microsoft failover clustering, including Exchange, SQL Server, SharePoint, and other non-Microsoft applications.
- Consistent replication: SRDF/CE enables consistent replication that virtually eliminates the need to perform full resynchronizations of the environment.
- Superior failover/failback performance: SRDF/CE provides superior failover/failback performance.
- Increased flexibility: SRDF/CE provides more flexibility in design, which leads to efficiencies in hardware and software deployment.

Introduction

The purpose of this white paper is to present a comparative analysis of replication and failover between a two-node MSCS Exchange 2007 SP1 cluster utilizing CCR for host-based replication and a two-node MSCS Exchange 2007 SP1 cluster utilizing EMC SRDF/CE for SAN-based replication in a synchronous RDF environment. After reading this document, the reader will have a clear picture of how each solution works, as well as the pros and cons of each.

We will begin by detailing the environment configuration: Exchange, server, storage, and IP network. We will then detail the underlying Symmetrix DMX storage and SRDF configuration, including storage subsystem validation using Microsoft's JetStress tool.
The CCR and CE environments presented in this white paper were tested using Microsoft-provided tools and were measured both for performance against each other and against Microsoft-recommended metrics. Details of the test scenarios and test results are included in this document.

Microsoft Exchange CCR

Key facts about CCR include:

- Continuous replication is asynchronous. Logs are not copied until they are closed by the Mailbox server. This means that the passive node does not have a copy of every log file that exists on the active node.
- Active and passive designation is automatically reversed after a failover. No manual action is required to reverse the replication; the system manages the replication reversal automatically.
- Failover and scheduled outages are the same functionally and in duration. It takes just as long to fail over from node 1 to node 2 as it does to fail over from node 2 to node 1.
- Volume Shadow Copy Service (VSS) backups on the passive node are supported. This allows administrators to offload backups from the active node and extend the backup window.
- Total data required on backup media is reduced. The CCR passive copy is the first location to turn to after data loss. As a result, a double failure is required before backups are needed.

EMC Cluster Enabler (CE)

EMC's Cluster Enabler (CE) for Failover Clusters is a software extension of Failover Clusters. Cluster Enabler allows Windows 2003 and 2008 failover clusters to operate across multiple connected storage arrays in geographically distributed clusters. Cluster Enabler provides around-the-clock (24/7/365) data protection from the following types of failures:

- Storage failures
- System failures
- Site failures

Audience

This white paper is intended for Microsoft Exchange architects, administrators, storage administrators, customers, and anyone else involved in the design, implementation, and support of a Microsoft Exchange 2007 solution.

Terminology

This technical analysis includes the following terms:

- CMS (Clustered Mailbox Server): Microsoft's term for a failover-clustered Exchange mailbox server.
- Establish: SRDF function that performs an incremental update from R1 devices to R2 devices.
- Full Establish: SRDF function that performs a complete track-by-track copy from an R1 device to an R2 device, overwriting the contents of the R2 device.
- RA Group: RDF group containing all LUNs for a specific Exchange CMS. In the case of this white paper, it is 49 LUNs.
- Reseed: Replicate a full copy of a database expediently.
- R1/R2 Personality Swap: The process of changing the designation of a Dynamic RDF device from R1 (source) to R2 (target) while continuing synchronous replication from R1 to R2.
- Seeding: The process of copying a database from source to target in a CCR environment.
- SRDF/S: EMC synchronous replication technology for Symmetrix DMX that guarantees a data write is successfully written to a remote DMX before the I/O is acknowledged to the originating host.
Configuration

The primary components represented in this white paper include:

- Storage (provided by two DMX storage arrays)
- Servers
- Microsoft Exchange
- Fibre Channel (FC) network
- IP network

In the CCR cluster, each node was given its own disk resources, which were not shared between the nodes and not paired from an SRDF perspective. In the CE cluster, both nodes shared disk resources, which were paired in a synchronous SRDF configuration where the Active cluster node held the R1 disks and the Standby node held the R2 disks.

The communication between the primary and failover sites was a simulated Ethernet WAN connection, which utilized an Empirix emulation device to introduce latency and noise on the link.

Figure 1 illustrates the configuration at the Production site and Disaster Recovery (DR) site.

Figure 1. Environment configuration

Exchange 2007 SP1 server configuration

Each environment had the same Exchange mailbox configuration. A detailed breakdown of the configuration follows:

- Number of users: 6,000
- User profile: Very Heavy (0.48 IOPS)
- Mailbox size: 350 MB
- Number of Exchange Storage Groups (ESGs): 24; total of 49 LUNs (24 databases, 24 logs, 1 mount point)
- Number of mailbox databases per ESG: 1
- Number of users per mailbox database: 250
- Database LUN size: 230 GB
- Log LUN size: 30 GB
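As a rough sanity check on this profile, the aggregate user I/O and the data footprint can be computed directly from the numbers above. This is a back-of-envelope sketch only; it ignores database overhead, background maintenance, and log I/O:

```python
# Back-of-envelope sizing from the mailbox configuration above.
users = 6000
iops_per_user = 0.48        # "Very Heavy" profile
mailbox_mb = 350
databases = 24
users_per_db = users // databases            # 250 users per database

user_iops = users * iops_per_user            # aggregate user-driven IOPS
db_gb = users_per_db * mailbox_mb / 1024     # approximate size of one database
total_db_gb = databases * db_gb              # total mailbox data across 24 DBs

print(f"{user_iops:.0f} user IOPS, {db_gb:.0f} GB per database, {total_db_gb:.0f} GB total")
```

The roughly 85 GB per database figure reappears later in the seeding discussion, and the 2,880 aggregate IOPS is user-driven I/O only; the JetStress results in Table 1 validated headroom above that figure.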

Hub/CAS server configuration: three Hub/CAS servers and three Domain Controllers/Global Catalog servers

CCR cluster: two-node cluster configured with CCR replication
CE cluster: two-node cluster configured as a Single Copy Cluster

CCR requirements and considerations

The following requirements and considerations dictated the CCR environment:

- CCR can only be used with Exchange clusters. It cannot be utilized with any other applications, including other Microsoft applications such as SQL Server and SharePoint.
- Microsoft recommends a limit of two nodes per cluster (Active and Standby), using Majority Node Set (MNS) with File Share Witness on Windows 2003 and File Share Majority on Windows 2008.
- On Windows 2003, hotfix KB for Windows 2003 R2 SP1 is required to support File Share Witness.
- Both nodes must be on the same subnet in Windows 2003 and in the same AD site, which limits the distance between nodes.
- CCR requires an Enterprise Exchange license rather than the Standard license.
- CCR requires twice as much server hardware, and more powerful servers to handle the extra load of replication, due to the Active/Standby requirement. This effectively more than doubles the capital cost of a CCR environment.
- CCR clusters can only have a single database per ESG.
- Operating system and Exchange files must be installed identically, including all paths, on both nodes in the cluster.
- Microsoft recommends Gigabit Ethernet for faster reseeding.
- Depending on WAN bandwidth and latency, TCP tuning may be required, such as increasing the TCP window size and modifying RFC 1323 scaling options.
- If failover occurs from the Primary site to the Standby site, and a backup is taken while at the Standby site, a reseed will be needed if the node is failed back to the Primary site, due to the deletion of the logs during backup.

Mailbox server configuration

Dell 6850 servers were used for the Mailbox server role as well as for the Hub/CAS server role.
A detailed breakdown of the Mailbox server configuration follows:

- Hardware type: Dell 6850
- CPU: quad CPU, 3 GHz
- RAM: 32 GB
- HBA: dual QLogic QLA2340

Storage/SAN configuration

EMC's enterprise-class DMX storage arrays and Cisco MDS SAN switches were used in this environment. It must be noted that the arrays in this environment were originally configured for a larger test environment, and as such are largely underutilized in this white paper. A detailed breakdown of the storage configuration follows:

- Array type: DMX, one per site
- Available cache: 256 GB
- Drive configuration: Gb 15k rpm

- FA ports: six 4 Gb/s Fibre Channel paths per server per site
- RF ports: ten 4 Gb/s Fibre Channel ports per array
- SAN switch: MDS 9509, 4 Gb/s Fibre Channel, one switch per site
- SRDF type (on CE nodes): dynamic synchronous RDF, bi-directional configuration
- SRDF/CE version:

SRDF/CE requirements and considerations

The following requirements and considerations dictated the SRDF/CE environment:

- Version upgrades are only supported from 2.1.x to 3.0.x. As part of the upgrade procedure, the CE Configuration Wizard supports an optional checkpoint file (reglist.txt) as a way to migrate settings from a previous version.
- The supported Windows processor architectures are x86, x64 (AMD64 and Intel EM64T), and IA64 Itanium.
- Installation of CE requires a reboot, which can be executed immediately or at a later time.
- Installation on Windows 2003 requires a minimum of SP2 and .NET Framework 2.0.
- Installation should only be performed after the Exchange cluster is configured.
- Installation requires EMC Solutions Enabler version 6.5 or earlier.
- Configurations where the cluster node is zoned to both local and remote storage are not supported.
- Prior to installation, all SRDF devices must be in a synchronized or consistent state, and the SRDF link must be operational and tested via a failover from the R1 to the R2 side.
- All nodes in a site must have the same devices mapped.
- All devices in an RDF group must be the same type (either R1 or R2).

Network configuration

Requirements include:

- Ethernet switch: Cisco Catalyst 6509 with Gigabit Ethernet interfaces (one switch per site)
- Emulation device: Empirix WAN emulation device
- WAN bandwidth: 1 Gb/s between primary and failover sites
- WAN latency: 1 ms, simulating a campus environment

Storage subsystem validation with JetStress

Microsoft JetStress was used to validate the storage subsystem's capability to handle the IOPS load, simulating an I/O profile of 0.48 IOPS per user. The configuration was validated using the same methodology required for the Exchange Solution Reviewed Program (ESRP): a two-hour performance test and a 24-hour stress test. JetStress passed both the performance and stress tests. Table 1 lists the results.

Table 1. JetStress results

  Parameter                          Value
  Database disk transfers/sec        3,657
  Database disk reads/sec            1,988
  Average database read latency      ms
  Average database write latency     10 ms
  Log disk writes/sec                1,034
  Average log disk write latency     6 ms

Build and maintenance of the CCR environment

The nodes in the CCR cluster were both presented with 49 LUNs for a mount point, 24 databases, and 24 logs. In this environment, the disks were not clustered and they were not shared between the nodes in any way. Once the disks were allocated and configured on the servers, the clusters were built. With CCR, the disks are not added as resources in the cluster, which simplifies the cluster configuration. After the cluster was configured, Exchange was installed using the CCR option during installation. Finally, Microsoft Exchange Load Generator (LoadGen) was used to initialize the databases in preparation for LoadGen testing.

After the databases were created (250 mailboxes per database x 350 MB per mailbox = approximately 85 GB per database), an initial synchronization was executed from the Active node to the Passive node. This process is known as seeding. Seeding can be executed either via the Exchange GUI or via Exchange Management Shell commands. Using the Exchange Management Shell Update-StorageGroupCopy command is more detailed in showing errors as well as seeding progress. However, it requires a separate Shell window for each ESG; otherwise the tasks will be executed serially rather than in parallel.
In addition, reseeding tasks from the Active node to the Passive node, or reverse synchronization from Passive to Active (if needed), require numerous steps involving these commands:

- Get-StorageGroupCopyStatus
- Suspend-StorageGroupCopy
- Update-StorageGroupCopy
- Resume-StorageGroupCopy
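As an illustration, a reseed of a single storage group might look like the following Exchange Management Shell sequence. This is a sketch, not a transcript from the test environment: the server name EXCCR01 and storage group name SG01 are hypothetical, and the -DeleteExistingFiles switch removes the stale passive copy before the seed begins.

```powershell
# Check the health of the passive copy for one storage group
Get-StorageGroupCopyStatus -Identity "EXCCR01\SG01"

# Suspend replication for the storage group before reseeding
Suspend-StorageGroupCopy -Identity "EXCCR01\SG01"

# Reseed: delete the existing passive files and copy the database again
# (run each storage group in its own Shell window to parallelize)
Update-StorageGroupCopy -Identity "EXCCR01\SG01" -DeleteExistingFiles

# Resume log shipping and replay once the seed completes
Resume-StorageGroupCopy -Identity "EXCCR01\SG01"
```

Repeating this across 24 storage groups, one Shell window each, is what makes a full reseed operationally heavy compared with the single-command SRDF establish described later.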

Build and maintenance of the CE environment

The nodes in the CE cluster were both presented with the same 49 LUNs for a mount point, 24 databases, and 24 logs. In this environment, the 49 LUNs are shared between the two nodes, with SRDF configured on the disks so that one node's DMX holds the R1 (Read/Write enabled) disks and the other node's DMX holds the R2 (Read Only) disks. Once the disks were allocated and configured on the servers, and the SRDF pairings and RA group were configured on the two DMXs, the clusters were built. In a shared-storage cluster, all of the disks need to be added as cluster resources. This is a manual process in Windows 2003 (more automated and faster in Windows 2008), and therefore takes more time than the cluster configuration in the CCR environment. After the cluster was configured, Exchange was installed using the Single Copy Cluster option during installation. Finally, Microsoft Exchange Load Generator (LoadGen) was used to initialize the databases in preparation for LoadGen testing.

After the databases were created (250 mailboxes per database x 350 MB per mailbox = approximately 85 GB per database), an initial synchronization was executed from the Active node to the Passive node, a process known as an establish. From that point forward the disks became, and stayed, synchronized. A major benefit of CE is that no manual intervention is needed at any point after the initial configuration. All failover and failback tasks are executed from the Cluster Administrator application, where CE handles the failover and swap functions dynamically.
Performance validation of the Exchange environment

Testing of the cluster environments consisted of:

- Full ESG synchronization between the primary site and the secondary site
- Server performance under load (with and without replication enabled)
- Failover/failback timing and link failure (to assess synchronization catch-up time after the failure terminates)

All testing was conducted under load using Microsoft's Exchange Load Generator (LoadGen). A Very Heavy profile was used, which simulates 0.48 IOPS per user. Eight-hour runs were executed, and metrics were collected on the servers, storage, network emulation devices, and the LoadGen manager server to determine success or failure as well as performance data. Results were compared to Microsoft specifications and minimum requirements, as well as between the CCR and CE clusters.

Full synchronization of cluster (active to standby)

In a shared-storage cluster configuration that utilizes SRDF/S for replication, it is not common to perform a full resync from the Active node to the Passive node. When the R1 and R2 disk pairing is configured, subsequent changes to the disks are replicated as they occur. Even in the event of a link failure between the DMXs, changes are queued, and when the link is re-established only an incremental update is sent. As a further mechanism to protect the R2 copy, the Active cluster node places a reservation on the disk that prohibits the Standby node from accessing it. Under normal circumstances, short of an extremely unlikely complete failure of the Standby storage array, a full resynchronization is not required because the data is always protected on the R2 devices.

In a CCR cluster, since the disks are not shared between the nodes, and are not clustered, it is possible that the Standby node disks could be accessed and corrupted or deleted. As a result, the ability to expediently replicate a full copy of a database (known as a reseed) from Active to Standby is necessary.
Ideally, that Full Reseed needs to occur quickly with low impact to Production tasks.

Table 2 lists the results seen on the CCR cluster when seeding the full complement of storage groups (24 SGs with a single 85 GB database per SG) after database initialization with no Exchange load, both with and without network noise.

Table 2. CCR cluster seeding 24 SGs with a single 85 GB database per SG

  Parameter                    24 SG seeding    24 SG seeding with 4% loss on WAN
  Seeding time                 5 hrs 39 mins    6 hrs 22 mins
  Avg bandwidth usage          84.68%           73.74%
  Avg CPU usage (active)       5.71%            5.52%
  Avg CPU usage (passive)      5.49%            5.30%
  Avg DB disk usage (active)   2.90%            2.50%
  Avg DB disk usage (passive)  5.80%            5.20%

NOTE: Figure 2 details the elevated and sustained network utilization, indicative of the need for adequate bandwidth and time to complete this task.

Figure 2. NIC utilization during reseed
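The seeding time in Table 2 is consistent with a simple throughput calculation: 24 databases of roughly 85 GB each pushed over the 1 Gb/s WAN at the observed average utilization. A rough sketch, ignoring log shipping and protocol overhead:

```python
# Cross-check the ~5.5-hour seeding time in Table 2.
databases = 24
db_gb = 85                   # approximate size of each database
link_gbps = 1.0              # 1 Gb/s simulated WAN
avg_utilization = 0.8468     # observed average bandwidth usage (84.68%)

total_gbit = databases * db_gb * 8
seconds = total_gbit / (link_gbps * avg_utilization)
hours = seconds / 3600
print(f"Estimated seeding time: {hours:.1f} hours")
```

The estimate (about 5.4 hours) lands slightly below the observed 5 hours 39 minutes; the gap is plausibly file-copy and protocol overhead not captured here.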

Figure 3. NIC utilization during reseed with induced packet loss

In the environment tested for this white paper, a Full Establish from R1 to R2 was executed after database initialization in order to time how long it would take to replicate the full LUNs from Active to Passive. In a real-world environment this would almost never be needed with SRDF/S, as noted previously, but it was conducted here for comparison's sake. The result was an approximately 14-hour resync time over the 1 Gb/s connection, which was fully utilized. This was not an entirely like-for-like comparison with CCR, however, since with CCR only the files were copied, whereas with an SRDF Full Establish the entire LUN and all tracks are copied, which in this case is almost three times as much data as in the CCR test.

Table 3 lists the results observed on the CE cluster when performing an Establish (effectively an incremental copy) of the RDF device group after formatting the disks on the R2 side for comparative testing, both with and without network noise. Figure 4 also shows that the RDF state of the disks requires only an incremental synchronization of changes, rather than a full copy, when doing an establish. This is much faster than a full copy or reseed.

Table 3. Resync R1 to R2 after format of R2 disks

  Parameter            Value (0% network loss)    Value (4% network loss)
  Resync time          2 minutes                  2 minutes, 20 seconds
  Network utilization  14% to 28%                 14% to 28%
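Both the 14-hour figure and the "almost three times as much data" claim can be checked from the LUN sizes given earlier. This is a sketch; it ignores the single mount-point LUN and link overhead:

```python
# A Full Establish copies every track of every LUN, not just the database files.
db_luns, db_lun_gb = 24, 230
log_luns, log_lun_gb = 24, 30
link_gbps = 1.0                      # 1 Gb/s WAN, fully utilized per the text

full_establish_gb = db_luns * db_lun_gb + log_luns * log_lun_gb   # 6,240 GB of tracks
ccr_seed_gb = 24 * 85                                             # ~2,040 GB of database files

hours = full_establish_gb * 8 / link_gbps / 3600
ratio = full_establish_gb / ccr_seed_gb
print(f"{hours:.1f} hours, {ratio:.1f}x the CCR seeding data")
```

The arithmetic reproduces both the approximately 14-hour resync time and the roughly 3x data multiple quoted above.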

Figure 4. SRDF queued tracks at the start of establish

Storage metrics for CCR and CE environments

The IOPS and utilization metrics were consistent across the CCR and CE test scenarios. Since the same LoadGen profile was used, there was no discernible difference between the IOPS and utilization statistics collected in each of the runs, with the exception of the IOPS of the Standby node storage array. Figure 5 and Figure 6 show the node IOPS profiles, and Figure 7 details RDF link utilization under normal SRDF established conditions.

Figure 5. Active node IOPS profile

Figure 6. Passive node IOPS profile

Figure 7. SRDF link utilization during normal operations (LoadGen with RDF enabled)

Test scenario: full user load with no replication

After the initial synchronization, an eight-hour LoadGen test was executed against the Active Mailbox server with replication suspended. It was important to establish a baseline to which the subsequent testing could be compared. In the case of the CCR cluster, a Suspend Storage Group Copy was executed from the Exchange Management Console. In the case of the CE cluster, the RDF link was suspended.

It should be noted that in some CCR environments, the hub server Messages Queued For Delivery counter has been shown to increase incrementally. This normally clears when the node is failed over or the Transport service is restarted.

Table 4 lists test results against the CCR cluster.

Table 4. LoadGen test with CCR disabled

  Parameter                             Value
  RPC average latency                   6.47 ms
  Avg memory util (Active)              GB
  Avg memory util (Passive)             GB
  Avg CPU usage (Active)                62%
  Avg CPU usage (Passive)               14%
  Avg bandwidth usage (Active)          20.5 Mb/s
  Avg bandwidth usage (Passive)         0.3 Mb/s
  Avg hub msgs queued for delivery/sec  9.08

Table 5 lists test results against the CE cluster.

Table 5. LoadGen test with SRDF split

  Parameter                             Value
  RPC average latency                   5.29 ms
  Avg memory util (Active)              GB
  Avg memory util (Passive)             GB
  Avg CPU usage (Active)                56%
  Avg CPU usage (Passive)               14%
  Avg hub msgs queued for delivery/sec  0.92

Test scenario: full user load with replication

Once the baseline was established with replication suspended, the Active and Standby copies were again synchronized. Once that was complete and the replication mechanism was resumed (CCR Copy or SRDF resume), another eight-hour LoadGen test was executed against the Active Mailbox server using the same configuration as the original baseline test. As noted previously, in some CCR environments the hub server Messages Queued For Delivery counter has been shown to increase incrementally. This normally clears when the node is failed over or the Transport service is restarted.

Table 6 lists test results against the CCR cluster.

Table 6. LoadGen test with CCR enabled

  Parameter                             Value
  RPC average latency                   5.98 ms
  Avg memory util (Active)              GB
  Avg memory util (Passive)             GB
  Avg CPU usage (Active)                65%
  Avg CPU usage (Passive)               17%
  Avg bandwidth usage (Active)          23.9 Mb/s
  Avg bandwidth usage (Passive)         15.1 Mb/s
  Avg hub msgs queued for delivery/sec  8.86

Table 7 lists test results against the CE cluster.

Table 7. LoadGen test with SRDF enabled

  Parameter                             Value
  RPC average latency                   6.42 ms
  Avg memory util (Active)              GB
  Avg memory util (Passive)             GB
  Avg CPU usage (Active)                42%
  Avg CPU usage (Passive)               14%
  Avg hub msgs queued for delivery/sec  0.91

Test scenario: full user load, concurrent replication (CCR Copy or CE Full Establish)

The next eight-hour LoadGen test consisted of a full synchronization occurring concurrently with the full LoadGen test against the Active Mailbox server. The intention was to note the change in user experience, represented by RPC latency, when either a Full Reseed or an Establish was executed. In addition, impact on the server processes and network (server and WAN) bandwidth was observed and noted.

In the case of the CCR reseed, the user impact was significant, more than doubling the RPC latency throughout the eight-hour test. In addition, the impact on the Active and Standby servers was significant, both in CPU and Network Interface Card (NIC) utilization. During the eight-hour test, the reseed did not complete; in fact, it continued for two to five hours after the conclusion of the test, depending on which storage group was observed.

It is important to note that this testing was with one Exchange server, 6,000 users, and a full Gigabit Ethernet WAN connection. Under these conditions the network was almost completely saturated; see "Appendix A: Additional screens to validate test results". In an environment with more than one server and more than 6,000 users, a Full Reseed would most certainly fail, and as a result would have to be done with the Exchange server offline and unavailable to client access until the reseed completed. The alternative would be the costly addition of more bandwidth and network equipment to support the increased need.

Table 8 lists test results against the CCR cluster.
Table 8. LoadGen test with CCR enabled and concurrent 24 SG reseed

  Parameter                             Value
  RPC average latency                   ms
  Avg memory util (Active)              GB
  Avg memory util (Passive)             GB
  Avg CPU usage (Active)                86%
  Avg CPU usage (Passive)               23%
  Avg bandwidth usage (Active)          844 Mb/s
  Avg bandwidth usage (Passive)         821 Mb/s
  Avg hub msgs queued for delivery/sec

NOTE: Reseeding did not complete until approximately three hours after the test completed, as shown in Figure 8.

Figure 8. Time elapsed from start to queue empty

Important notes on database and log resync on the CE cluster

If an RDF establish runs concurrently with user activity, the odds are greater that tracks will have to be sent across the RDF link multiple times as they change, which lengthens the synchronization as time goes on.

NOTE: SRDF/A is not recommended for synchronization. The recommended SRDF mode to use for the duration of a full resync is Adaptive Copy mode. Once the majority of the tracks are copied, best practice is to switch SRDF back to the desired mode, whether that is SRDF/S or SRDF/A.
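Using EMC Solutions Enabler SYMCLI, that mode switch might be scripted along the following lines. This is an illustrative sketch only: the device group name ExchangeDG is hypothetical, and the exact procedure should follow the SRDF/CE product guide for the installed version.

```shell
# Drop to Adaptive Copy (disk) mode for the bulk of the copy
symrdf -g ExchangeDG set mode acp_disk -noprompt

# Start the full establish (R1 -> R2, all tracks)
symrdf -g ExchangeDG establish -full -noprompt

# Watch the invalid-track count fall
symrdf -g ExchangeDG query

# Once most tracks are copied, return to synchronous mode
symrdf -g ExchangeDG set mode sync -noprompt
```

Running the bulk copy in Adaptive Copy mode keeps host write latency unaffected while the backlog drains, which is exactly the behavior the note above recommends.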

Test scenario: full user load/failover to standby node

In order to validate functionality after failover, eight-hour LoadGen tests were executed against the Active Mailbox servers with replication enabled. Approximately one hour into the test, the Active server was disabled and the Cluster Group was allowed to fail over to the Standby node. In all cases the CMS came online on the Standby node. In the case of the CCR cluster, log replay continued for a short time after failover to sync up the former Standby node. On the CE cluster, due to SRDF/S, the standby copy was completely in sync at all times.

Table 9 lists test results against the CCR cluster.

Table 9. Failover from active node to passive node (CCR)

  Parameter                             Value
  Avg time to CMS online                46 secs
  Avg time to online for client access  3 mins, 38 secs

NOTE: Average time to online for client access in the CCR environment equates to an empty Copy Queue and Replay Queue, which is viewed with the Get-StorageGroupCopyStatus Exchange Management Shell command.

Table 10 lists test results against the CE cluster.

Table 10. Failover from active node to passive node (CE)

  Parameter                             Value
  Avg time to online for client access  1 min, 25 secs

Test scenario: full user load with WAN link failure

The final test simulated a WAN link failure under normal (0.48 IOPS per user) load conditions in order to determine the impact of link failures on the environment. In particular, the test identified the catch-up time for the active and standby nodes to resynchronize. For both clusters, the link was disabled in multiple tests, starting at a duration of 15 minutes, then 30 minutes, 60 minutes, 120 minutes, and five hours, before re-establishing the link and noting the impact. As can be seen in the test results, as the duration of the outage was extended, the time it took to catch up in copy and replay increased. Table 11 shows the CCR cluster's negative impact on CPU and network utilization.
These have been shown in these test results to adversely affect RPC latency during high bandwidth utilization times, such as a Full Reseed or a period of catch-up after a link failure.

Table 11 details CCR cluster test results.

Table 11. CCR link-failure catch-up time testing

  Outage duration                              15 mins    30 mins    60 mins    120 mins   5 hours
  Time until CCR status is Healthy for
  all SGs after the link is back up            4 mins     3 mins     5 mins     6 mins     6 mins
  Time until copy queue length/replay
  queue length return to zero                  10 mins    8 mins     18 mins    31 mins    48 mins
  Network utilization change                   3% to 16%  3% to 16%  3% to 16%  3% to 16%  3% to 16%

Figure 9 and Figure 10 show the queue length impact of a five-hour network outage.

Figure 9. Screen capture before and during a link failure

In the case of the CE cluster, the RDF link was suspended, at which time writes to the R2 side began to queue. The same intervals were used for CE as were used with CCR. Table 12 details the test results.

Table 12. SRDF/CE link-failure catch-up time testing

  Outage duration               15 mins    30 mins    60 mins    120 mins   5 hours
  Time until RDF queue is empty 3 mins     4 mins     5 mins     8 mins     13 mins
  Network utilization change    8% to 94%  8% to 96%  8% to 97%  8% to 97%  8% to 100%

Figure 10 shows the queue length impact of a five-hour network outage.

Figure 10. Screen capture indicating the RDF queue size during a link failure
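A naive model of the five-hour CE outage, assuming writes accumulate at the 8% steady-state link rate and drain at full line rate, predicts a noticeably longer catch-up than the 13 minutes observed. The sketch below illustrates why the observed figure is so small: SRDF queues changed tracks rather than individual writes, so repeated writes to the same track are sent across the link only once.

```python
# Naive catch-up model for the 5-hour CE link outage in Table 12.
link_gbps = 1.0        # WAN line rate
steady_util = 0.08     # ~8% link utilization during normal replication
outage_hours = 5

queued_gbit = steady_util * link_gbps * outage_hours * 3600   # write volume accrued offline
drain_minutes = queued_gbit / ((1.0 - steady_util) * link_gbps) / 60
print(f"Naive drain estimate: {drain_minutes:.0f} minutes")
```

The naive estimate is roughly 26 minutes, about double the observed 13; the difference is consistent with track-level coalescing of rewrites, so treat the model as illustrative only.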

Conclusion

After building and testing the two Exchange cluster environments, it is clear that there are benefits to both replication strategies.

CCR has the benefit of setup simplicity when compared to SAN-based replication. All of the required parts are already on the server; it's just a matter of configuring them during the install. The tradeoff is that the quantity of servers required in a Microsoft-recommended solution is greater than with a shared-storage solution. In addition, if the Active and Standby nodes get out of sync, there is much more of an impact to users and administrators, as seen in the test results in this white paper. In a larger, more customer-representative environment, that impact would be even greater and potentially more costly, since it could require more bandwidth between sites or a compression mechanism of some kind, either of which adds significant cost to a WAN environment. Server- and IP network-based replication consumes many of the same resources that govern user performance: NIC and CPU utilization, both key factors in how fast and efficiently replication occurs, also determine how many client requests the server can process. This means that anything that adds to the replication load will impact the user experience.

SAN-based replication utilizing synchronous SRDF, and in particular SRDF/CE, requires a bit more initial setup, but is much easier on the server and the end-user experience, and, due to the flexibility of supported configurations, more efficient in terms of hardware resources, network resources, and overall data center resources (power and cooling). In addition, ongoing administration and support effort is lower, since failover is automatic and the remote copy is always in sync, with the added benefit that replication is offloaded from the server resources that affect the user experience.
Most importantly, if the WAN link between the local and remote arrays goes down, writes to the remote side queue until the link is restored, at which point SRDF/CE resumes the copy and quickly catches up. With CCR, by contrast, once the standby copy falls out of sync beyond a certain point, as shown in this white paper, catch-up time lengthens and the user experience suffers. As the testing in this white paper shows, any full resynchronization in the CCR environment during normal user load results in degraded performance and a degraded user experience. It is therefore imperative to implement a replication solution that reduces the need for full resynchronization. Synchronous replication using SRDF, automated through SRDF/CE, virtually eliminates the need for anything other than an incremental update from the active to the passive node. With CCR replication, a full resynchronization is much more likely. That fact must be weighed against the simplicity and possible monetary savings of CCR to determine whether the user impact is acceptable.

References

EMC Symmetrix DMX (60,000 User) Replicated Storage Solution Using EMC SRDF/S (200 km) for Microsoft Exchange Server 2007 SP1 — Exchange Solution Reviewed Program (ESRP)

EMC Cluster Enabler 3.0 Product Guide

For additional information, see the Microsoft Exchange Server TechNet websites.

Appendix A: Additional screens to validate test results

Figure 11. SRDF/CE manage cluster GUI

Figure 12. Storage group CCR reseeding under load

Figure 13. CCR cluster average network utilization under load, no concurrent Full Reseed

Figure 14. CCR cluster network utilization during LoadGen and Full Reseed (passive node)

Figure 15. CCR cluster network utilization during LoadGen and Full Reseed (active node)


More information

EMC RECOVERPOINT FAMILY OVERVIEW A Detailed Review

EMC RECOVERPOINT FAMILY OVERVIEW A Detailed Review White Paper EMC RECOVERPOINT FAMILY OVERVIEW A Detailed Review Abstract This white paper provides an overview of EMC RecoverPoint, establishing the basis for a functional understanding of the product and

More information

Dell PowerVault MD Mailbox Single Copy Cluster Microsoft Exchange 2007 Storage Solution

Dell PowerVault MD Mailbox Single Copy Cluster Microsoft Exchange 2007 Storage Solution Dell PowerVault MD3000 3000 Mailbox Single Copy Cluster Microsoft Exchange 2007 Storage Solution Tested with: ESRP Storage Version 2.0 Tested Date: August 08, 2007 Table of Contents Table of Contents...2

More information

PERFORMANCE TUNING TECHNIQUES FOR VERITAS VOLUME REPLICATOR

PERFORMANCE TUNING TECHNIQUES FOR VERITAS VOLUME REPLICATOR PERFORMANCE TUNING TECHNIQUES FOR VERITAS VOLUME REPLICATOR Tim Coulter and Sheri Atwood November 13, 2003 VERITAS ARCHITECT NETWORK TABLE OF CONTENTS Introduction... 3 Overview of VERITAS Volume Replicator...

More information

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group WHITE PAPER: BEST PRACTICES Sizing and Scalability Recommendations for Symantec Rev 2.2 Symantec Enterprise Security Solutions Group White Paper: Symantec Best Practices Contents Introduction... 4 The

More information

EMC VNX Series: Introduction to SMB 3.0 Support

EMC VNX Series: Introduction to SMB 3.0 Support White Paper EMC VNX Series: Introduction to SMB 3.0 Support Abstract This white paper introduces the Server Message Block (SMB) 3.0 support available on the EMC VNX and the advantages gained over the previous

More information

EMC RECOVERPOINT/EX Applied Technology

EMC RECOVERPOINT/EX Applied Technology White Paper EMC RECOVERPOINT/EX Applied Technology Abstract This white paper discusses how EMC RecoverPoint/EX can be used with the EMC Symmetrix VMAX 20K and Symmetrix VMAX 40K with Enginuity 5876 and

More information

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate

Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Video Surveillance EMC Storage with Godrej IQ Vision Ultimate Sizing Guide H15052 01 Copyright 2016 EMC Corporation. All rights reserved. Published in the USA. Published May 2016 EMC believes the information

More information

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes

Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes Reference Architecture Microsoft Exchange 2013 on Dell PowerEdge R730xd 2500 Mailboxes A Dell Reference Architecture Dell Engineering August 2015 A Dell Reference Architecture Revisions Date September

More information

Connecting EMC DiskXtender for Windows to EMC Centera

Connecting EMC DiskXtender for Windows to EMC Centera Connecting EMC DiskXtender for Windows to EMC Centera Best Practices Planning Abstract This white paper provides details on building the connection string that EMC DiskXtender for Windows uses to connect

More information

QuickStart Guide vcenter Server Heartbeat 5.5 Update 1 EN

QuickStart Guide vcenter Server Heartbeat 5.5 Update 1 EN vcenter Server Heartbeat 5.5 Update 1 EN-000205-00 You can find the most up-to-date technical documentation on the VMware Web site at: http://www.vmware.com/support/ The VMware Web site also provides the

More information

Dell Exchange 2013 Reference Architecture for 500 to 20,000 Microsoft Users. 1 Overview. Reliable and affordable storage for your business

Dell Exchange 2013 Reference Architecture for 500 to 20,000 Microsoft Users. 1 Overview. Reliable and affordable storage for your business Technical Report Dell Exchange 2013 Reference Architecture for 500 to 20,000 Microsoft Users Reliable and affordable storage for your business Table of Contents 1 Overview... 1 2 Introduction... 2 3 Infrastructure

More information

Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain

Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain Dell EMC SAP HANA Appliance Backup and Restore Performance with Dell EMC Data Domain Performance testing results using Dell EMC Data Domain DD6300 and Data Domain Boost for Enterprise Applications July

More information

SCALING UP VS. SCALING OUT IN A QLIKVIEW ENVIRONMENT

SCALING UP VS. SCALING OUT IN A QLIKVIEW ENVIRONMENT SCALING UP VS. SCALING OUT IN A QLIKVIEW ENVIRONMENT QlikView Technical Brief February 2012 qlikview.com Introduction When it comes to the enterprise Business Discovery environments, the ability of the

More information

Exam : S Title : Snia Storage Network Management/Administration. Version : Demo

Exam : S Title : Snia Storage Network Management/Administration. Version : Demo Exam : S10-200 Title : Snia Storage Network Management/Administration Version : Demo 1. A SAN architect is asked to implement an infrastructure for a production and a test environment using Fibre Channel

More information

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA.

Copyright 2015 EMC Corporation. All rights reserved. Published in the USA. This solution guide describes the disaster recovery modular add-on to the Federation Enterprise Hybrid Cloud Foundation solution for SAP. It introduces the solution architecture and features that ensure

More information

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE

DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE DELL EMC READY BUNDLE FOR MICROSOFT EXCHANGE EXCHANGE SERVER 2016 Design Guide ABSTRACT This Design Guide describes the design principles and solution components for Dell EMC Ready Bundle for Microsoft

More information

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture

EMC Celerra NS20. EMC Solutions for Microsoft Exchange Reference Architecture EMC Solutions for Microsoft Exchange 2007 EMC Celerra NS20 EMC NAS Product Validation Corporate Headquarters Hopkinton, MA 01748-9103 1-508-435-1000 www.emc.com Copyright 2008 EMC Corporation. All rights

More information