Quantifying Performance of Sectra PACS with EMC VNX Storage Technologies

Blueprint of a highly available and disaster-tolerant infrastructure

Enabled by EMC VNX, EMC VPLEX, and EMC RecoverPoint

EMC E-Lab™ Vertical Engineering Group

Abstract

This document describes the reference architecture used to validate the functionality and performance of Sectra PACS in disaster recovery and high availability solutions enabled by EMC VNX, VMware ESX, EMC RecoverPoint, EMC VPLEX, and VMware SRM.

March 2015

Copyright © 2015 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All trademarks used herein are the property of their respective owners.

Part Number H14076

Table of contents

Introduction
    Purpose
    Scope
    Sectra PACS
    EMC VNX Series
    EMC RecoverPoint
    EMC VPLEX
        Overview
        EMC VPLEX Metro
        EMC VPLEX Geo
    VMware vSphere
Test results and analysis
    Server and software configuration
        Server configuration
        Software configuration
    Configuration 1: Baseline
        Testing environment
        EMC Unified Storage arrays
        vSphere clusters and storage
        Application testing
    Configuration 2: SRM using RecoverPoint data replication
        Testing environment
        Testing
        Notes
    Configuration 3: SRM using MirrorView data replication
        Testing environment
        Testing
        Notes
    Configuration 4: VMware stretched cluster using VPLEX
        Testing environment
        Testing
        Notes
Conclusion
References

Executive summary

Governments around the world are focusing more and more on medical costs, the availability and reliability of medical services, retention of records, and the quality of patient care. This renewed focus is driving medical institutions to acquire infrastructure solutions that provide high levels of resiliency and availability while still delivering the performance needed to ensure appropriate medical care and retention of data.

This reference architecture is the result of Sectra and EMC engineers working together to combine the performance and workflow of the Sectra PACS application with the performance and availability capabilities of VMware and a variety of EMC storage technologies. The solution capabilities tested and documented here enable a customer to deploy a highly available virtualized Sectra PACS environment that can easily be expanded to include robust disaster recovery capabilities, such as single- and multi-site data replication. Failover scenarios from site 1 to site 2 and back were developed and tested, including verification that the Sectra PACS application continued to meet its performance requirements during the process.

Sectra and EMC have a shared global interest in ensuring that the solutions offered to customers are validated, tested, and documented, because any outage has the potential to affect a person's health. An outage can also result in fines levied against the medical institution and its business partners, so it is critical that these solutions work the first time, out of the box, and consistently until no longer needed.

The global medical industry has gone through a series of rapid changes over the last few years due to new regulations, requirements, and reimbursement recalculations. These trends are expected to continue for the foreseeable future as governments continue to reduce costs while increasing the medical services offered to their citizens. This Sectra/EMC solution helps medical institutions meet these challenges head-on, ensuring delivery of care when and where needed.

Introduction

Purpose

This document provides a reference architecture for Sectra PACS using multiple EMC-based solutions for storage and data replication. VMware vSphere data center products are used as the virtualization platform.

Scope

This document is intended for pre-sales, sales engineers, and customers who want to deploy the Sectra PACS vApp on EMC platforms.

Sectra PACS

Sectra PACS gives radiologists an environment in which to perform their work as smoothly and efficiently as possible, with instant access to all the tools in a single diagnostic workstation. Efficient planning tools and excellent overviews reduce wait times to a minimum, supporting fast and accurate diagnoses.

Sectra PACS is designed as a reading tool for data-intense environments. It provides fast transfer of data regardless of whether the image set contains 60, 600, 6,000, or even 60,000 images, and a unique image-handling model makes it possible to start working with the images just as quickly regardless of the size of the dataset. In addition, it features an optional, easy-to-use, fully integrated 3D rendering tool that facilitates diagnostics through an improved overview of the data and visualizes results for the referring physician.

Choosing the right storage solution for Sectra PACS is vital to ensure that performance and availability meet the expectations of modern healthcare.

EMC VNX Series

The EMC VNX Series provides high-performing unified storage that manages both block and file data with unsurpassed simplicity and efficiency, and is optimized for virtual applications. With the VNX Series, new levels of performance, protection, compliance, and ease of management can be achieved.

EMC RecoverPoint

EMC RecoverPoint is an advanced enterprise-class disaster recovery solution supporting heterogeneous storage and server environments. RecoverPoint provides bi-directional local and remote data replication across any distance and uses continuous data protection technologies to provide consistent point-in-time recovery. RecoverPoint has three modes of data replication: local replication (CDP), remote replication (CRR), and concurrent local and remote replication (CLR).
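The three replication modes differ only in where the protection copies are kept. As a minimal illustration of that distinction (this is not RecoverPoint's actual API, which is configured through its own management tools), the following Python sketch maps a protection requirement to the corresponding mode:

```python
from enum import Enum


class RecoverPointMode(Enum):
    """The three RecoverPoint replication modes described above."""
    CDP = "local replication (continuous data protection)"
    CRR = "continuous remote replication"
    CLR = "concurrent local and remote replication"


def choose_mode(local_copy: bool, remote_copy: bool) -> RecoverPointMode:
    """Pick a replication mode from the desired copy locations.

    Illustrative only: real deployments configure consistency groups and
    copies through the RecoverPoint management interface.
    """
    if local_copy and remote_copy:
        return RecoverPointMode.CLR
    if remote_copy:
        return RecoverPointMode.CRR
    if local_copy:
        return RecoverPointMode.CDP
    raise ValueError("at least one copy location is required")


if __name__ == "__main__":
    # In this reference architecture RecoverPoint replicates to a remote
    # secondary site (a CRR-style deployment).
    print(choose_mode(local_copy=False, remote_copy=True))
```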

EMC VPLEX

Overview

The EMC VPLEX family removes physical barriers within, across, and between data centers with AccessAnywhere. This is accomplished through the creation of VPLEX distributed virtual volumes: storage devices presented at both sites of the environment.

EMC VPLEX Metro

EMC VPLEX Metro with AccessAnywhere lets you seamlessly relocate data between two sites within synchronous distances. The combination of virtual storage with VPLEX Metro and Microsoft Hyper-V lets you transparently move virtual machines across a distance.

EMC VPLEX Geo

VPLEX Geo introduces federated AccessAnywhere technology that creates a high-availability infrastructure delivering simultaneous, active-active information access between data centers across asynchronous distances. VPLEX Geo is a unique virtual storage technology that enables mission-critical applications to remain up and running during a variety of planned and unplanned downtime scenarios.

VMware vSphere

VMware vSphere, the industry-leading server virtualization platform, lets you virtualize applications with confidence. vSphere empowers users to virtualize scale-up and scale-out applications with confidence, redefines availability, and simplifies the virtual data center.

VMware vCenter Site Recovery Manager

VMware vCenter Site Recovery Manager (SRM) is the market-leading disaster recovery management product. It provides simple and reliable disaster protection for virtualized applications. SRM leverages cost-efficient vSphere Replication and supports a broad set of high-performance storage-replication products to replicate virtual machines to a secondary site.

SRM provides a simple interface for setting up recovery plans that are coordinated across all infrastructure layers, replacing traditional, error-prone run books. Recovery plans can be tested non-disruptively as frequently as required to ensure that they meet business objectives. At the time of a site failover or migration, SRM automates the failover and failback processes, ensuring fast and highly predictable recovery point objectives (RPOs) and recovery time objectives (RTOs).
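Conceptually, a recovery plan is an ordered sequence of storage and virtual machine steps executed at failover time. The sketch below is a hypothetical, greatly simplified model of that idea (it does not use SRM's actual APIs, and the step names are invented); it exists only to make "coordinated across all infrastructure layers" concrete.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class RecoveryStep:
    """One step in a simplified recovery plan (hypothetical model, not SRM's API)."""
    description: str
    action: Callable[[], None]


@dataclass
class RecoveryPlan:
    name: str
    steps: List[RecoveryStep] = field(default_factory=list)

    def execute(self) -> None:
        # SRM runs recovery steps in a defined order (storage first, then VMs
        # by priority group); this loop only mimics that ordering.
        for step in self.steps:
            print(f"[{self.name}] {step.description}")
            step.action()


if __name__ == "__main__":
    plan = RecoveryPlan(
        name="failover-to-site-2",
        steps=[
            RecoveryStep("Promote replicated LUNs at the recovery site", lambda: None),
            RecoveryStep("Rescan datastores and register placeholder VMs", lambda: None),
            RecoveryStep("Power on database VMs (priority 1)", lambda: None),
            RecoveryStep("Power on application and image-server VMs (priority 2)", lambda: None),
            RecoveryStep("Re-IP virtual machines for the recovery network", lambda: None),
        ],
    )
    plan.execute()
```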

Test results and analysis

This section discusses the following:

- Server and software configuration
- Configuration 1: Baseline
- Configuration 2: SRM using RecoverPoint data replication
- Configuration 3: SRM using MirrorView data replication
- Configuration 4: VMware stretched cluster using VPLEX

Server and software configuration

The following configurations were used in this test.

Server configuration

Two Cisco UCS C240 servers were used for testing.

Component    Description
Processor    Intel Xeon E5-2690 v2 @ 3.00 GHz, 20 cores per blade
Memory       192 GB per blade

Software configuration

Component                      Description
VMware ESXi                    V5.5.0, 1331820
VMware SRM                     V5.5
VNX OE for block               V5.33
VNX OE for file                V8.1
RecoverPoint                   V4.1 SP1
VPLEX SMSv2                    V D30.60.0.3.0
VPLEX Mgmt. Server Base        V D30.0.0.112
VPLEX Mgmt. Server Software    V D30.60.0.3

Configuration 1: Baseline

Testing environment

The following figure shows this testing environment.

Figure 1: Base configuration

EMC Unified Storage arrays

Both sites used identical VNX 5200 unified storage arrays. The arrays were licensed for FAST VP, FAST Cache, and MirrorView. Each array was connected to a Brocade DCX 8510 SAN switch using 8 Gb/s Fibre Channel connections (four per storage processor). All LUNs were load balanced across the storage processors for maximum throughput.

vSphere clusters and storage

The base configuration used for this solution consisted of two vSphere clusters of four nodes each. The clusters were implemented on two sites with no connection between the sites. Each cluster was built on identical Cisco UCS C240 servers, with each blade containing 192 GB of memory and two 10-core processor sockets. Each cluster had its own SAN switch and VNX 5200 unified storage array.

There were five shared data stores for each cluster, plus a separate network-mounted archive volume.
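Load balancing LUNs across the two VNX storage processors, as described above, simply means alternating default LUN ownership so that neither storage processor becomes a hot spot. A minimal sketch of that assignment logic follows; the LUN names and the two-SP assumption are illustrative, not taken from the tested array configuration.

```python
from itertools import cycle
from typing import Dict, List


def balance_luns(luns: List[str], storage_processors: List[str]) -> Dict[str, str]:
    """Round-robin LUN ownership across storage processors.

    Illustrative only: on a real VNX array the default owner is set per LUN
    through the array's management tools, not by a script like this.
    """
    owners = cycle(storage_processors)
    return {lun: next(owners) for lun in luns}


if __name__ == "__main__":
    # Hypothetical LUN names loosely modeled on the data store layout used here.
    luns = ["vm_lun", "db_lun_1", "db_lun_2", "image_store_lun", "small_lun"]
    assignment = balance_luns(luns, storage_processors=["SP A", "SP B"])
    for lun, sp in assignment.items():
        print(f"{lun:>16} -> {sp}")
```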

The LUNs for the data stores were initially set up in a storage pool as follows.

Production Pool (single tier, RAID 5, SAS disks):
- Data store 1: 500 GB (virtual machines)
- Data store 2: 500 GB (DB server)
- Data store 3: 500 GB (DB server)
- Data store 4: 2 TB (image store)
- Data store 5: 200 GB
- Network-mounted 4 TB archive share

More storage tiers were added later as part of the testing. All virtual machines were deployed as a virtual application (vApp), with the compute load spread across all four nodes of the cluster.

Application testing

Application testing focused on two areas: functional and performance. Functional testing simply verifies that the application behaves as expected (images can be loaded and viewed). Performance testing focuses on how quickly images from different modalities can be loaded and viewed from a client. By concentrating the performance testing on images that are known to stress the storage, conclusions can be drawn about how well the storage responds to that demand. Performance is gauged by two types of tests: client testing (reads) and server testing (writes).

Client testing (reads)

Cine-loops: Bring up a large series of images, typically a CT scan of approximately 1,500 slices, and let the client run through them like a movie. Measuring the number of frames per second the client can render gives an accurate gauge of storage performance. To be acceptable, the test needs to sustain more than 25 FPS with no hesitations. The test also measured the read time for each image, with any read time of less than 2 ms considered acceptable.

Case pre-loads: Download a full presentation suitable for a user-selected case. The pre-load is read intensive; by timing the pre-load and monitoring performance counters on the server side, further conclusions can be drawn about storage-related system performance.

Server testing (writes)

Image loading: Most of the images loaded for testing were the same; the metadata that accompanies the pixel data was changed to make each image or series of images unique. The volume of loads generated is controlled by the number of sender processes used. Performance is captured by recording the time of each load and the associated system counters.
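As a minimal sketch of the read-side acceptance criteria above (more than 25 FPS, per-image read time under 2 ms), assuming per-frame and per-read timings have already been exported in milliseconds; the sample numbers below are made up for illustration, not measured values from this testing:

```python
from statistics import mean
from typing import Dict, List


def evaluate_cine_loop(frame_times_ms: List[float],
                       read_times_ms: List[float],
                       min_fps: float = 25.0,
                       max_read_ms: float = 2.0) -> Dict[str, object]:
    """Check a cine-loop run against the acceptance criteria described above.

    frame_times_ms: time between displayed frames, per frame.
    read_times_ms:  per-image read time from storage.
    """
    fps = 1000.0 / mean(frame_times_ms)
    worst_read = max(read_times_ms)
    return {
        "fps": round(fps, 1),
        "worst_read_ms": worst_read,
        "pass": fps > min_fps and worst_read < max_read_ms,
    }


if __name__ == "__main__":
    # Hypothetical sample data only; a real run collects one value per frame
    # across a ~1,500-slice CT series.
    frame_times = [20.0, 19.5, 21.0, 20.5] * 375   # ~50 fps
    read_times = [1.2, 1.4, 1.1, 1.6] * 375        # all under 2 ms
    print(evaluate_cine_loop(frame_times, read_times))
```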

The initial testing was carried out by Sectra using seven variations on storage. The following sections describe each storage variation and a summary of the results achieved.

Read testing

1. Baseline storage
   - Frames per second: 50 fps each
   - Chunk generation time: 2-11 ms
   - Time per read: 3-4 ms
   - Reads per second: 500
   Comments: Performance was all within expectations.

2. Using VMware snapshots
   - Frames per second: ~50 fps each
   - Chunk generation time: 2-11 ms
   - Time per read: 3-4 ms
   - Reads per second: 500
   Comments: VMware snapshots without a memory dump work well; there is a slight delay when the snapshot is taken. VMware snapshots with a memory dump did not work well.

3. Using VNX snapshots
   - Frames per second: ~50 fps each
   - Chunk generation time: 2-11 ms
   - Time per read: 3-4 ms
   - Reads per second: 500
   Comments: The disk sec/write metric rose from 2-3 ms to 12 ms on average. This was due to an internal conversion from a thick LUN to a thin LUN that takes place automatically on the VNX array. If the LUN is originally created as thin, this is not a factor.

4. Using FAST Cache
   - Frames per second: ~50 fps each
   - Chunk generation time: 2 ms
   - Time per read: 2 ms
   - Reads per second: 500

   Comments: The IMS/S VM was rebooted first to clear the file system cache, which decreases variation in the generation time. (Normally, there is a moment with higher times when loading of a case starts.)

5. Adding enterprise Flash drives to the storage pool
   - Frames per second: ~50 fps each
   - Chunk generation time: 1-2 ms
   - Time per read: 1 ms
   - Reads per second: 500
   Comments: No spike in chunk generation time at the beginning of the load, and smoother generation times overall.

6. Using EMC PowerPath
   - Frames per second: ~50 fps each
   - Chunk generation time: 2-11 ms
   - Time per read: 2-3 ms
   - Reads per second: 450-500
   Comments: Unexpected results, since the read times are a fraction higher than without PowerPath.

7. Using Storage vMotion
   - Frames per second: 25-30 fps each
   - Chunk generation time: 9 ms
   - Time per read: 7-8 ms
   - Reads per second: 400
   Comments: The test sustained 25 fps on both stacks without a problem.
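To make the seven variations above easier to compare, the reported figures can be collected and checked against the baseline programmatically. The sketch below is illustrative only; it hard-codes representative midpoints of the ranges reported above rather than raw measurements.

```python
# Representative midpoints of the read-test results reported above.
READ_TESTS = {
    "baseline":         {"fps": 50, "read_ms": 3.5, "reads_per_s": 500},
    "vmware_snapshots": {"fps": 50, "read_ms": 3.5, "reads_per_s": 500},
    "vnx_snapshots":    {"fps": 50, "read_ms": 3.5, "reads_per_s": 500},
    "fast_cache":       {"fps": 50, "read_ms": 2.0, "reads_per_s": 500},
    "flash_in_pool":    {"fps": 50, "read_ms": 1.0, "reads_per_s": 500},
    "powerpath":        {"fps": 50, "read_ms": 2.5, "reads_per_s": 475},
    "storage_vmotion":  {"fps": 27, "read_ms": 7.5, "reads_per_s": 400},
}


def compare_to_baseline(results: dict, baseline: str = "baseline") -> None:
    """Print each variation's read latency relative to the baseline run."""
    base = results[baseline]["read_ms"]
    for name, r in results.items():
        delta = r["read_ms"] - base
        flag = "meets 25 fps target" if r["fps"] >= 25 else "below 25 fps target"
        print(f"{name:>18}: {r['read_ms']:.1f} ms/read "
              f"({delta:+.1f} ms vs baseline), {r['fps']} fps, {flag}")


if __name__ == "__main__":
    compare_to_baseline(READ_TESTS)
```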

Write testing

Import test (DICOM JPEG2000):

Measurement                PP, 1 CT       PP, 1 CT, 1 MR
Writes/s                   40-100         100-200
sec/write                  20 ms          15-25 ms
Bytes received/sec         22-23 MB/s     25-35 MB/s
Write queue length         1-2            1.5-2.5
Processor time (6 vCPUs)   40%            50-60%

Import times:

Test case                           Storage type                  Time
Importing CT exam (~2,000 images)   SSD                           39 s avg
Importing CT exam (~2,000 images)   SAS with RecoverPoint repl.   36 s avg
Importing CT exam (~2,000 images)   SAS with MirrorView repl.     40 s avg
Importing CT exam (~2,000 images)   SAS with VPLEX repl.          38 s avg

Configuration 2: SRM using RecoverPoint data replication

Testing environment

VMware Site Recovery Manager (SRM) with RecoverPoint data replication creates an active-passive relationship between sites. All virtual machines reside on a vSphere cluster at the primary site, with RecoverPoint appliances handling the data replication between the sites. RecoverPoint lets you set the replication granularity based on your RTO and RPO, and you have the option of scheduling application-consistent snapshots in addition to the automatic crash-consistent snapshots. While RecoverPoint supports bi-directional replication, only one copy of a replicated LUN is available for access at a time.

When an outage is detected at the primary site, SRM automatically starts and customizes all the virtual machines from the primary site on the secondary site. Customization includes changing network adapter properties such as IP address, DNS servers, and netmask. When the outage has been resolved, you use SRM in the vSphere client to reverse replication. If desired, you can then fail all the virtual machines back to the primary site at a scheduled time, minimizing impact.
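The customization step described above can be pictured as a simple per-VM mapping from production network settings to recovery-site settings. The sketch below is a hypothetical model of that mapping only; SRM applies these changes through its own per-VM IP customization settings, not through code like this, and every VM name and address here is invented.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class NetworkSettings:
    """Subset of adapter properties SRM can change on failover."""
    ip: str
    netmask: str
    dns: str


# Invented production-to-recovery mapping; real values live in SRM's
# per-VM IP customization settings.
RECOVERY_NETWORK: Dict[str, NetworkSettings] = {
    "pacs-db01":  NetworkSettings("10.2.0.11", "255.255.255.0", "10.2.0.2"),
    "pacs-app01": NetworkSettings("10.2.0.21", "255.255.255.0", "10.2.0.2"),
}


def customize(vm_name: str) -> NetworkSettings:
    """Return the recovery-site settings to apply when a VM starts at site 2."""
    try:
        return RECOVERY_NETWORK[vm_name]
    except KeyError:
        raise LookupError(f"no recovery network settings defined for {vm_name}")


if __name__ == "__main__":
    for vm in RECOVERY_NETWORK:
        s = customize(vm)
        print(f"{vm}: ip={s.ip} netmask={s.netmask} dns={s.dns}")
```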

The following figure shows the testing environment for this configuration.

Figure 2: SRM with RecoverPoint data replication

Testing

In our testing, we failed all the virtual machines of the Sectra PACS vApp over to the secondary site and verified that everything came back properly and was fully functional. Then we reversed the replication, failed back to the primary site, and once again verified that the environment was fully functional.

- Time to recover from primary to secondary site: 14 min 5 sec
- Re-protecting the secondary site (reversing replication): 1 min 33 sec
- Time to recover from secondary site back to primary: 14 min 59 sec
- Re-protecting the primary site (re-establishing replication): 1 min 39 sec

Notes

- SRM does not support vApps. We had to change the vApp to a resource pool and define a custom startup procedure to fully support the solution.
- When setting up consistency groups in SRM, the virtual machines in each SRM consistency group must match the virtual machines on the data stores in each RecoverPoint consistency group (see the sketch below).
- SRM requires a specific Storage Replication Adapter (SRA) to work with RecoverPoint.
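A hypothetical sanity check for the consistency-group note above: given the VMs in each SRM protection group and the VMs whose data stores belong to the corresponding RecoverPoint consistency group, the two sets should be identical. The group and VM names below are invented for illustration; neither SRM nor RecoverPoint is queried here.

```python
from typing import Dict, Set


def check_group_alignment(srm_groups: Dict[str, Set[str]],
                          rp_groups: Dict[str, Set[str]]) -> bool:
    """Verify that each SRM consistency group protects exactly the VMs whose
    data stores are replicated by the matching RecoverPoint consistency group.
    """
    aligned = True
    for name, srm_vms in srm_groups.items():
        rp_vms = rp_groups.get(name, set())
        missing = srm_vms - rp_vms
        extra = rp_vms - srm_vms
        if missing or extra:
            aligned = False
            print(f"{name}: mismatch (missing={sorted(missing)}, extra={sorted(extra)})")
        else:
            print(f"{name}: OK ({len(srm_vms)} VMs)")
    return aligned


if __name__ == "__main__":
    # Invented example groups for illustration only.
    srm = {"pacs-db": {"db01", "db02"}, "pacs-app": {"app01", "ims01"}}
    rp = {"pacs-db": {"db01", "db02"}, "pacs-app": {"app01"}}
    check_group_alignment(srm, rp)
```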

Configuration 3: SRM using MirrorView data replication

Testing environment

SRM with MirrorView data replication creates an active-passive relationship between sites that are either in the same data center or in two data centers with a close geographic relationship and a fiber-optic SAN link between them. MirrorView replication is performed by the VNX arrays using a common SAN switch where the arrays and MirrorView ports are zoned together, enabling the arrays to exchange data. While MirrorView supports bi-directional replication, only one copy of the replicated LUN is available for access at a time.

The following figure shows this testing environment.

Figure 3: SRM with MirrorView data replication

Testing

In our testing, we failed all the virtual machines of the Sectra PACS vApp over to the secondary site and verified that everything came back properly and was fully functional. Then we reversed the replication, failed back to the primary site, and once again verified that the environment was fully functional.

- Time to recover from primary to secondary site: 13 min 7 sec
- Re-protecting the secondary site (reversing replication): 1 min 33 sec
- Time to recover from secondary site back to primary: 3 min 18 sec
- Re-protecting the primary site (re-establishing replication): 2 min 55 sec

A comparison of these timings with the RecoverPoint results follows below.
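As a small, self-contained comparison of the two SRM configurations, the timings reported above can be converted to seconds and summed; nothing here is a new measurement.

```python
def to_seconds(minutes: int, seconds: int) -> int:
    """Convert an 'X min Y sec' figure to seconds."""
    return minutes * 60 + seconds


# Timings reported above for the two SRM configurations.
TIMINGS = {
    "SRM + RecoverPoint": {
        "failover":   to_seconds(14, 5),
        "reprotect":  to_seconds(1, 33),
        "failback":   to_seconds(14, 59),
        "reprotect2": to_seconds(1, 39),
    },
    "SRM + MirrorView": {
        "failover":   to_seconds(13, 7),
        "reprotect":  to_seconds(1, 33),
        "failback":   to_seconds(3, 18),
        "reprotect2": to_seconds(2, 55),
    },
}

if __name__ == "__main__":
    for config, t in TIMINGS.items():
        round_trip = sum(t.values())
        print(f"{config}: failover {t['failover']} s, "
              f"full failover/failback cycle {round_trip} s "
              f"(~{round_trip / 60:.1f} min)")
```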

Notes

- SRM does not support vApps. We had to change the vApp to a resource pool and define a custom startup procedure to fully support the solution.
- SRM requires a specific Storage Replication Adapter (SRA) to work with MirrorView.

Configuration 4: VMware stretched cluster using VPLEX

Testing environment

VMware stretched clusters let you expand a localized vSphere cluster to include ESXi nodes at separate sites. This includes the ability to vMotion virtual machines and storage across sites, as well as HA and DRS across sites. It is made possible by two-way storage replication, stretched networks, and a single vCenter instance for both sites.

For our purposes, we used EMC VPLEX with distributed storage groups to handle the two-way data replication between the sites. The network from the primary site was made available (stretched) to all the nodes at the secondary site. The VPLEX data replication was handled over IP across two 10 Gb optical Ethernet links; a rough estimate of what that link capacity means for synchronizing the tested data stores is sketched below.

The following figure shows this testing environment.

Figure 4: VMware stretched cluster using VPLEX
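A back-of-the-envelope sketch of the inter-site link capacity, assuming the two 10 Gb Ethernet links described above and the roughly 3.7 TB of data store capacity listed for the baseline configuration. The link-efficiency factor is an assumption; real synchronization time also depends on VPLEX overhead, link utilization, and how much data is actually written.

```python
def full_sync_hours(data_tb: float, links: int = 2, link_gbps: float = 10.0,
                    efficiency: float = 0.7) -> float:
    """Estimate the time to copy `data_tb` terabytes across the inter-site links.

    `efficiency` is an assumed fraction of raw line rate usable for
    replication traffic; it is a guess, not a measured value.
    """
    usable_gbps = links * link_gbps * efficiency
    data_gigabits = data_tb * 1000 * 8         # TB -> Gb (decimal units)
    return data_gigabits / usable_gbps / 3600  # seconds -> hours


if __name__ == "__main__":
    # ~3.7 TB: three 500 GB data stores + 2 TB image store + 200 GB data store
    # from the baseline layout described earlier.
    print(f"Estimated full initial sync: {full_sync_hours(3.7):.1f} hours")
```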

Testing

With the stretched cluster implemented, we vMotioned each virtual machine between the sites and data stores with the application running. To simulate the loss of an array, we shut down the primary VNX array while the application was running and observed no change in the environment. All the virtual machines remained up, with data stores still available from the secondary site through VPLEX.

Notes

When configuring storage for a stretched cluster, it is a best practice to keep virtual machines on a data store that is local to the site where their ESXi servers reside. Because distributed volumes are replicated synchronously, this practice reduces read/write latency across sites.
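To see why keeping VMs on site-local data stores matters, the extra write latency introduced by synchronous replication can be approximated from the inter-site distance alone. The sketch below uses the usual rule of thumb of roughly 5 microseconds per kilometer for light in optical fiber and one round trip per acknowledged write; the distances are illustrative and do not come from the tested environment.

```python
def synchronous_write_penalty_ms(distance_km: float,
                                 round_trips: int = 1,
                                 us_per_km_one_way: float = 5.0) -> float:
    """Approximate extra write latency (ms) added by synchronous replication.

    Uses only propagation delay in fiber (~5 us/km one way); real latency also
    includes switching, VPLEX processing, and array service time.
    """
    round_trip_us = 2 * distance_km * us_per_km_one_way
    return round_trips * round_trip_us / 1000.0


if __name__ == "__main__":
    for km in (10, 50, 100):
        print(f"{km:>4} km: ~{synchronous_write_penalty_ms(km):.1f} ms added per write")
```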

Conclusion

The configurations and platforms tested in this reference architecture cover a wide range of storage and replication possibilities.

The base storage configuration validated the performance and capabilities of the EMC VNX storage array and how it responded to the demands of the Sectra PACS application. The array used FAST VP and FAST Cache to ensure the highest level of access to the data.

After the base configuration, we moved to multi-site active/passive data replication using EMC RecoverPoint appliances. Replication was performed over TCP/IP to a secondary site. VMware Site Recovery Manager was implemented and used to move the entire environment to the secondary site and then back to the primary site.

The next configuration used EMC MirrorView. Once again this was an active/passive model, but this time the replication was performed over a Fibre Channel link to the secondary array. VMware Site Recovery Manager was again used to move the entire environment to the secondary site and back to the primary site.

The final configuration used EMC VPLEX Metro, with a high-speed link between sites providing active/active data replication. To verify the full functionality of the replication, we turned off one of the arrays with both sites active and observed no loss of data, no loss of connectivity, and no downtime at either site.

By validating these configurations, we demonstrated how well Sectra PACS and EMC platforms work together to provide high performance, flexibility, scalability, and constant access to the most important data.

References

The following references can be found at http://support.emc.com:

- Stretched Clusters and VMware vCenter Site Recovery Manager Technical White Paper
- RecoverPoint 4.1 Documentation Set
- VNX2 FAST VP: A Detailed Review Technical White Paper

The following references can be found at http://vmware.com:

- VMware Site Recovery Manager 5.5 Installation and Configuration Guide
- VMware Site Recovery Manager 5.5 Administration Guide