EMC VPLEX Geo with Quantum StorNext

White Paper: Application Enabled Collaboration

Abstract

The EMC VPLEX Geo storage federation solution, together with the Quantum StorNext file system, enables a globally clustered file system in which remote hosts can take advantage of VPLEX geographically dispersed caching. Additionally, this solution simplifies file system management from a centralized environment.

March 2012

Copyright 2012 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H10573

Table of Contents

Executive summary
Key value propositions
VPLEX overview
Application availability during data center outages
Quantum StorNext overview
Technology integration
E-Lab configuration
Configuration
Enhanced performance
VPLEX failure semantics
Conclusion
References

Executive summary

EMC VPLEX Geo is a federation solution that can be stretched across two geographically dispersed data centers separated by asynchronous distances (maximum supported round-trip latency of 50 ms). VPLEX provides simultaneous access to storage devices at two sites through the creation of a VPLEX asynchronous consistency group and VPLEX distributed virtual volumes, supported on each side by a VPLEX cluster.

Each VPLEX cluster is highly available, scaling from two to eight directors per cluster. Each director is supported by independent power supplies, fans, and interconnects, so a VPLEX cluster has no single point of failure.

The EMC VPLEX storage federation solution, together with the Quantum StorNext File System (SNFS), enables a stretched file system (FS) solution in which hosts at the local site have simultaneous read and write access to a common file system while remote hosts have read access to the same files in a heterogeneous environment. This architecture provides new options, including:

- High-performance access to files over asynchronous distances with heterogeneous hosts.
- Support for organizations with remote offices.
- Deployment of private or public cloud computing environments that span multiple data centers.

Without such a solution, customers typically update files and send them to another site using FTP. In an FTP approach, if the workflow is not well coordinated and teams work independently, a file becomes inconsistent as multiple teams work on it at the same time, unaware of the most recent changes. Combining changes at the end is time-consuming and complicated, especially as the amount of data increases.

Key value propositions

With this VPLEX and StorNext solution, the same data is readable by all SAN clients at all times at both sites. The data is shared, not copied, providing coherence across the sites. A change at the local site shows up immediately at the remote site; VPLEX uses smart caching to send only the updated changes.

At the local data center site, the StorNext Metadata Controller (MDC) systems ensure StorNext high availability. The StorNext MDC is a two-node system with a dedicated private network for cluster data traffic that acts as the traffic cop for the shared file system. The SAN clients access the storage directly for reads and writes, while the file metadata (for example, inode information) is managed by the metadata controllers.

This white paper demonstrates key functionality in a lab environment using VPLEX Geo and the StorNext clustered FS, which supports asynchronous, high-performance file access across geographic clusters up to 50 ms apart. VPLEX Geo is generally available (starting from GeoSynchrony 5.0.1.00.00.06), and SNFS 4.1.2 is fully qualified on this platform.

VPLEX overview

EMC VPLEX is an enterprise-class, storage area network-based federation solution that aggregates and manages pools of Fibre Channel-attached storage arrays, either collected in a single data center or spread across multiple data centers geographically separated by metropolitan area network (MAN) distances, as described next.

EMC VPLEX Geo provides non-disruptive, heterogeneous data movement and volume management functionality over asynchronous distances, both within and between data centers. With a unique scale-up and scale-out architecture, advanced data caching, and distributed cache coherency, VPLEX provides workload resiliency; automatic sharing, balancing, and failover of storage domains; and local and remote data access with predictable service levels. EMC AccessAnywhere, available with VPLEX, is a breakthrough technology that enables a set of data to be shared, accessed, and relocated over distance. EMC GeoSynchrony is the VPLEX operating system.

VPLEX Local
VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays.

VPLEX Metro
VPLEX Metro provides active/active data access and mobility between two VPLEX clusters within synchronous distances, up to 5 ms round-trip time (RTT).

VPLEX Geo
VPLEX Geo provides active/active data access and mobility between two VPLEX clusters within asynchronous distances, up to 50 ms RTT.

VPLEX Metro and VPLEX Geo support distributed mirroring, which protects the data of a virtual volume by mirroring it between two VPLEX clusters. The mirrored volume is active at both clusters: it appears and behaves as a single volume, in a manner similar to a virtual volume whose data resides in one cluster.

Because VPLEX Geo allows more than 5 ms of latency, asynchronous I/O is used for distributed volumes. Write-back caching gives better performance in that the host receives an acknowledgement once the write reaches the VPLEX cache and has been protected to another director in the local cluster. VPLEX collects writes in the form of deltas, and the clusters exchange deltas to create a globally merged delta.
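To make the delta mechanism more concrete, the following is a minimal conceptual sketch in Python. It is not VPLEX internals, and all names, structures, and the merge policy are illustrative assumptions: writes are acknowledged locally, accumulated into a delta set, and periodically exchanged and merged with the peer cluster so that only changes cross the WAN.

    # Conceptual model of delta-based write-back caching between two clusters.
    # Illustrative sketch only; names and merge policy are hypothetical, and
    # real ordering/conflict handling is far more involved.

    class Cluster:
        def __init__(self, name):
            self.name = name
            self.cache = {}      # block -> data (write-back cache)
            self.delta = {}      # writes collected since the last exchange
            self.storage = {}    # durable backend (simplified)

        def write(self, block, data):
            # Host write: acknowledged once the data is in cache and
            # (conceptually) protected to a second local director.
            self.cache[block] = data
            self.delta[block] = data
            return "ack"         # host sees low, local latency

        def exchange_deltas(self, peer):
            # Both clusters swap accumulated deltas and merge them into a
            # single global delta before applying it on each side.
            merged = {**self.delta, **peer.delta}
            for cluster in (self, peer):
                cluster.storage.update(merged)
                cluster.cache.update(merged)
                cluster.delta.clear()

    site2, site1 = Cluster("site2-local"), Cluster("site1-remote")
    site2.write("blk-7", b"new contents")   # acknowledged locally
    site2.exchange_deltas(site1)            # only the changes cross the WAN
    assert site1.storage["blk-7"] == b"new contents"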

There is a risk that data will be lost if one of the following conditions occurs:

- Multiple directors fail at the same time.
- There is an inter-cluster link partition and both clusters are actively writing.
- The cluster that is actively writing fails.

Consistency groups provide a convenient way to apply rule sets and other properties to a group of distributed volumes. All distributed volumes in an asynchronous consistency group share the same detach rule and write-back cache mode, and behave the same way during an inter-cluster link failure.

A VPLEX cluster, shown in Figure 1, can scale up through the addition of more engines and scale out by connecting multiple clusters to form a VPLEX Geo configuration. VPLEX clusters provide non-disruptive data mobility, heterogeneous storage management, and improved application availability.

Figure 1. VPLEX cluster

Application availability during data center outages

A VPLEX Geo cluster distributed between two data centers can protect against data unavailability in the event of a data center outage by mirroring data between the two sites. With the active-cluster-wins rule, read and write I/O continues at the cluster where the application was most recently active. For volumes in the data center experiencing the outage, access to the data can be resumed at the surviving cluster by invoking a manual command that resumes the suspended I/O. Combined with failover logic for active-passive environments using the StorNext File System (SNFS), a VPLEX Geo cluster provides an infrastructure that can quickly restore service operations.
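To tie the consistency-group and detach-rule concepts above together, here is an illustrative sketch, again in Python. This is not the VPLEX API or CLI; the class, attribute names, and behavior strings are assumptions made only to show how every member volume inherits one shared detach rule and cache mode.

    # Illustrative model of an asynchronous consistency group.
    # Not the VPLEX API; names and structure are assumptions for clarity.

    from dataclasses import dataclass, field

    @dataclass
    class AsyncConsistencyGroup:
        name: str
        detach_rule: str = "active-cluster-wins"   # shared by every member
        cache_mode: str = "write-back"             # shared by every member
        volumes: list = field(default_factory=list)

        def add_volume(self, volume_name):
            # Every distributed volume added to the group inherits the
            # group's detach rule and cache mode, so all members behave
            # identically during an inter-cluster link failure.
            self.volumes.append(volume_name)

        def on_link_failure(self, active_cluster):
            # With active-cluster-wins, I/O continues where the application
            # was most recently active; the other cluster suspends.
            return {v: f"continue at {active_cluster}" for v in self.volumes}

    cg = AsyncConsistencyGroup("snfs-cg")
    cg.add_volume("dd_meta_01")
    cg.add_volume("dd_data_01")
    print(cg.on_link_failure("site2"))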

Quantum StorNext overview

Quantum StorNext is data management software that helps customers build an infrastructure that consolidates resources so that file workflows run faster and operations cost less. StorNext provides high-speed content sharing combined with cost-effective data archiving, offering data sharing and retention in a single solution so you do not have to piece together multiple products that may not integrate effectively. Even in heterogeneous environments, all data is easily accessible to all hosts.

Figure 2. Quantum StorNext

The StorNext File System (SNFS) streamlines processes and facilitates job completion by enabling multiple business applications to work from a single, consolidated data set. Using SNFS, applications running on different operating systems (such as Windows, Linux, Solaris, HP-UX, AIX, and Mac OS X) can simultaneously access and modify files on a common, high-speed SAN storage pool. This centralized storage solution eliminates slow LAN-based file transfers between workstations and dramatically reduces delays caused by single-server failures.

The SNFS configuration that EMC E-Lab tested consists of two main components:

- StorNext Metadata Controller (MDC): the server on which the StorNext Storage Manager software runs (the metadata controller host), also known as the local host or, on HA systems, the primary server.
- StorNext SAN clients: the servers or desktops on which the StorNext client software runs. These access the storage directly for read and write operations.

In high-availability (HA) MDC configurations, which are active-passive, a redundant server is available to access files, pick up the processing requirements of a failed system, and carry on processing.

The primary advantage of an HA system is file system availability: because an HA configuration has redundant servers, if one server fails during operation, failover occurs automatically and operations resume on its peer server. At any point in time, only one of the two servers is allowed to control and update StorNext metadata and databases. The HA feature enforces this rule by monitoring for conditions that might allow conflicts of control, which could lead to data corruption. Before this so-called split-brain scenario can occur, the failing server is reset at the hardware level, causing it to immediately relinquish all control; the redundant server can then take control without any risk of split-brain data corruption. The HA feature provides this protection without requiring special hardware, and HA resets occur only when necessary according to the HA protection rules.
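The sketch below models this fence-before-failover rule in Python. It is purely illustrative, not Quantum's implementation: the node states, timeout, and hardware-reset hook are assumptions. The essential point it demonstrates is that the standby takes control only after the failing peer has been reset at the hardware level.

    # Illustrative model of split-brain prevention via hardware reset (fencing).
    # Not Quantum's implementation; the reset hook and states are assumptions.

    import time

    class MdcNode:
        def __init__(self, name, hardware_reset):
            self.name = name
            self.active = False
            self.last_heartbeat = time.monotonic()
            self._hardware_reset = hardware_reset  # e.g., out-of-band power reset

        def heartbeat(self):
            self.last_heartbeat = time.monotonic()

    def monitor_and_failover(primary, standby, timeout=5.0):
        """Promote the standby only after the primary has been fenced."""
        if time.monotonic() - primary.last_heartbeat <= timeout:
            return "primary healthy; no action"
        # Reset the failing server at the hardware level FIRST, so it
        # relinquishes all control before the peer takes over. This is
        # what prevents two nodes from updating metadata concurrently.
        primary._hardware_reset(primary.name)
        primary.active = False
        standby.active = True
        return f"{standby.name} took control after fencing {primary.name}"

    resets = []
    primary = MdcNode("mdc-a", hardware_reset=resets.append)
    standby = MdcNode("mdc-b", hardware_reset=resets.append)
    primary.last_heartbeat -= 10                   # simulate missed heartbeats
    print(monitor_and_failover(primary, standby))  # fences mdc-a, promotes mdc-b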

The primary advantage of an HA system is file system availability, because an HA configuration has redundant servers. If one server fails during operation, failover automatically occurs and operations are resumed on its peer server. At any point in time, only one of the two servers is allowed to control and update StorNext metadata and databases. The HA feature enforces this rule by monitoring for conditions that might allow conflicts of control that could lead to data corruption. Before this so-called split-brain scenario would occur, the failing server is reset at the hardware level, which causes it to immediately relinquish all control. The redundant server is able to take control without any risk of split-brain data corruption. The HA feature provides this protection without requiring special hardware and HA resets occur only when necessary according to HA protection rules. Technology integration It has been proven over time that the joint solution by EMC and StorNext is extremely effective at supporting infrastructure for large-file distributed access. SNFS is designed to fully utilize the capabilities of the EMC VNX, EMC Symmetrix, EMC VMAX, and EMC VPLEX. Additionally, it supports key multipathing software applications, such as EMC PowerPath. E-Lab configuration The following configuration demonstrates a use-case for VPLEX Geo with StorNext FS where a user can access a data set from a remote site. The configuration includes two VPLEX cluster locations named Site 1 (remote) and Site 2 (local). An EMC VNX resides at Site 1 while an EMC Symmetrix VMAX resides at Site 2. VPLEX clusters present LUNS (distributed devices) to the metadata controller as well as to the SAN Clients. Metadata controllers service the metadata while maintaining the file system journal. The SAN Clients in Site 2 are capable of writes directly to the storage allocated by the metadata controller. The SAN Clients in Site 2 have simultaneous read and write access to the file system. At VPLEX cluster Site 1, SAN Clients were configured for read access only to the same file system as SAN Clients in VPLEX cluster Site 2. Note: To make SAN Clients read-only, refer to the Quantum SNFS Configuration Guide. 8

Figure 3. Accessing data from a remote site

Site 1 consists of:

- Red Hat Enterprise Linux (RHEL) SAN client
- VPLEX with two directors
- Fibre Channel storage: VNX

Site 2 consists of:

- RHEL MDC primary
- RHEL MDC secondary, which provides high availability together with the MDC primary
- RHEL SAN client
- VPLEX with two directors
- Fibre Channel storage: VMAX

A network emulator introduced an RTT of 50 ms between the two sites for both VPLEX inter-cluster communication and SNFS heartbeat exchanges.

Configuration

Table 1. E-Lab test bed configuration

Component                      Details
SNFS                           4.1.2
MDC (primary and secondary)    OS: RHEL 5.6 (kernel 2.6.18-238.el5); PowerPath 5.6
SAN clients                    OS: RHEL 5.6 (kernel 2.6.18-238.el5); PowerPath 5.6
VPLEX                          Config: Geo; version 5.0.1.00.00.06
Switches                       Brocade 300B (Fabric OS v6.4.0b); Brocade 24k (Fabric OS 5.3.1b)
Arrays                         VNX 5500 (Block software version 05.31.000.3.591); VMAX (5875.135.91)

The configuration shown in Table 1 provides the following benefits:

- Administration of the StorNext MDC is centralized at VPLEX cluster Site 2.
- The StorNext MDCs at VPLEX cluster Site 2 are highly available.
- No MDC administration is required at VPLEX cluster Site 1.

Enhanced performance

VPLEX AccessAnywhere in GeoSynchrony 5.0 supports distributed RAID 1, where VPLEX devices are mirrored between sites. With the VPLEX global distributed cache, any director within a VPLEX cluster can service I/O requests for a virtual volume. VPLEX distributed algorithms minimize inter-director messaging and take advantage of I/O locality in the placement of data, which translates into performance benefits once the cache is warmed up. The E-Lab experiments discussed next illustrate this effect.

During the E-Lab experiments, the Site 2 SAN client hosts wrote multiple sets of files (3.4 GB, 1.4 GB, 600 MB, and 300 MB). The same files were then immediately read by the SAN client hosts at Site 1. During the first read by the Site 1 SAN client, the read took more time because the VPLEX at Site 1 had to fetch the entire file from the VPLEX at Site 2. Subsequent reads by Site 1 SAN clients were served directly from the Site 1 VPLEX, so performance was similar to reading from a local disk. In the background, VPLEX intelligently tracked changes to the files and copied only the differences.
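This warm-up effect can be modeled with a simple read-through cache. The sketch below is hedged: the latency figures are invented round numbers (50 ms WAN RTT per cold fetch, 1 ms per local hit), not E-Lab measurements, and the fetch granularity is simplified to whole blocks.

    # Toy read-through cache illustrating cold vs. warm reads across a WAN.
    # Latency figures are illustrative assumptions, not E-Lab measurements.

    WAN_RTT_S = 0.050      # 50 ms round trip between Site 1 and Site 2
    LOCAL_READ_S = 0.001   # assumed local cache-hit service time

    class RemoteSiteCache:
        def __init__(self):
            self.cache = {}

        def read(self, block):
            """Return (data, simulated_latency_seconds)."""
            if block in self.cache:
                return self.cache[block], LOCAL_READ_S     # warm: local hit
            data = f"data-for-{block}"                     # cold: fetch from peer
            self.cache[block] = data
            return data, LOCAL_READ_S + WAN_RTT_S

    site1 = RemoteSiteCache()
    blocks = [f"blk-{i}" for i in range(1000)]

    first_pass = sum(site1.read(b)[1] for b in blocks)   # cold reads cross the WAN
    second_pass = sum(site1.read(b)[1] for b in blocks)  # warm reads stay local
    print(f"1st read: {first_pass:.1f}s, 2nd read: {second_pass:.1f}s")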

Figure 4. SAN clients' read completion times, in seconds, for the 3.4 GB, 1.4 GB, 600 MB, and 300 MB files across the first, second, and third reads

Legend:

- C1: SAN clients attached to the VPLEX cluster at Site 1 (read-only).
- C2: SAN clients attached to the VPLEX cluster at Site 2 (read-write capable).

Note: In any cache implementation, there is a period during which a data set is loaded so that the cache becomes populated with valid data. In the scenario above, the cache in the directors of the Site 1 cluster went through this warm-up period during the first several reads.

VPLEX failure semantics

The following table shows the VPLEX failure semantics observed during testing, with Site 1 (remote) read-only and Site 2 (local) read-write capable:

Table 2. VPLEX failure semantics

Failure type                   Behavior
Site 1 fails                   Site 2 continues
Site 2 fails                   Site 1 suspends
Complete WAN link failure      Site 1 suspends; Site 2 continues
Recovery of suspended sites    Manual
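A compact way to read Table 2 is as a decision function. The sketch below simply encodes the observed semantics; it is illustrative only, with the site roles and the manual resume step taken from the discussion above rather than from any VPLEX interface.

    # Encoding of the failure semantics observed in Table 2.
    # Illustrative only; site roles follow the E-Lab setup (Site 1 read-only
    # remote, Site 2 read-write local), and recovery of suspended I/O is manual.

    def failure_semantics(event):
        semantics = {
            "site1_fails":    {"site1": "down",     "site2": "continues"},
            "site2_fails":    {"site1": "suspends", "site2": "down"},
            "wan_link_fails": {"site1": "suspends", "site2": "continues"},
        }
        return semantics[event]

    def resume_suspended(site):
        # Per the paper, suspended I/O resumes only after a manual command
        # is invoked at the surviving cluster.
        return f"operator manually resumes I/O involving {site}"

    print(failure_semantics("wan_link_fails"))  # Site 1 suspends, Site 2 continues
    print(resume_suspended("site1"))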

This E-Lab configuration was used to verify the following functionality:

- Creating an SNFS file system over VPLEX-based federated storage in a RHEL-based environment, to ensure that a file system can be created on devices presented by VPLEX to the hosts.
- File system expansion and basic I/O tests, to verify that the FS can be expanded over time whenever more space is needed and that files of different sizes and record lengths can be created (see the sketch after this list).
- Compatibility with PowerPath, to test path failover features at the host level.
- Physical device removal and restoration, to test how StorNext behaves when devices are unavailable and to ensure that data is consistent when the devices are brought back.
- SNFS failover capabilities, to test various HA failover scenarios such as heartbeat failure, manual failover, device unavailability, and network outage.
- WAN link failure, to test the failover and recovery of VPLEX/SNFS, with latency between the two sites equal to 50 ms RTT (Geo).
- Shutdown of one complete VPLEX site and data rebuild, to test how robust VPLEX/SNFS is when an entire site is powered down and to verify its ability to recover with data consistency.
- Array power shutdown.
- NDU and online upgrades for the storage arrays.
- VPLEX online (non-disruptive) code upgrades.
- VPLEX cluster shutdown and restart.
- Failure and recovery of the Virtual Private Network (VPN) between the two VPLEX sites.
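As one example of what such a basic I/O test might look like, the generic sketch below writes files of several sizes with different record lengths and verifies them on read-back. It is not the E-Lab test harness: the mount point, sizes, and record lengths are illustrative assumptions, with the larger file sizes from the paper omitted for brevity.

    # Generic basic-I/O sketch: write files of various sizes and record
    # lengths, then verify their contents. Not the E-Lab harness; the mount
    # point and size choices are illustrative assumptions.

    import hashlib, os

    MOUNT = "/stornext/snfs1"                # hypothetical SNFS mount point
    SIZES = [300 * 2**20, 600 * 2**20]       # 300 MB and 600 MB test files
    RECORD_LENGTHS = [4096, 65536, 1048576]  # 4 KiB, 64 KiB, 1 MiB records

    def write_and_verify(path, size, record_len):
        digest = hashlib.sha256()
        with open(path, "wb") as f:
            remaining = size
            while remaining > 0:
                chunk = os.urandom(min(record_len, remaining))
                f.write(chunk)
                digest.update(chunk)
                remaining -= len(chunk)
        expected = digest.hexdigest()

        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(record_len):
                digest.update(chunk)
        assert digest.hexdigest() == expected, f"corruption detected in {path}"

    for size in SIZES:
        for rl in RECORD_LENGTHS:
            write_and_verify(f"{MOUNT}/io_{size}_{rl}.bin", size, rl)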

Conclusion

VPLEX storage federation technology provides new capabilities to distribute workloads across data centers, improve collaboration, and efficiently provide DR protection using an active-active (rather than active-passive) secondary data center architecture. A clustered file system enables customers to fully utilize this capability, providing concurrent shared read-write access to shared data in two or more geographic locations. The Quantum clustered SNFS integrates easily with EMC VPLEX, EMC storage systems, and EMC PowerPath, providing robust functionality and easy management.

The joint solution provides customers with:

- A high degree of HA at the storage level, provided by VPLEX.
- A high degree of HA at the file system level, provided by SNFS.
- Accessibility for users over a distance who share the file system for a variety of needs.
- A highly scalable file system.
- A clustered file system that leverages a federated volume across a distance.

References

- http://www.emc.com
- http://www.quantum.com