FICON Extended Distance Solution (FEDS)


IBM eServer zSeries FICON Extended Distance Solution (FEDS): The Optimal Transport Solution for Backup and Recovery in a Metropolitan Network. Author: Brian Fallon, bfallon@us.ibm.com

FEDS: The Optimal Transport Solution for Backup and Recovery in a Metropolitan Area Network (MAN)

Traditional backup and recovery for customer data centers is typically implemented using channel extension schemes, and these connectivity requirements are well understood by networking and data center personnel alike. T1 and T3 telco lines are time-division multiplexed onto Synchronous Optical Network (SONET) rings between data center sites, delivering tape, print, and disk data. A SONET ring can typically carry 2.5 gigabits per second of data. But as businesses have grown, so have their bandwidth requirements. This paper focuses on a scalable solution built to deliver higher capacity at faster speeds for lower cost. The FICON Extended Distance Solution (FEDS) is the answer for those customers that require such capacity within 150 km (90 fiber miles) in the Metropolitan Area Network.

The Problem: In today's world, terabytes of disk data are used for image and video applications. Traditional channel extension requires transport based on T1, T3, or OC-3 telecommunications lines, as shown in Figure 1. These line speeds do not provide sufficient bandwidth to transport the kind of capacity required in today's Metropolitan Area Networks (MANs).

Figure 1: Traditional Channel Extension out to 90 miles (ESCON channels, channel extension equipment, and T1/T3/OC-3 links carried over an OC-48 SONET ring).

- An Enterprise Server can handle a maximum of only 256 channels.
- T1 speed = 1.544 Mbps; T3 speed = 45 Mbps; OC-3 speed = 155 Mbps; ESCON speed = 200 Mbps (slower speeds = poorer performance).
- FICON cannot run over SONET.
- OC-48 SONET ring capacity = 2.5 Gbps (12 ESCON channels). Multiple rings are required to accommodate growth, which means higher costs.
- Channel extension equipment is not scalable. Additional equipment is needed to accommodate growth, again at higher cost.
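The capacity figures above can be checked with simple arithmetic. The following sketch (Python used purely for illustration; the constant names are invented here, not IBM tooling) computes how many 200 Mbps ESCON channels fit on a SONET ring of a given capacity:

```python
# Back-of-the-envelope capacity math using the link rates quoted in the text.
T1_MBPS = 1.544
T3_MBPS = 45.0
OC3_MBPS = 155.0
ESCON_MBPS = 200.0
OC48_RING_GBPS = 2.5  # OC-48 SONET ring capacity

def escon_channels_per_ring(ring_gbps: float = OC48_RING_GBPS) -> int:
    """How many 200 Mbps ESCON channels fit on one SONET ring."""
    return int(ring_gbps * 1000 // ESCON_MBPS)

print(escon_channels_per_ring())       # 12 channels on an OC-48 ring
print(escon_channels_per_ring(0.622))  # only 3 on a 622 Mbps (OC-12) ring
```

This matches the 12-channel ceiling cited for a 2.5 Gbps ring, and ignores any other batch or interactive traffic sharing the ring.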
As stated previously, legacy channel extension utilizes a SONET ring infrastructure that on average can only handle between 622 megabits per second (Mbps) and 2.5 gigabits per second (Gbps). This equates to a maximum of about 12 ESCON channels in a 2.5 Gbps SONET ring, and does not take into consideration other types of batch and interactive data traffic. Customers can find themselves trapped in a never-ending bandwidth crunch that requires more and more T1s, T3s, and SONET rings just to accommodate even the most basic data growth within their organization.

The Answer: FICON Extended Distance Solution (FEDS)

Several finance and securities customers required a more scalable channel extension solution to meet their data needs for backup and recovery within 150 km (90 fiber miles). To meet this challenge, IBM constructed a FEDS lab at its Gaithersburg location. The following customer requirements were addressed:

- Reduce I/O channels on IBM S/390 Enterprise Servers and IBM eServer zSeries.
- Remove data buffering equipment.
- Provide for transport scalability.
- Provide for seamless migration from an asynchronous disk copy to a synchronous disk copy technology when implementing the Geographically Dispersed Parallel Sysplex™ (GDPS™) solution.

FICON: FEDS utilizes IBM's Fiber Connection (FICON) technology to transport significant amounts of data at high speeds between data centers over 150 km (90 fiber miles) apart. FICON is the next step in channel I/O technology, available on the S/390 Enterprise Server and zSeries, and replaces the current ESCON channel implementation. FICON resides at layer 5 of the Fibre Channel protocol stack (refer to Figure 2). A FICON channel can deliver data at speeds of up to 1.0625 Gbps at distances of over 150 km (90 fiber miles), as compared to T1 (1.544 Mbps) or T3 (45 Mbps). FICON reduces the number of protocol exchanges as compared to ESCON, and therefore can transport over longer distances with minimal performance impact.
FICON can preserve the customer's investment in S/390 and zSeries I/O capacity by replacing a number of ESCON channels with a smaller number of high-capacity FICON channels.
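The consolidation claim can be sketched from raw link rates alone (a rough upper bound: real consolidation ratios depend on protocol efficiency and workload mix, which this ignores):

```python
# Raw link-rate ratio of one FICON channel to one ESCON channel,
# using the speeds quoted in the text. Illustrative only.
FICON_GBPS = 1.0625
ESCON_MBPS = 200.0

ratio = FICON_GBPS * 1000 / ESCON_MBPS
print(f"One FICON channel carries ~{ratio:.1f}x the raw bandwidth of one ESCON channel")
```

By bandwidth alone, one FICON channel stands in for roughly five ESCON channels, which is the basis for reducing channel counts on the enterprise server.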

Figure 2: FICON Builds on Fibre Channel Standards. The Fibre Channel stack comprises FC-0 (interface/media: single-mode fibre, multimode fibre, copper physical characteristics), FC-1 (transmission protocol: encode/decode), FC-2 (signaling/framing protocol and flow control), FC-3 (common services), and FC-4 (upper-level protocol mapping: S/390, IPI, SCSI, HIPPI, SB, IP/802.2, audio/video and multimedia, channels and networks).

- FICON leverages the Fibre Channel standards (FC-PH) at the FC-3 layer and below.
- IBM is enhancing the FC-4 layer with FICON.
- IBM has initiated an open committee process for standardization of this FC-4 mapping layer proposal with NCITS (ANSI).
- Native protocols differ from bridge protocols.

DWDM: Dense Wavelength Division Multiplexing (DWDM) can take multiple data streams (up to 80 Gbps when using an IBM 2029 Fiber Saver Dense Wavelength Division Multiplexer) and multiplex them over just two pairs of optical fibers. This, used in conjunction with optical amplifiers, can extend the FICON channel out beyond 150 km (90 fiber miles).

Remote Copy: Remote copy provides the ability to mirror or copy data from the application (local) site to the recovery (secondary) site. This can be achieved through two specific implementations.

Extended Remote Copy (XRC) is a combined hardware and software asynchronous remote copy solution. The application I/O is signaled complete when the data update to the primary storage is completed. Subsequently, a DFSMSdfp component called the System Data Mover (SDM) asynchronously offloads data from the primary storage subsystem's cache and updates the secondary disk volumes in the recovery site. Data consistency in an XRC environment is ensured by the Consistency Group (CG) processing performed by the SDM. The CG contains records that have their order of update preserved across multiple Logical Control Units within a storage subsystem and across multiple storage subsystems. XRC operation results in minimal performance impact to the application systems at the application site.
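The ordering idea behind Consistency Group processing can be sketched as follows. This is an illustrative model only, not the SDM implementation: the types, field names, and timestamp scheme are invented here to show how applying updates in global timestamp order keeps the secondary site a consistent point-in-time image across volumes and subsystems.

```python
# Sketch of consistency-group ordering: pending updates from many volumes
# are applied to the secondary in timestamp order, never out of order.
from dataclasses import dataclass

@dataclass(frozen=True)
class Update:
    timestamp: float   # sysplex-wide timestamp of the write (illustrative)
    volume: str        # target volume
    record: str        # stand-in for the updated record

def form_consistency_group(pending: list[Update]) -> list[Update]:
    """Order pending updates by timestamp across all volumes/subsystems."""
    return sorted(pending, key=lambda u: u.timestamp)

pending = [
    Update(3.0, "VOL2", "c"),
    Update(1.0, "VOL1", "a"),
    Update(2.0, "VOL2", "b"),
]
cg = form_consistency_group(pending)
assert [u.record for u in cg] == ["a", "b", "c"]
```

Because the secondary only ever advances whole consistency groups, a recovery from the secondary volumes always sees writes in the same order the applications issued them.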
Unlike Peer-to-Peer Remote Copy (PPRC), XRC is not distance dependent.

Peer-to-Peer Remote Copy (PPRC) is a hardware solution that synchronously mirrors data residing on a set of disk volumes, called the primary volumes, in the application site to secondary disk volumes on a second subsystem at another site, the recovery site. Only when the application site storage subsystem receives write complete from the recovery site storage subsystem is the application I/O signaled complete. PPRC is distance sensitive.

Test Setup and Test Results:

Test Setup: In the IBM Gaithersburg test setup, two LPARs, LPAR1 and LPAR2, running on an IBM 9672-ZZ7 zSeries simulated the application site and the recovery site, as shown in Figure 3. The I/O driver executed in the application site's LPAR1, copying data from IBM Enterprise Storage Server 1 (ESS1) to ESS2 to simulate a production environment. The Extended Remote Copy / System Data Mover (XRC/SDM) executed in the recovery site's LPAR2, asynchronously copying data from the primary XRC disk subsystem, ESS2, in the application site to the secondary disk subsystem, ESS3, in the recovery site. Two FICON channels from the SDM image were dense wavelength division multiplexed using an IBM 2029 Fiber Saver in the recovery site and another IBM 2029 Fiber Saver in the application site. In the application site, the two FICON channels were split out over ESCON channels via an IBM 9032-5 director with the FICON bridge card feature. The ESCON channels in turn attach to ESS2.

Figure 3: FICON/XRC test setup — a 9672-ZZ7 with LPAR1 (I/O driver) and LPAR2 (System Data Mover); ESS1 and ESS2 (32 volumes each) at the application site; ESS3 (32 volumes) at the recovery site; a 9032-5 Director with the FICON bridge card; two IBM 2029 Fiber Savers; and an optical amplifier.
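The distance sensitivity of PPRC noted above comes down to signal propagation: a synchronous write cannot complete until the remote subsystem acknowledges it, so every write pays at least one fiber round trip. A quick model (generic light-in-fiber figures, roughly 5 microseconds per km one way; not IBM measurements) shows the minimum added latency:

```python
# Minimum per-write latency penalty for a synchronous (PPRC-style) mirror.
# An asynchronous (XRC-style) mirror completes the write locally, so the
# application I/O sees no such distance-dependent penalty.
ONE_WAY_US_PER_KM = 5.0  # approximate speed of light in fiber

def sync_write_penalty_ms(distance_km: float) -> float:
    """Added latency per synchronous write: one round trip over the fiber."""
    return 2 * distance_km * ONE_WAY_US_PER_KM / 1000.0

for km in (0, 40, 100, 150):
    print(f"{km:>3} km -> at least {sync_write_penalty_ms(km):.1f} ms added per write")
```

At 100 km this is already about 1 ms per write before any controller overhead, which is why synchronous mirroring is limited to metropolitan distances while XRC can run much farther.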

Tests: Various tests, indicative of XRC's and FICON's scalability over distances ranging from 0 km to 150 km, were conducted:

- Elapsed time to initially synchronize 32 volumes from ESS2 to ESS3.
- Sustained throughput achieved during the 32-volume synchronization process from ESS2 to ESS3.
- Elapsed time to copy 100% write data, generated by 32 DSS Copy jobs from ESS1 to ESS2, while copying the updated data from the 32 volumes of ESS2 to ESS3 using XRC.
- Sustained throughput to copy 100% write data, generated by 32 DSS Copy jobs from ESS1 to ESS2, while copying the updated data from the 32 volumes of ESS2 to ESS3 using XRC.
- A final test that required breaking the primary fiber path and dynamically switching to the secondary (backup) fiber path without loss of data.

Test Results: Figure 4 (ESS testcase results — XRC/SDM via the 2029 and optical amplifier; elapsed time in minutes and average throughput in MBps plotted against distances from 0 to 150 km for the 32 XADDPAIRs and the 32 DSS Copy jobs) shows the test results summarized below:

- Elapsed time to initialize 32 volumes (XADDPAIRs) remained consistent from 0 km to 150 km. The time ranged from 15:53 min (0 km) to :32 min (150 km).
- The 32 XADDPAIR aggregate throughput achieved a consistent 70 to 74 MBps (megabytes per second) over the two FICON channels from 0 to 150 km. The average throughput per channel was 35 to 37 MBps.
- The 32 DSS Copy jobs from ESS1 to ESS2, along with using XRC to copy updates from the 32 volumes of ESS2 to ESS3, had an elapsed time of 24.5 min and remained consistent as the distance was varied from 0 to 150 km.

Notes: (1) XRC XADDPAIR initial copy/resynch processing is highly optimized. (2) The DSS Copy jobs are 100% write activity, whereas most enterprises' workloads are 20-30% write.
- The 32 DSS Copy jobs from ESS1 to ESS2, along with using XRC to copy updates from the 32 volumes of ESS2 to ESS3, achieved a consistent aggregate throughput of 46 MBps over the two FICON channels from 0 to 150 km. The average throughput per channel was 23 MBps.
- When the primary fiber path failed, the 2029 automatically switched over to the secondary fiber path without any impact to the XRC remote copy application. XRC/SDM continued normal execution after the switchover.

FEDS: The Right Choice for Metropolitan GDPS

Geographically Dispersed Parallel Sysplex (GDPS) is a multisite management facility, a combination of system code and automation that utilizes the capabilities of Parallel Sysplex technology, storage subsystem mirroring, and databases to manage enterprise servers, storage, and network resources. It is designed to minimize and potentially eliminate the impact of a disaster or planned site outage. It provides the ability to perform a controlled site switch for both planned and unplanned site outages, with no data loss in a synchronous environment (PPRC) and minimal data loss in an asynchronous environment (XRC). GDPS maintains full data integrity across multiple volumes and storage subsystems. Currently, GDPS in synchronous mode (PPRC), known as GDPS/PPRC, is restricted to 40 km (24 fiber miles) between data centers. In the future, GDPS/PPRC is planned to extend the distance between data centers to 100 km (60 fiber miles). In the interim, customers can implement GDPS for disaster recovery using XRC, known as GDPS/XRC. Both forms of disk copy can use FEDS for optical fiber transport: XRC over FEDS today, and PPRC over FEDS in the future. By using FEDS, customers can protect their transport infrastructure investment. (Refer to Figures 5 and 6.)
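The per-channel figures quoted in the test results are straightforward to cross-check against the aggregate numbers (simple division; the function name here is illustrative):

```python
# Cross-check: aggregate throughput over two FICON channels divided by
# the channel count reproduces the per-channel figures in the text.
def per_channel(aggregate_mbps: float, channels: int = 2) -> float:
    """Average throughput per channel for a given aggregate rate."""
    return aggregate_mbps / channels

print(per_channel(70.0))  # 35.0 MBps, XADDPAIR low end
print(per_channel(74.0))  # 37.0 MBps, XADDPAIR high end
print(per_channel(46.0))  # 23.0 MBps, DSS Copy plus XRC updates
```

The key observation is not the division itself but that these rates held constant from 0 to 150 km, i.e., throughput was independent of distance.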

Figure 5: Current GDPS/XRC implementation — FEDS transition from GDPS/XRC to GDPS/PPRC within 60 fiber miles. An RCMF/XRC host with primary DASD at Site 1; 100 km of fiber with a 2029 DWDM at each end and optical amplifiers (68 GB FEDS capacity); a 9032-5 Director with the FICON bridge card; ESS (Shark) storage with XRC; and the RCMF/XRC System Data Mover with secondary DASD at Site 2.

Figure 6: Future GDPS/PPRC implementation — FEDS transition from GDPS/XRC to GDPS/PPRC within 60 fiber miles. An RCMF/PPRC host with a FICON Director and primary DASD at Site 1; future ETR timer support and ISC3 links; 100 km of fiber with a 2029 DWDM at each end and amplifiers (134 GB FEDS capacity); ESS (Shark) storage with PPRC at both sites; and an RCMF/PPRC host with a Director and secondary DASD at Site 2.

Additional Information:

- FICON (FCV Mode) Planning Guide, SG24-5445-00
- FICON: S/390 Implementation Guide, SG24-59
- Fiber Saver (2029) Implementation Guide, SG24-5608
- Fiber Saver (2029) Planning and Maintenance, SC28-6801
- GDPS: ibm.com/servers/eserver/zseries/pso/

Acknowledgments: David B. Petersen, Enterprise Server Group, petersen@us.ibm.com; Noshir Dhondy, Enterprise Server Group, dhondy@us.ibm.com; Ian R. Wright, Enterprise System Connectivity Specialist, iwright@us.ibm.com.

Summary: The FICON Extended Distance Solution (FEDS) has solved the traditional channel extension problems of insufficient bandwidth and scalability. The advantages provided by FEDS are:

- Scalable bandwidth up to 80 Gbps
- Consistent throughput independent of distance from 0 to 150 km (90 fiber miles)
- Fewer I/O channels on the enterprise server
- Faster data rates
- Lower cost and simpler implementation
- A migration path from GDPS/XRC to GDPS/PPRC

Copyright IBM Corporation 2001. IBM Corporation, Integrated Marketing Communications, Server Group, Route 100, Somers, NY 10589. Produced in the United States of America, 09-01. All Rights Reserved. References in this publication to IBM products or services do not imply that IBM intends to make them available in every country in which IBM operates. Consult your local IBM business contact for information on the products, features, and services available in your area. IBM, the IBM logo, the e-business logo, ESCON, Enterprise Storage Server, FICON, Geographically Dispersed Parallel Sysplex, GDPS, Parallel Sysplex, S/390, and zSeries are trademarks or registered trademarks of IBM Corporation in the United States, other countries, or both. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds. Tivoli is a registered trademark of Tivoli Systems Inc. in the United States, other countries, or both. UNIX is a registered trademark in the United States and other countries, licensed exclusively through The Open Group. Windows NT is a registered trademark of Microsoft Corporation. Other trademarks and registered trademarks are the properties of their respective companies. IBM hardware products are manufactured from new parts, or new and used parts. Regardless, our warranty terms apply. Photographs shown are of engineering prototypes; changes may be incorporated in production models. This equipment is subject to all applicable FCC rules and will comply with them upon delivery. Information concerning non-IBM products was obtained from the suppliers of those products; questions concerning those products should be directed to those suppliers. All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. GM13-0092-00.