designed. engineered. results. Parallel DMF

Agenda
- Monolithic DMF
- Parallel DMF
- Parallel configuration considerations

Monolithic DMF

Monolithic DMF
[architecture diagram: a single DMF central server acts as both file server and data mover, hosting the DMF databases and moving all DMF data; clients attach over the LAN]

Data Movement Parallelism
All data movement runs on the DMF central server; recalls follow the same path as migrations.
- At most MAX_PUT_CHILDREN tape drives in use per volume group
- Data is written in chunks of at most MAX_CHUNK_SIZE
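
A minimal dmf.conf sketch of where these limits live; the volume group name and values are illustrative, not from the slides:

    define vg_primary
            TYPE               volumegroup
            MAX_PUT_CHILDREN   4           # cap of 4 tape drives writing for this volume group
            MAX_CHUNK_SIZE     536870912   # upper bound per chunk (see the DMF Administrator's Guide for units)
    enddef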

Parallel DMF (pdmf)

Why pdmf: technology upgrade
Per-drive transfer rates keep increasing.
[chart: native and compressed transfer rates in MB/s for LTO-1 through LTO-5, climbing from well under 100 MB/s toward 300 MB/s]

Why pdmf: single-server limitation
A single server will bottleneck at some point:
- CPU cores do not get equal bandwidth to all memory, so buffer placement creates bottlenecks (DMF, filesystem, page cache, tape I/O, ...)
- Not all software layers optimize buffer placement
- It is no longer possible to move all the data with a single system

pdmf Benefits
- Scale throughput to/from the tape drives: add mover nodes as needed
- Provide redundancy against loss of a data mover node, or loss of a tape path on a data mover node
- Lower cost: x86_64 mover hardware, a small central server, HA solution

DMF 4.0 is Parallel DMF
Current release: DMF 4.4.1 (SLES 10 SP3) / DMF 5.0 (SLES 11)
- Runs in a monolithic configuration: the traditional way in which DMF has run; DMF's data movers run on the DMF server
- Runs in a parallel configuration: the scalable way in which DMF 4.x can run; DMF's data movers run on dedicated nodes
- Same DMF code base for parallel & monolithic
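
Because the code base is shared, a parallel configuration is expressed by adding objects to the same dmf.conf a monolithic setup uses; a hedged sketch, with mover hostnames invented for illustration:

    define mover1
            TYPE    node    # dedicated parallel data mover node
    enddef

    define mover2
            TYPE    node
    enddef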

Parallel DMF
[architecture diagram: the DMF filesystem server (also the CXFS server) plus DMF mover nodes running as CXFS clients, with NFS/CIFS servers in front; all nodes share the user file space and the DMF space over the LAN]

SGI Scalable Storage Architecture
[architecture diagram: NFS clients on an IB network feed 52 NFS edge servers delivering 104 GB/s aggregate NFS bandwidth (per edge node: 2 GB/s aggregate, 500 MB/s per client, 50 MB/s per cp stream); behind them, an active/passive DMF server pair and 10 parallel data movers (3 GB/s aggregate per node, 30 GB/s tape bandwidth) connect over an 8G FC switch network to a Tier 1 DMF-managed high-performance tier, an optional Tier 2 (DCM) DMF-managed cost-effective RAID tier, and a Tier 3 DMF-managed tape tier]

Data Mover Platform
Select hardware targeted for data movement:
- x86_64 better than ia64, in both price and performance
- Best price/performance mover node: C1103 (Rackable server line), 1U / 2-socket / 96 GB DDR3, with 2 PCIe x16 or 4 PCIe x8 slots

Summary of architectural advantages
- I/O scalability
- HA redundancies: data movers, DMF server, NFS edge servers
- Flexibility: choose platforms according to their task (DMF server, data movers, NFS servers)
- Utilization of low-cost x86 servers

pdmf Considerations

Configuration Considerations
- The DMF server needs access to some tape drives: backups, dmatsnf, dmatread, etc. run only on the server
- The server can optionally be a data mover node too
- Not all data movers need to see all tape drives
- Best practice: let 2+ movers see each tape drive; if the last mover that can see a drive goes down, that drive won't get used, and if no mover can see the drive, its cartridge might require admin intervention to become usable again (just as today with monolithic DMF)

Configuration Considerations
- DMF user and admin filesystems need to be CXFS filesystems: both server and mover nodes need fast access, and movers must be able to perform DMAPI reads/writes to the filesystems
- Disk Cache (DCM) and disk MSP: all I/O is currently done on the DMF server

Library Robot Scheduling
- The OpenVault mounting service is required for parallel configurations
- The released version has a limitation with multi-armed, ACSLS-controlled libraries; the June release eliminates this limitation and asynchronously schedules as many cartridge movements as ACSLS has tape drives
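
In dmf.conf terms, the mounting service is selected per drive group; a minimal sketch, assuming the drivegroup object's MOUNT_SERVICE parameter, with the drive group and volume group names invented for illustration:

    define dg_lto
            TYPE             drivegroup
            VOLUME_GROUPS    vg_primary
            MOUNT_SERVICE    openvault   # parallel configurations require OpenVault
    enddef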

Tape Drive Scheduling
General rules:
- Choose a drive in the same library bay as the cartridge, if possible
- Balance load across mover nodes, and across HBA ports within a mover node
- If several choices remain, choose the least-used drive for wear leveling
- Per-mover config parameter for FC port speed: all HBAs on a mover should have the same Gb/s port speed; the scheduler picks the port with the most remaining bandwidth
- Per-mover config parameter for total bandwidth: each mover can have different throughput capabilities; the scheduler picks the mover with the most remaining bandwidth
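
A sketch of how those two per-mover parameters might look on a node object; the parameter names HBA_BANDWIDTH and NODE_BANDWIDTH and the values are assumptions here, so check the DMF Administrator's Guide for exact spelling and units:

    define mover1
            TYPE              node
            HBA_BANDWIDTH     800000000    # per FC port (8 Gb/s links assumed, all the same speed)
            NODE_BANDWIDTH    3000000000   # total throughput this mover can sustain, roughly 3 GB/s
    enddef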

Tape Merge Considerations
Socket merges:
- Merges may be done between mover nodes, but network bandwidth is generally not enough to keep up with modern tape drives
- When do socket merges occur? For files too large to fit in the cache dir filesystem, i.e. files larger than a configurable size (the default is 25% of the cache dir)
- Recommendation: do tape merges using the cache dir
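
A sketch of the cache-dir side of this, assuming the CACHE_DIR and CACHE_SPACE parameter names on the library server object; the path, size, and object names are illustrative:

    define ls1
            TYPE           libraryserver
            DRIVE_GROUPS   dg_lto
            CACHE_DIR      /dmf/merge_cache   # filesystem used to stage data during merges
            CACHE_SPACE    200g               # space the LS may use there (units per the admin guide)
    enddef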

SAN Configuration
- Use FC zoning to isolate tape drives: prevents an unexpected tape rewind, with its risk of data corruption (persistent reserve in OpenVault is to come)
- Disk and tape should be on different HBA ports: CXFS may fence the disk HBA port

www.sgi.com 2002 SGI. All rights reserved. SGI, IRIX, and the SGI logo are registered trademarks and SGI SAN Server, XFS, and CXFS are trademarks of Silicon Graphics, Inc., in the U.S. and/or other countries worldwide. MIPS is a registered trademark of MIPS Technologies, Inc., used under license by Silicon Graphics, Inc. UNIX is a registered trademark of The Open Group in the U.S. and other countries. Intel and Itanium are registered trademarks of Intel Corporation. Linux is a registered trademark of Linus Torvalds. Windows is a registered trademark or trademark of Microsoft Corporation in the United States and/or other countries. All other trademarks are the property of their respective owners. (10/02)