5.4 - DAOS Demonstration and Benchmark Report
1 5.4 - DAOS Demonstration and Benchmark Report
Johann LOMBARDI on behalf of the DAOS team
September 25th, 2013 - Livermore (CA)
NOTICE: THIS MANUSCRIPT HAS BEEN AUTHORED BY INTEL UNDER ITS SUBCONTRACT WITH LAWRENCE LIVERMORE NATIONAL SECURITY, LLC WHO IS THE OPERATOR AND MANAGER OF LAWRENCE LIVERMORE NATIONAL LABORATORY UNDER CONTRACT NO. DE-AC52-07NA27344 WITH THE U.S. DEPARTMENT OF ENERGY. THE UNITED STATES GOVERNMENT RETAINS AND THE PUBLISHER, BY ACCEPTING THE ARTICLE FOR PUBLICATION, ACKNOWLEDGES THAT THE UNITED STATES GOVERNMENT RETAINS A NON-EXCLUSIVE, PAID-UP, IRREVOCABLE, WORLD-WIDE LICENSE TO PUBLISH OR REPRODUCE THE PUBLISHED FORM OF THIS MANUSCRIPT, OR ALLOW OTHERS TO DO SO, FOR UNITED STATES GOVERNMENT PURPOSES. THE VIEWS AND OPINIONS OF AUTHORS EXPRESSED HEREIN DO NOT NECESSARILY REFLECT THOSE OF THE UNITED STATES GOVERNMENT OR LAWRENCE LIVERMORE NATIONAL SECURITY, LLC.
2 Agenda
1. Functional Demonstrations
2. Performance & Scalability
3 Functional Demonstrations - DAOS Demonstration and Benchmark Report
4 Testing Infrastructure
- Reuse existing Lustre* test infrastructure
  - Patch pushed to gerrit, built by jenkins, tested by autotest and results available in maloo
  - Usual test suites (sanity, conf-sanity, sanityn, ...) are run
- New sanity-daos test suite
  - Invoke binary tests (e.g. event and event queue testing)
  - Execute tests through a tool (daosop) using the DAOS API
  - 52 regression tests
  - New tests added each time we land a major patch
* Other names and brands may be claimed as the property of others.
5 Test Distribution
Component / Number of Tests / Test range:
- Event & EQ: 4 tests, range 1 - 5b
- System Container: tests 10 - 12
- Container: 7 tests, range 20a - 24
- Container + Shard: tests 25 - 30
- Objects: tests 40a - 47
- Collective Open: tests 60 - 62
- Epoch Protocol: tests 70 - 73
6 Testing Configurations
- Autotest configuration: 2 clients, 2 OSSs & 1 MDT in VMs; results available here:
- sanity-daos to be run live during this presentation, in a virtual machine on this laptop
7 Event & Event Queue (EQ)
Criteria / Associated Test #:
- Maximum number of EQs: test 1
- Maximum number of child events: test 2
- Finalize in-progress event: test 2
- Parent event: test 2
- Maximum number of events per EQ: test 3
- Non-blocking operations / EQ Poll & Query: test 4a
- Benchmark: #events handled with a single core: test 5a
- Benchmark: #events handled with multiple cores: test 5b
(A sketch of the non-blocking operation / EQ pattern follows.)
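For context, the non-blocking pattern that these EQ tests and benchmarks exercise looks roughly like the sketch below: create an event queue, attach one event per in-flight operation, then poll for completions. The daos_* type and function names are assumptions standing in for the prototype DAOS API (declared locally so the sketch is self-contained); they are not verified signatures.

    /* Minimal sketch of the EQ usage pattern exercised above.  The daos_*
     * names are hypothetical stand-ins, declared here for self-containment;
     * they are not verified prototype signatures. */
    typedef struct { void *h; } daos_handle_t;      /* assumed opaque handle */
    typedef struct { int status; } daos_event_t;    /* assumed event record  */

    int daos_eq_create(daos_handle_t *eqh);                            /* assumed */
    int daos_eq_destroy(daos_handle_t eqh);                            /* assumed */
    int daos_event_init(daos_event_t *ev, daos_handle_t eqh,
                        daos_event_t *parent);                         /* assumed */
    int daos_eq_poll(daos_handle_t eqh, int wait_running, long timeout,
                     int nevents, daos_event_t **evs);                 /* assumed */

    #define NOPS 16    /* number of non-blocking operations to launch */

    int run_async_batch(void)
    {
        daos_handle_t eqh;
        daos_event_t  ev[NOPS];
        daos_event_t *done[NOPS];
        int i, rc, nr, completed = 0;

        rc = daos_eq_create(&eqh);
        if (rc)
            return rc;

        for (i = 0; i < NOPS; i++) {
            /* attach one event to the EQ per operation (no parent event) */
            rc = daos_event_init(&ev[i], eqh, NULL);
            if (rc)
                return rc;
            /* ... launch one non-blocking DAOS operation with &ev[i] ... */
        }

        /* reap completions from the EQ until every operation has finished */
        while (completed < NOPS) {
            nr = daos_eq_poll(eqh, 0, -1, NOPS, done);
            if (nr < 0)
                return nr;
            completed += nr;
        }
        return daos_eq_destroy(eqh);
    }

Tests 5a/5b then measure how many such completions a single core, or several cores polling concurrently, can drive per second.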
8 System Container
Criteria / Associated Test #:
- System container open/close: test 10
- System container query: test 11
- System container query target: test 12
9 Container & Shard
Criteria / Associated Test #:
- Container open/create/unlink/query: tests 20[a,b]
- Container POSIX stat/unlink: test 21
- No POSIX open on container: test 22
- Exclusive open for write: test 24
- Shard addition & list: tests 25/26/28/29
- Shard query: test 30
10 Object I/O
Criteria / Associated Test #:
- Object open/close + mode check: tests 40[a,b] - 44
- Object direct read/write: tests 45[a-h]
- Object cached write + flush: tests 45[i-l]
- Read on non-existent object: tests 45[c,d]
- Object punch: tests 46[a,b]
- Shard object index listing: test 47
- Object created on write: test 47
11 Collective Open
Criteria / Associated Test #:
- Collective open in the same mountpoint: test 60
- Collective open in a different mountpoint: test 61
- Layout refresh through local2global2local: test 62
12 Epoch Protocol
Criteria / Associated Test #:
- Epoch reference on open: test 70
- Epoch slip: test 71
- Epoch commit: test 72
- Epoch wait: test 73
(A sketch of the epoch life-cycle these tests cover follows.)
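The epoch life-cycle that these four tests cover can be sketched roughly as below: a reference on the current epoch is taken at container open, updates go into a higher epoch, that epoch is committed and waited on, and the old reference is slipped forward. The ordering follows the slide's terminology only; the daos_epoch_* calls are hypothetical stand-ins declared locally, not verified prototype signatures.

    /* Rough sketch of one writer's epoch cycle.  All daos_* names are
     * hypothetical stand-ins, not verified prototype calls. */
    typedef struct { void *h; } daos_handle_t;         /* assumed */
    typedef unsigned long long daos_epoch_t;           /* assumed */

    int daos_epoch_query(daos_handle_t coh, daos_epoch_t *held);     /* assumed */
    int daos_epoch_commit(daos_handle_t coh, daos_epoch_t epoch);    /* assumed */
    int daos_epoch_wait(daos_handle_t coh, daos_epoch_t epoch);      /* assumed */
    int daos_epoch_slip(daos_handle_t coh, daos_epoch_t epoch);      /* assumed */

    int update_in_new_epoch(daos_handle_t coh)
    {
        daos_epoch_t held;
        int rc;

        /* the container open took a reference on epoch 'held' */
        rc = daos_epoch_query(coh, &held);
        if (rc)
            return rc;

        /* ... submit object writes tagged with epoch held + 1 ... */

        rc = daos_epoch_commit(coh, held + 1);   /* make the new epoch visible */
        if (rc)
            return rc;
        rc = daos_epoch_wait(coh, held + 1);     /* block until globally committed */
        if (rc)
            return rc;
        /* move our reference forward so older epochs can be reclaimed */
        return daos_epoch_slip(coh, held + 1);
    }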
13 Performance & Scalability - DAOS Demonstration and Benchmark Report
14 Performance Testbed - lola
- Intel's FF internal cluster called lola
- 5 OSS nodes connected to JBODs
  - 1 OST per OSS (5 OSTs in all); each OST has a dedicated RAIDZ2 pool composed of 10 JBOD disks
  - 32 GB of memory
- 1 MDS connected to a JBOD
  - 1 MDT in a dedicated zpool of 1 JBOD disk
  - 32 GB of memory
- 20 client nodes
- InfiniBand network
15 Scalability Testbed - hyperion
- 8 OSSs connected to NetApp E-5460 arrays
  - 1 OST per OSS (8 OSTs in all); each OST has a dedicated zpool composed of 1 RAID6 LUN across 10 disks
- 1 MDS connected to a NetApp E-5460 array
  - 1 MDT on a dedicated zpool composed of 1 RAID10 LUN across 4 disks
- 120 client nodes
- InfiniBand network
16 Collective Container Open/Close (1/3)
- DAOSBENCH in container open/close mode (see the sketch after this list):
  - Process leader opens the container and generates a global handle
  - Global handle is broadcast to all tasks
  - Slave tasks call global2local to turn the broadcast handle back into a local one
  - Measure how many such collective opens can be completed per second
  - Each container has one shard per target
- ssfopen used to compare with POSIX: similar to daosbench, except that each client node performs a POSIX open(2)
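A minimal sketch of the collective open pattern, assuming MPI for the broadcast: rank 0 opens the container and serializes its handle with a local2global-style call, the opaque buffer is broadcast, and every other rank rebuilds a local handle with a global2local-style call. Only the MPI calls are real API; the daos_* declarations are hypothetical stand-ins and error handling is simplified.

    /* Sketch of the daosbench collective container open.  Only the MPI calls
     * are real API; the daos_* declarations are hypothetical stand-ins. */
    #include <mpi.h>
    #include <stdlib.h>

    typedef struct { void *h; } daos_handle_t;                           /* assumed */
    int daos_container_open(const char *path, unsigned int mode,
                            daos_handle_t *coh);                          /* assumed */
    int daos_local2global(daos_handle_t coh, void *buf, int *size);       /* assumed */
    int daos_global2local(const void *buf, int size, daos_handle_t *coh); /* assumed */

    int collective_open(const char *path, MPI_Comm comm, daos_handle_t *coh)
    {
        int rank, rc = 0, gsize = 0;
        void *gbuf;

        MPI_Comm_rank(comm, &rank);

        if (rank == 0) {
            rc = daos_container_open(path, 0 /* mode */, coh);
            if (rc == 0)
                rc = daos_local2global(*coh, NULL, &gsize);  /* query handle size */
        }
        /* leader tells every task how big the serialized handle is */
        MPI_Bcast(&gsize, 1, MPI_INT, 0, comm);

        gbuf = malloc(gsize);
        if (rank == 0 && rc == 0)
            rc = daos_local2global(*coh, gbuf, &gsize);      /* serialize handle */

        /* ship the opaque global handle to all tasks */
        MPI_Bcast(gbuf, gsize, MPI_BYTE, 0, comm);

        if (rank != 0)
            rc = daos_global2local(gbuf, gsize, coh);        /* rebuild local handle */

        free(gbuf);
        return rc;
    }

daosbench then times how many of these collective opens (and the matching closes) complete per second.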
17 Collective Open/Close - lola (2/3): chart of collective open rate (per sec) vs. number of client nodes, for DAOS and POSIX.
18 Collective Open/Close - hyperion (3/3): chart of collective open rate (per sec) vs. number of client nodes, for DAOS and POSIX.
19 Shard Addition (1/2)
- DAOSBENCH in shard-add mode:
  - Process leader creates a container and broadcasts the handle
  - Each task creates multiple shards in the same container
  - The --shards=N option means that a container gets N shards in total
  - Those N shards are mapped to processes as evenly as possible; e.g. with N = 7 and ntasks = 3, rank #0 adds shards 0, 3 & 6, rank #1 adds 1 & 4, and rank #2 adds 2 & 5 (see the mapping sketch after this list)
  - Process leader refreshes layout information and broadcasts the updated global handle
  - Slave processes re-execute global2local to update their local layout
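The "as evenly as possible" mapping is a simple round-robin of shard indices over ranks; the self-contained snippet below reproduces the N = 7, ntasks = 3 example from the slide (function names are illustrative, not taken from daosbench).

    /* Round-robin mapping of N shards over ntasks ranks, as in the example:
     * N = 7, ntasks = 3 -> rank 0 adds shards 0, 3, 6; rank 1 adds 1, 4;
     * rank 2 adds 2, 5.  Illustrative only, not daosbench code. */
    #include <stdio.h>

    static void add_shards_for_rank(int rank, int ntasks, int nshards)
    {
        int shard;

        printf("rank %d adds shards:", rank);
        for (shard = rank; shard < nshards; shard += ntasks) {
            /* here daosbench would call the shard-add API for 'shard' */
            printf(" %d", shard);
        }
        printf("\n");
    }

    int main(void)
    {
        int ntasks = 3, nshards = 7, rank;

        for (rank = 0; rank < ntasks; rank++)
            add_shards_for_rank(rank, ntasks, nshards);
        return 0;
    }

Running it prints exactly the assignment from the slide: rank 0 gets 0, 3, 6; rank 1 gets 1, 4; rank 2 gets 2, 5.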
20 Shard Addition - hyperion (2/2): chart of DAOS shard create rate (per sec) vs. number of client nodes.
21 Small I/O: 4K Object Creation (1/3)
- New DAOSBENCH benchmark: object-create mode
  - Create a container with a shard on each OST
  - Each task is assigned a shard and writes 4 KB into multiple objects in the shard (see the sketch after this list)
- Standard mdsrate used to compare with POSIX
  - Use the write=4096 option to create non-empty files
  - Each task creates files in its own directory
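The per-task loop of the object-create mode can be sketched as below: each task is bound to one shard and writes a single 4 KiB extent into many freshly-created objects. Since objects are created on first write (test 47 above), an open of a new object ID followed by one write is enough. The daos_* declarations are hypothetical stand-ins declared locally, not verified prototype signatures.

    /* Sketch of the object-create workload: one shard per task, 4 KiB written
     * into each of many new objects.  daos_* names are hypothetical stand-ins. */
    #include <string.h>

    typedef struct { void *h; } daos_handle_t;                          /* assumed */
    typedef unsigned long long daos_obj_id_t;                           /* assumed */
    int daos_object_open(daos_handle_t shard, daos_obj_id_t oid,
                         unsigned int mode, daos_handle_t *oh);          /* assumed */
    int daos_object_write(daos_handle_t oh, const void *buf,
                          unsigned long len, unsigned long long off);    /* assumed */
    int daos_object_close(daos_handle_t oh);                             /* assumed */

    #define IOSIZE   4096
    #define NOBJECTS 1000

    int create_small_objects(daos_handle_t my_shard, int rank)
    {
        char buf[IOSIZE];
        int i, rc;

        memset(buf, 'x', sizeof(buf));
        for (i = 0; i < NOBJECTS; i++) {
            daos_handle_t oh;
            /* objects are created implicitly on first write, so a new object
             * id plus one 4 KiB write creates a non-empty object */
            daos_obj_id_t oid = (daos_obj_id_t)rank * NOBJECTS + i;

            rc = daos_object_open(my_shard, oid, 0 /* mode */, &oh);
            if (rc)
                return rc;
            rc = daos_object_write(oh, buf, IOSIZE, 0);
            if (rc)
                return rc;
            rc = daos_object_close(oh);
            if (rc)
                return rc;
        }
        return 0;
    }

This mirrors the mdsrate comparison, where each task creates files with a 4096-byte write in its own directory.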
22 Small I/O - lola (2/3): chart of 4 KB object create rate (per sec) vs. number of client nodes, for DAOS (DAOSBENCH) and POSIX (mdsrate).
23 Small I/O - hyperion (3/3): chart of 4 KB object create rate (per sec) vs. number of client nodes, for DAOS and POSIX.
24 Streaming I/O (1/5)
- IOR benchmark modified to use the DAOS API
  - Process leader opens the container, creates one shard per task and broadcasts the global handle
  - Each task is assigned a shard and submits async I/Os to an object in that shard (see the sketch after this list)
  - No application-level striping/mirroring support yet (on our TODO list)
- DAOS runs include:
  - direct writes & reads
  - buffered writes
- For comparison, POSIX runs include:
  - file-per-process buffered reads & writes
  - direct I/O not benchmarked because it is too slow; could likely be improved by using the ZIL
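Because each task streams into a single object in its shard, the interesting knob is how many asynchronous I/Os it keeps outstanding (the "maximum number of I/Os in flight" axis of the charts below). A rough sketch of such a bounded-queue-depth submission loop is shown here; the daos_* declarations are hypothetical stand-ins declared locally, not the actual IOR/DAOS code.

    /* Sketch of an async streaming-write loop: keep at most MAX_INFLIGHT
     * writes outstanding on one object, reaping completions from an event
     * queue before submitting more.  daos_* names are hypothetical stand-ins. */
    typedef struct { void *h; } daos_handle_t;        /* assumed */
    typedef struct { int status; } daos_event_t;      /* assumed */

    int daos_event_init(daos_event_t *ev, daos_handle_t eqh,
                        daos_event_t *parent);                          /* assumed */
    int daos_eq_poll(daos_handle_t eqh, int wait_running, long timeout,
                     int nevents, daos_event_t **evs);                  /* assumed */
    int daos_object_write(daos_handle_t oh, const void *buf, unsigned long len,
                          unsigned long long off, daos_event_t *ev);    /* assumed async */

    #define MAX_INFLIGHT 8    /* the "I/Os in flight" knob varied in the charts */

    int stream_write(daos_handle_t oh, daos_handle_t eqh, const char *buf,
                     unsigned long xfer, unsigned long long total)
    {
        daos_event_t  ev[MAX_INFLIGHT];
        daos_event_t *done[MAX_INFLIGHT];
        int free_slot[MAX_INFLIGHT];
        unsigned long long off = 0;
        int i, rc, nr, nfree = MAX_INFLIGHT, inflight = 0;

        for (i = 0; i < MAX_INFLIGHT; i++)
            free_slot[i] = i;

        while (off < total || inflight > 0) {
            /* fill the window while data and free event slots remain */
            while (off < total && nfree > 0) {
                int slot = free_slot[--nfree];

                rc = daos_event_init(&ev[slot], eqh, NULL);
                if (rc)
                    return rc;
                rc = daos_object_write(oh, buf, xfer, off, &ev[slot]);
                if (rc)
                    return rc;
                off += xfer;
                inflight++;
            }
            /* window full (or nothing left to submit): reap completions */
            nr = daos_eq_poll(eqh, 0, -1, MAX_INFLIGHT, done);
            if (nr < 0)
                return nr;
            for (i = 0; i < nr; i++)
                free_slot[nfree++] = (int)(done[i] - ev);  /* return the slot */
            inflight -= nr;
        }
        return 0;
    }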
25 Streaming I/O - single client - lola (2/5): chart of throughput (MiB/s) vs. maximum number of I/Os in flight, for DAOS direct write, DAOS read and DAOS buffered write.
26 Streaming I/O - multi-client - lola (3/5): chart of throughput (MiB/s) vs. number of client nodes, for DAOS direct write, DAOS direct read, POSIX FPP write, POSIX FPP read and DAOS buffered write.
27 Streaming I/O - single client - hyperion (4/5): chart of throughput (MiB/s) vs. maximum number of I/Os in flight, for DAOS direct write and DAOS direct read.
28 Streaming I/O - multi-client - hyperion (5/5): chart of throughput (MiB/s) vs. number of client nodes, for DAOS direct write, DAOS direct read, POSIX FPP write and POSIX FPP read.
29 Conclusions
- DAOS prototype turns out to be fairly stable when tested at scale
- DAOS model seems to scale well
- Still room for improvements:
  - Speed up shard creation rate on a single OST (a ZFS dataset creation involves a txg sync)
  - ZFS read performance drop not yet understood; e.g. reduce the amount of seeks by increasing block size to 1 MB
  - Limit impact of asynchronous dataset removal (disabling ZFS feature@async_destroy seems to help)
  - Add striping support to IOR/DAOS and test with unaligned I/Os
- Groundwork in place for epoch recovery implementation
30 Fast Forward Project DAOS