Lustre usages and experiences

Slide 1: Lustre usages and experiences at the German Climate Computing Centre in Hamburg
Carsten Beyer

Slide 2: About DKRZ
- High-performance computing centre exclusively for German climate research
- Limited company, non-profit; staff: ~70
- Services for climate research:
  - Support for scientific computing and simulation, model optimization, parallelization
  - Data management and archiving
  - Data visualization (3D graphics and video)
- University research group: HPC (Prof. Dr. Ludwig)

Slide 3: Mistral
- First phase 2015 (second phase 2016), total cost: 41 million euros (Bull supercomputer: 26 million euros)
- Bullx B700 DLC system
- ~ (+ ~67,000) cores (Intel Haswell / Intel Broadwell)
- nodes with 2x 12 cores (+ 1750 nodes with 2x 18 cores)
- 1.4 (3.0) PetaFLOPS
- 115 TB (266 TB) main memory
- InfiniBand FDR
- Parallel file system: Lustre, 21 (+33) PetaByte, throughput > 0.5 TeraByte/s

Slide 4: Lustre - ClusterStor
Seagate ClusterStor setup (Phase 1: CS9000 / Phase 2: L300)
Phase 1 (lustre01):
- 62 OSS / 124 OSTs / 6 TB disks
- 5 MDTs
- 21 PB / max. 6 billion files
- Lustre / IB FDR
- 455 million files / 90% filled
Phase 2 (lustre02):
- 74 OSS / 148 OSTs / 8 TB disks
- 7 MDTs
- 33 PB / max. 8 billion files
- Lustre / IB FDR
- 196 million files / 40% filled

Slide 5: Filesystem structure
Lustre Phase 1:
- HOME directories (/mnt/lustre01/pf => MDT0000)
- POOL directory (/mnt/lustre01/pool => MDT0000)
- Software tree (/mnt/lustre01/sw => MDT0000)
- SCRATCH directories (/mnt/lustre01/scratch => MDT000[1-4])
- WORK directories (/mnt/lustre01/work => MDT000[1-4])
Lustre Phase 2 (extension of phase 1):
- WORK directories (/mnt/lustre02/work => MDT000[0-6])
- Soft link /work -> /mnt/lustre01/work
- Soft link /mnt/lustre01/work/<prj> -> /mnt/lustre02/work/<prj>
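
The per-MDT placement above relies on DNE remote directories. Below is a minimal sketch of how new project directories could be pinned to individual MDTs with `lfs mkdir -i`; the target tree, project names and the round-robin policy are illustrative assumptions, not DKRZ's actual provisioning scripts.

```python
#!/usr/bin/env python3
"""Sketch: distribute new project directories across MDTs (DNE remote
directories). Paths, project names and the round-robin policy are
illustrative assumptions."""
import subprocess

WORK_ROOT = "/mnt/lustre02/work"   # phase-2 WORK tree, MDT000[0-6]
NUM_MDTS = 7

def create_project_dir(project, index):
    """Create a directory pinned to one MDT via `lfs mkdir -i <mdt_index>`."""
    mdt_index = index % NUM_MDTS
    subprocess.run(
        ["lfs", "mkdir", "-i", str(mdt_index), f"{WORK_ROOT}/{project}"],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical project names, spread round-robin over the 7 MDTs.
    for i, prj in enumerate(["ab0123", "bk0456", "mh0789"]):
        create_project_dir(prj, i)
```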

Slide 6: Migration GPFS to Lustre
- Copying 4.5 PB from GPFS (AIX) to Lustre (Linux)
- GPFS policies used to generate the file list (130 million files)
- File list sorted and split (mix of big/medium/small files) as input for rsync
- SLURM on the new system used to schedule ~6000 copy jobs via the IB gateway from the NSD servers
Challenges:
- Not overloading the NSD servers of the previous system (2x 10 GbE per node)
- Changing UID/GID for some users from IDs <1000 to >20000 during the transfer (needed a newer rsync than shipped with RHEL 6)
- Tight time frame, because the benchmarks for approval of the new system were still ongoing
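
A minimal sketch of the split-and-schedule step described above, assuming the file list from the GPFS policy run has already been sorted to mix big, medium and small files; chunk size, source/target paths and the sbatch options are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: split a size-sorted file list into chunks and submit one rsync
job per chunk through SLURM. Chunk size, paths, the NSD source host and
the sbatch options are illustrative assumptions."""
import subprocess
from pathlib import Path

FILELIST = Path("gpfs_filelist.txt")   # one path per line, from the GPFS policy run
CHUNK_DIR = Path("chunks")
SRC = "nsdserver:/gpfs/work/"          # hypothetical source exported by the NSD servers
DST = "/mnt/lustre01/work/"
LINES_PER_CHUNK = 20000

def write_chunk(lines, idx):
    path = CHUNK_DIR / f"chunk_{idx:05d}.txt"
    path.write_text("".join(lines))
    return path

def split_filelist():
    CHUNK_DIR.mkdir(exist_ok=True)
    chunks, buf = [], []
    with FILELIST.open() as fh:
        for line in fh:
            buf.append(line)
            if len(buf) >= LINES_PER_CHUNK:
                chunks.append(write_chunk(buf, len(chunks)))
                buf = []
    if buf:
        chunks.append(write_chunk(buf, len(chunks)))
    return chunks

def submit(chunk):
    # One SLURM job per chunk; rsync reads the relative paths via --files-from.
    cmd = f"rsync -aH --files-from={chunk} {SRC} {DST}"
    subprocess.run(["sbatch", "--job-name", chunk.stem, "--wrap", cmd], check=True)

if __name__ == "__main__":
    for chunk in split_filelist():
        submit(chunk)
```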

Slide 7: Migration GPFS to Lustre - "Where is my quota, and why does mv take so long?"
- All copied project data belonging to MDT0000 was placed in a separate location
- The new project directories were distributed across MDT000[1-4]
- How to get the old data into the new directories?
  - It could not simply be moved with mv: with DNE phase 1, a rename across MDTs falls back to copy and delete
  - Tried tools like shift for the copying
- Quotas could not be used as before (user / fileset / user-in-fileset), since HOME, SCRATCH and WORK now share one filesystem
- Robinhood is used for soft quotas (starting 2016); users can see their quota and the amount of data on a DKRZ web frontend
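
As an illustration of the Robinhood-based soft quota, the sketch below checks per-project usage against soft limits and mails the project admins. The usage figures would in practice come from Robinhood's accounting; here they are read from a plain CSV, and the limits and mail addresses are hypothetical.

```python
#!/usr/bin/env python3
"""Sketch: soft-quota check and notification mail. Limits, addresses and
the CSV input (which would come from Robinhood's accounting) are
hypothetical."""
import csv
import smtplib
from email.message import EmailMessage

SOFT_LIMITS_TB = {"ab0123": 50, "bk0456": 120}   # hypothetical per-project soft limits
USAGE_CSV = "project_usage.csv"                  # columns: project,used_tb

def notify(project, used_tb, limit_tb):
    msg = EmailMessage()
    msg["Subject"] = f"[DKRZ] soft quota exceeded for project {project}"
    msg["From"] = "storage@example.org"           # hypothetical addresses
    msg["To"] = f"{project}-admin@example.org"
    msg.set_content(
        f"Project {project} uses {used_tb:.1f} TB of a {limit_tb} TB soft quota.\n"
        "Please clean up or archive data to HPSS."
    )
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    with open(USAGE_CSV) as fh:
        for row in csv.DictReader(fh):
            prj, used = row["project"], float(row["used_tb"])
            limit = SOFT_LIMITS_TB.get(prj)
            if limit is not None and used > limit:
                notify(prj, used, limit)
```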

Slide 8: Migration GPFS to Lustre - small files
- HOME directories (~30 million files / 6 TB) and the software tree
  - Problem running backups with Calypso/Simpana: a full backup of HOME takes days
  - Long loading times for software (e.g. Matlab, Python)
- Workaround for the software tree:
  - Generate a 300 GB image file striped across 16 OSTs
  - Loop-mount this image file on the interactive nodes (login/graphics)
  - Caching on the clients gives higher performance
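
A sketch of the image-file workaround for the software tree: create the image with a wide stripe layout, format it, and loop-mount it read-only on the interactive nodes. The ext4 choice, sizes and paths are assumptions for illustration.

```python
#!/usr/bin/env python3
"""Sketch: build a striped image file for the software tree and loop-mount
it read-only on an interactive node. Sizes, paths and the ext4 choice are
assumptions; run as root on a Lustre client."""
import subprocess

IMAGE = "/mnt/lustre01/sw_images/sw.img"   # hypothetical image location
MOUNTPOINT = "/sw"
SIZE = "300G"
STRIPE_COUNT = "16"

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def build_image():
    # Create the (empty) file with a 16-OST stripe layout, then size and format it.
    run("lfs", "setstripe", "-c", STRIPE_COUNT, IMAGE)
    run("truncate", "-s", SIZE, IMAGE)
    run("mkfs.ext4", "-F", IMAGE)
    # Populate the image via a temporary read-write mount, e.g. rsync of the software tree (omitted).

def mount_readonly():
    # On login/graphics nodes: loop-mount read-only; the client page cache
    # then serves the many small files locally.
    run("mount", "-o", "loop,ro", IMAGE, MOUNTPOINT)

if __name__ == "__main__":
    build_image()
    mount_readonly()
```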

Slide 9: Tools used - copy tools
- Self-Healing Independent File Transfer (Shift), Paul Kolano (NASA)
  - Depends on mutil (Paul Kolano / NASA)
  - Final rsync needed at the end (hardlinks)
- lfs find + rsync
  - Generate the file list with lfs find and split it into equal parts
  - rsync with the split lists (running as parallel SLURM jobs)
  - Final rsync needed afterwards for hardlinks and directory ownership
- pftool, Los Alamos National Lab
  - Easy to use, scalable across several nodes with SLURM
  - Needs a final rsync afterwards for hardlinks and directory ownership
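
The "lfs find + rsync" variant could look roughly like the sketch below: generate the file list on the source tree, run the per-chunk rsync jobs (as in the earlier migration sketch), then a final rsync pass to recreate hardlinks and fix directory ownership. Source and target paths are hypothetical.

```python
#!/usr/bin/env python3
"""Sketch: the 'lfs find + rsync' approach - generate the file list on the
Lustre source, then a final rsync pass for hardlinks and directory
ownership. Paths are hypothetical."""
import subprocess

SRC = "/mnt/lustre01/work/old_prj/"   # hypothetical source tree
DST = "/mnt/lustre02/work/old_prj/"

def generate_filelist(listfile):
    # Regular files only; directories and hardlinks are handled by the final pass.
    with open(listfile, "w") as out:
        subprocess.run(["lfs", "find", SRC, "-type", "f"], stdout=out, check=True)

def final_pass():
    # After the parallel per-chunk rsyncs: -H recreates hardlinks, -a fixes
    # ownership and permissions on directories created implicitly by the chunk jobs.
    subprocess.run(["rsync", "-aH", SRC, DST], check=True)

if __name__ == "__main__":
    generate_filelist("filelist.txt")
    # ... split filelist.txt and run per-chunk rsync jobs (see the earlier sketch) ...
    final_pass()
```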

Slide 10: Tools used - Robinhood
- Report generation:
  - User reports -> HOME/SCRATCH
  - Project reports -> WORK
    - Total amount of data for each project (~220 active projects)
    - Per user within a project (up to 130 users per project)
    - Overview for the project admins
- Soft quota notifications by mail to users / project admins
- Setup:
  - Robinhood reading the Lustre changelogs
  - 2 Robinhood servers for lustre01 (handling 2 and 3 MDTs respectively)
  - 3 Robinhood servers for lustre02 (handling 2, 2 and 3 MDTs respectively)
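
For the changelog-based setup, a consumer in the spirit of Robinhood reads records from one MDT with `lfs changelog`. The sketch below assumes a changelog user has already been registered on that MDT (lctl changelog_register) and only counts record types instead of updating a database.

```python
#!/usr/bin/env python3
"""Sketch: poll Lustre changelog records the way a Robinhood-like consumer
would, using `lfs changelog`. The MDT name and the simple per-type counting
are illustrative only."""
import subprocess
from collections import Counter

MDT = "lustre01-MDT0001"   # one of the MDTs served by this Robinhood instance

def read_changelog(start_rec=0):
    """Read changelog records and count them by record type."""
    out = subprocess.run(
        ["lfs", "changelog", MDT, str(start_rec)],
        capture_output=True, text=True, check=True,
    ).stdout
    ops = Counter()
    for line in out.splitlines():
        fields = line.split()
        # The second field is the record type (e.g. 01CREAT, 02MKDIR, 06UNLNK).
        if len(fields) >= 2:
            ops[fields[1]] += 1
    return ops

if __name__ == "__main__":
    print(read_changelog())
```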

Slide 11: Tools used - HPSS / pftp
- Tape archive: HPSS
- Projects have to request archive storage on an annual basis (quota)
- Users copy the data for their projects manually with pftp
- Users can choose single copy, double copy, or long-term archive (LTA)
  - A double copy is also accounted twice against the project
  - LTA requires a description of the data (so it can be identified later; stored for up to 10 years)
- Currently no automatic migration from Lustre to HPSS
  - Robinhood could be a possible tool, but has not been tried yet
  - Open question: how to identify which project data should be migrated

Slide 12: Monitoring tools / sources
Sources:
- Started with our own Python daemon on the ClusterStor systems (replaced by Seastream)
- Seastream API on ClusterStor (currently switched off on lustre02)
- Entries from syslog (Lustre clients)
- Lustre llite statistics (from the clients)
Tools: OpenTSDB / HBase / Hadoop / Elasticsearch
Frontends: XDMoD / Grafana / Kibana; Icinga on ClusterStor
Thanks to: Olaf Gellert, Hendrik Bockelmann, Josef Dvoracek (ATOS), Eugen Betke (Uni HH)
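
A minimal client-side collector in the spirit of the "own Python daemon": read the llite counters from /proc and push them to OpenTSDB through its HTTP /api/put endpoint. The stats path, metric naming, interval and endpoint URL are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: read Lustre llite counters on a client and push them to OpenTSDB
via /api/put. Stats path, metric names and the OpenTSDB URL are assumptions."""
import glob
import json
import socket
import time
import urllib.request

OPENTSDB_URL = "http://opentsdb.example.org:4242/api/put"   # hypothetical endpoint
LLITE_GLOB = "/proc/fs/lustre/llite/*/stats"

def collect():
    now = int(time.time())
    host = socket.gethostname()
    points = []
    for stats_file in glob.glob(LLITE_GLOB):
        fsname = stats_file.split("/")[-2].split("-")[0]   # e.g. lustre01-ffff... -> lustre01
        with open(stats_file) as fh:
            for line in fh:
                fields = line.split()
                if len(fields) < 2 or not fields[1].isdigit():
                    continue
                # fields[0] is the counter name, fields[1] its sample count.
                points.append({
                    "metric": f"lustre.llite.{fields[0]}",
                    "timestamp": now,
                    "value": int(fields[1]),
                    "tags": {"host": host, "fs": fsname},
                })
    return points

def push(points):
    req = urllib.request.Request(
        OPENTSDB_URL,
        data=json.dumps(points).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    while True:
        points = collect()
        if points:
            push(points)
        time.sleep(60)
```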

Slide 13: Monitoring - overview (architecture)
- Mistral: 3340 compute nodes and 24 login nodes; system metrics and Lustre llite statistics, collected either on demand or permanently
- Lustre servers for lustre01 and lustre02: Seastream job statistics and system statistics
- Further sources: syslog-ng cluster, SLURMctld (job scripts), nightly test suite with IOR and climate models
- Backend behind Nginx proxies: OpenTSDB (6 instances), HBase (6 instances), Hadoop (4 instances), Elasticsearch (4 instances), log files
- Frontends: XDMoD, Grafana, Kibana

Slide 14: Monitoring - Grafana

Slide 15: Monitoring - Grafana

Slide 16: Pros / cons from daily business
Most of the time things are fine, but:
- Performance degradation when reaching a >95% fill level with asymmetric usage of the OSTs (lustre01)
  - First solution: deactivate OSTs, migrate data off them, and shift data to the lustre02 filesystem; this left about 1.2 PB of unusable disk space
  - Newer solution: set qos_threshold_rr to 5% on all MDTs; with this setting the unusable space is below 1 PB
- Performance degradation during the monthly RAID check (lustre01)
  - Lowered the RAID-check priority; the RAID check now takes about 2.5 weeks
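
The OST imbalance and the qos_threshold_rr change could be handled roughly as sketched below: inspect per-OST fill levels with `lfs df`, then lower the allocator's round-robin threshold on the MDS nodes. The exact lctl parameter path differs between Lustre versions, so treat this as an illustration rather than the commands used at DKRZ.

```python
#!/usr/bin/env python3
"""Sketch: report OST fill levels via `lfs df` and lower qos_threshold_rr.
The lctl parameter path is version-dependent; run set_param as root on the MDS."""
import subprocess

FS = "lustre01"

def ost_usage():
    """Return (ost_uuid, percent_used) pairs parsed from `lfs df`."""
    out = subprocess.run(["lfs", "df", f"/mnt/{FS}"],
                         capture_output=True, text=True, check=True).stdout
    usage = []
    for line in out.splitlines():
        if "[OST:" in line:
            fields = line.split()
            # UUID  1K-blocks  Used  Available  Use%  Mounted-on[OST:n]
            usage.append((fields[0], int(fields[4].rstrip("%"))))
    return usage

def set_rr_threshold(percent):
    # On each MDS: use round-robin allocation only while the OST free-space
    # imbalance stays below `percent`, otherwise switch to fill-aware (QOS) allocation.
    subprocess.run(["lctl", "set_param", f"lov.{FS}-*.qos_threshold_rr={percent}"],
                   check=True)

if __name__ == "__main__":
    for ost, used in ost_usage():
        print(f"{ost}: {used}% used")
    # set_rr_threshold(5)   # uncomment when running on the MDS
```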

Slide 17: Pros / cons from daily business
- Client disconnects/reconnects from OSTs / MDTs
  - Problem for SLURM jobs doing I/O (extended runtime)
  - Robinhood could no longer read the changelog
  - Workaround: failover / reboot / failback of the affected MDT
- Firmware bug in the hardware (lustre02)
  - A rare watchdog issue causes an OSS HA pair to power down
  - After powering the OSS back up and repairing the OSTs, clients could neither reconnect nor unmount lustre02
  - A reboot of all Lustre clients was needed (power reset)

Slide 18: Thank you for your attention!
Carsten Beyer
Questions?
