Dr. John Dennis
Transcription
Slide 1: Dr. John Dennis (June 23)
Slide 2: High-resolution climate generates a large amount of data!
Slide 3:
- PIO update and Lustre optimizations
- How do we analyze high-resolution climate data faster?
- Wavelet compression for climate data
Slide 4: [no text transcribed]
Slide 5: Goals
- Reduce memory usage
- Improve performance
- Write a single file from the parallel application
- Flexibility in I/O libraries: MPI-IO, NetCDF3, NetCDF4, pnetcdf
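The single-shared-file goal is easiest to picture with a small example. The sketch below is not PIO itself; it assumes the netCDF4-python bindings were built against parallel HDF5 and MPI, and the file name, variable name, and grid sizes are illustrative only. Each rank writes its own slab of one variable into one shared file.

```python
# Minimal sketch (not PIO): each MPI rank writes its slab of one variable
# into a single shared NetCDF4 file. Assumes a parallel-enabled netCDF4-python
# build; names and sizes are illustrative assumptions.
from mpi4py import MPI
import numpy as np
from netCDF4 import Dataset

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

nlat, nlon = 2400, 3600            # POP-like horizontal grid
rows_per_rank = nlat // nprocs     # assume it divides evenly for brevity
start = rank * rows_per_rank

ds = Dataset("sst.nc", "w", parallel=True, comm=comm, info=MPI.INFO_NULL)
ds.createDimension("lat", nlat)
ds.createDimension("lon", nlon)
var = ds.createVariable("SST", "f4", ("lat", "lon"))
var.set_collective(True)           # use collective I/O for the write

local = np.full((rows_per_rank, nlon), rank, dtype=np.float32)
var[start:start + rows_per_rank, :] = local   # this rank's slab
ds.close()
```

Run with something like `mpiexec -n 4 python write_sst.py`; all ranks end up contributing to the same file rather than one file per task.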
Slide 6:
- Computational decomposition, I/O decomposition, rearrangement
- Match the I/O decomposition to the Lustre stripe size
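A rough sketch of the "match the I/O decomposition to the Lustre stripe size" idea, assuming a flattened 1-D byte range and a hypothetical 1 MiB stripe (neither value comes from the slides): each I/O task gets a contiguous range whose boundaries are rounded to stripe boundaries.

```python
# Sketch: carve a flattened array into contiguous per-I/O-task byte ranges
# whose split points are aligned to the Lustre stripe size. The stripe size
# and task count are illustrative assumptions, not measured PIO settings.
def stripe_aligned_ranges(nbytes_total, n_io_tasks, stripe_bytes=1 << 20):
    """Return (start, length) byte ranges, one per I/O task."""
    ranges = []
    naive = nbytes_total // n_io_tasks
    start = 0
    for task in range(n_io_tasks):
        if task == n_io_tasks - 1:
            end = nbytes_total
        else:
            # round the split point up to the next stripe boundary
            end = ((start + naive + stripe_bytes - 1) // stripe_bytes) * stripe_bytes
            end = min(end, nbytes_total)
        ranges.append((start, end - start))
        start = end
    return ranges

# Example: one double-precision POP-sized field split over 16 I/O tasks
print(stripe_aligned_ranges(3600 * 2400 * 40 * 8, 16))
```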
Slide 7: Benchmark setup
- Read/write a 3D POP-sized variable [3600 x 2400 x 40]
- 10 files, 10 variables per file [max bandwidth]
- Using Kraken (Cray XT5) + Lustre filesystem; used 16 of 336 OSTs
- Impact of PIO features: flow control, varying the number of I/O tasks, different general I/O backends
- Did we achieve our design goals?
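For scale, that variable is easy to size by hand (the slide does not state the precision, so both 4- and 8-byte values are shown):

```python
# Back-of-the-envelope size of one POP-sized variable (precision assumed).
nx, ny, nz = 3600, 2400, 40
values = nx * ny * nz                                              # 345,600,000 points
print(values * 8 / 1e9, "GB per variable at 8 bytes/value")        # ~2.76 GB
print(values * 4 / 1e9, "GB per variable at 4 bytes/value")        # ~1.38 GB
print(10 * 10 * values * 8 / 1e9, "GB for 10 files x 10 variables")  # ~276 GB
```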
Slides 8-11: [no text transcribed]
Slide 12: Parallel high-resolution climate data analysis using Swift, Linux Clusters [under review]
- John Dennis, Matthew Woitaszek (NCAR), Taleena Sines* (Frostburg State)
- * SIParCS intern
Slide 13:
- Used Swift, a workflow language (UC/ANL): a parallel scripting language, data-dependency driven
- Examined performance on several data-intensive architectures:
  - Flash memory: Dash (SDSC)
  - SGI UV: Nautilus (NICS)
  - Large-memory node: Polynya (NCAR): 32 cores, 1 TB RAM [512 GB memory / 512 GB ramdisk], 120 TB GPFS file system (old hardware)
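Swift expresses the analysis as tasks driven by data dependencies; the sketch below mimics that pattern in plain Python with concurrent.futures rather than Swift itself. The file names and the per-file "analyze" step are hypothetical placeholders, not the actual workflow from the talk.

```python
# Data-dependency-driven analysis pattern, sketched with concurrent.futures
# instead of Swift. The file list and per-file analysis are placeholders.
from concurrent.futures import ProcessPoolExecutor

def analyze(path):
    # stand-in for per-history-file processing (e.g., computing a mean field)
    return path, sum(ord(c) for c in path) % 100

def reduce_results(results):
    # runs only after every per-file task it depends on has completed
    return sum(v for _, v in results) / len(results)

if __name__ == "__main__":
    history_files = [f"case.pop.h.{year:04d}.nc" for year in range(1, 11)]

    with ProcessPoolExecutor(max_workers=4) as pool:
        per_file = list(pool.map(analyze, history_files))   # independent tasks

    print("climatology proxy:", reduce_results(per_file))   # dependent task
```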
Slide 14: 4 mins
Slide 15: John Clyne, Yanick Polius, John Dennis (NCAR)
Slide 16: Compression algorithm
- Apply a wavelet transform to the model outputs
- Sort the resulting wavelet coefficients by absolute magnitude
- Discard the smallest coefficients
- Reconstruct an approximation of the original data from the remaining coefficients
- Compare the original and reconstructed data using the CESM AMWG Diagnostic Package
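A minimal sketch of the sort-and-threshold step, assuming the PyWavelets package and a 1-D slice of data for brevity; the actual work used multi-dimensional transforms and the AMWG diagnostics for evaluation.

```python
# Sketch of the compression idea with PyWavelets: transform, sort coefficients
# by magnitude, zero out the smallest, reconstruct. 1-D and illustrative only.
import numpy as np
import pywt

def compress_reconstruct(data, keep_fraction=0.125, wavelet="db4"):
    coeffs = pywt.wavedec(data, wavelet)                 # list of coefficient arrays
    flat = np.concatenate([np.abs(c).ravel() for c in coeffs])
    # keep only the largest-magnitude coefficients (1/8 of them for roughly 8:1)
    threshold = np.sort(flat)[int((1.0 - keep_fraction) * flat.size)]
    kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    return pywt.waverec(kept, wavelet)

original = np.sin(np.linspace(0, 20 * np.pi, 4096)) + 0.01 * np.random.randn(4096)
approx = compress_reconstruct(original)
print("max abs error:", np.max(np.abs(approx[: original.size] - original)))
```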
Slide 17: Preliminary experiment
- 10 years of 0.5-degree CAM output
- Compression: 2D variables 2:1, 3D variables 8:1
- Evaluated using Student's t-test
- Outcomes/issues: looks great! Limited temporal variability; does not preserve zeros
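The "does not preserve zeros" issue is easy to reproduce: a region that is exactly zero in the input (masked land points, say) comes back as small nonzero values after thresholded reconstruction. A tiny illustration, again assuming PyWavelets and a made-up 1-D field:

```python
# Why exact zeros are lost: threshold-and-reconstruct smears small nonzero
# values into regions that were identically zero (illustrative 1-D example).
import numpy as np
import pywt

field = np.sin(np.linspace(0, 8 * np.pi, 2048))
field[:512] = 0.0                                   # e.g., masked "land" points

coeffs = pywt.wavedec(field, "db4")
flat = np.sort(np.concatenate([np.abs(c) for c in coeffs]))
threshold = flat[int(0.875 * flat.size)]            # keep the largest 1/8
kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
recon = pywt.waverec(kept, "db4")[: field.size]

print("max |value| where the input was exactly zero:", np.abs(recon[:512]).max())
```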
Slides 18-20: [no text transcribed]
Slide 21: Acknowledgments
- Collaborators (NCAR, NICS, ANL, LANL, COLA, U. Miami, LLNL, U.C. Berkeley, U. Washington, ORNL, and others): D. Bailey, F. Bryan, M. Fahey, T. Craig, P. Kovatch, B. Eaton, J. Edwards [IBM], N. Hearn, R. Jacob, K. Lindsay, R. Loy, N. Norton, M. Vertenstein, E. Hunke, J. Kinter, P. Jones, C. Stan, M. Maltrud, B. Kirtman, D. Bader, W. Collins, D. Ivanova, K. Yelick (NERSC), J. McClean (Scripps), A. Mirin, C. Bitz, P. Worley, and many more
- Grant support: DOE DE-FC03-97ER62402 [SciDAC], DE-PS02-07ER07-06 [SciDAC]; NSF Cooperative Grant NSF01, OCI [PetaApps], and CNS awards
- Computer allocations: TeraGrid, NICS, DOE NERSC, LLNL Grand Challenge
- Thanks for assistance: Cray, NICS, and NERSC
Slide 22: [no text transcribed]
Slide 23: Computational decomposition
+ Simple
+ Most versions of MPI-IO will do aggregation
- The computational decomposition may not be optimal for disk access
- pnetcdf requires block-cyclic decompositions
Slide 24: Computational decomposition -> rearrangement -> I/O decomposition
+ Maximizes the size of individual I/O ops to disk
- Non-scalable user-space buffering
- Very large fan-in -> large MPI buffer allocations
Slide 25: Computational decomposition -> rearrangement -> I/O decomposition
+ Scalable user-space memory
+ Relatively large individual I/O ops to disk
- Very large fan-in -> large MPI buffer allocations
Slide 26: Computational decomposition -> rearrangement -> I/O decomposition
+ Scalable user-space memory
+ Smaller fan-in -> modest MPI buffer allocations
- Smaller individual I/O ops to disk
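Slides 23-26 trade fan-in against the size of individual I/O operations. The common pattern behind the rearranged options is a gather from many compute tasks onto a few I/O tasks, which then issue large contiguous writes. A rough mpi4py sketch of that fan-in, with hypothetical sizes and no actual file I/O:

```python
# Sketch of the rearrangement fan-in: every group of compute ranks gathers its
# data onto one I/O rank, which would then issue a single large contiguous
# write. Group size and data layout are illustrative assumptions.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

ranks_per_io_task = 4                        # fan-in; smaller => more I/O tasks
color = rank // ranks_per_io_task
group = comm.Split(color=color, key=rank)    # one sub-communicator per I/O task

local = np.full(1000, rank, dtype=np.float64)          # this rank's piece
gathered = None
if group.Get_rank() == 0:                               # group root = I/O task
    gathered = np.empty(1000 * group.Get_size(), dtype=np.float64)

group.Gather(local, gathered, root=0)                   # the rearrangement step

if group.Get_rank() == 0:
    # a real implementation would now write `gathered` as one large,
    # stripe-aligned I/O operation into the shared file
    print(f"I/O task for group {color}: buffered {gathered.nbytes} bytes")
```

A smaller `ranks_per_io_task` means more I/O tasks and smaller writes (slide 26); a larger value concentrates data onto fewer I/O tasks and larger writes at the cost of bigger MPI buffers (slides 24-25).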
Slide 27: [no text transcribed]
Slide 28:
- Supported parallel I/O library in the CCSM4 & CESM1 releases
- Addition of flow-control algorithms (Worley)
- Initial documentation using Doxygen
- Small but growing user base: ESMF, VAPOR + wavelet compression