NCAR Globally Accessible Data Environment (GLADE) Updated: 15 Feb 2017


Overview

The Globally Accessible Data Environment (GLADE) provides centralized file storage for HPC computational, data-analysis, visualization and science gateway resources. The environment provides more than 220 GB/s of high-performance bandwidth, accessible over FDR/EDR InfiniBand or 10/40 Gb Ethernet networks. GLADE also provides data transfer services to facilitate data movement both within NCAR and to external resources. The data access nodes support common data transfer protocols, Globus transfer service endpoints, Globus data sharing services and high-bandwidth access to the NCAR HPSS system.

File Systems

NCAR HPC file systems are all part of the Globally Accessible Data Environment (GLADE) and can be accessed by all current HPC systems, simplifying data sharing between platforms. The file systems are configured for different purposes, with a total of five spaces served by three file systems. All file systems are implemented using IBM's Spectrum Scale parallel file system, formerly GPFS.

Home (/glade/u/home): Permanent, relatively small storage for data such as source code, shell scripts, etc. This file system is not tuned for high performance from parallel jobs, so it is not recommended for use during batch job runs.

Project (/glade/p): Large, allocated, permanent, high-performance file system. Project directories are intended for sharing data within a group of researchers and are allocated as part of the annual allocation process. This file system is recommended for computational runs whose output will be accessed again in the near future.

Work (/glade/p/work): Medium, permanent, high-performance file space. These directories are intended for individual work where data needs to remain resident for an extended period of time.

Scratch (/glade/scratch): Large, high-performance file system. Place large data files in this file system for capacity and capability computing. Data is purged as described below, so you must save important files elsewhere, such as HPSS, before the purge period expires.

Share (/glade/p/datashare): Medium, permanent, high-performance file space. Accessible through Globus Online services and intended to facilitate data transfers to non-NCAR users. Available on request.
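Because the five spaces map to fixed path prefixes, scripts can locate a user's directories programmatically. The following is a minimal Python sketch that builds those paths for the current user and reports which ones are visible from the current node; the per-user layout (e.g., /glade/scratch/<username>) follows the paths listed above, and the helper names are illustrative rather than part of any NCAR-provided tool.

```python
import os
import getpass

# GLADE spaces as described above; home, work and scratch are per-user,
# while project and datashare directories are allocated per project.
USER = getpass.getuser()
GLADE_SPACES = {
    "home":    f"/glade/u/home/{USER}",
    "work":    f"/glade/p/work/{USER}",
    "scratch": f"/glade/scratch/{USER}",
    "project": "/glade/p",            # project-specific subdirectory varies
    "share":   "/glade/p/datashare",  # available on request
}

def report_spaces():
    """Print each GLADE space and whether it is visible from this node."""
    for name, path in GLADE_SPACES.items():
        status = "available" if os.path.isdir(path) else "not found"
        print(f"{name:8s} {path:35s} {status}")

if __name__ == "__main__":
    report_spaces()
```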

Summary of File Space Policies

File Space            Type        Peak Perf  Quota      Backups  Purge
/glade/u/home/user    SS Fileset  8 GB/s     25 GB      Yes      No
/glade/p/project      SS Fileset  90 GB/s    allocated  No       No
/glade/p/work/user    SS Fileset  90 GB/s    512 GB     No       No
/glade/scratch/user   SS          220 GB/s   10 TB      No       > 45 days
/glade/p/datashare    SS          90 GB/s    allocated  No       No

File Space Intended Use

/glade/u/home: Holds source code, executables, configuration files, etc. NOT meant to hold the output from your application runs; the scratch or project file systems should be used for computational output. Optimized for small to medium sized files.

/glade/p: Sharing data within a team or across computational platforms. Store application output files or common scripts, source code and executables. Intended for actively used data that needs to remain resident for an extended period of time. Optimized for high-bandwidth, large-block-size access to large files.

/glade/p/work: Store application output files intended for individual use. Intended for actively used data that needs to remain resident for an extended period of time. Optimized for high-bandwidth, large-block-size access to large files.

/glade/scratch: Intended for temporary uses such as storage of checkpoints or application result output. If files need to be retained longer than the purge period, they should be copied to project space or to HPSS. Optimized for high-bandwidth, large-block-size access to large files.
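Because scratch data older than the purge window is subject to removal, a periodic sweep for aging files is useful before copying them to project space or HPSS. The sketch below is a minimal example under stated assumptions: it takes the 45-day window from the policy table above, uses file access time as the aging criterion (the actual purge criterion is not specified here), and the directory to scan is supplied by the caller.

```python
import os
import sys
import time

PURGE_DAYS = 45  # scratch purge window from the policy table above

def find_purge_candidates(root, days=PURGE_DAYS):
    """Yield (path, age_in_days) for files not accessed within `days` days.

    Assumption: access time approximates the purge criterion; confirm the
    actual policy with CISL documentation before relying on this.
    """
    cutoff = time.time() - days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                atime = os.stat(path).st_atime
            except OSError:
                continue  # file may have been removed mid-scan
            if atime < cutoff:
                yield path, (time.time() - atime) / 86400

if __name__ == "__main__":
    scratch_dir = sys.argv[1] if len(sys.argv) > 1 else os.path.expandvars(
        "/glade/scratch/$USER")
    for path, age in find_purge_candidates(scratch_dir):
        print(f"{age:6.1f} days  {path}")
```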

/data/share: Sharing data with non-NCAR users. Intended for transient data being delivered offsite. Only available through the Globus Online service. Optimized for network-based transfers.

Summary of File Space Capacities

File Space           Capacity  Block Size
/glade/u/home        90 TB     1 MB
/glade/p             16.5 PB   4 MB
/glade/p/work        768 TB    4 MB
/glade/scratch       15 PB     8 MB
/glade/p/datashare   500 TB    4 MB

Current File System Usage

The following table shows current usage across all available GLADE file systems, sorted by file system and number of files. Note that the amount of space actually used differs from the amount of space allocated; this is due to the minimum allocated block size. The data is current as of 14 Feb 2017.

                        Count          Total Space Used   Space Allocated
Total Size                             … PB               … PB
Number of Files         894,960,…      … PB               … PB
Number of Directories   47,643,…       … TB               2.95 TB
Number of Links         66,704,…       … GB               … GB
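The gap between space used and space allocated comes from rounding each file up to the file system's block size (8 MB on scratch, 4 MB on project and work, 1 MB on home). Below is a small sketch of that arithmetic, assuming whole-block allocation for simplicity; Spectrum Scale can allocate sub-blocks for small files, so this overestimates the real overhead.

```python
BLOCK_SIZES = {                   # from the capacity table above
    "/glade/u/home":  1 * 2**20,  # 1 MB
    "/glade/p":       4 * 2**20,  # 4 MB
    "/glade/p/work":  4 * 2**20,  # 4 MB
    "/glade/scratch": 8 * 2**20,  # 8 MB
}

def allocated_bytes(file_size, block_size):
    """Round a file's size up to a whole number of blocks.

    Simplification: Spectrum Scale can allocate fractional (sub-block)
    units for small files, so real allocation may be smaller than this.
    """
    if file_size == 0:
        return 0
    blocks = -(-file_size // block_size)  # ceiling division
    return blocks * block_size

# Example: a 100 KB file on scratch consumes a full 8 MB block.
used = 100 * 1024
alloc = allocated_bytes(used, BLOCK_SIZES["/glade/scratch"])
print(f"used {used} bytes, allocated {alloc} bytes "
      f"({alloc / used:.0f}x overhead)")
```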

File Distribution

Size Bucket    # of Files    % of Files   Total Size   % of Capacity
0 Bytes        33,312,…      …            0 bytes      0.00%
< 512 Bytes    92,758,…      …            … GB         0.00%
< 4 KB         176,062,…     …            … GB         0.00%
< 16 KB        155,940,…     …            1.26 TB      0.01%
< 128 KB       195,007,…     …            9.11 TB      0.06%
< 512 KB       75,125,…      …            … TB         0.13%
< 1 MB         19,218,…      …            … TB         0.09%
< 4 MB         53,616,…      …            … TB         0.78%
< 100 MB       76,426,…      …            … PB         12.47%
< 1 GB         14,870,…      …            … PB         30.67%
< 10 GB        2,526,…       …            … PB         37.74%
< 100 GB       93,…          …            … PB         14.56%
< 1 TB         2,…           …            … TB         3.08%
1 TB+          …             …            … TB         0.40%
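A distribution like this can be reproduced for any directory tree by binning file sizes into the same buckets. The sketch below mirrors the bucket edges in the table above; scanning an entire GLADE file system with os.walk would be slow, so in practice statistics at this scale come from file system metadata scans rather than a per-file walk.

```python
import os
import sys
from collections import OrderedDict

# Upper edges (bytes) of the size buckets used in the table above.
BUCKETS = OrderedDict([
    ("0 Bytes",     0),
    ("< 512 Bytes", 512),
    ("< 4 KB",      4 * 1024),
    ("< 16 KB",     16 * 1024),
    ("< 128 KB",    128 * 1024),
    ("< 512 KB",    512 * 1024),
    ("< 1 MB",      2**20),
    ("< 4 MB",      4 * 2**20),
    ("< 100 MB",    100 * 2**20),
    ("< 1 GB",      2**30),
    ("< 10 GB",     10 * 2**30),
    ("< 100 GB",    100 * 2**30),
    ("< 1 TB",      2**40),
    ("1 TB+",       float("inf")),
])

def bucket_for(size):
    """Return the label of the first bucket that admits `size`."""
    if size == 0:
        return "0 Bytes"
    for label, edge in BUCKETS.items():
        if label != "0 Bytes" and size < edge:
            return label
    return "1 TB+"

def histogram(root):
    """Count files under `root` per size bucket."""
    counts = {label: 0 for label in BUCKETS}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.stat(os.path.join(dirpath, name)).st_size
            except OSError:
                continue
            counts[bucket_for(size)] += 1
    return counts

if __name__ == "__main__":
    for label, n in histogram(sys.argv[1]).items():
        print(f"{label:12s} {n}")
```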

Systems Served

System               Description                          Purpose                                                Connect
cheyenne             SGI ICE-XA Cluster                   main computational cluster                             EDR IB
yellowstone          IBM iDataPlex Cluster                main computational cluster                             FDR IB
geyser               IBM x-Series Cluster, NVIDIA GPU     data analysis and visualization                        FDR IB
caldera              IBM iDataPlex Cluster, NVIDIA GPU    GPGPU computational cluster                            FDR IB
pronghorn            IBM iDataPlex Cluster, NVIDIA GPU    GPGPU computational cluster                            FDR IB
data-access          Linux Cluster                        Globus data transfer services, data sharing services   40GbE
RDA science gateway  Linux Cluster, web services          Research Data Archive Service                          10GbE
ESG science gateway  Linux Cluster, web services          Earth Systems Grid Service                             10GbE
CDP science gateway  Linux Cluster, web services          Community Data Portal Service                          10GbE

Data Access Nodes

Service              Bandwidth
Globus / GridFTP     20 GB/s
Globus+ Data Share   20 GB/s
HPSS                 30 GB/s
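The data access node bandwidths give a rough upper bound on transfer times. Here is a back-of-the-envelope helper assuming the quoted aggregate bandwidths above and ignoring protocol overhead and contention, so real transfers will take longer.

```python
# Aggregate bandwidths of the data access node services (GB/s), from the
# table above. Real transfers see lower rates due to overhead and sharing.
SERVICE_BANDWIDTH_GBPS = {
    "Globus / GridFTP":   20,
    "Globus+ Data Share": 20,
    "HPSS":               30,
}

def best_case_transfer_time(size_tb, service):
    """Return the minimum transfer time in seconds for `size_tb` terabytes."""
    gb = size_tb * 1024  # TB -> GB, binary-style units as in the tables
    return gb / SERVICE_BANDWIDTH_GBPS[service]

# Example: moving a 50 TB dataset to HPSS takes at least ~28 minutes
# at the full 30 GB/s aggregate rate.
seconds = best_case_transfer_time(50, "HPSS")
print(f"{seconds / 60:.1f} minutes minimum")
```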

Current GLADE Architecture

At this time, GLADE resources are split between the FDR IB network of Yellowstone and the EDR network of Cheyenne. The following diagram shows the various components and network connections of GLADE.

Figure 1. Current GLADE Architecture

Future GLADE Architecture

In early 2018, we will be decommissioning the Yellowstone HPC resource. At that time, all GLADE resources will be relocated to the EDR network of Cheyenne. The following diagram shows the various components and network connections of GLADE in early 2018.

Figure 2. Future GLADE Architecture
