NCAR Workload Analysis on Yellowstone. September 2014 V4.1
1 NCAR Workload Analysis on Yellowstone September 2014 V4.1
2 Purpose and Scope of the Analysis
Understanding the NCAR application workload is a critical part of making efficient use of Yellowstone and of scoping future system procurements. Analysis of application performance on Yellowstone is the first step in understanding the transition needed to move to new architectures. Primary sources of information for the analysis included: science area, application code, third-party application usage, algorithm, job size, memory usage, threading usage, library usage, and I/O patterns.
3 Yellowstone Environment
Yellowstone (high-performance computing): IBM iDataPlex cluster with Intel Sandy Bridge processors; 1.5 PFLOPS peak; 4,536 nodes; 72,576 Xeon E5-2670 cores; 145 TB total memory; Mellanox FDR InfiniBand full fat-tree interconnect.
GLADE (centralized file systems and data storage): 16.4 PB GPFS file systems, 90 GB/s aggregate I/O bandwidth.
Geyser & Caldera (data analysis and visualization): Geyser is a large-memory system with Intel Westmere-EX processors (16 nodes, 640 Westmere-EX cores, 1 TB/node, 16 NVIDIA K5000 GPUs); Caldera is a GPU computation/visualization system with Intel Sandy Bridge processors (16 nodes, 256 Xeon cores, 64 GB/node, 32 NVIDIA K20X GPUs).
Pronghorn (Intel Phi testbed system): 16 nodes, 256 Xeon cores, 64 GB/node, 32 Intel Xeon Phi 5110P adapters (Knights Corner).
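As a sanity check on the headline numbers above, the core count and the quoted 1.5 PFLOPS peak can be reproduced from the node count, cores per node, and the 2.6 GHz clock cited later for the same processor family. The 8 double-precision FLOPs per cycle per core is an assumed AVX figure for Sandy Bridge, not a value stated in the slides.

```python
# Back-of-envelope check of the quoted 1.5 PFLOPS peak (assumptions noted above).
nodes = 4536
cores_per_node = 16
clock_hz = 2.6e9            # Sandy Bridge clock quoted on the GLADE server slide
flops_per_cycle = 8         # assumed: 4-wide AVX add + 4-wide AVX multiply per cycle

total_cores = nodes * cores_per_node                    # 72,576 cores
peak_flops = total_cores * clock_hz * flops_per_cycle   # ~1.51e15 FLOP/s
print(f"{total_cores} cores, peak ~{peak_flops / 1e15:.2f} PFLOPS")
```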
4 Yellowstone Physical Infrastructure
HPC: 63 iDataPlex racks (72 nodes per rack); racks for the 9 Mellanox FDR core switches and 1 Ethernet switch; 1 19-inch rack (login, service, and management nodes).
GLADE: 19 NSD server, controller, and storage racks; 1 19-inch rack (I/O aggregator nodes, management, InfiniBand & Ethernet switches).
DAV: 1 iDataPlex rack (GPU computation & Knights Corner); 2 19-inch racks (large memory, management, InfiniBand switch).
AMPS: 1 iDataPlex rack; 1 19-inch rack (login, InfiniBand, NSD, disk & management nodes).
Total power required: approximately 1.7 MW overall, of which the HPC resource accounts for roughly 1.4 MW.
5 Yellowstone Environment
Yellowstone: HPC resource, 1.5 PFLOPS peak. GLADE: central disk resource, 16.4 PB. Geyser & Caldera: DAV clusters. Pronghorn: Phi testbed. These are connected by high-bandwidth, low-latency HPC and I/O networks (FDR InfiniBand and 10 Gb Ethernet). NCAR HPSS archive: 160 PB capacity, ~11 PB/yr growth. External connectivity via 1 Gb/10 Gb Ethernet (40 Gb+ in the future) to science gateways (RDA, ESG), data transfer services, remote visualization, partner sites, and XSEDE sites.
6 User Communities
1,134 HPC users in the last 12 months; more than 450 distinct users each month. 535 projects in the last 12 months; more than 250 distinct projects each month.
NCAR staff (29%): roughly equal use by CGD, MMM, ACD, HAO, and RAL; smaller use by CISL, EOL, and other programs.
University (29%): larger number of smaller-scale projects; many graduate students and post-docs.
Climate Simulation Laboratory (28%): small number (<6) of large-scale, climate-focused projects; a large portion devoted to the CESM community.
Wyoming researchers (13%): smaller number of activities from a broader set of science domains.
7 Yellowstone usage reflects its mission to serve the atmospheric sciences
Yellowstone use since the start of production, by science area: Climate and Large-Scale Dynamics 53%; Ocean Sciences 9%; Geospace Sciences 9%; Fluid Dynamics and Turbulence 8%; Weather Prediction 6%; Atmospheric Chemistry 4%; Mesoscale Meteorology 3%; Earth Sciences 3%; Computational Science 3%; all others 2%.
8 Applications used on Yellowstone
More than 50% of use comes from CESM (not shown on the usage-by-application chart). 52+ other applications/models were identified in 171 projects, together representing 95% of resource use. Applications include: GHOST, Zeus3D, Chem, WRF-DART, WRF+, WRF, WACCM, NSU3D-FLOWYO, MHD, FWSDA, CAM/IMPACT, MURaM, CESM-DART, WRFDA, MPAS, CFD, MHD-DART, DART, CAM, PRS, SWMF, WRF-Chem, HiPPSTR, P3D, GEOS5-ModelE, NRCM, NCAR LES, HYCOM, POP-SODA, CAM-CARMA, GCS model, BATS-R-US, LES, ParFlow-CLM, FVCOM, CAM-ECHAM, CAM-AGCM, CLM, WRF-Hydro, Pencil, MPAS-DA, RegCM4-CLM, MITgcm, P3D+LFM, CASINO, 3D-EMPIC, CM1, CESM-ROMS, GFDL-FMS, GFDL-CM1, SAM-LES, and others.
9 Most jobs are small; ~50% of core-hours are consumed by jobs larger than 64 nodes
99% of all jobs use 64 nodes or fewer, but those jobs account for only about half (49%) of the core-hours consumed. Weighted by core-hours, 96% of use comes from jobs of 1,024 nodes or fewer (i.e., 4% of core-hours come from jobs larger than 1,024 nodes).
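The job-count and core-hour shares quoted here come from batch accounting records. A minimal sketch of the underlying calculation follows; the (nodes, core-hours) tuples are invented example data standing in for real accounting records.

```python
# Minimal sketch (not the actual NCAR analysis) of deriving the two curves on
# this slide from per-job accounting data.
jobs = [(1, 12.0), (4, 96.0), (16, 1.5e3), (64, 2.0e4), (1024, 5.0e5)]  # example data

def share_at_or_below(jobs, threshold_nodes):
    """Fraction of jobs and fraction of core-hours from jobs <= threshold_nodes."""
    n_small = sum(1 for n, _ in jobs if n <= threshold_nodes)
    ch_small = sum(ch for n, ch in jobs if n <= threshold_nodes)
    ch_total = sum(ch for _, ch in jobs)
    return n_small / len(jobs), ch_small / ch_total

pct_jobs, pct_core_hours = share_at_or_below(jobs, 64)
print(f"<= 64 nodes: {pct_jobs:.0%} of jobs, {pct_core_hours:.0%} of core-hours")
```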
10 Recent production workload is dominated by 9-64 node jobs
Monthly Yellowstone resource usage (millions of CPU hours), broken down by job size from single-node jobs up to jobs larger than 4,096 nodes. Larger jobs ran during the Accelerated Scientific Discovery (ASD) period in early 2013.
11 Historical trends in job size (max, avg, weighted) show no dramatic shifts
The chart tracks maximum nodes, average node size, and average node size (weighted) over time.
12 Daily average Yellowstone floating-point efficiency is low relative to theoretical peak
The chart shows daily average floating-point efficiency (%) and the corresponding daily average TFLOP/s from January 2013 through mid-2014. The lifetime average application floating-point efficiency is 1.57%.
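Floating-point efficiency here is simply the measured floating-point rate divided by the theoretical peak. The worked example below uses the ~1.5 PFLOPS peak from slide 3; the daily TFLOP/s value is an illustrative number chosen to reproduce the 1.57% lifetime average, not a figure taken from the slides.

```python
# Efficiency = measured rate / theoretical peak (illustrative values).
peak_tflops = 1509.0      # ~1.5 PFLOPS theoretical peak (slide 3)
measured_tflops = 23.7    # hypothetical daily average from hardware counters

efficiency = measured_tflops / peak_tflops
print(f"Floating-point efficiency: {efficiency:.2%}")   # -> 1.57%
```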
13 On average, applications use about 25% of Yellowstone's available memory
Yellowstone has 32 GB of memory per node, which is 2 GB per core. Data were collected over various periods of time: memory use is sampled from each node every 5 minutes, then averaged over time.
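The averaging described above (a sample per node every 5 minutes, averaged over time and compared against the 32 GB installed per node) looks roughly like the sketch below; node names and sample values are invented.

```python
# Sketch of per-node memory sampling and time-averaging (illustrative data only).
node_samples_gb = {
    "node0001": [7.9, 8.2, 8.1, 8.4],   # GB used at successive 5-minute samples
    "node0002": [8.6, 8.5, 8.8, 8.3],
}
installed_gb_per_node = 32.0

for node, samples in node_samples_gb.items():
    avg_gb = sum(samples) / len(samples)
    print(f"{node}: {avg_gb:.1f} GB used on average "
          f"({avg_gb / installed_gb_per_node:.0%} of installed memory)")
```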
14 When looking at runtimes, most jobs consume 30 minutes or less
Two-thirds or more of the very short jobs result from data assimilation activities using the DART framework, usually on 1-4 nodes. The remainder comprise model development and testing, and a small number of groups running large numbers of serial tasks.
15 When looking at core-hours consumed, the distribution of runtimes is fairly uniform
The wallclock limit for most Yellowstone queues is 12 hours; the prior NCAR system had a wallclock limit of 6 hours.
16 GLADE: GLobally Accessible Data Environment
GPFS NSD servers: 20 IBM x3650 M4 nodes with Intel Xeon E5 processors (AVX); 16 cores and 64 GB memory per node; 2.6 GHz clock; 91.8 GB/s aggregate I/O bandwidth (4.8+ GB/s per server).
I/O aggregator servers (export GPFS; GLADE-HPSS connectivity): 4 IBM x3650 M4 nodes with Intel Xeon E5 processors (AVX); 16 cores and 64 GB memory per node; 2.6 GHz clock; 10 Gigabit Ethernet and FDR fabric interfaces.
High-performance I/O interconnect to HPC and DAV resources: Mellanox FDR InfiniBand full fat-tree; 13.6 GB/s bidirectional bandwidth per node.
Disk storage subsystem: 76 IBM DCS3700 controllers and expansion drawers, each populated with 90 3 TB NL-SAS drives; 16.4 PB usable capacity.
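The 13.6 GB/s bidirectional figure is consistent with a single 4x FDR InfiniBand link per node. The sketch below works that out from the per-lane signalling rate and 64b/66b encoding; those are standard FDR parameters assumed here, not values stated in the slides.

```python
# Cross-check of the 13.6 GB/s bidirectional bandwidth for one 4x FDR link.
lanes = 4
lane_gbps = 14.0625     # FDR signalling rate per lane (Gb/s), assumed standard value
encoding = 64 / 66      # 64b/66b line encoding overhead

per_direction_gbs = lanes * lane_gbps * encoding / 8   # ~6.8 GB/s
bidirectional_gbs = 2 * per_direction_gbs               # ~13.6 GB/s
print(f"FDR link: {per_direction_gbs:.1f} GB/s per direction, "
      f"{bidirectional_gbs:.1f} GB/s bidirectional")
```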
17 GLADE File Systems Snapshot (8/5/14)
The snapshot table lists capacity, used space, sub-block/block size, file count, and directory count for each file system:
/glade/u: user program files and environment; 16 KB sub-block / 512 KB block size.
Projects: allocated project space, not purged; 128 KB sub-block / 4 MB block size.
Scratch: scratch space, purged (currently a 90-day policy); 128 KB sub-block / 4 MB block size.
18 Project file system is dominated by a large number of small files
/glade/p (project space), 4 MB block / 128 kB sub-block, August 2014. Space used: 3 PB; file count: 226 million. The histogram bins file counts (all files and NetCDF files) and total terabytes by file size, from files under 512 B up to files of 1 TB and larger.
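One reason small files matter on this file system: with a 4 MB block and a 128 KB sub-block, the smallest allocation for a non-empty file is one sub-block, so tiny files occupy far more disk than they contain. The sketch below illustrates the effect under that simplified model (it ignores metadata, replication, and data stored in inodes).

```python
# Approximate GPFS data-space allocation for a single file, given the
# 4 MB block / 128 KB sub-block geometry of /glade/p and /glade/scratch.
BLOCK = 4 * 1024 * 1024        # 4 MB block size
SUBBLOCK = 128 * 1024          # 128 KB sub-block size

def allocated_bytes(file_size):
    """Data-space allocation for one file (simplified: whole sub-blocks for the tail)."""
    if file_size == 0:
        return 0
    full_blocks, tail = divmod(file_size, BLOCK)
    tail_subblocks = -(-tail // SUBBLOCK)      # ceiling division
    return full_blocks * BLOCK + tail_subblocks * SUBBLOCK

for size in (200, 50_000, 3_000_000, 10_000_000):
    alloc = allocated_bytes(size)
    print(f"{size:>10,} B file -> {alloc:>10,} B allocated ({alloc / size:.1f}x the data)")
```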
19 Scratch file system exhibits the same usage pattern as project space
/glade/scratch (scratch space), 4 MB block / 128 kB sub-block, August 2014. Space used: 4 PB; file count: 172 million. The histogram has the same layout as the previous slide: file counts (all files and NetCDF files) and total terabytes binned by file size.
20 /glade/u file system is used for the home file system and the applications & tools directories
/glade/u (home space), 512 kB block / 16 kB sub-block, August 2014. Space used: 10 TB; file count: 37 million. The histogram bins file counts (all files and NetCDF files) and total gigabytes by file size, from files under 512 B up to files of 1 TB and larger.
21 DAV Resource Utilization is Low
Lifetime average utilization: Caldera 12.0%, Geyser 13.1%. There has been a slight upward trend in utilization of both DAV systems in recent months. While the DAV resources are, in part, meant to be used interactively (and thus should not routinely run at high utilization), they remain relatively underutilized, particularly the Caldera GPU-accelerated computational system.
22 Profile of a typical CESM run
Memory use is between 3.54 GB per node (2° resolution) and 7 GB per node (¼° resolution). Runs use 15 cores per node with 2 threads per core (not all CESM component models are threaded, however); this is the best Yellowstone configuration for modest-sized runs, though it may not hold for all machines. Using all 16 cores appears to produce enough OS noise (jitter) to reduce performance below the 15-core configuration; this is an active area of investigation. The largest cases may not use threading (its effect on scalability is being investigated).
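A rough sketch of the layout arithmetic behind the 15-core, 2-threads-per-core recommendation follows. The node count, and the assumption of one MPI task per active core with two threads each, are illustrative choices; the slide does not spell out the task/thread split.

```python
# Illustrative process/thread layout for the 15-core, 2-threads-per-core configuration.
nodes = 64                 # arbitrary example job size
cores_per_node = 16
cores_used_per_node = 15   # leave one core per node free to absorb OS noise (jitter)
threads_per_core = 2       # use both hardware threads on each active core

tasks_per_node = cores_used_per_node      # assumed: one MPI task per active core
total_tasks = nodes * tasks_per_node
total_threads = total_tasks * threads_per_core
print(f"{total_tasks} MPI tasks x {threads_per_core} threads = {total_threads} threads "
      f"on {nodes} nodes ({cores_per_node - cores_used_per_node} idle core per node)")
```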
23 I/O pattern of a typical CESM run shows lots of small files doing small-block I/O
A typical run opens many files (the count depends on configuration), achieves aggregate I/O performance on the order of MB/s, and spends 3%-8% of its runtime in I/O. Most I/O operations are very small (< 100 byte) POSIX file operations, but model output is written in ~512 kB chunks. Analysis of GLADE/GPFS performance shows no bottlenecks in metadata, disk, or network I/O traffic.
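The "many tiny operations, ~512 kB output chunks" observation comes from classifying POSIX I/O calls by size. A minimal sketch of that classification over an invented trace follows; real data would come from an I/O profiling tool such as Darshan or from GPFS statistics.

```python
# Classify I/O operations from a (hypothetical) trace by size.
io_ops = [("read", 48), ("write", 64), ("read", 80),
          ("write", 524_288), ("write", 96), ("write", 524_288)]   # (op, bytes), invented

tiny_ops = [size for _, size in io_ops if size < 100]
chunk_writes = [size for op, size in io_ops if op == "write" and size >= 512 * 1024]
print(f"{len(tiny_ops)} of {len(io_ops)} operations are under 100 bytes; "
      f"{len(chunk_writes)} output writes are ~512 kB or larger")
```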