Hands-On Workshop bwunicluster June 29th 2015


Agenda
Welcome
Introduction to bwhpc and the bwunicluster
Modules - Software Environment Management
Job Submission and Monitoring
Interactive Work and Remote Visualisation
Questions and Answers, Open Discussion
End

High performance computing in Baden-Württemberg
An introduction to bwhpc and the bwunicluster
Jürgen Salk (bwhpc-c5)

1. bwhpc concept

bwhpc: Where do we come from?
[Picture: bwgrid@ulm]
bwhpc is the successor of bwgrid.
bwgrid: clusters located at 9 universities in Baden-Württemberg
Homogeneous resources, common hardware
Feel at home on all 9 bwgrid sites
One-size-fits-all approach
("One-size-fits-all: describes a piece of clothing that is designed to fit a person of any size." Source: http://dictionary.cambridge.org/dictionary/british/one-size-fits-all)

bwhpc
Strategy for high performance computing in Baden-Württemberg from 2013 to 2018, in particular for Tier 3.
Provision of computing systems tailored to the needs of specific scientific communities.
[Map of Baden-Württemberg showing the sites Mannheim, Heidelberg, Karlsruhe, Tübingen, Ulm and Freiburg together with the communities served: economics & social sciences, general sciences supply, molecular life science, bioinformatics, neurosciences, astrophysics, micro systems engineering, elementary particle physics, computational chemistry; JUSTUS and the bwunicluster are highlighted.]

2. Introduction to the bwunicluster

bwunicluster
Physically located at KIT in Karlsruhe.
Co-financed by Baden-Württemberg's ministry of science, research and arts and the shareholders.
[Chart: financing and usage shares of the shareholder universities Stuttgart, Freiburg, Ulm, Hohenheim, Konstanz, Heidelberg, Tübingen, Mannheim and KIT]
Usage:
Free of charge
General purpose, teaching
Technical computing (sequential & weakly parallel) & parallel computing
Access / limitations:
Open to all members of a shareholder university, but users need to be entitled by their home university
Registration at https://bwidm.scc.kit.edu
Participate in the questionnaire at https://www.bwhpc-c5.de/en/zas/bwunicluster_survey.php
File system quota and computation share are based on the user's own university's share

bwunicluster hardware architecture
2 x login nodes: nodes that are directly accessible by end users for interactive login, file management, program development and interactive pre- and postprocessing.
520 x compute nodes:
512 thin nodes: 16-way (2 x 8) Intel Xeon E5-2670, clock speed 2.6 GHz, 64 GB RAM, 2 TB local disk space
8 fat nodes: 32-way (4 x 8) Intel Xeon E5-4640, clock speed 2.4 GHz, 1 TB RAM, 7 TB local disk space
Fast interconnect: InfiniBand 4 x FDR (4 x 14 Gbit/s)
Access is managed by a batch system:
Jobs are submitted via MOAB.
A job is executed, depending on its priority, as soon as the required resources are available.
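
As a rough illustration of submitting a job through MOAB (a sketch only: the job name, module name, resource limits and program below are placeholders, not values given on this slide):

$ cat myjob.sh
#!/bin/bash
#MSUB -N example_job                  # illustrative job name
#MSUB -l nodes=1:ppn=16               # one thin node, all 16 cores
#MSUB -l walltime=01:00:00            # requested wall clock time
module load compiler/intel            # module name is a placeholder
cd $WORK
./my_program > my_program.out         # 'my_program' is a placeholder
$ msub myjob.sh

The batch system then queues the job and starts it, depending on its priority, once the requested resources become free.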

bwunicluster hardware architecture
[Diagram: compute nodes with local disks (2 TB on thin nodes, 7 TB on fat nodes) connected to the global shared storage: $HOME (469 TB) and $WORK / workspaces (938 TB)]
Global shared storage is provided by the parallel file system Lustre.

bwunicluster HOME file system
Every user is automatically placed into $HOME upon login.
Environment variable: $HOME (e.g. /home/ul/ul_theophys/ul_<username>)
Intended to keep only important permanent user files, e.g. program source code, final result files, personal configuration files.
Daily backups.
Group quotas for disk space and number of files (no quotas for individual users).
How to check quota and disk usage: $ cat $HOME/../diskusage
For users from Ulm the group quota is regularly adjusted to reflect the group size.
Aggregated read/write performance is low (~8 GB/s).
DO NOT COMPUTE IN $HOME!

bwunicluster work file systems
Aggregated read/write performance is much better than for $HOME (~16 GB/s).
Intended for parallel access (shared across multiple nodes) and for high throughput to large files, e.g. temporary job files, intermediate result files (checkpoint files).
No backups!!! Limited lifetime of files!!!
Two different concepts to access the work file system:
(a) via the $WORK environment variable
(b) via the workspace tools

bwunicluster work file systems (a) $WORK
Automatically created for every user upon first login.
Environment variable: $WORK (e.g. /work/ul/ul_theophys/ul_<username>)
Change to it: $ cd $WORK
Limited lifetime: any file in $WORK not accessed for more than 28 days will be automatically deleted; the maximum lifetime of a file is 280 days.
Files no longer needed should be removed by the user.
Group quotas for disk space and number of files may be introduced if required.
How to check quota and disk usage: $ cat $WORK/../diskusage

bwunicluster work file systems (b) Workspace tools (highly recommended)
Advantage: provides more control over the lifetime and location of files.
Create a workspace folder named Simulation with a lifetime of 30 days (max. 60 days) from now: $ ws_allocate Simulation 30
List your workspaces with location, creation date and remaining lifetime: $ ws_list
Extend the lifetime of an existing workspace (up to 3 times): $ ws_extend Simulation 60
Find the location of a workspace folder by its name: $ ws_find Simulation
Release (delete!) a workspace (remember: there is no backup): $ ws_release Simulation
Example usage:
$ ws_allocate Simulation 30
$ SIMWS=`ws_find Simulation`
$ ln -s $SIMWS $HOME
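
A common pattern (again only a sketch; the workspace name, input file and job script are placeholders) is to run a whole simulation campaign inside a workspace so that large intermediate files never touch $HOME:

$ ws_allocate md_run 30               # workspace for this campaign
$ RUNDIR=`ws_find md_run`
$ cp $HOME/input.dat $RUNDIR/         # stage the input into the workspace
$ cd $RUNDIR
$ msub $HOME/md_job.sh                # job script reads and writes in $RUNDIR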

bwunicluster local file systems
Higher aggregated read/write performance than the global file systems.
A temporary subdirectory is automatically created for every individual job on the compute node.
Environment variable: $TMP (e.g. /scratch/slurm_tmpdir/job_<jobnumber>)
Intended for single-node jobs with massive I/O demands.
Data stored in $TMP will be deleted at the end of the job. Copy important results to $HOME, $WORK or an allocated workspace at the end of the job.
No backup!!!
Example usage (somewhat simplified):
$ cp $HOME/inputfile $TMP
$ cd $TMP
$ program <inputfile >outfile
$ cp outfile $HOME
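
Inside a batch job the same staging pattern might look roughly like this (a sketch; the job name, resource requests and program are placeholders):

$ cat tmp_job.sh
#!/bin/bash
#MSUB -N tmp_example                  # illustrative job name
#MSUB -l nodes=1:ppn=16,walltime=02:00:00
cp $HOME/inputfile $TMP               # stage input to the fast node-local disk
cd $TMP
program <inputfile >outfile           # 'program' is a placeholder for your application
cp outfile $HOME                      # copy results back before $TMP is wiped
$ msub tmp_job.sh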

bwunicluster file systems at a glance

Property                 $TMP                                  $HOME       $WORK / workspace
Visibility               local                                 global      global
Lifetime                 batch job runtime                     permanent   max. 240 days
Disk space               2 TB @ thin nodes, 7 TB @ fat nodes   469 TB      938 TB
Quotas                   no                                    yes         if required
Backup                   no                                    yes         no
Aggr. read/write perf.   very high                             low         high

Documentation and Support
Website:
General info: www.bwhpc-c5.de (in English and German)
Best practices guide (documentation on the clusters): www.bwhpc-c5.de/wiki (in English)
User support:
Send email to: <bwunicluster-hotline@lists.kit.edu>
Ticket system: http://www.support.bwhpc-c5.de

Thank you for your attention! Questions?

3. Get ready to start

Prerequisites
Register for the bwunicluster and/or check your registration status in a web browser: https://bwidm.scc.kit.edu
What is your local UID (localuid)?
Optionally set a reasonably strong password for the bwunicluster.
Check your status and/or participate in the questionnaire in the web browser at https://www.bwhpc-c5.de/en/zas/bwunicluster_survey.php
On your local desktop open a terminal window in KDE: press <ALT>+<F2>, type konsole, press <Enter>.
Log into the bwunicluster by typing at the local desktop's terminal command prompt:
$ ssh -X <UserID>@bwunicluster.scc.kit.edu
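
After logging in, a quick sanity check of the environment described in the previous slides might look like this (the example paths are illustrative):

$ echo $HOME                          # e.g. /home/ul/ul_theophys/ul_<username>
$ echo $WORK                          # e.g. /work/ul/ul_theophys/ul_<username>
$ cat $HOME/../diskusage              # group quota and current disk usage
$ ws_list                             # workspaces you have already allocated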