Scientific Computing in practice


Scientific Computing in practice Winter Kickstart 2016 Ivan Degtyarenko, Janne Blomqvist, Mikko Hakala, Simo Tuomisto School of Science, Aalto University Feb 29, 2016 slide 1 of 99

What is it? Triton crash course Part of the Scientific Computing in Practice series (http://science-it.aalto.fi/scip/) ~4 hour intro to Triton usage Covered: running and monitoring SLURM jobs of all kinds, storage how-to, running on GPUs, Matlab runs, getting help etc Live demos on Triton Feel free to try: a temporary account is available if you do not have one Course purpose: kick-off for Aalto researchers to get started with the Triton computational resources at Aalto Feel free to interrupt and ask slide 2 of 99

An HPC cluster A bunch of powerful computers connected through a fast network, plus a common storage system cross-mounted on all the nodes. (Diagram: login node and compute nodes #1, #2, ... #567 connected to networked storage.) On top of that, a couple of administrative servers. Most often runs Linux. Another critical component is the batch system, which takes care of queueing. slide 3 of 99

Triton: overview Heterogeneous compute nodes: a mix of Xeon and Opteron nodes Memory up to 256GB on normal nodes, plus several fat (large memory) nodes with 1TB of RAM InfiniBand as interconnect GPU computing with Tesla cards DDN + Lustre filesystem as storage Software: Scientific Linux 6 / 7, SLURM as the batch system, additional apps provided through modules slide 4 of 99

Triton tech specs (older part) HP SL390s G7 nodes 2x Xeon X5650 @ 2.67GHz (6 cores) / 48 GB RAM 2x local SATA disks (7.2k) -> 800GB of local storage 2x Tesla M2050/M2070/M2090 GPU (Fermi, 3/6 GB mem) on 19 servers / 24 GB RAM HP BL465c G6 nodes 2x AMD Opteron 2435 @ 2.6GHz (6 cores) 200GB local storage 16/32/64 GB RAM HP DL580 G7 nodes 4x Xeon X7542 @ 2.67 GHz (6 cores) 1.3TB local storage 1024 GB RAM slide 5 of 99

Triton tech specs (newer, as of 2016) HP SL230s G8 nodes 2x Xeon E5-2680 v2 @ 2.8 GHz (10 cores each) 1.8TB local storage 64/256 GB RAM Dell PowerEdge C4130 nodes 2x Xeon E5-2680 v3 @ 2.5 GHz (12 cores each) 1TB SAS drives 128/256 GB RAM slide 6 of 99

Triton as a parallel computer In principle, all computers today are parallel from a hardware perspective (!) Triton's compute nodes are multiprocessor parallel computers in themselves. On top of that, they are connected through a network to make a larger computer: a cluster. Hybrid architecture: it is both shared memory and distributed memory Shared: within one node Distributed: over the cluster slide 7 of 99

About interconnect Interconnect: the link between the compute nodes Distributed memory model; main characteristics are latency and bandwidth InfiniBand fabrics on Triton: fast & expensive, very high throughput and very low latency; used for MPI and Lustre [aka $WRKDIR] on Triton Xeon E5-2680 v3 nodes: PowerEdge C4130: 4xFDR InfiniBand (56GT/s) Xeon Westmere nodes: SL390, DL580: 4xQDR InfiniBand (40GT/s) Xeon Ivy Bridge nodes: SL230: 4xFDR InfiniBand (56GT/s) Opteron nodes: BL465: 4xDDR InfiniBand (20GT/s) 4x spine switches Mostly fat-tree configuration, subscription differs See the full InfiniBand map at https://wiki.aalto.fi/display/triton/cluster+overview#clusteroverview-networking Ethernet: cheap & slow 1G internal net for SSH, NFS [aka /home] on Triton 10G uplink towards the Aalto net slide 8 of 99

About storage on Triton Key component of the cluster: reliability, speed, high throughput for hundreds of network clients, massive IOPS, ability to handle large volumes of data quickly Parallel filesystem Lustre: mounted under /triton (/scratch) Current Lustre storage system (v1.8): 600 TB raw capacity, 20 000 IOPS, 3-4 GB/s stream (read/write), DDN SFA10K system Upcoming Lustre (v2.5): 1.8PB of raw space, 100 000 IOPS, 30 GB/s read/write, DDN SFA12K in the background, Round Robin and IML slide 9 of 99

GPU cards for computing GPGPU stands for General-Purpose computation on Graphics Processing Units. GPUs are massively parallel manycore processors with fast memories: perfect accelerators. Triton has 12x Tesla K80 on gpu[020-022], 22x Tesla M2090 cards on gpu[001-011] nodes, 6x Tesla M2070 on gpu[017-019] and 10x Tesla M2050 on gpu[012-016] slide 10 of 99

GPUs: ways to use them existing applications tuned for a particular accelerator. Examples for Tesla: Gromacs, MATLAB, LAMMPS, NAMD, Amber, Quantum Espresso, etc.; for Intel: Accelrys, Altair, CD-Adapco, etc. libraries: cuBLAS, cuFFT, Thrust for GPUs native programming with CUDA (see our web pages for Topi's GPU course) OpenACC: compiler directives that specify regions of C/C++ and Fortran code to be offloaded from the main CPU slide 11 of 99

Support team The core team: Mikko Hakala, D.Sc. (Tech) (CS+NBE/SCI), Janne Blomqvist, D.Sc. (Tech) (PHYS+NBE/SCI), Simo Tuomisto (CS/SCI), Ivan Degtyarenko, D.Sc. (Tech) (PHYS/SCI) responsible for daily cluster maintenance wiki area at http://wiki.aalto.fi, look for Triton User Guide; support tracker at http://tracker.triton.aalto.fi easy entry point for new users Local support team members, named by the departments, are part of the Science-IT support: they provide department user support close to the researchers and act as contact persons between the schools (departments) and Science-IT slide 12 of 99

Triton practicalities From now on you will need your laptop Access & help Login, environment, disk space Code compilation Running jobs: serial, parallel GPU runs SLURM advances slide 13 of 99

Accessing Triton $ ssh AALTO_LOGIN@triton.aalto.fi Requires a valid Aalto account. Contact your local Triton support member and ask to be granted access. Directly reachable from: Aalto net: department workstations, wired visitor networks, wireless Aalto, Aalto Open and Eduroam at Aalto, CSC servers Outside of Aalto: must hop through the Aalto shell servers; first ssh to taltta.aalto.fi or kosh.aalto.fi, alternatively amor.becs.hut.fi, james.ics.hut.fi, hugo.physics.aalto.fi slide 14 of 99
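A minimal ~/.ssh/config sketch for hopping through an Aalto shell server from outside the Aalto net; the Host aliases are hypothetical and AALTO_LOGIN stands for your own login name (requires a reasonably recent OpenSSH for ssh -W):

  # ~/.ssh/config -- reach Triton via kosh.aalto.fi (sketch)
  Host kosh
      HostName kosh.aalto.fi
      User AALTO_LOGIN

  Host triton
      HostName triton.aalto.fi
      User AALTO_LOGIN
      # tunnel through the shell server when outside the Aalto net
      ProxyCommand ssh -W %h:%p kosh

After this, 'ssh triton' works from outside in one step.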

Best practice: SSH key On the workstation from which you want to log in to Triton: $ ssh-keygen $ ssh-copy-id -i ~/.ssh/id_rsa.pub triton.aalto.fi $ ssh triton.aalto.fi For the sake of security / convenience the SSH key must have a secure passphrase! slide 15 of 99

Triton session ssh triton.aalto.fi cd $WRKDIR A few commands to get started: quota, slurm p, slurm f slide 16 of 99

Frontend node: intended usage The frontend triton.aalto.fi is just one computer among the others, adapted for server needs (Xeon E5-2695 v2, 24x cores with 256G of memory) File editing File transfers Code compilation Job control Debugging No multi-CPU loads No multi-GB datasets in memory; Matlab, R, IPython sessions otherwise OK Checking results slide 17 of 99

Getting help Where to look for help: at wiki.aalto.fi look for the Triton User Guide; access requires a valid Aalto account; see also the FAQ over there follow the triton-users@list.aalto.fi list: all users MUST BE there tracker.triton.aalto.fi: for any issue; see whether it has been reported already, if not, then shoot Accessible from the Aalto net only, from outside run a proxy through your department (hugo, james, amor, etc) local support team member: for granting access, quota slide 18 of 99

Triton nodes naming scheme Named after the real label on each physical node Compute nodes: cn01, cn02, cn03 ... cn488, ivy01 ... ivy48 Same for GPU nodes: gpu001, gpu002 ... gpu022 Fat nodes: fn01, fn02 Mostly administrative purpose nodes: tb01 ... tb08 Listing: cn[225-248] or gpu[001-002] Opterons: cn[01-112], cn[249-488] Xeons Westmere: cn[113-248] Xeons Ivy Bridge: ivy[01-48] Xeons E5-2680 v3: pe[01-67] This naming scheme is used by SLURM commands, anywhere one needs a nodelist: squeue -w gpu[001-019] slide 19 of 99

Quick demo: getting started Let us use the wiki, the command line (slurm, squeue, etc) and Google Log in to triton.aalto.fi, cd $WRKDIR (if you use the course account, create a directory for yourself) Find out: the absolute path to your $WRKDIR, your account expiration date, your current quota at $WRKDIR and $HOME, your group memberships, for how long the system has been up and how many users are logged in right now Find out: how big the cluster is: the number of CPUs of each type, memory on different nodes, the number of GPU cards and their type Find out: the default version of the GNU compilers (C/Fortran), the default version of Python Check how busy the cluster is: number of pending jobs, number of running jobs Find out the CPU type and memory amount on the frontend. Try to log in to any of the compute nodes ('ssh cn01' for instance). Did it succeed? Why not? What is the longest run possible on the cluster? How many CPUs can one use at once and for how long? How many idling GPU nodes are there? slide 20 of 99
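A few standard Linux/SLURM commands that may help with the exercise above (a sketch of possible starting points, not the expected solutions):

  pwd; echo $WRKDIR            # absolute path to the work directory
  quota                        # quotas at $WRKDIR and $HOME
  groups; uptime               # group membership, uptime and logged-in user count
  sinfo -N -l                  # per-node CPU/memory/feature overview
  squeue -t PENDING | wc -l    # pending jobs
  squeue -t RUNNING | wc -l    # running jobs
  gcc --version; gfortran --version; python --version   # default compilers / Python
  scontrol show partition      # partition time limits and sizes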

Transferring files Network share (NBE, CS) SSHFS /triton/ filesystem mounted on workstations Mount remote directories over SSH SCP/SFTP Copy individual files and directories (inefficiently) $ scp file.txt triton:file.txt Rsync over SSH Like scp, but avoids copying existing files again $ rsync -auv --no-group source/ triton:target/ slide 21 of 99
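A minimal SSHFS sketch for mounting your Triton work directory on a Linux/Mac workstation; the local mount point and the remote path are illustrative, substitute your own department and username:

  mkdir -p ~/triton-wrkdir
  sshfs triton.aalto.fi:/triton/<dept>/work/<username> ~/triton-wrkdir
  # ... work on the files as if they were local ...
  fusermount -u ~/triton-wrkdir    # unmount when done (on a Mac: umount ~/triton-wrkdir)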

Software: modules environment $ python3 bash: python3: command not found $ module load python3 $ python3 Python 3.3.1 (default, Apr 8 2013, 16:11:51)... module avail: compilers, libraries, interpreters and other software Maintained by Triton admins or FGI partners (/cvmfs) module load: adds installed programs and libraries to $PATH, $LD_LIBRARY_PATH, etc, and sets up the required environment variables for the lifetime of the current shell or job slide 22 of 99

module subcommands module help (or man module), module list, module purge, module avail, module whatis python, module unload python, module load openmpi/1.6.5-intel, module swap openmpi/1.8.1-intel, module show intel slide 24 of 99

Compilers: GNU Default compiler for Linux: gcc Using the GNU C compiler: $ gcc -O2 -Wall hello.c -o hello $ ./hello Hello world! C++: g++ Fortran: gfortran slide 25 of 99

Compilers: Intel Intel compilers C, C++, Fortran available through module load intel icc, icpc, ifort Example: ifort hello.f90 -o hello Comes with the MKL libs (math, MPI) Intel or GNU? Both are valid; choose the one that performs better while still producing correct results More on compiling/debugging/optimizing tomorrow slide 26 of 99

Demo: Module Let's investigate module's subcommands (avail, load, list, purge, swap, unload, show, whatis) Find out which Intel compiler versions are available Load the latest Intel module, plus the corresponding OpenMPI, and load the latest LAMMPS. Check what you have loaded. Swap the latest OpenMPI module with openmpi/1.6.5-intel. Investigate the available Python versions. What environment variables does it set? Purge all your currently loaded modules. Investigate the available MATLAB versions, load the latest one. Add the latest Intel module to your .bashrc so that it loads automatically every time you log in Mount your Triton work directory to a local directory on your laptop with SSHFS. (On Linux/Mac see sshfs, on Windows find out the way to do it) slide 27 of 99

Batch computing in HPC A master controller manages a set of compute nodes. Users submit jobs to the controller, which allocates resources from the compute nodes. Jobs are generally non-interactive (I/O from files, no GUI). slide 28 of 99

Batch computing system slide 29 of 99

What is a 'job'? Job = Your program + handling instructions Shell script that runs your program Single command or complex piece of programming Allocation size (Nodes or CPUs) x Memory x Time Constraints Submitted in a single script (.sh) file from the front node or through Grid interface slide 30 of 99

Job terminology Job: the top-level job container Array job: a set of (near identical) jobs Job step: each job consists of a number of job steps, typically launched with srun inside the job batch script Job task: each job step consists of a number of tasks running in parallel slide 31 of 99

The job scheduler: SLURM Basic units of resources: Nodes / CPU cores Memory Time GPU cards / harddrives Takes in cluster jobs from users Compares user-specified requirements to resources available on compute nodes Starts jobs on available machine(s) Individual computers and the variety of hardware configurations (mostly) abstracted away On Triton we have a /etc/slurm/job_submit.lua script that selects right QOS and partition based on the user-defined time/mem/cpu job requirements slide 32 of 99

SLURM (cont.) Tracks historical usage (walltime etc; sacct)...to enable hierarchical fair-share scheduling Jobs sorted in pending queue by priority (sprio) Computed from user's historical usage (fair-share), job age (waiting time), and service class (QOS) Highest priority jobs are started on compute nodes when resources become available Using cluster time consumes fairshare, lowering priority of future jobs for that user & department slide 33 of 99

SLURM batch script To submit your program for execution with sbatch: #!/bin/sh #SBATCH --time=5:00 #SBATCH --mem-per-cpu=10 srun /bin/echo hello world Submitting: sbatch -p play hello.slrm The batch script is a shell script with normal shell commands accompanied by SLURM instructions. It can be hundreds of shell scripting lines if needed. There is no need to define the partition and QOS in general; SLURM automagically takes care of them based on the time/mem/cpu info you specify (!) slide 34 of 99
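The same hello.slrm example written out line by line (a sketch; the tiny time and memory limits come straight from the slide):

  #!/bin/sh
  #SBATCH --time=5:00          # 5 minutes
  #SBATCH --mem-per-cpu=10     # 10 MB per allocated CPU core

  srun /bin/echo hello world

Submit it with: sbatch -p play hello.slrm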

Job instructions Job instructions are special comments in the batch script; with them you tell SLURM your needs: how long you run, how much memory and how many CPUs you require, what the job name should be and where the output should go, etc (see man sbatch for details). For instance: #SBATCH --job-name=my_name #SBATCH --output=my_name.%j.out #SBATCH --constraint=opteron #SBATCH --mem-per-cpu=X Memory requirement per allocated CPU core; attempting to use more results in job termination. By default, X is in MB; with a 'G' suffix it is in GB #SBATCH --time=d-hh:mm The job is killed after the time limit + some slack. slide 35 of 99

Important: job time limit Always use the estimated time option (!): --time=days-hours:minutes:seconds Three and a half days: --time=3-12 One hour: --time=1:00:00 30 minutes: --time=30 One day, two hours and 27 minutes: --time=1-2:27:00 Otherwise the default time limit is the partition's default time limit By default, the longest runtime is 5 days; longer runs are possible but the user must be explicitly added to the 'long_runs_list' slide 36 of 99

Important: job memory limit Always specify how much memory your job needs: --mem-per-cpu=<MB> How to figure out how much memory is needed? top on your workstation, look at the RSS column /proc/<pid>/status or similar on your workstation, see the VmHWM field sacct -l -j <jobid> | less -S for a completed job and sstat -j <jobid> for a running one. Check the MaxRSS column. Fix your jobscript, and iterate. Note: MaxRSS is sampled and might be inaccurate for very short jobs! Note 2: For parallel jobs, this only works properly if srun was used to launch every job step (NOT mpirun). slide 37 of 99

After job submission The job gets a unique jobid; from now on it is registered and whatever happens to the job, it is already in the history The job waits in PENDING state until the resource manager finds available resources on matching node(s) The job script is executed on the nodes assigned to it; the requested memory and CPUs are reserved for it for a certain time While the job is running, the user has access to the node(s) her/his job is running on; otherwise SSH access to compute nodes is restricted Program output is saved to '.out' and '.err' files slide 38 of 99

Pending reasons: why is my job not running? Look at the NODELIST(REASON) field (Priority): Other jobs have higher priority than your job. (Resources): Your job has enough priority to run, but there aren't enough free resources. (ReqNodeNotAvail): You request something that is not available. Check memory requirements per CPU, CPUs per node. Possibly the time limit is the issue. It could be that due to a scheduled maintenance break all nodes are reserved, and thus your -t parameter can't be larger than the time left until the break. (QOSResourceLimit): Your job exceeds the QOS limits. The QOS limits include wall time, the number of jobs a user can have running at once, the number of nodes a user can use at once, etc. This may or may not be a permanent status. If your job requests a wall time greater than what is allowed or exceeds the limit on the number of nodes a single job can use, this status will be permanent. However, your job may also be in this status if you currently have jobs running and the total number of jobs running or aggregate node usage is at your limits. In that case, jobs in this state will become eligible when your existing jobs finish. (AssociationResourceLimit): The job exceeds some limit set on the association. On Triton this in practice means the per-user GrpCPURunMins limit, which currently is 1.5M minutes per user. Wait a while for running jobs to proceed, so that new jobs may start. Also, shorter job time limits help. slide 39 of 99
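A quick way to see the pending reason for your own jobs, as a sketch using standard squeue format fields (%r prints the reason):

  # job ID, partition, state and pending reason for your jobs
  squeue -u $USER -o "%.18i %.9P %.8T %r"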

Job priority SLURM assigns each job a 'priority'. When resources are available, the highest priority jobs are launched first. Exception: the backfill algorithm allows small jobs to run provided they don't change the estimated start time of the highest priority pending job. Hierarchical fair-share algorithm. slurm prio or sprio -n -l -u $USER Impact factors: job length, wait time, partition priority, fair share slide 40 of 99

Job allocation When the job enters the RUNNING state, resources on a set of nodes have been assigned to it The SLURM batch script is copied to and executed on the first of the assigned nodes User code is now responsible for making use of the allocated resources: in other words, even if you got X number of CPUs, it doesn't mean your program will run in parallel automatically The list of assigned nodes is exposed to the user program through environment variables: $SLURM_CPUS_ON_NODE, $SLURM_JOB_NODELIST, etc. In the case of MPI, srun handles launching processes on the correct nodes, but non-MPI programs will run on the first node only (!) slide 41 of 99

Cluster partitions slurm p In general SLURM takes care of submitting a job to the right partition (!), but there are cases when one needs to give a partition name explicitly, like the 'play' or 'pbatch' queues: #SBATCH --partition=name A partition is a group of computers, for instance play is meant for <15 min test runs (to see if the code runs) The default partition 'batch' is for serial/parallel runs, a mix of Xeons and Opterons short has additional nodes, for <4h jobs; you should use it if the task can finish in time, there are no downsides hugemem is reserved for >64 GB jobs One machine with 1 TB memory and 24 slightly slower CPUs Separate from 'batch' only to keep it immediately available to big jobs when specifically requested pbatch is for large parallel runs on up to 32 nodes at once slurm p: partition info (i.e. sinfo -o "%10P %.11l %.15F %10f %N") NODES(A/I/O/T): Number of nodes by state in the format "allocated/idle/other/total" slide 42 of 99

The sandbox: 'play' partition Three dedicated machines for up to 15 minute test runs and debugging: ivy48, cn01 and tb008, i.e. an Opteron and two Xeons, Westmere and Ivy Bridge respectively sbatch -p play my_job.slrm slide 43 of 99

Checking results Check the output files Check if the job finished successfully with: $ slurm history or $ slurm history 4hours or $ slurm history 3days slide 44 of 99

Demo/Exercise: 'Hello Triton' Let us run and monitor jobs with SLURM Pick up the batch script at /triton/scip/kickstart/hello.slrm and submit it with sbatch. Look at it while it is pending (slurm q, squeue -j, scontrol show job). Why is it pending? Estimated start time? Cancel the job either by jobid or by name. Submit to the 'play' queue. Failed? Modify the time (require 1 minute) and memory (10M) and submit again. Check the job history (slurm history). Check the memory usage; is it possible? What is the longest job one can run on the default partition? Investigate the scontrol show partition command output. Pick up something that eats more memory (bash_mem.slrm), run it and check the memory usage. Modify the memory requirement according to your findings and run the job once again. Imitate a job failing (for instance, set the time limit for bash_mem.slrm to less than it really takes). For the sake of debugging: find out the exact node name your job was run on, the job ID, start/end time Real case: there is nothing in the standard output; what could be the reason? Modify hello.slrm to forward the error and standard outputs to a new file named '<job_id>.out' Modify hello.slrm so that it notifies you by email when the job has ended or failed slide 45 of 99

Slurm utility Triton-specific wrapper by Tapio Leipälä for Slurm commands squeue, sinfo, scontrol, sacct, sstat... Informative commands only (safe to test out) Shows detailed information about jobs, queues, nodes, and job history Example: show current jobs' state: $ slurm queue JOBID PARTITION NAME TIME START_TIME STATE 430816 batch test 0:00 N/A PENDING slide 46 of 99

Slurm utility (cont.) slurm p = sinfo -o "%10P %.11l %.15F %10f %N" slurm q = squeue -S T,P,S,i -o "%18i %9P %14j %.11M %.16S %.8T %R" -u $USER slurm j <job_id> = scontrol show job <job_id> slide 47 of 99

Job steps (sequential) Within a single batch script one can run several programs sequentially. From SLURM's point of view they are job steps, smaller units of work started inside a job allocation. Every single 'srun' line in your slurm script is yet another step: srun -n 1 myprogram srun -n 1 myprogram2 srun -n 1 myprogram3 A running job can be tracked with $ slurm steps <jobid> or sstat -a -j <jobid> Steps will be marked as <jobid>.1, <jobid>.2, etc After completion, job accounting remembers the exit status and resource usage of each individual job step: $ slurm history slide 48 of 99
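A sketch of such a sequential batch script; the program names and limits are illustrative:

  #!/bin/sh
  #SBATCH --time=1:00:00
  #SBATCH --mem-per-cpu=500

  # each srun below becomes its own job step within the same allocation
  srun -n 1 ./preprocess input.dat
  srun -n 1 ./compute input.dat results.dat
  srun -n 1 ./postprocess results.dat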

Array jobs The way to submit thousands of tasks at once: they will be handled by SLURM as a single job. So-called embarrassingly parallel jobs, i.e. fully independent runs that use the same binary but different datasets as input All tasks have the same limits: time, memory size etc. Supported for batch jobs only (i.e. not for interactive sessions) On Triton one can launch an array job with at most 9999 jobs: --array=0-9999 or --array=0-100:5 or --array=1,3,5,7 An array of jobs, from SLURM's point of view, is a set of individual jobs, each with its own array index. Individual jobs are assigned an ID like jobid_index (for instance 1639672_1), where jobid is the first job ID of the array Additional environment variables that can be used in .slrm scripts: $SLURM_ARRAY_JOB_ID the first job ID of the array $SLURM_ARRAY_TASK_ID the job array index value SLURM allows handling the jobs either as a whole or individually: scancel 1639672 to cancel all array jobs at once scancel 1639672_1 to cancel only the first job of the array scancel 1639672_[1-3] to cancel the first three jobs only squeue -r to see all individual jobs, otherwise they are shown as a single record slide 49 of 99

Array job example Submits 200 jobs; each has its own directory and the corresponding input_file prepared in advance #!/bin/sh #SBATCH --time=5 #SBATCH --mem-per-cpu=100 #SBATCH --array=1-200 cd run_$SLURM_ARRAY_TASK_ID srun $WRKDIR/my_program input_file slide 50 of 99
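A sketch of how the run_* directories for the example above might be prepared in advance; the inputs/ directory and per-index file naming are illustrative:

  # create run_1 ... run_200, each with its own input file
  cd $WRKDIR
  for i in $(seq 1 200); do
      mkdir -p run_$i
      cp inputs/input_file.$i run_$i/input_file
  done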

Job dependencies --dependency=<dependency_list> allows postponing the job start until some other condition is met, where <dependency_list> is of the form <type:job_id[:job_id][,type:job_id[:job_id]]> Example: sbatch --dependency=after:63452 job.slrm runs after job 63452 has been started Other types of dependencies: afterany:job_id[:jobid...] begin execution after the specified jobs have terminated afternotok:... after the specified jobs have terminated in some failed state (non-zero exit code, timed out, etc) afterok:... after the specified jobs have successfully executed (exit code of zero) expand:job_id resources allocated to this job should be used to expand the specified job. The job to expand must share the same QOS (Quality of Service) and partition singleton begin execution after any previously launched jobs sharing the same job name and user have terminated slide 51 of 99
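A sketch of chaining two jobs so that the second starts only after the first finishes successfully, assuming your sbatch supports the --parsable flag (which prints just the job ID); the script names are illustrative:

  # submit the first job and capture its job ID
  JOBID=$(sbatch --parsable first_job.slrm)
  # the second job starts only if the first one exits with code 0
  sbatch --dependency=afterok:$JOBID second_job.slrm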

Constraints Limit the job to running on nodes with some specified feature(s): #SBATCH --constraint=[xeon|xeonib] or #SBATCH --constraint=tesla2090 Available: opteron, xeon, xeonib, tesla* Useful for multi-node parallel jobs, GPU jobs slide 52 of 99

Exclusive access to nodes #SBATCH --exclusive The resource manager guarantees that no other job runs on the same node(s) A terrible idea for <12 core jobs in general May be necessary for timing cache-sensitive code Don't use it unless you need it; constraints only make the job less likely to find available resources Makes sense for parallel jobs: moving data over the network is orders of magnitude slower than interprocess communication inside a single host Resources are wasted if ntasks is not a multiple of 12 (or 20 for ivy[*] nodes) slide 53 of 99

Demo/Exercise: array jobs Array jobs, job steps, dependencies, constraints with SLURM Run any two jobs, but so that the second one starts only if the first one has finished successfully. Find out the difference between --constraint=xeon|xeonib|opteron and --constraint=[xeon|xeonib|opteron] Create on your own, or copy, the /triton/scip/kickstart/array.slrm template for array jobs. Run it as is on the play queue. Check the output. Modify array.slrm to run 10 jobs on 5 different Opterons only, two processes per node at maximum Modify array.slrm so that each individual job has a name with the job index in it, and the standard output goes to the corresponding 'run_*' directory slide 54 of 99

Running in parallel on Triton Means using more than one CPU core to solve the problem Either across the nodes with MPI or within one node as multithreaded (i.e. OpenMP) Using --ntasks=# (or just -n #) allocates a # number of CPU cores; alone it does not guarantee the job lands on one computer, though SLURM will try hard to place the tasks on the same node(s) To get them to run on the same node(s), specify the number of nodes (a multiple of 12, or 20 if you go for xeonib nodes only): #SBATCH --ntasks=24 #SBATCH --nodes=2 slide 55 of 99

Multithreaded job #!/bin/sh #SBATCH --time=2:00:00 #SBATCH --mem=1g #SBATCH --cpus-per-task=12 export OMP_PROC_BIND=true srun openmp_program We will be talking more about it on Day #3 slide 56 of 99
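The same OpenMP example written out as a sketch; the OMP_NUM_THREADS line is an addition not shown on the slide, but is a common way to match the thread count to the allocated cores:

  #!/bin/sh
  #SBATCH --time=2:00:00
  #SBATCH --mem=1g
  #SBATCH --cpus-per-task=12

  # bind threads to cores and start as many threads as allocated cores
  export OMP_PROC_BIND=true
  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
  srun openmp_program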

MPI job #!/bin/sh #SBATCH --time=30 #SBATCH --mem-per-cpu=1g #SBATCH --nodes=4 #SBATCH --ntasks=48 module load openmpi srun --mpi=pmi2 mpi_program slide 57 of 99

MPI flavors on Triton Several versions available through module OpenMPI MVAPICH2 Try both; your software may benefit from a particular flavor Compiled against a particular compiler suite: GCC or Intel; Intel also provides its own Parallel math libs: BLACS and ScaLAPACK More about MPI tomorrow slide 58 of 99
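A sketch of compiling and submitting a small MPI program with the GCC + OpenMPI toolchain; the source and script names are illustrative, and the Intel toolchain would be loaded analogously (module load intel plus the matching MPI module):

  # load an MPI flavor and compile with the MPI compiler wrapper
  module load openmpi
  mpicc -O2 -Wall hello_mpi.c -o hello_mpi
  # submit a batch script like the one on the MPI job slide
  sbatch hello_mpi.slrm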

Scalability Refers to the ability to demonstrate a proportionate increase in parallel speedup with the addition of more processors. Factors that contribute to scalability include: Hardware, particularly memory-CPU bandwidth and network communications The application algorithm Parallel overhead The characteristics of your specific application and coding In general, the ideal scalability factor is 2 and a good one is about 1.5; that is, a good program runs about 1.5 times faster when you double the number of CPU cores slide 59 of 99

--gres option #SBATCH --gres=name:x instructs SLURM to find a node for you with the specified resources, where 'name' is the resource name and 'x' is a number Available resources (in addition to CPUs and memory) on Triton: gpu, spindle (aka # of harddrives) $ sinfo -o '%.48N %.9F %.15G' NODELIST NODES(A/I GRES cn[129-224],ivy[01-48],tb[003,005-008] 141/2/6/1 spindle:2 cn[01-18,20-64,68-69,71-112,225-362,365-488] 232/108/2 spindle:1 cn[113-128] 16/0/0/16 spindle:4 fn02 1/0/0/1 spindle:6 gpu[002-011] 9/0/1/10 spindle:2,gpu:2 gpu[012-019] 7/0/1/8 spindle:1,gpu:2 slide 60 of 99

Using GPUs The gpu001 node is for playing/compiling/debugging, the others gpu[002-019] are for production runs; gpu[001-011] and gpu[017-019] are Tesla 2090 and 2070 with 6G of video memory, the others gpu[012-016] are Tesla 2050 with 3G of memory gpu[] nodes are the same Xeon nodes with GPU cards installed and are dedicated to GPU computing only see the 'gpu' and 'gpushort' partitions; can run for up to 30 days module load cuda sbatch gpu_job.slrm ## require GPUs, could be 1 or 2 #SBATCH --gres=gpu:1 ## optional, require Tesla 2090 only #SBATCH --constraint=tesla2090 module load cuda srun --gres=gpu:1 my_gpu_program See the Triton User Guide for more examples and details slide 61 of 99
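The fragments above assembled into one sketch of a GPU batch script; the program name and the time/memory limits are illustrative:

  #!/bin/sh
  #SBATCH --time=1:00:00
  #SBATCH --mem-per-cpu=2000
  #SBATCH --gres=gpu:1              # require one GPU
  ##SBATCH --constraint=tesla2090   # optional: only Tesla M2090 nodes

  module load cuda
  srun --gres=gpu:1 my_gpu_program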

Interactive login We strongly recommend adapting your workflow to normal batch jobs, but in case you need to run something interactively, there are a few ways to do it Request one hour on a node: sinteractive -t 1:00 You will receive a normal shell with one CPU core and 1G of memory allocated for you. sinteractive is not a slurm command but a bash script at /usr/local/bin/sinteractive It runs on the dedicated partition 'interactive'; from SLURM's point of view it is a normal job in the RUNNING state, and always has the name _interactive To finish the job, just exit the shell slide 62 of 99

Interactive runs salloc does the same as sbatch but interactively, i.e. it allocates resources, runs the user's program and opens a session that accepts the user's input. When the program is complete, the resource allocation is over. salloc --time=120:00 --mem-per-cpu=2500 srun my_program On the other hand, one can just open a session and then use srun to start the program; every single srun is considered by SLURM as yet another job step triton$ salloc --time=120:00 --mem-per-cpu=2500 --nodes=1 --ntasks=4 salloc: Pending job allocation 929194 salloc: job 929194 queued and waiting for resources salloc: job 929194 has been allocated resources salloc: Granted job allocation 929194 triton$ srun hostname cn01 triton$ srun /path/to/executable arguments... Usage of srun is obligatory (!), otherwise you will run on the front-end Most of the options valid for sbatch can be used with salloc as well slide 63 of 99

Demo/Exercise: SLURM's advances SLURM advances: running multithreaded and MPI jobs, interactive jobs/logins, using GPUs Start a sinteractive session with no options and find out the session's end time; what is the default time? The 'GPU computing' page of the Triton User Guide has an example of running the devicequery utility on a GPU node. Try it with salloc. Copy the multithreaded hello_omp and hello_omp.slrm. Make sure that you require 5 minutes, 50M of memory and run 4 threads only. Run with sbatch. Copy the MPI version hello_mpi, make your own .slrm file (consult the slides), adapt it and run with salloc as is. Library missing? Load openmpi/1.8.5-java first and try again. Try compiling hello_[mpi omp].c with GCC and running it on your own. For really advanced participants, do the same with the Intel compiler. slide 64 of 99

Matlab jobs slide 65 of 99

Matlab environments on Triton Interactive use on fn01 (currently 1 machine) triton$ ssh -XY fn01 fn01$ module load matlab; matlab & 24 CPUs and 1TB memory shared by a number of users Non-interactive Slurm jobs (unlimited jobs) Matlab started from a user-submitted job script Licensing terms changed in 2013; use of Matlab in cluster and grid environments is no longer prohibited slide 66 of 99

Matlab environment Slurm job + Matlab script Unconstrained license Batch job without GUI Simple and efficient way to run serial jobs Best use case: a series of single-threaded jobs slide 67 of 99

Matlab on Triton Primarily for serial runs, use Slurm directly: a batch job without a graphical window. 1) Prepare an M-file for the job 2) Prepare a slurm submission script (example on the right) 3) Submit #!/bin/bash #SBATCH -t 04:00:00 #SBATCH --mem-per-cpu=1000 module load matlab cd /triton/becs/scratch/myuname/ # execute run.m srun matlab -nojvm -r run slide 68 of 99
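A sketch of passing a parameter into a Matlab function from the batch script, e.g. for sweeping parameter values with an array job; the function name solve_ode, its argument and the directory are hypothetical, while -nodisplay/-nosplash/-nojvm/-r are standard Matlab flags:

  #!/bin/bash
  #SBATCH -t 00:15:00
  #SBATCH --mem-per-cpu=1000
  #SBATCH --array=1-9

  module load matlab
  cd $WRKDIR/matlab-demo
  # call solve_ode(mu) with mu taken from the array index, then quit Matlab
  srun matlab -nojvm -nodisplay -nosplash -r "solve_ode($SLURM_ARRAY_TASK_ID); exit"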

Matlab DEMO slide 69 of 99

Matlab Demo An actual research case from the Brain and Mind group Analyzing fMRI data from actual measurements: taking into account physiology and head movement and doing the final visualization for further analysis Using an existing Matlab package available online (https://git.becs.aalto.fi/bml/bramila) Need to write the integration for the packages: manage the input data, set up a proper Matlab environment (code paths), submit job(s) to the queue, run things on /local, collect results to /triton slide 70 of 99

Matlab Exercise A small existing Matlab code that solves an ODE and plots the results Split the computation and the visualization Write the computation as a Matlab function Handle input/output: run on /local and save the results to a file at /local Run for Mu values of 1..9 in parallel (play queue) Visualize the final results slide 71 of 99

Triton file systems: an introduction slide 72 of 99

File systems: Contents Motivation How storage is organized in Triton How to optimize IO Do's and Don'ts Exercises slide 73 of 99

File systems: Motivation Organizing your work flow so that it suits your data will make you more efficient Dealing with IO properly prevents performance bottlenecks Due to shared nature of IO in a cluster environment a single user can cause problems to other users slide 74 of 99

How storage is organized in Triton slide 77 of 99

File systems: Network storage User's home folder (NFS filesystem) /home/<username> directory Common to every computational node Meant for scripts etc Nightly backup, 1GB quota Work/Scratch (Lustre filesystem) /triton/<department>/work/<username> Common to every computational node Meant for input/output data Quota varies per project No backups (reliable system with RAID) Has an optimized find function, 'lfs find'. In the autumn, when the system is updated, even more dedicated tools. slide 78 of 99

File systems: Local storage Storage local to the compute node /local directory Best for calculation-time storage Copy relevant data away after the computation; it will be deleted after the job completes The $TMPDIR variable defines a temporary directory for the job Ramdisk for fast IO operations Special location, similar to /local /ramdisk directory (20GB) Use case: a job spends most of its time doing file operations on millions of small files. slide 79 of 99

File systems: Summary
Location   Type     Usage          Size/quota
/home      NFS      Home dir       1 GB
/local     local    Local scratch  ~800GB (varies)
/triton    Lustre   Work/Scratch   200GB (varies, even several TB's)
/ramdisk   Ramdisk  Local scratch  20GB
slide 80 of 99

How to optimize IO slide 81 of 99

File systems: Data stored in files What is a file? File = metadata + block data Accessing block data: cat a.txt Showing some metadata: ls -l a.txt slide 82 of 99

File systems: Files stored in /triton In /triton, metadata and block data are separated: Metadata query answered by metadata server Block data query, object storage server answers slide 83 of 99

File systems: Meters of performance Stream I/O and random IOPS Stream measures the speed of reading large sequential data from the system IOPS measures random small reads to the system, the number of metadata/block data accesses To measure your own application, profiling or internal timers are needed A rough estimate can be acquired from /proc/<pid>/io or by using strace slide 86 of 99

File systems: Meters of performance Total numbers:
Device            IOPS           Stream
SATA disk (7.2k)  50-100         50 MB/s
SSD disk          3000-10 000    500 MB/s
Ramdisk           40 000         5000 MB/s
Triton NFS        300            300 MB/s
Triton Lustre     20 000         4000 MB/s
With 200 concurrent jobs using storage (per job):
Device            IOPS           Stream
SATA disk (7.2k)  50-100         50 MB/s
SSD disk          N/a            N/a
Triton NFS        1.5            1.5 MB/s
Triton Lustre     100            20 MB/s
DON'T run jobs from HOME! (NFS) slide 87 of 99

File systems: How to optimize IO/data? Know how your program does its data handling; know which file system your program uses for its IO Measure your program with profilers, e.g. strace -c -e trace=file <program> Minimize the number of unnecessary file calls, e.g. logging output at every timestep Load data in good sized chunks Do not do metadata calls unless they are necessary; access block data directly Save data in good formats with plenty of metadata slide 88 of 99

File systems: Advanced Lustre By default striping is turned off lfs getstripe <dir> shows striping lfs setstripe -c N <dir> stripe over N targets; -1 means all targets lfs setstripe -d <dir> revert to default Use with care. Useful for HUGE files (>100GB) or parallel I/O from multiple compute nodes (MPI-I/O). Real numbers from a single client (16 MB IO blocksize for 17 GB):
Striping  File size  Stream MB/s
Off (1)   17 GB      214
2         17 GB      393
4         17 GB      557
max       17 GB      508
max       11 MB      55
max       200 KB     10
slide 89 of 99

File systems: Workflow suggestion Storage: keep the program folder in /home with a symlink to the data; keep the data under /work. Runtime: copy the input from /work to the node, work with the temp data on /local inside the node, then copy the output back to the network file system (/work). slide 90 of 99
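A sketch of this workflow inside a batch script, using the job-specific $TMPDIR on the node-local disk; the project directory and file names are illustrative:

  #!/bin/sh
  #SBATCH --time=1:00:00
  #SBATCH --mem-per-cpu=1000

  # copy the input from the work directory to the node-local disk
  cp $WRKDIR/myproject/input.dat $TMPDIR/

  # run in the local directory, keeping temp data off Lustre/NFS
  cd $TMPDIR
  srun $WRKDIR/myproject/my_program input.dat

  # copy only the results back to the network file system
  cp results.dat $WRKDIR/myproject/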

Do's and Don'ts slide 91 of 99

File systems: Do's and Don'ts Lots of small files (+10k, <1MB) A bad starting point already in general, though sometimes there is no way to improve (e.g. legacy code) /ramdisk or /local: the best place for these Lustre: not the best place. With many users, the local disk provides more IOPS and stream in general NFS (Home): a very bad idea, do not run calculations from Home The very best approach: modify your code. Large file(s) instead of many small ones (e.g. HDF5). Or even no files at all. Sometimes the IO is due to unnecessary checkpointing. slide 92 of 99

File systems: Do's and Don'ts ls vs ls -la ls in a directory with 1000 files: a simple ls is only a few IOPS ls -la in a directory with 1000 files: Local fs: 1000+ IOPS (stat() on each file!) NFS: a bit more overhead Lustre (striping off): 2000 IOPS (+rpcs) Lustre (striping on): 31000 IOPS! (+rpcs) => The whole Lustre gets stuck for a while for everyone Use ls -la and variants (ls --color) ONLY when needed slide 93 of 99

File systems: Do's and Don'ts 500GB of data, estimated read time in minutes:
Location              Many small files  Single big file
/local                170+              170
/triton (stripe off)  170+              25
/triton (stripe max)  BAD IDEA          11
Use /local or Lustre (+ maybe striping) for big files Note that the Triton results above assume exclusive access (in reality: shared with all other users)! slide 94 of 99

File systems: Do's and Don'ts Databases (sqlite) These can generate a lot of small random reads (=IOPS) /ramdisk or /local: the best place for these Lustre: not the best place. With many users, the local disk provides more IOPS and stream in general NFS (Home): a very bad idea slide 95 of 99

Exercise: File systems ? minutes to proceed; use the wiki/Google to solve All scripts are in /triton/scip/lustre Simple file system operations: Use mkdir and ln to create a project like the one in the workflow example. Use 'strace -c' on 'ls <file>' and 'ls -l <file>'. Compare the output with e.g. grep/diff. Copy create_iodata.sh to your data folder and run it to create sample data. Compare 'strace -c' of 'lfs find <directory>' and 'find <directory>' searches on the directory. Copy iotest.sh to your test project folder and submit it with sbatch. What does the output mean? Try to convert the code to use $TMPDIR. Once you're sure it works, change 'ls' to 'ls -l'. Compare the results. Convert the code to use tar/zip/gzip/bzip2. Can you deduce anything from the /proc/<pid>/io output? slide 96 of 99

File systems: Best practices When unsure what the best approach is: Check the above Do's and Don'ts Google? Ask your local Triton support person tracker.triton.aalto.fi Ask your supervisor Trial-and-error (profile it) slide 97 of 99

File systems: Further topics These are not covered here. Ask/Google if you want to learn more. Using Lustre striping (briefly mentioned) HDF5 for small files Benchmarking, what is the share of IO of a job MPI-IO Hadoop slide 98 of 99

Questions or comments regarding Triton file systems? slide 99 of 99