Batch environment PBS (Running applications on the Cray XC30) 1/18/2016

Running on compute nodes

By default, users do not log in and run applications on the compute nodes directly. Instead, they launch jobs on the compute nodes in one of three available modes:

1. Extreme Scaling Mode (ESM): the default high-performance MPP mode.
2. Pre/Post Processing Mode: a throughput mode designed to allow efficient Multiple Application Multiple User (MAMU) access for smaller applications.
3. Cluster Compatibility Mode (CCM): an emulation mode designed primarily to support third-party ISV applications which cannot be recompiled.

Extreme Scaling Mode (ESM)

ESM is a high-performance mode designed to run larger applications at scale. Important features are:
- Dedicated compute nodes for each user job: no interference from other users sharing the node; the minimum quantum of compute is one node.
- Nodes run the low-noise, low-overhead CNL OS.
- Inter-node communication is via the native Aries API: the best latency and bandwidth comms are available using ESM. Applications need to be linked with the Cray comms libraries (MPI, SHMEM etc.) or compiled with Cray language support (UPC, coarrays).
- The appropriate parallel runtime environment is automatically set up between the nodes.

ESM is expected to be the default mode for the majority of applications that require at least one node to run.

Requesting resources from a batch system

As resources are dedicated to users in ESM mode, most supercomputers share their nodes via a batch system. The ECMWF Cray XC30s use PBS Pro to schedule resources.
- Users submit batch job scripts to the scheduler (PBS) from a login node, for execution at some point in the future. Each job requests resources and predicts how long it will run.
- The scheduler (running on an external server) then chooses which jobs to run and when, allocating the appropriate resources at the start.
- The batch system then executes a copy of the user's job script on one of the MOM nodes.
- The scheduler monitors the job throughout its lifetime, reclaiming the resources when the job script finishes or is killed for overrunning.

Each user job script will contain two types of command:
1. Serial commands executed by the MOM node, e.g. quick setup and post-processing commands (rm, cd, mkdir etc.).
2. Parallel launches that run on the allocated compute nodes, launched using the aprun command.

Example ECMWF batch script

#!/bin/ksh

# Request resources from the batch system
#PBS -N xthi
#PBS -l EC_total_tasks=256
#PBS -l EC_threads_per_task=6
#PBS -j oe
#PBS -o ./output
#PBS -l walltime=00:05:00

export OMP_NUM_THREADS=$EC_threads_per_task

# Launch the executable on the compute nodes in parallel
aprun -n $EC_total_tasks \
      -N $EC_tasks_per_node \
      -d $EC_threads_per_task ./a.out

Lifecycle of an ECMWF batch script

The job script is submitted with qsub from a login node (eslogin), held by the PBS queue manager until the requested resources are available, started on a PBS MOM node (which runs the serial commands), and its parallel aprun launches run on the allocated compute nodes.

Example batch job script, run.sh:

#!/bin/bash
#PBS -l EC_total_tasks=256,EC_threads_per_task=6
#PBS -l walltime=1:00:00

cd $WORKDIR                       # serial (runs on the MOM node)
aprun -n 256 -d 6 simulation.exe  # parallel (runs on the compute nodes)
rm -r $WORKDIR/tmp                # serial (runs on the MOM node)

Submitted from eslogin with: qsub run.sh

Submitting jobs to the batch system

Job scripts are submitted to the batch system with qsub:
    qsub script.pbs
Once a job is submitted it is assigned a PBS job ID, e.g. 1842232.sdb.

To view the state of all the currently queued and running jobs, run:
    qstat
To limit the output to jobs owned by a specific user:
    qstat -u <username>
To remove a job from the queue, or cancel a running job:
    qdel <job id>
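A typical session might look like the sketch below (the job ID is simply the example value above; real IDs, queues and states will differ):

    $ qsub script.pbs
    1842232.sdb
    $ qstat -u $USER        # S column: Q = queued, R = running
    $ qdel 1842232.sdb      # remove the job from the queue, or cancel it if running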

Requesting resources from PBS

Jobs provide a list of requirements as #PBS comments in the header of the submission script, e.g.
    #PBS -l walltime=12:00:00
These can be overridden or supplemented at submission time by adding to the qsub command line, e.g.
    qsub -l walltime=11:59:59 run.pbs

Common options are:
  -N <name>                A name for the job.
  -q <queue>               Submit the job to a specific queue.
  -o <output file>         A file to write the job's stdout stream into.
  -e <error file>          A file to write the job's stderr stream into.
  -j oe                    Join the stderr stream into the stdout stream as a single file.
  -l walltime=<HH:MM:SS>   Maximum wall time the job will occupy.

Jobs must also describe how many compute nodes they will need.
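For example, a job header combining these options might look like the sketch below (the queue name "np" and file names are placeholders; use the queues provided on your system):

    #!/bin/bash
    #PBS -N my_job
    #PBS -q np
    #PBS -j oe
    #PBS -o ./my_job.log
    #PBS -l walltime=12:00:00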

Glossary of terms

To understand how many compute nodes a job needs, we need to understand how parallel jobs are described by Cray.

PE / Processing Element: a discrete software process with an individual address space. One PE is equivalent to 1 MPI rank, 1 coarray image, 1 UPC thread, or 1 SHMEM PE.
Thread: a logically separate stream of execution inside a parent PE that shares the same address space.
CPU: the minimum piece of hardware capable of running a PE. It may share some or all of its hardware resources with other CPUs. Equivalent to a single Intel Hyperthread.
Compute Unit: the individual unit of hardware for processing, sometimes described as a core. May provide one or more CPUs.
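A short worked example, using the XC30 Ivybridge nodes described in the later examples (24 compute units and 48 CPUs per node): a hybrid job placing 4 PEs per node with 6 threads per PE uses 4 x 6 = 24 CPUs per node, i.e. one CPU per compute unit; with 12 threads per PE it uses 4 x 12 = 48 CPUs per node, i.e. both hyperthread CPUs of every compute unit.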

Implementing the Parallel Programming Model on hardware

[Diagram: the processing elements of a parallel application, each a Linux process containing its threads in the programming model, are distributed across the compute nodes of the machine; within each node, one software thread is bound to one hardware CPU.]

Launching ESM parallel applications

ALPS: Application Level Placement Scheduler
- aprun is the ALPS application launcher.
- It must be used to run applications on the XC compute nodes in ESM mode, either interactively or from a batch job. If aprun is not used, the application will run on the MOM node (and will most likely fail).
- aprun launches sets of PEs on the compute nodes.
- The aprun man page contains several useful examples.

The four most important parameters to set are:
  -n   Total number of PEs used by the application
  -N   Number of PEs per compute node
  -d   Number of threads per PE (more precisely, the stride between two PEs on a node)
  -j   Number of CPUs to use per compute unit

Running applications on the Cray XC30: some basic examples

Assuming XC30 nodes with 2 x 12-core Ivybridge processors, each node has 48 CPUs/Hyperthreads and 24 compute units/cores.

Launching an MPI application on all CPUs of 64 nodes:
Using 1 CPU per compute unit means a maximum of 24 PEs per node; 64 nodes x 24 ranks/node = 1536 ranks.
    $ aprun -n 1536 -N 24 -j1 ./model-mpi.exe

Launching the same MPI application on 64 nodes but with half as many ranks per node:
Still using 1 CPU per compute unit, but limiting to 12 compute units; this doubles the available memory for each PE on the node.
    $ aprun -n 768 -N 12 -j1 ./model-mpi.exe

Using all available CPUs on 64 nodes:
Using 2 CPUs per compute unit, so 48 PEs per node.
    $ aprun -n 3072 -N 48 -j2 ./model-mpi.exe
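Before running a real model it can be worth sanity-checking a placement by launching something trivial under aprun and counting the PEs per node. A minimal sketch (the node names shown are invented, and aprun's resource-usage summary line is omitted):

    $ aprun -n 48 -N 24 -j1 /bin/hostname | sort | uniq -c
         24 nid00012
         24 nid00013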

Some examples of hybrid invocation

To launch a hybrid MPI/OpenMP application on 64 nodes with 256 total ranks, using 1 CPU per compute unit (max 24 threads per node): use 4 PEs per node and 6 threads per PE. The thread count is set by exporting OMP_NUM_THREADS.
    $ export OMP_NUM_THREADS=6
    $ aprun -n 256 -N 4 -d $OMP_NUM_THREADS -j1 ./model-hybrid.exe

To launch the same hybrid application with 2 CPUs per compute unit, still with 256 total ranks (max 48 threads per node): use 4 PEs per node and 12 threads per PE.
    $ export OMP_NUM_THREADS=12
    $ aprun -n 256 -N 4 -d $OMP_NUM_THREADS -j2 ./model-hybrid.exe

There is much more detail in the later session on advanced placement and binding.

ECMWF PBS job directives

ECMWF have created a bespoke set of job directives for PBS that can be used to define the job. The ECMWF job directives provide a direct mapping between aprun options and PBS jobs:

  Total PEs        aprun -n <n>   #PBS -l EC_total_tasks=<n>
  PEs per node     aprun -N <N>   #PBS -l EC_tasks_per_node=<N>
  Threads per PE   aprun -d <d>   #PBS -l EC_threads_per_task=<d>
  CPUs per CU      aprun -j <j>   #PBS -l EC_hyperthreads=<j>
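Putting this table together with the earlier example script, a hybrid job could be written as the sketch below. It is modelled on the example ECMWF batch script shown earlier; in particular, the assumption that EC_hyperthreads is also available as an environment variable inside the job (like the other EC_ values used there) is not verified here.

    #!/bin/ksh
    #PBS -N hybrid_job
    #PBS -l EC_total_tasks=256       # maps to aprun -n
    #PBS -l EC_tasks_per_node=4      # maps to aprun -N
    #PBS -l EC_threads_per_task=6    # maps to aprun -d
    #PBS -l EC_hyperthreads=1        # maps to aprun -j
    #PBS -l walltime=00:10:00

    export OMP_NUM_THREADS=$EC_threads_per_task
    aprun -n $EC_total_tasks -N $EC_tasks_per_node \
          -d $EC_threads_per_task -j $EC_hyperthreads ./model-hybrid.exe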

A note on the mppwidth, mppnppn, mppdepth and select notation

Casual examination of older Cray documentation, or internet searches, may suggest the alternatives mppwidth, mppnppn and mppdepth for requesting resources. These methods, while still functional, are Cray-specific extensions to PBS Pro which have been deprecated by Altair. More recent versions of PBS Pro support the select syntax, which is in use at other Cray sites. Support for select has been discontinued at ECMWF in favour of a customised schema.

Watching a launched job on the Cray XC

xtnodestat
- Shows how the nodes are allocated and the corresponding aprun commands.

apstat
- Shows the status of aprun processes:
    apstat            overview
    apstat -a [apid]  info about all the applications, or a specific one
    apstat -n         info about the status of the nodes

Batch
- The qstat command shows the batch jobs.

Example qstat output

Job id            Name             User   Time Use  S  Queue
----------------  ---------------  -----  --------  -  -----
123557.sdb        g1ma:t_wconsta   rdx    0         Q  of
123610.sdb        test.sh          syi    0         Q  of
123654.sdb        getini_1         emos   00:00:09  R  os
123656.sdb        getini_1         emos   00:00:00  R  os
123675.sdb        getini_1         emos   00:00:00  R  os
123677.sdb        getini_1         emos   00:00:00  R  os
123680.sdb        getini_1         emos   00:00:09  R  os
123682.sdb        getini_1         emos   00:00:02  R  os
123686.sdb        job_nf           co5    0         Q  of
123694.sdb        job_nf           co5    0         Q  of
123739.sdb        job_nf           co5    0         Q  nf

Example xtnodestat output (cct)

Current Allocation Status at Thu Feb 06 15:16:14 2014

      C0-0
  n3  --S---------;
  n2  SSS--S----------
  n1  SSS--S----------
c0n0  SSS----------
     s0123456789abcdef

Legend:
      nonexistent node
  S   service node
  ;   free interactive compute node
  -   free batch compute node
  A   allocated (idle) compute or ccm node
  ?   suspect compute node
  W   waiting or non-running job
  X   down compute node
  Y   down or admindown service node
  Z   admindown compute node

Available compute nodes: 1 interactive, 45 batch

Pre/Post Processing Mode

This is a new mode for the Cray XC30, designed to allow multi-user jobs to share compute nodes:
- More efficient for applications running on less than one node.
- Possible interference from other users on the node.
- Uses the same fully featured OS as the service nodes.

There are multiple use cases; applications can be:
- entirely serial
- embarrassingly parallel, e.g. fork/exec, spawn + barrier
- shared memory, using OpenMP or another threading model
- MPI (limited to intra-node MPI only*)

Scheduling and launching are handled by PBS, similar to any normal PBS cluster deployment.

Understanding module targets

The compiler wrappers cc, CC and ftn are cross-compilation environments that by default target the compute nodes:
- This means the compilers will build binaries explicitly targeting the CPU architecture of the compute nodes.
- They will also link distributed-memory libraries by default.

Binaries built using the default settings will probably not work on serial nodes or pre/post processing nodes. Users may need to switch the CPU target and/or opt to remove network dependencies, for example when compiling serial applications for use on eslogin or pre/post processing nodes. Targets are changed by adding/removing the appropriate modules.

Cray XC30 target modules

CPU architecture targets:
- craype-ivybridge (default): tell the compiler and system libraries to target Intel Ivybridge processors.
- craype-sandybridge: tell the compiler and system libraries to build binaries targeting Intel Sandybridge processors.

Network targets:
- craype-network-aries (default): link network libraries into the binary (ESM mode).
- craype-target-local_host: do not link network libraries into the binary (serial apps, non-ESM mode).

For MAMU nodes, use mpiexec from the module cray-smplauncher instead of aprun if you want support for MPI.
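A sketch of switching targets before building a serial tool for the eslogin or pre/post processing nodes is shown below. The source file name is invented, exact module names and versions vary with the installed programming environment (check module avail first), and whether the CPU-target swap is needed depends on the processors in the nodes you are building for.

    module swap craype-network-aries craype-target-local_host   # do not link network libraries
    module swap craype-ivybridge craype-sandybridge             # retarget the CPU architecture, if required
    cc -o serial_tool serial_tool.c                             # the wrapper now builds a serial, non-ESM binary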

Cluster Compatibility Mode (CCM)

CCM creates small TCP/IP-connected clusters of compute nodes. It was designed to support legacy ISV applications which cannot be recompiled to use ESM mode.
- There is a small cost in performance due to the overhead of using TCP/IP rather than native Aries calls.
- Users are also responsible for initialising the application on all the participating nodes, including any daemon processes.
- Each node is still used exclusively by a single user, so CCM is only recommended when recompilation is unavailable.