Content: MPIRUN Command, Environment Variables, LoadLeveler, SUBMIT Command, IBM Simple Scheduler. IBM PSSC Montpellier Customer Center

1 Content
IBM PSSC Montpellier Customer Center
- MPIRUN Command
- Environment Variables
- LoadLeveler
- SUBMIT Command
- IBM Simple Scheduler

2 Control System
Service Node (SN)
- An IBM System p 64-bit system
- The Control System and its database run on this system
- Access to this system is generally privileged
- Communication with Blue Gene is via a private 1 Gb control Ethernet
Database
- A commercial database tracks the state of the system: hardware inventory, partition configuration, RAS data, environmental data, and operational data including partition state, jobs, and job history
- Service action support for hot-plug hardware
Administration and System Status
- Administration via either a console or the web-based Navigator interface

3 Service Node Database Structure
DB2 holds four databases: Configuration, Operational, Environmental, and RAS.
- Configuration database: the representation of all the hardware on the system
- Operational database: information and status for things that do not correspond directly to a single piece of hardware, such as jobs, partitions, and history
- Environmental database: current values for all hardware components on the system, such as fan speeds, temperatures, and voltages
- RAS database: hard errors, soft errors, machine checks, and software problems detected from the compute complex
Useful log files: /bgsys/logs/bgp

4 Job Launching Mechanism
mpirun Command
- Standard mpirun options supported
- May be used to launch any job, not just MPI-based applications
- Has options to allocate partitions when a scheduler is not in use
Scheduler APIs enable various schedulers:
- LoadLeveler
- SLURM
- Platform LSF
- Altair PBS Pro
- Cobalt
Note: all of these schedulers launch jobs through mpirun/mpiexec.

5 MPIRUN Implementation
Functionally identical to the BG/L implementation, plus a new implementation and new options
- The rsh/ssh mechanism is gone for security reasons, replaced by a daemon running on the Service Node
- The freepartition command is integrated as an option (-free)
- Standard input (STDIN) is supported on BG/P (only by MPI task 0)
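For example, a block can be released directly from mpirun once it is no longer needed. A minimal sketch, where the block name MYBLOCK is a placeholder and the exact argument of -free should be checked against mpirun -h on your driver level:

    mpirun -partition MYBLOCK -free wait    # block until the partition is actually freed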

6 MPIRUN Command Parameters (1)
-args "program args"  Pass "program args" to the Blue Gene job on the compute nodes
-cwd <Working Directory>  Specifies the full path to use as the current working directory on the compute nodes. The path is specified as seen by the I/O and compute nodes
-exe <Executable>  Specifies the full path to the executable to run on the compute nodes. The path is specified as seen by the I/O and compute nodes
-mode {SMP | DUAL | VN}  Specifies what mode the job will run in: SMP, DUAL, or virtual node (VN) mode
-np <Nb MPI Tasks>  Create exactly n MPI ranks for the job. Aliases are -nodes and -n
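To illustrate how -np relates to -mode on a 64-node Blue Gene/P partition (each compute node has 4 cores), a hedged sketch with an illustrative block name and executable:

    mpirun -partition MYBLOCK -mode SMP  -np 64  -exe /path/to/a.out   # 1 rank per node, up to 4 threads each
    mpirun -partition MYBLOCK -mode DUAL -np 128 -exe /path/to/a.out   # 2 ranks per node, up to 2 threads each
    mpirun -partition MYBLOCK -mode VN   -np 256 -exe /path/to/a.out   # 4 ranks per node, 1 core each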

7 MPIRUN Command Parameters (2)
-enable_tty_reporting  By default mpirun tells the control system and the C runtime on the compute nodes that STDIN, STDOUT, and STDERR are tied to TTY-type devices; this option reports their real status instead, which enables STDOUT buffering (GPFS blocksize)
-env "<Variable Name>=<Variable Value>"  Set an environment variable in the environment of the job on the compute nodes
-expenv <Variable Name>  Export an environment variable from mpirun's current environment to the job on the compute nodes
-label  Have mpirun label the source of each line of output
-partition <Block ID>  Specify a predefined block to use
-mapfile <mapfile>  Specify an alternative MPI topology. The mapfile path must be fully qualified, as seen by the I/O and compute nodes
-verbose <level>  Set the verbosity level. The default is 0, which means that mpirun will not output any status or diagnostic messages unless a severe error occurs. If you are curious about what is happening, try levels 1 or 2. All mpirun-generated status and error messages appear on STDERR

8 MPIRUN Command Reference (Documentation)

9 MPIRUN Example
mpirun -partition XXX -np 128 -mode SMP -exe /path/exe -cwd working_directory -env "OMP_NUM_THREADS=4 XLSMPOPTS=spins=0:yields=0:stack=..."
Execution settings:
- 128 MPI tasks
- SMP mode
- 4 OpenMP threads
- 64 MB thread stack
mpirun application program interfaces available: get_parameters, mpirun_done

10 MPIRUN Environment Variables
Most command-line options for mpirun can be specified using an environment variable:
-partition              MPIRUN_PARTITION
-nodes                  MPIRUN_NODES
-mode                   MPIRUN_MODE
-exe                    MPIRUN_EXE
-cwd                    MPIRUN_CWD
-host                   MMCS_SERVER_IP
-env                    MPIRUN_ENV
-expenv                 MPIRUN_EXP_ENV
-mapfile                MPIRUN_MAPFILE
-args                   MPIRUN_ARGS
-label                  MPIRUN_LABEL
-enable_tty_reporting   MPIRUN_ENABLE_TTY_REPORTING
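As a usage note, recurring settings can be exported once in the shell rather than repeated on each invocation. A minimal sketch, with MYBLOCK and the executable path as placeholders:

    export MPIRUN_PARTITION=MYBLOCK
    export MPIRUN_MODE=SMP
    export MPIRUN_NODES=128
    mpirun -exe /path/to/a.out    # partition, mode, and size are taken from the environment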

11 STDIN / STDOUT / STDERR Support
- STDIN, STDOUT, and STDERR work as expected: you can pipe or redirect files into mpirun and pipe or redirect output from mpirun
- STDIN may also come from the keyboard interactively
- Any compute node may send STDOUT or STDERR data
- Only MPI rank 0 may read STDIN data
- By default mpirun tells the control system and the C runtime on the compute nodes that it is writing to TTY devices. This is because logically mpirun looks like a pipe; it cannot seek on STDIN, STDOUT, and STDERR even if they are coming from files
- As always, STDIN, STDOUT, and STDERR are the slowest ways to get input and output from a supercomputer; use them sparingly
- STDOUT is not buffered by default and can generate a huge overhead for some applications; such applications should enable STDOUT buffering with the -enable_tty_reporting option
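A brief redirection sketch (block name, executable, and file names are illustrative):

    mpirun -partition MYBLOCK -np 64 -mode SMP -exe /path/to/a.out < input.dat > run.log 2> run.err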

12 MPIEXEC Command
What is mpiexec?
- Method for launching and interacting with parallel Multiple Program Multiple Data (MPMD) jobs on Blue Gene/P
- Very similar to mpirun; the only exception is that the arguments supported by mpiexec are slightly different
Command Limitations
- A pset is the smallest granularity for each executable, though one executable can span multiple psets
- You must use every compute node of each pset; specifically, different -np values are not supported
- The job's mode (SMP, DUAL, or VN) must be uniform across all psets

13 MPIEXEC Command Parameters
Only parameter / environment variable supported by mpiexec that is not supported by mpirun:
-configfile / MPIRUN_MPMD_CONFIGFILE
The following parameters / environment variables are not supported by mpiexec, since their use is ambiguous for MPMD jobs:
-args / MPIRUN_ARGS
-cwd / MPIRUN_CWD
-env / MPIRUN_ENV
-env_all / MPIRUN_EXP_ENV_ALL
-exe / MPIRUN_EXE
-exp_env / MPIRUN_EXP_ENV
-partition / MPIRUN_PARTITION
-mapfile / MPIRUN_MAPFILE

14 MPIEXEC Configuration File
Syntax: -n <Nb Nodes> -wdir <Working Directory> <Binary>
Example configuration file content:
-n 32 -wdir /home/bgpuser /bin/hostname
-n 32 -wdir /home/bgpuser/hello_world /home/bgpuser/hello_world/hello_world
Runs /bin/hostname on one 32-node pset and hello_world on another 32-node pset
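The configuration file is then passed to mpiexec with -configfile. A minimal sketch, where the file name is illustrative and partition selection is left to the scheduler or the environment (since -partition is not an mpiexec option):

    mpiexec -configfile /home/bgpuser/mpmd.cfg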

15 SUBMIT Command
submit is the HTC counterpart of the mpirun command
- Command used to run an HTC job; it acts as a lightweight shadow for the real job running on a Blue Gene node
- Simplifies user interaction with the system by providing a simple common interface for launching, monitoring, and controlling HTC jobs
- Run from a Frontend Node
- Contacts the control system to run the HTC user job
- Allows the user to interact with the running job via the job's standard input, standard output, and standard error
Standard system location: /bgsys/drivers/ppcfloor/bin/submit

16 HTC Technical Architecture

17 SUBMIT Command Syntax
/bgsys/drivers/ppcfloor/bin/submit [options]
or
/bgsys/drivers/ppcfloor/bin/submit [options] binary [arg1 arg2 ... argn]
Options:
-exe <exe>  Executable to run
-args "arg1 arg2 ... argn"  Arguments; must be enclosed in double quotes
-env <env=value>  Define an environment variable for the job
-exp_env <env>  Export an environment variable to the job's environment
-env_all  Add all current environment variables to the job's environment
-cwd <cwd>  The job's current working directory
-timeout <seconds>  Number of seconds before the job is killed
-mode <SMP | DUAL | VNM>  Job mode
-location <Rxx-Mx-Nxx-Jxx-Cxx>  Compute core location; regular expressions supported
-pool <id>  Compute Node pool ID
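For instance, a single HTC work request might be launched as follows; a hedged sketch where the pool name, working directory, binary, and argument are placeholders:

    /bgsys/drivers/ppcfloor/bin/submit -mode SMP -pool MYPOOL -cwd /home/bgpuser -env OMP_NUM_THREADS=1 /home/bgpuser/bin/my_htc_app input.dat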

18 IBM Scheduler for HTC
IBM Scheduler for HTC handles the scheduling of HTC jobs
HTC job submission:
- External work requests are routed to the HTC scheduler; single or multiple work requests from each source
- IBM Scheduler for HTC finds an available HTC client and forwards the work request
- The HTC client runs the executable on a compute node
- A launcher program on each compute node handles the work request sent to it by the scheduler. When the work request completes, the launcher program is reloaded and the client is ready to handle another work request

19 IBM Scheduler for HTC Components
Purpose
- Provides features not available with the submit interface: queuing of jobs until compute resources are available, and tracking of failed compute nodes
- The submit interface is intended for use by job schedulers, not by end users directly
Components
- simple_sched daemon: runs on the Service Node or a Frontend Node; accepts connections from startd and the client programs
- startd daemons: run on a Frontend Node; connect to simple_sched, get jobs, and execute submit
- Client programs:
  - qsub: submits a job to run
  - qdel: deletes a job submitted by qsub
  - qstat: gets the status of a submitted job
  - qcmd: admin commands

20 HTC Executables
htcpartition
- Utility program shipped with Blue Gene
- Responsible for booting / freeing HTC partitions from a Frontend Node
run_simple_sched_jobs
- Provides an instance of IBM Scheduler for HTC and startd
- Executes commands either specified in command files or read from stdin
- Creates a cfg file that can be used to submit jobs externally to the command files or stdin
- Exits when the commands have all finished (or can be told to keep running)
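The command file consumed by run_simple_sched_jobs is a list of work requests; a minimal sketch of a cmds.txt, assuming one command per line and illustrative paths, as used in the job examples below:

    /home/bgpuser/bin/process_sample /data/sample_001.in
    /home/bgpuser/bin/process_sample /data/sample_002.in
    /home/bgpuser/bin/process_sample /data/sample_003.in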

21 IBM Scheduler for HTC Integration with LoadLeveler
LoadLeveler handles:
- Partition reservation and booting (new LoadLeveler keyword: bg_partition_type = HTC_LINUX_SMP)
- Partition shutdown
IBM Scheduler for HTC handles:
- Queueing of the batch of executions, either specified in command files or read from stdin
- Submission of the executions
- Execution recovery when a failure occurs: only system faults are recovered (a failed submission can be retried); user program failures are considered permanent

22 IBM Scheduler for HTC Glide-In to LoadLeveler

23 LoadLeveler Job Command File Example
#!/bin/bash
# @ bg_partition_type = HTC_LINUX_SMP
# @ class = BGP64_1H
# @ comment = "Personality / HTC"
# @ environment =
# @ error = $(job_name).$(jobid).err
# @ group = default
# @ input = /dev/null
# @ job_name = Personality-HTC
# @ job_type = bluegene
# @ notification = never
# @ output = $(job_name).$(jobid).out
# @ queue

# Command File
COMMANDS_RUN_FILE=$PWD/cmds.txt
/bgsys/opt/simple_sched/bin/run_simple_sched_jobs $COMMANDS_RUN_FILE
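Submission then uses the standard LoadLeveler commands; a brief usage sketch with an illustrative file name:

    llsubmit personality_htc.cmd    # submit the job command file to LoadLeveler
    llq -u $USER                    # check the status of your jobs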

24 IBM Scheduler for HTC Integration with LoadLeveler < 3.5
- The IBM Scheduler for HTC / LoadLeveler integration described above is valid for LoadLeveler versions >= 3.5
- Integration with LoadLeveler versions < 3.5 is looser: LoadLeveler doesn't handle partition boot / shutdown
Consequences
- Explicit partition boot / shutdown is required in the LoadLeveler job command file
- Achieved through calls to the HTC binary command htcpartition:
  htcpartition --boot { }
  htcpartition --free

25 LoadLeveler Job Command File Example (LL < v3.5)
#!/bin/bash
# @ class = BGP64_1H
# @ comment = "Personality / HTC"
# @ environment =
# @ error = $(job_name).$(jobid).err
# @ group = default
# @ input = /dev/null
# @ job_name = Personality-HTC
# @ job_type = bluegene
# @ notification = never
# @ output = $(job_name).$(jobid).out
# @ queue

# Command File
COMMANDS_RUN_FILE=$PWD/cmds.txt
# Local Simple Scheduler Configuration File
SIMPLE_SCHED_CONFIG_FILE=$PWD/my_simple_sched.cfg

# Free the HTC partition on exit
partition_free() {
    echo "Freeing HTC Partition"
    /bgsys/drivers/ppcfloor/bin/htcpartition --free
}

# Boot the HTC partition, then make sure it is freed when the script exits
/bgsys/drivers/ppcfloor/bin/htcpartition --boot --configfile $SIMPLE_SCHED_CONFIG_FILE --mode linux_smp
trap partition_free EXIT

# Run the batch of HTC commands
/bgsys/opt/simple_sched/bin/run_simple_sched_jobs -config $SIMPLE_SCHED_CONFIG_FILE $COMMANDS_RUN_FILE
