Shifter on Blue Waters


Why Containers?

[Slide diagram: "Your Computer" vs. "Another Computer (Supercomputer)". An application depends on particular software libraries and system libraries, which differ between the two machines. Container images such as centos:latest, centos:7, and ubuntu:14.04 package the application together with its software and system libraries.]

Container solution for HPC systems

- Developed at NERSC: https://github.com/nersc/shifter
- Works with Docker images
- Images are optimized for faster redistribution across compute nodes
- Lets you use custom software stacks
- Integrated into Blue Waters and implemented as a module
- Limited functionality compared to Docker

User-Defined Images

Shifter images are called User-Defined Images, or UDIs. The shifter module is available on MOM nodes only.

[Slide diagram: User --ssh--> Login nodes --qsub--> MOM nodes --aprun--> Compute nodes]

Getting a Docker image onto Blue Waters

$ qsub -I -l walltime=00:30:00 -l nodes=1:ppn=1     (takes you from a login node to a MOM node)
$ module load shifter
$ getdockerimage
usage: getdockerimage images
       getdockerimage lookup <image>:<tag>
       getdockerimage pull <image>:<tag>

Docker image name: <repo>/<imagename>:<tag>

$ getdockerimage images

Returns the names, IDs, sizes, and creation dates of the User-Defined Images available on Blue Waters.

...
Image: centos:5 (...omitted...), Size: 271.48MB, Date: 2016/08/30 13:19:35
Image: centos:latest (...omitted...), Size: 182.95MB, Date: 2016/12/15 12:21:23
...

$ getdockerimage lookup <imagename>:<tag>

Returns the ID of an image if it is available on the system, or an error otherwise.

$ getdockerimage lookup centos:7
Retrieved docker image centos:7 resolving to ID: 9baab0af79c4fab5200255fe226cb147f95255028bd400761a8242da43688512

$ getdockerimage lookup ubuntu:zesty
ERR: Unable to find image 'ubuntu:zesty'

$ getdockerimage pull <imagename>:<tag>

Downloads (or updates) a Docker image, from Docker Hub (hub.docker.com) only.

$ getdockerimage pull centos:latest
Retrieved docker image centos:latest resolving to ID: d4350798c2ee9f080caff7559bf4d5a48a1862330e145fe7118ac721da74a445

$ getdockerimage pull ncsa/polyglot:latest
Downloading: c9d83ebb2deb [========================>] 32 B/32 B
...
Retrieved docker image ncsa/polyglot:latest resolving to ID: eb21e6d7bc24d5e0f37afacdb7327378b9ac1435fd32fd29d8cd3a95078ebbf8

Docker image not on Docker Hub? Simply upload it to Docker Hub!

$ docker pull <repo>/<image>:<tag>
or
$ docker build -t <your-repo>/<image>:<tag> -f Dockerfile .

and then

$ docker login
$ docker push <your-repo>/<image>:<tag>

Using a Shifter UDI on Blue Waters in a Batch Job

1. Submit a job: $ qsub ...
2. Load shifter: $ module load shifter
3. Start computing (use aprun -b and shifter):
   $ aprun -b -n X -N Y -- shifter --image=docker:osimage:version --...
4. Mount Blue Waters volumes:
   ... shifter ... --volume=/path/on/bw:/path/in/udi ...

Using a Shifter UDI on Blue Waters in a Batch Job

1. Add the special generic resource request: $ qsub ... -l gres=shifter ...
2. Set the UDI environment variable: $ qsub ... -v UDI=<image>:<tag> ...
3. Set the CRAY_ROOTFS environment variable in the job: $ export CRAY_ROOTFS=UDI
4. Use aprun with the -b flag: $ aprun -b ... -- <app> <arguments>

Using a Shifter UDI on Blue Waters in a Batch Job

$ qsub -I -l nodes=2:ppn=32 -l gres=shifter -v UDI=centos:latest ...
$ export CRAY_ROOTFS=UDI

Using a Shifter UDI on Blue Waters in a Batch Job

MOM node:
$ /bin/bash --version
GNU bash, version 3.2.51(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.

Compute node (inside the UDI):
$ aprun -b -n 1 -N 1 -- /bin/bash --version
GNU bash, version 4.2.46(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Application 382134 resources: utime ~0s, stime ~1s, Rss ~4244, inblocks ~7, outblocks ~0

Using a Shifter UDI on Blue Waters in a Batch Job

$ cat sample_job.pbs
#!/bin/bash
#PBS -l gres=shifter
#PBS -v UDI=centos:latest
#PBS -l nodes=1:ppn=32:xe
#PBS -l walltime=00:30:00
export CRAY_ROOTFS=UDI
cd $PBS_O_WORKDIR
aprun -b -n 16 -N 16 -- <app1> <args1> ... &
aprun -b -n 16 -N 16 -- <app2> <args2> ... &
wait

Note: the /projects, /scratch, and home folders from Blue Waters are mapped into the UDI.
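The two aprun lines in sample_job.pbs run in the background, and `wait` blocks until both finish. The same shell pattern can be seen in isolation, with `sleep` standing in for the aprun launches:

```shell
# Background-and-wait pattern from sample_job.pbs, with placeholder
# commands instead of the two aprun launches.
( sleep 1; echo "app1 done" ) &
( sleep 1; echo "app2 done" ) &
wait                              # block until both background jobs exit
echo "both applications finished"
```

Without the `wait`, the batch script would reach its end (and the job would terminate) while the backgrounded applications were still running.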

SSH to compute nodes running Shifter

1. Generate an SSH key pair:
$ mkdir -p ~/.shifter
$ ssh-keygen -t rsa -f ~/.shifter/id_rsa -N ''

2. Figure out the assigned compute nodes:
$ aprun -n $PBS_NUM_NODES -N 1 -b -- hostname

3. SSH to a compute node:
$ ssh -p 204 -i ~/.shifter/id_rsa -o UserKnownHostsFile=/dev/null \
      -o StrictHostKeyChecking=no -o LogLevel=error nidxxxxx
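Since the full ssh invocation in step 3 is long, it can be wrapped in a small shell function. This is a hypothetical convenience helper (not part of Shifter), and the node name used below is a placeholder:

```shell
# Hypothetical wrapper around the long ssh invocation above. Shifter's
# sshd listens on port 204 and uses the key generated in step 1.
shifter_ssh() {
    local node="$1"    # placeholder node name, e.g. nid12345
    echo ssh -p 204 -i ~/.shifter/id_rsa \
        -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no \
        -o LogLevel=error "$node"
}

# Print the command that would be run (drop 'echo' above to actually connect):
shifter_ssh nid12345
```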

SSH to compute nodes running Shifter

$ cat << EOF > ~/.shifter/config
Host *
    Port 204
    IdentityFile ~/.shifter/id_rsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
    LogLevel error
EOF

$ ssh -F ~/.shifter/config nidxxxxx

MPI applications and Shifter

- Requires an MPICH-ABI compliant version of MPI
- Intel or GNU Programming Environment (that is, not Cray)
- Container image with glibc 2.17+ (CentOS 7, Ubuntu 14.04)
- Have to use the TORQUE batch prologue integration:
  qsub ... -l gres=shifter -v UDI=repo/name:tag
- At runtime, the LD_LIBRARY_PATH environment variable is set to the location of the MPICH-ABI shared libraries (plus a little bit of magic)
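The glibc 2.17+ requirement can be checked empirically. The sketch below assumes a glibc-based Linux; to check an image rather than the host, you could run the same command through `docker run <image>` locally (or via `aprun -b ... --` on Blue Waters):

```shell
# Print the glibc version of the current environment; glibc must be
# 2.17 or newer for the MPICH-ABI library substitution to work.
ldd --version | head -n 1
```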

Dockerfile

FROM centos:7
LABEL maintainer "Maxim Belkin <maxim.belkin@gmail.com>"

RUN yum -y install file gcc make gcc-gfortran gcc-c++ wget curl

RUN cd /usr/local/src/ && \
    wget http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz && \
    tar xf mpich-3.2.tar.gz && \
    rm mpich-3.2.tar.gz && \
    cd mpich-3.2 && \
    ./configure && \
    make && make install && \
    cd /usr/local/src && \
    rm -rf mpich-3.2*

RUN cd /usr/local/src/ && \
    wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.2.tar.gz && \
    tar xf osu-micro-benchmarks-5.3.2.tar.gz && \
    cd osu-micro-benchmarks-5.3.2 && \
    ./configure CC=/usr/local/bin/mpicc CXX=/usr/local/bin/mpicxx && \
    make && make install && \
    cd /usr/local/src && \
    rm -rf osu-micro-benchmarks-5.3.2

RUN mkdir test-mpi
COPY ./files/test.c test-mpi/test-mpi.c
WORKDIR /test-mpi
RUN /usr/local/bin/mpicc test-mpi.c -o test-mpi
ENV PATH=/usr/bin:/usr/local/bin:/bin

$ docker build -t repo/name:tag .
$ docker push repo/name:tag

Simple MPI program

#include <mpi.h>
#include <stdio.h>
#include <unistd.h> /* gethostname */

int main(int argc, char** argv) {
    int size, rank;
    char buffer[1024];
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(buffer, 1024);
    printf("hello from %d of %d on %s\n", rank, size, buffer);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

MPI applications and Shifter

qsub -I -l nodes=2:ppn=32 -l walltime=04:00:00 \
     -l gres=shifter -v UDI=mbelkin/centos7-mpich:3.2

cd $PBS_O_WORKDIR
module load shifter

module unload PrgEnv-cray
module unload cce
module load PrgEnv-gnu
module unload cray-mpich
module load cray-mpich-abi

LIBS=$CRAY_LD_LIBRARY_PATH
LIBS=$LIBS:/opt/cray/wlm_detect/default/lib64
CACHE=$PWD/cache.$PBS_JOBID
mkdir -p $CACHE
for dir in $( echo $LIBS | tr ":" " " ); do cp -L -r $dir $CACHE; done
export LD_LIBRARY_PATH=$CACHE/lib:$CACHE/lib64

export CRAY_ROOTFS=UDI
aprun -b -n 64 /test-mpi/test-mpi
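The copy loop above relies on tr to turn the colon-separated LIBS list into whitespace-separated words. That splitting step, shown with placeholder directories instead of the real Cray library paths, works as follows:

```shell
# Split a colon-separated path list into words, as the copy loop in the
# job script does (placeholder paths; on Blue Waters LIBS is built from
# CRAY_LD_LIBRARY_PATH).
LIBS=/opt/libA:/opt/libB:/opt/libC
for dir in $( echo $LIBS | tr ":" " " ); do
    echo "would copy: $dir"
done
```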

Thank you!