Shifter and Singularity on Blue Waters


Shifter and Singularity on Blue Waters. Maxim Belkin, June 7, 2018.

A simplistic view of a scientific application: DATA -> My Application -> RESULTS.

Received an allocation on Blue Waters! The same pipeline now exists in two places: DATA -> My Application -> RESULTS on Your Computer, and DATA -> My Application -> RESULTS on Blue Waters.

A more realistic view of a scientific application: on Your Computer, My Application turns DATA into RESULTS and relies on LIBRARY X, EXECUTABLE Y, and HARDWARE Z.

A more realistic view of a scientific application: on Blue Waters, the same application sits on top of a different stack, LIBRARY A, EXECUTABLE B, and HARDWARE C.

Software containers: a container packages LIBRARY X and EXECUTABLE Y together with My Application, so the same DATA -> RESULTS pipeline runs on HARDWARE C (the Blue Waters hardware).

Shifter (https://github.com/nersc/shifter): designed for HPC.
Docker (https://github.com/docker): not suitable for HPC.
Singularity (https://github.com/singularityware/singularity): can be used in an HPC environment.

Docker. A typical Dockerfile:

FROM centos:7
RUN yum update && ... wget ...
COPY file.ext inside/container/
RUN ./configure && make && ...
WORKDIR /home/user

1) Build: docker build -t repo/name:tag .
2) Upload: docker push repo/name:tag
3) Download: docker pull repo/name:tag
4) Run: docker run repo/name:tag
4a) Execute: docker exec <container> command
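
As a concrete illustration of that cycle (a sketch: the repository name myuser/myapp:1.0 and the container name myapp-test are hypothetical):

$ docker build -t myuser/myapp:1.0 .
$ docker push myuser/myapp:1.0
$ docker pull myuser/myapp:1.0
$ docker run -d --name myapp-test myuser/myapp:1.0 sleep infinity   # keep the container running
$ docker exec myapp-test ls /home/user                              # run a command inside it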

https://github.com/nersc/shifter https://github.com/singularityware/singularity

Shifter workflow on Blue Waters (request it with -l gres=shifter16): the user connects to a login node via ssh, submits a job with qsub, the job script runs on a MOM node, and aprun launches the application on the compute nodes; images are pulled from Docker (DockerHub).

Running through a User-Defined Image (UDI) requested at submission time:
qsub -l gres=shifter16 -v UDI=repo/name:tag (on a login node)
export CRAY_ROOTFS=SHIFTER (in the job script, on the MOM node)
aprun -b ... -- app-in-container --options
The application from the UDI then runs on every compute node.

Running through the Shifter command-line interface:
qsub -l gres=shifter16 (on a login node)
module load shifter (in the job script, on the MOM node)
aprun -b ... -- shifter --image=repo/name:tag -- app-in-container --options
The application then runs on every compute node.
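
For this second route, a minimal complete batch script might look like the following sketch (the image, resource requests, walltime, and test command are placeholders):

#!/bin/bash
#PBS -l gres=shifter16
#PBS -l nodes=1:ppn=32:xe
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
module load shifter
# Run a trivial command inside the container on one compute node
aprun -b -n 1 -- shifter --image=centos:latest -- cat /etc/os-release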

The Shifter module provides two commands, shifter and shifterimg:
shifter: launches an app from a UDI on compute nodes
shifterimg images: returns info about available UDIs
shifterimg lookup: returns the ID of an image (if available)
shifterimg pull: downloads a Docker image and converts it to a UDI
shifterimg login: authenticates to DockerHub (for private images)

Examples:
shifterimg images (list images available on Blue Waters)
shifterimg lookup ubuntu:zesty (is this image already on Blue Waters?)
shifterimg pull centos:latest (fetch an image from DockerHub)
shifterimg pull --user bw_user prvimg1:latest (private image restricted to user bw_user)
shifterimg pull --group bw_grp prvimg2:latest (private image restricted to group bw_grp)
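
A hedged example of the typical order of operations (image name as above; output omitted):

shifterimg pull centos:latest       # download the image from DockerHub and convert it to a UDI
shifterimg images | grep centos     # confirm the UDI is now available on Blue Waters
shifterimg lookup centos:latest     # print its image ID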

Recap: My Application turns DATA into RESULTS and depends on LIBRARY X, EXECUTABLE Y, and HARDWARE Z.

MPI in Shifter: requirements. In a container:
1. glibc 2.17+ (CentOS 7, Ubuntu 14)
2. Use one of the following MPIs (see MPICH ABI compatibility): MPICH v3.1+, Intel MPI Library v5.0+, Cray MPT v7.0.0+, MVAPICH2 2.0+, ParaStation MPI 5.1.7-1+, IBM MPI v2.1+
3. Do not use a package manager to install MPI

MPI in Shifter: requirements. On Blue Waters:
1. Use the Intel or GNU programming environment
2. Load the MPICH ABI-compatible MPI (on Blue Waters: cray-mpich-abi)
3. Set LD_LIBRARY_PATH to the location of the MPICH ABI libraries

MPI in Shifter: in the Dockerfile

FROM centos:7
RUN yum -y install file gcc make gcc-gfortran gcc-c++ wget curl
RUN cd /usr/local/src/ && \
    wget http://www.mpich.org/static/downloads/3.2/mpich-3.2.tar.gz && \
    tar xf mpich-3.2.tar.gz && \
    rm mpich-3.2.tar.gz && \
    cd mpich-3.2 && \
    ./configure && \
    make && make install && \
    cd /usr/local/src && \
    rm -rf mpich-3.2*
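
With MPICH built into the image, the application itself is typically compiled against it in a later build step so that it picks up the MPICH ABI. A hedged sketch (mpi_hello.c is a hypothetical source file; mpicc is installed into /usr/local/bin by the make install above):

mpicc -O2 -o /usr/local/bin/mpi_hello mpi_hello.c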

MPI in Shifter: job batch script snippet

# Switch to the GNU programming environment
module unload PrgEnv-cray
module unload cce
module load PrgEnv-gnu
# Swap the default Cray MPICH for the MPICH ABI-compatible version
module unload cray-mpich
module load cray-mpich-abi
# Copy the host MPI (and wlm_detect) libraries into a job-local cache
LIBS=$CRAY_LD_LIBRARY_PATH:/opt/cray/wlm_detect/default/lib64
CACHE=$PWD/cache.$PBS_JOBID
mkdir -p $CACHE
for dir in $( echo $LIBS | tr ":" " " ); do cp -L -r $dir $CACHE; done
# Point the container at the cached host libraries
export LD_LIBRARY_PATH=$CACHE/lib:$CACHE/lib64

MPI in Shifter

1. Shifter CLI:
qsub -l gres=shifter16 -l nodes=2:ppn=32:xe
module load shifter
aprun -b -n 64 -N 32 -- shifter --image=repo/image:tag -- app
aprun -b -n 64 -N 32 -- shifter --image=repo/image2:tag -- app2

2. User-Defined Image (UDI):
qsub -l gres=shifter16 -v UDI=repo/image:tag -l nodes=2:ppn=32:xe
export CRAY_ROOTFS=SHIFTER
aprun -b -n 64 -N 32 -- app
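
Putting the last two slides together, a complete job script for the UDI route, submitted with qsub -l gres=shifter16 -v UDI=repo/image:tag job.pbs, might look like the following sketch (image name, application name, and resource requests are placeholders):

#!/bin/bash
#PBS -l nodes=2:ppn=32:xe
#PBS -l walltime=01:00:00
cd $PBS_O_WORKDIR
export CRAY_ROOTFS=SHIFTER

# Host-side MPI setup from the previous slide
module unload PrgEnv-cray
module unload cce
module load PrgEnv-gnu
module unload cray-mpich
module load cray-mpich-abi
LIBS=$CRAY_LD_LIBRARY_PATH:/opt/cray/wlm_detect/default/lib64
CACHE=$PWD/cache.$PBS_JOBID
mkdir -p $CACHE
for dir in $( echo $LIBS | tr ":" " " ); do cp -L -r $dir $CACHE; done
export LD_LIBRARY_PATH=$CACHE/lib:$CACHE/lib64

# Launch the containerized MPI application inside the UDI
aprun -b -n 64 -N 32 -- app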

[Figure: MPI in Shifter, job start-up time (s). Panel (a): T(N, ppn=1) vs. number of nodes (2 to 4096); panel (b): T(N=80, ppn) vs. processes per node (1 to 16) on 80 nodes. Curves compare a 36 MB image and a 1.7 GB image, launched via the Shifter CLI and via the job prologue. Hon Wai Leong, DOI: 10.1145/3219104.3219145]

[Figure: Performance of MPI applications in Shifter vs. the native Cray Linux Environment (CLE). Panel (a): MPI_Alltoall and MPI_Alltoallv time (µs) vs. message size (bytes); panel (b): MPI_Bcast and MPI_Reduce time (ms) vs. message size (kilobytes); additional panels: IOR read and write bandwidth (GB/s) vs. benchmark number. Galen Arnold, DOI: 10.1145/3219104.3219145]

Using GPUs in Shifter
1. Use the Shifter CLI: shifter --image ...
2. Set CUDA_VISIBLE_DEVICES:
export CUDA_VISIBLE_DEVICES=0
aprun -b ... -- shifter --image=centos:latest -- nvidia-smi
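
In a full batch script this might look like the following sketch (the xk node request, walltime, and single-rank launch are assumptions for a Blue Waters GPU node; the slide's centos:latest / nvidia-smi example is reused):

#!/bin/bash
#PBS -l gres=shifter16
#PBS -l nodes=1:ppn=16:xk
#PBS -l walltime=00:10:00
cd $PBS_O_WORKDIR
module load shifter
export CUDA_VISIBLE_DEVICES=0
aprun -b -n 1 -- shifter --image=centos:latest -- nvidia-smi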

Using MPI + GPUs in Shifter
Caveat: the shifter command removes LD_LIBRARY_PATH.
Solution: pass LD_LIBRARY_PATH through another variable and restore it in a wrapper script:

aprun -b -n ... -N 1 -e VAR="$LD_LIBRARY_PATH..." \
    shifter --image=repo/name:tag ... -- script.sh

script.sh:
export LD_LIBRARY_PATH=$VAR
application
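
A hedged end-to-end sketch of that pattern (the image name, rank counts, and the wrapper name run_in_container.sh are placeholders; the wrapper must be executable and reachable from inside the UDI):

# In the job script: forward the host LD_LIBRARY_PATH under a different name
aprun -b -n 4 -N 1 -e VAR="$LD_LIBRARY_PATH" \
    -- shifter --image=repo/name:tag -- /path/to/run_in_container.sh

# run_in_container.sh: restore the library path, then start the application
export LD_LIBRARY_PATH=$VAR
exec my_gpu_mpi_app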

Directory mapping between Blue Waters and the Shifter UDI

shifter ... --volume=/path/on/bw:/path/in/udi
Important: /path/in/udi must already exist in the UDI.

qsub -v UDI="centos:latest -v /u/sciteam/$USER:/home"

Note: /projects, /scratch, and home folders are mapped between Blue Waters and the UDI automatically.
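
For example (a sketch: the scratch path is a placeholder, and /mnt is chosen as the target because it already exists in the centos:latest image):

aprun -b -n 1 -- shifter --image=centos:latest \
    --volume=/scratch/sciteam/$USER/input:/mnt -- ls /mnt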

https://github.com/nersc/shifter https://github.com/singularityware/singularity

Singularity workflow on Blue Waters (request it with -l gres=singularity): the user connects to a login node via ssh, submits a job with qsub, the job script runs on a MOM node, and aprun launches the application on the compute nodes; images are pulled from Docker (DockerHub).

Singularity on Blue Waters:
module load singularity
singularity build IMAGE.simg docker://centos:latest
qsub -l gres=singularity ...
singularity exec -H /u:/home IMAGE.simg -- app
The application then runs on the compute nodes.
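
A minimal job script for this workflow might look like the following sketch (IMAGE.simg is assumed to have been built beforehand; the application, resource requests, and walltime are placeholders):

#!/bin/bash
#PBS -l gres=singularity
#PBS -l nodes=1:ppn=32:xe
#PBS -l walltime=00:30:00
cd $PBS_O_WORKDIR
module load singularity
aprun -b -n 1 -- singularity exec -H /u:/home IMAGE.simg app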

Directory mapping between Blue Waters and Singularity

singularity exec -B /path/on/bw:/path/in/singularity

Unlike with Shifter, /path/in/singularity does not have to exist in the image.
Note: /projects on Blue Waters is /mnt/abc/projects in Singularity, and /scratch on Blue Waters is /mnt/abc/scratch in Singularity.
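
For example (a sketch: the scratch path is a placeholder, and /input need not exist in the image):

singularity exec -B /scratch/sciteam/$USER/input:/input IMAGE.simg ls /input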

GPUs in Singularity

singularity exec --nv ...

If you're using Blue Waters directories, make sure they are mapped into the Singularity image.
Example:
aprun -n 16 -N 8 -b -- singularity exec -B /dsl/opt/cray/nvidia/default/bin:/home/staff/mbelkin /home/staff/mbelkin/nvidia-smi
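
A hedged sketch of the --nv route (IMAGE.simg is the image built earlier; --nv asks Singularity to bind the host NVIDIA driver and tools into the container, assuming Singularity can locate them on the host):

aprun -n 1 -N 1 -b -- singularity exec --nv IMAGE.simg nvidia-smi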

Thank you!