Singularity: container formats

Singularity
- Easy to install and configure
- Easy to run/use:
  - no daemons
  - no root
  - works with scheduling systems
- User outside container == user inside container
- Access to host resources:
  - Mount (parts of) filesystems of the host
  - MPI

Singularity: container formats
- Singularity container format
- Unix directory
- Different compressed formats: .tar, .tar.gz, .tar.bz2, cpio, cpio.gz
Supported URIs:
- http:// and https://
- docker://
- shub://

Singularity workflow
- On a user-controlled system (root/superuser):
  - Create a new container
  - Bootstrap/install container
  - Modify container
- Copy / share image
- On a shared computational resource (regular user):
  - Execute / run container
  - Command line shell in container

VirtualBox VM + Peregrine
- Ubuntu VM in VirtualBox
  - username: tutorial
  - password: singularity
  - sudo privileges
- ssh pg-interactive.hpc.rug.nl
  - username: f112511, f112512, ..., f112525
  - password:
  - regular user, no special privileges

Singularity command

USAGE: singularity [global options...] <command> [command options...] ...

GLOBAL OPTIONS:
    -d --debug      Print debugging information
    -h --help       Display usage summary
    -q --quiet      Only print errors
       --version    Show application version
    -v --verbose    Increase verbosity +1
    -x --sh-debug   Print shell wrapper debugging information

GENERAL COMMANDS:
    help        Show additional help for a command

CONTAINER USAGE COMMANDS:
    exec        Execute a command within container
    run         Launch a runscript within container
    shell       Run a Bourne shell within container
    test        Execute any test code defined within container

CONTAINER MANAGEMENT COMMANDS (requires root):
    bootstrap   Bootstrap a new Singularity image from scratch
    copy        Copy files from your host into the container
    create      Create a new container image
    expand      Grow the container image
    export      Export the contents of a container via a tar pipe
    import      Import/add container contents via a tar pipe
    mount       Mount a Singularity container image

For any additional help or support visit the Singularity website:
http://singularity.lbl.gov/

End-goal of this tutorial
Submit a job that uses Singularity to run a Python script using pandas that takes an input file and processes it. The Python script is in /data/f1125xx/scripts/, while the input file is in /data/f1125xx/files/.
- Install Singularity in the VM
- Create an image, populate it with an OS and install pandas and matplotlib
- Copy the image to Peregrine and run it through a job

Exercise #1: Install Singularity
http://singularity.lbl.gov/install-linux
- On Peregrine we use the latest stable version
- Use the latest stable version (2.2.1) from github.com
- Install Singularity: configure, make, make install

./configure --prefix=/usr/local
make
sudo make install
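A possible way to fetch the 2.2.1 sources before running the commands above is sketched here; the exact download URL is an assumption (it reflects the old singularityware GitHub project, which has since moved), so check the install page for current instructions:

# Assumed release tarball location; verify against the install page
wget https://github.com/singularityware/singularity/releases/download/2.2.1/singularity-2.2.1.tar.gz
tar -xzf singularity-2.2.1.tar.gz
cd singularity-2.2.1
./configure --prefix=/usr/local
make
sudo make install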

How to create an empty image
singularity create [options] <image>
  -s/--size    Size of the image in MiB (default 768 MiB)
  -F/--force   Overwrite an image file if it exists
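For example, to create a 2048 MiB image (the name and size here are arbitrary):

sudo singularity create --size 2048 mycontainer.img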

How to populate the container
Reuse an existing image: singularity import
- Easy and quick
- Customize later
- Less reproducible

How to populate the container
singularity import <container> <URI>
  http/https   Pull image using curl
  docker       Docker repository image
  shub         Singularity Hub image
  file         Local file
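For example, pulling a base image from Docker Hub (the Ubuntu tag is just an illustration):

sudo singularity import image.img docker://ubuntu:16.04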

How to access the container
[sudo] singularity shell [options] <container>
  -s/--shell            Specify the interactive shell
  -w/--writable         Shell into the container in write mode
  -c/--contain          Do not carry over /home into the container
  -B/--bind <src:dest>  Mount src in the container at dest
  -H/--home             Use a different home directory in your container
  --pwd <path>          Path inside the container to use as the working directory
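For example, to open a bash shell with a host directory mounted inside the container (the paths are illustrative):

singularity shell --shell /bin/bash --bind /data/f1125xx:/data/f1125xx image.img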

Exercise #2: Build container
- create: Create a Singularity image
- import: Build a container with Python 3.5.2 from Docker
- shell: Shell into the container to test it
  - Verify which operating system is installed
  - Check your username, user id and groups
  - Check the disk space usage
  - Do environment variables like $PATH get transferred from the host to the container?

Exercise #2: Build container
sudo singularity create image.img
sudo singularity import image.img docker://python:3.5.2
singularity shell --shell /bin/bash image.img
cat /etc/os-release
id
df -h
env

How to expand an image
singularity expand [options] <image>
  -s/--size   Size to expand the image BY (default 512 MiB)
Note: this expands the image BY --size, NOT TO --size.
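For example, the following grows image.img by 1024 MiB, so a default 768 MiB image ends up at roughly 1792 MiB:

sudo singularity expand --size 1024 image.img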

How to execute a command
singularity exec [options] <image> <command>
  [options]   Same as for shell
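For example, to check the Python interpreter inside the container built earlier:

singularity exec image.img python3.5 --version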

Exercise #3: Modify and run
- expand: Expand the image size
- shell -w: Shell into the container (with write permission)
- apt, yum, etc.: Upgrade software, install pandas and matplotlib
- Run python3.5 script.py inputfile.csv in the container

Exercise #3: Modify and run
sudo singularity expand --size 1024 image.img
sudo singularity shell --shell /bin/bash -w image.img
apt update
apt upgrade
pip3 install pandas matplotlib
singularity exec image.img python3.5 script.py inputfile.csv

Exercise #4: Copy to Peregrine
- scp: Copy the image to Peregrine
- shell --contain: Shell into the container without /home
  - What does it do?
  - What happens if you store something in /home anyway?
- shell --bind: Map /data/f1125xx into the container

sudo singularity shell --shell /bin/bash -w image.img
mkdir -p /data/f1125xx
scp image.img f1125xx@peregrine.hpc.rug.nl:/data/f1125xx/singularitytutorial
singularity shell --contain --shell /bin/bash image.img
pwd
ls $HOME
singularity shell --bind /data/f1125xx/:/data/f1125xx image.img
ls /data/f1125xx

Exercise #5: Create and submit job
- exec: Create a job script running <script.py> on <file.in>
- Submit the job
- Check the results

nano jobscript.sh
  #SBATCH ...
  singularity exec image.img <command>
sbatch jobscript.sh
less <file.out>
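A minimal job script might look like the sketch below; the SBATCH values and paths are assumptions based on this tutorial's setup, so adjust them to your own account and files:

#!/bin/bash
#SBATCH --job-name=singularity-pandas
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Assumes the job is submitted from the directory containing image.img;
# the script and input paths follow the /data/f1125xx layout used in this tutorial.
singularity exec --bind /data/f1125xx:/data/f1125xx image.img \
    python3.5 /data/f1125xx/scripts/script.py /data/f1125xx/files/inputfile.csv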

How to populate the container
Start from scratch: singularity bootstrap
- Write a definition file
- A bit more complex and time-consuming
- Very reproducible:
  - Easily rebuild your container
  - Share the definition file with others

How to populate the container
singularity bootstrap <image> <definition_file>
  -f/--force   Reinstall the core of the OS in the container

Definition file
Choose a base OS or base container:
- yum: Red Hat / CentOS / Scientific Linux
- debootstrap: Debian / Ubuntu
- arch: Arch Linux
- busybox: BusyBox
- docker: Docker image
Define scriptable sections for:
- setup (run outside the container)
- post (run inside the container)
- runscript (run when executing the container)
- test (for testing the container)

Definition file: example

BootStrap: debootstrap
OSVersion: trusty
MirrorURL: http://us.archive.ubuntu.com/ubuntu/

%post
    echo "Running post-installation steps inside the container"
    apt-get -y install python3.4

%runscript
    exec python3.4 "$@"

%test
    python3.4 -V
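One possible way to build an image from this definition file (the image and file names are illustrative):

sudo singularity create --size 1024 python.img
sudo singularity bootstrap python.img python34.def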

How to run a container
singularity run [options] <image.img> [args]
or:
./image.img [args]
  [options]   Same as for shell/exec
  [args]      Arguments passed to the runscript executable, located inside the container at /singularity
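For example, for an image whose definition file defines a runscript (names here are illustrative, continuing the python.img example above), both of the following invoke python3.4 on myscript.py:

singularity run python.img myscript.py
./python.img myscript.py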

Exercise #6: Automate workflow
- bootstrap: Create a definition file for the workflow and bootstrap your container
- scp, run: Test on Peregrine

nano pandas.def
sudo singularity bootstrap image.img pandas.def
singularity run --bind /data/f1125xx:/data/f1125xx image.img script.py inputfile.csv
./image.img script.py inputfile.csv

Exercise #7 (optional): GPUs
Compile a CUDA application on the login node, e.g.:
http://computer-graphics.se/hello-world-for-cuda.html
Try to run the application in your container on a GPU node:
- Submit it to the gpu partition
- Request one GPU
- Use the GPU node that we reserved for this tutorial: #SBATCH --reservation=singularity
Note that your application needs the Nvidia drivers in order to run on the GPU. On the GPU nodes these can be found in /usr/lib64/nvidia; you will need to make them available in your container as well.
Hint: use a bind mount and set your LD_LIBRARY_PATH, or see:
https://github.com/nih-hpc/gpu4singularity
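A sketch of a GPU job script along these lines; the partition, reservation and driver path come from this exercise, while the binary name and time limit are assumptions:

#!/bin/bash
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --reservation=singularity
#SBATCH --time=00:05:00

# Bind the host's Nvidia driver libraries into the container and point the
# dynamic linker at them; /usr/lib64/nvidia may first need to be created
# inside the container (e.g. with mkdir in a writable shell).
singularity exec --bind /usr/lib64/nvidia:/usr/lib64/nvidia image.img \
    bash -c 'export LD_LIBRARY_PATH=/usr/lib64/nvidia:$LD_LIBRARY_PATH && ./hello_cuda'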

More information
- Singularity website: http://singularity.lbl.gov
- Singularity Hub: https://singularity-hub.org
- Peregrine wiki page: https://redmine.hpc.rug.nl
- RIS Academy website: http://rug.nl/risacademy

Questions