Setup Instructions
Please complete these steps for the June 26th workshop before the lessons start at 1:00 PM: http://hcc.unl.edu/june-workshop-setup#weekfour
Make sure you can log into Crane. OS-specific login directions are here: https://hcc-docs.unl.edu/display/hccdoc/quick+start+guides
(If you don't have an HCC account, find a helper to give you a demo account.)
If you need help with the setup, please put a red sticky note at the top of your laptop. When you are done with the setup, please put a green sticky note at the top of your laptop.
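
For reference, logging into Crane from a terminal looks like the following (Windows users would use an SSH client such as PuTTY instead); replace <username> with your HCC account name:

ssh <username>@crane.unl.edu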

June Workshop Series, June 26th: Anvil (HCC's Cloud) and the Open Science Grid. University of Nebraska-Lincoln, Holland Computing Center. Carrie Brown, Natasha Pavlovikj, Emelie Harstad, Adam Caprez, Jingchao Zhang, Caughlin Bohn

June Workshop Series Schedule
- June 5th: Introductory Bash. Learn how to connect to Holland's High Performance Computing cluster, Crane, and how to navigate the Linux command line with the Bash shell. Topics include the use of pipes, filters, loops, and handling of text files. Lesson materials based on Software Carpentry lessons.
- June 12th: Advanced Bash and Git. Continue mastering the Bash shell by learning how to write reusable shell scripts to help automate tasks. We will also look at how using version control with Git can help streamline collaboration and manage revisions. Lesson materials based on Software Carpentry lessons.
- June 19th: Introductory HCC. Learn the basics of working with Holland resources. Topics include an overview of HCC offerings, submitting batch jobs, handling data, and tips on how to optimize workflow.
- June 26th: Anvil: HCC's Cloud and the Open Science Grid. Learn about Anvil, HCC's OpenStack resource, which gives researchers greater flexibility by offering cloud-based virtual machines that are ideal for applications not well suited to the Linux command line environment. Also, learn about the Open Science Grid (OSG), a nation-wide collection of opportunistic resources that are well suited to large high-throughput applications. Learn which jobs are ideal for OSG and how to begin using this service.

Logistics
- Location: Avery Hall, Room 347
- Name tags, sign-in sheet
- Sticky notes: Red = need help, Green = all good
- 15 min break at about 2:30
- Link to June 26th attendee portal (with command history): http://hcc.unl.edu/swc-history/index.html
- Link to slides: https://hcc-docs.unl.edu/display/hccdoc/June+Workshop+Series+2018

Introduction to the Open Science Grid. HCC June Workshop Series, June 26, 2018. Emelie Harstad <eharstad@unl.edu>, OSG User Support, Applications Specialist, UNL Holland Computing Center

Outline
- What is the OSG?
- Who uses OSG?
- Owned vs. Opportunistic Use
- Characteristics of OSG-Friendly Jobs
- Is OSG Right for Me?
- Hands-on: How to submit jobs to the OSG from HCC clusters

The Open Science Grid
A framework for large-scale distributed resource sharing, addressing the technology, policy, and social requirements of sharing computing resources. The OSG is a consortium of software, service, and resource providers and researchers, from universities, national laboratories, and computing centers across the U.S., who together build and operate the OSG project. Funded by the NSF and DOE.
- More than 50 research communities
- More than 130 sites
- More than 100,000 cores accessible

The Open Science Grid: over 1.6 billion CPU hours per year!

Who is Using the OSG?
Astrophysics, Biochemistry, Bioinformatics, Earthquake Engineering, Genetics, Gravitational-wave physics, Mathematics, Nanotechnology, Nuclear and particle physics, Text mining, and more.

OSG Usage: Wall Hours by VO in the past 30 days (VO = Virtual Organization). Most OSG use is on dedicated resources (used by the resource owners): atlas, cms. About 13% is opportunistic use: osg, hcc, glow.

OSG Jobs
High Throughput Computing: sustained computing over long periods of time; usually serial codes, or a low number of cores (threaded/MPI). vs. High Performance Computing: great performance over relatively short periods of time; large-scale MPI.
Distributed HTC:
- No shared file system
- Users ship input files and (some) software packages with their jobs (see the submit-file sketch below).
Opportunistic Use:
- Applications (especially those with long run times) can be preempted (killed) by the resource owner's jobs.
- Applications should be relatively short or support being restarted.
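
Because there is no shared file system, the HTCondor submit file lists every input file that should travel with the job. A minimal sketch of this pattern, assuming a hypothetical wrapper run_analysis.sh and input file data.in (not from the workshop materials):

universe = vanilla
executable = run_analysis.sh            # wrapper script shipped to the remote worker
transfer_input_files = data.in          # comma-separated list of inputs shipped with the job
should_transfer_files = YES
when_to_transfer_output = ON_EXIT       # copy output files back when the job finishes
output = job.out
error  = job.err
log    = job.log
queue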

OSG Jobs
- Run time: 1-12 hours
- Single-core
- Require < 2 GB RAM
- Statically compiled executables (transferred with the jobs)
- Non-proprietary software
These are not hard limits!
- Checkpointing for long jobs that are preempted: many applications support built-in checkpointing, where the job image is saved periodically so that it can be restarted on a new host after it is killed (without losing the progress made on the first host).
- Limited support for larger-memory jobs.
- Partitionable slots for parallel applications using up to 8 cores.
- Modules available: a collection of pre-installed software packages.
- Can run compiled MATLAB executables.
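
The limits above are usually expressed as resource requests in the submit file, so HTCondor only matches the job to slots that can satisfy them. A sketch of the relevant lines (the values are illustrative, not required):

request_cpus   = 1       # single-core jobs match the largest number of OSG slots
request_memory = 2 GB    # staying near or under 2 GB keeps many sites available
request_disk   = 1 GB    # room for the executable, inputs, and outputs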

Is OSG right for me?
- Are your jobs OSG-friendly? If no, continue submitting to the HCC clusters.
- If yes: would you like access to more computing resources? If no, continue submitting to the HCC clusters; if yes, consider submitting your jobs to OSG.
You will need to change your submit script slightly (to use the HTCondor scheduler). Please contact us for help: hcc-support@unl.edu

For more information on the Open Science Grid: https://www.opensciencegrid.org/
For instructions on submitting jobs to OSG: https://hcc-docs.unl.edu/display/hccdoc/The+Open+Science+Grid

Quickstart Exercise: Submit a simple job to OSG from Crane
ssh <username>@crane.unl.edu
cd $WORK
git clone https://github.com/unlhcc/hccworkshops.git
cd $WORK/HCCWorkshops/OSG/quickstart
Exercise 1: osg-template-job.submit (HTCondor submit script), short.sh (job executable)
Exercise 2: osg-template-job-input-and-transfer.submit, short_with_input_output_transfer.sh
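
The exact contents of the Exercise 1 files live in the HCCWorkshops repository and may differ from this sketch, but a submit file and wrapper of this kind typically look like the following (the 15-second sleep is an illustrative placeholder):

# osg-template-job.submit (sketch)
universe   = vanilla
executable = short.sh
arguments  = 15                               # e.g. seconds for the test job to run
output     = job.$(Cluster).$(Process).out
error      = job.$(Cluster).$(Process).err
log        = job.$(Cluster).log
queue 1

# short.sh (sketch)
#!/bin/bash
# Report where the job landed, then do a short amount of "work".
echo "Running on host: $(hostname)"
sleep "${1:-15}"
echo "Done."

Exercise 2's submit file additionally uses transfer_input_files to ship an input file with the job, as in the earlier sketch.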

Quickstart Exercise: HTCondor Commands
condor_submit <submit_script>   # submit a job to OSG
condor_q <username>             # monitor your jobs
condor_rm <jobid>               # remove a job
condor_rm <username>            # remove all of your jobs
Everything you need to know (and more) about HTCondor submit scripts: http://research.cs.wisc.edu/htcondor/manual/current/condor_submit.html

Scaling Up on OSG

Scaling Up Exercise
cd $WORK/HCCWorkshops/OSG/ScalingUp-Python
scalingup-python-wrapper.sh              # job executable (wrapper)
rosen_brock_brute_opt.py                 # Python script
Example1/ScalingUp-PythonCals.submit     # submit script 1
Example2/ScalingUp-PythonCals.submit     # submit script 2
Example3/ScalingUp-PythonCals.submit     # submit script 3
Example4/ScalingUp-PythonCals.submit     # submit script 4
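
The repository's four Example submit files are not reproduced here; as a general sketch of the HTCondor pattern they build on, namely queueing many independent copies of the same job from one submit file:

universe   = vanilla
executable = scalingup-python-wrapper.sh
arguments  = $(Process)                       # each job receives a distinct integer 0..9
transfer_input_files = rosen_brock_brute_opt.py
output     = job.$(Cluster).$(Process).out
error      = job.$(Cluster).$(Process).err
log        = job.$(Cluster).log
queue 10                                      # submit 10 independent jobs in one cluster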

Python Brute Force Optimization
- 2-D Rosenbrock function: used to test the robustness of an optimization method.
- The Python script rosen_brock_brute_opt.py finds the minimum of the function over a set of points (a grid) between selected boundary values.
- By default, the Python script randomly selects the boundary values of the grid that the optimization procedure will scan over. These values can be overridden by user-supplied values.
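
For reference, the standard 2-D Rosenbrock function (presumably the form rosen_brock_brute_opt.py evaluates) is f(x, y) = (1 - x)^2 + 100*(y - x^2)^2, with its global minimum of 0 at (x, y) = (1, 1). A brute-force search simply evaluates f at every grid point between the boundary values and keeps the smallest value found, which is why the exercise parallelizes naturally: each submitted job can scan a different set of boundary values.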
