Introduction to High Performance Computing (HPC) Resources at GACRC


Introduction to High Performance Computing (HPC) Resources at GACRC
Georgia Advanced Computing Resource Center, University of Georgia
Zhuofei Hou, HPC Trainer
zhuofei@uga.edu

Outline
- GACRC
- High Performance Computing (HPC)
- GACRC Sapelo Cluster

GACRC

Who Are We?
- Georgia Advanced Computing Resource Center
- A collaboration between the Office of the Vice President for Research (OVPR) and the Office of the Vice President for Information Technology (OVPIT)
- Guided by a faculty advisory committee (GACRC-AC)

Why Are We Here?
- To provide computing hardware and network infrastructure in support of high-performance computing (HPC) at UGA

Where Are We?
- Web: http://gacrc.uga.edu
- Web Help: http://gacrc.uga.edu/help/
- Wiki: http://wiki.gacrc.uga.edu
- Wiki Help: https://wiki.gacrc.uga.edu/wiki/getting_help

High Performance Computing (HPC)

Serial:
- A serial problem cannot be broken into parts
- Discrete instructions are executed sequentially
- Only one instruction executes at any moment, on a single processor

Parallel:
- The problem is broken into parts that can be solved concurrently
- Instructions execute simultaneously on multiple processors
- Synchronization/communication between the parts is employed
- Examples: shared-memory multithreaded jobs, or MPI jobs (Message Passing Interface)
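The serial/parallel distinction above can be illustrated even at the shell level. This toy sketch (not GACRC-specific; file names are made up for the demo) runs the same two independent tasks first one after the other, then concurrently as background jobs, with `wait` as the synchronization step:

```shell
#!/bin/sh
# Create two small input files for the demo.
printf 'b\na\n' > words1.txt
printf 'd\nc\n' > words2.txt

# Serial: one instruction stream; the second sort starts
# only after the first has finished.
sort words1.txt > serial1.txt
sort words2.txt > serial2.txt

# Parallel: the two independent parts run concurrently as
# background jobs; `wait` synchronizes before results are combined.
sort words1.txt > par1.txt &
sort words2.txt > par2.txt &
wait    # block until both background jobs finish
cat par1.txt par2.txt > combined.txt
```

Real HPC parallelism (threads, MPI) works on the same principle: independent work proceeds concurrently, with explicit synchronization points where results must be combined.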

GACRC Sapelo Cluster
- Cluster Structural Diagram
- Cluster Overview
- Computing Resources
- Storage Environment
- Software Modules
- Run Batch Jobs: Workflow, Submit Job, and Job Submission Script

9/26/2016 MIBO8150, FALL 2016 6

Cluster Overview

Sapelo is a Linux high-performance computing (HPC) cluster:
- OS: 64-bit CentOS Linux 6.5
- You can log on to:
  - Login node (username@sapelo1.gacrc.uga.edu): edit scripts, submit batch jobs
  - Transfer node (username@xfer.gacrc.uga.edu): transfer, compress, and package data
  - Interactive node (reached from the Login node): run interactive jobs, edit scripts, submit batch jobs
- Queueing system: Torque + Moab, with the qsub, qstat, and qdel commands

Computing Resources

Queue: batch
- AMD Opteron nodes: 112 nodes, 48 cores/node, 128 GB RAM/node (max 126 GB for a single-node job), no GPU, InfiniBand
- AMD Opteron nodes: 4 nodes, 48 cores/node, 256 GB RAM/node (max 252 GB for a single-node job), no GPU, InfiniBand
- HIGHMEM, AMD Opteron: 7 nodes, 48 cores/node; 512 GB RAM/node on 6 nodes (max 504 GB) and 1024 GB on 1 node (max 997 GB), no GPU, InfiniBand
- GPU, Intel Xeon: 2 nodes, 16 cores/node, 128 GB RAM/node (max 126 GB), NVIDIA K40m, 8 GPU cards/node, InfiniBand

Peak performance: 500 Gflops/node

Storage:
- Home: /home/username, 100 GB quota
- Global scratch: /lustre1/username, NO quota limit; data is auto-moved to /project after 30 days without modification!


Storage Environment: 4 Filesystems

- /home/username -- Home, 100 GB quota. For highly static data being used frequently. Snapshots are taken.
- /lustre1/username -- Global scratch, no quota limit. For temporarily storing large data being used by jobs. Auto-moved to /project after 30 days without modification.
- /project/abclab -- Storage, variable quota. For long-term data storage. Group sharing possible.

Note:
- /usr/local/apps: software installation directory
- /db: bioinformatics database installation directory
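With a 100 GB /home quota and the 30-day auto-move on /lustre1, it is worth checking usage periodically. A minimal sketch, run here on a throwaway demo directory (on Sapelo you would point `du` at /home/username or /lustre1/username instead):

```shell
#!/bin/sh
# Build a demo directory so the example is self-contained;
# on the cluster, skip this and use your real paths.
mkdir -p demo_home/big demo_home/small
dd if=/dev/zero of=demo_home/big/data.bin bs=1024 count=64 2>/dev/null
: > demo_home/small/empty.txt

# Total usage of the directory, in KB
# (on Sapelo: how much of the 100 GB /home quota is used).
du -sk demo_home

# Largest subdirectories first -- candidates to migrate to /project
# for long-term storage, or to clean out of /lustre1.
du -sk demo_home/* | sort -rn | head -5
```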

Storage Environment: Accessing Rule of 123

- Login node: access to 1 filesystem (/home)
- Interactive node (via qlogin from Login): access to 2 filesystems (/home, /lustre1)
- Transfer node: access to 3 filesystems (/home, /lustre1, /project)

Software Modules

Sapelo uses environment modules to define paths for software. About 300 modules are currently installed, and the number is expanding daily!

- module avail  -- list all modules available on Sapelo
- module list   -- list the modules currently loaded
- module load   -- load a module you need
- module unload -- unload a module you no longer need


Run Batch Jobs

Components you need to run a job:
- Software already loaded; if not, use module load
- A job submission script to run the software and to specify computing resources:
  - Number of nodes and cores
  - Amount of memory
  - Type of nodes
  - Maximum wallclock time, etc.

Common commands you need: qsub, qstat, qdel

Run Batch Jobs: Submit Job

[zhuofei@n15 workdir]$ pwd              (n15: interactive node)
/lustre1/zhuofei/workdir                (/lustre1/zhuofei/: global scratch)
[zhuofei@n15 workdir]$ qsub sub.sh
1165617.pbs.scm

- qsub submits a job
- sub.sh is your job submission script, specifying:
  - Number of nodes and cores
  - Amount of memory
  - Type of nodes
  - Maximum wallclock time, etc.

Run Batch Jobs: Job Submission Script

Example: serial job submission script sub.sh, running NCBI BLAST+:

#PBS -S /bin/bash                    Linux shell (bash)
#PBS -q batch                        Queue name (batch)
#PBS -N testblast                    Name of the job (testblast)
#PBS -l nodes=1:ppn=1:amd            Number of nodes (1), cores/node (1), node type (AMD)
#PBS -l mem=20gb                     Maximum physical memory (20 GB) used by the job
#PBS -l walltime=48:00:00            Maximum wall-clock time (48 hours; the default is only 6 minutes)

cd $PBS_O_WORKDIR                    Use the directory the job was submitted from as the working directory
module load ncbiblast+/2.2.29        Load the ncbiblast+ module, version 2.2.29
time blastn [options] > outputfile   Run blastn under time to measure how long the application takes
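The script above requests a single core. A sketch of how the same job might be adapted to a shared-memory (multithreaded) run on one node: the ppn count is raised and matched by blastn's -num_threads option. The thread count (4) and job name here are illustrative choices, not prescribed values; these PBS directives only take effect on the cluster.

```shell
#PBS -S /bin/bash
#PBS -q batch
#PBS -N testblast_mt
#PBS -l nodes=1:ppn=4:amd        # still 1 node, but 4 cores on it
#PBS -l mem=20gb
#PBS -l walltime=48:00:00

cd $PBS_O_WORKDIR
module load ncbiblast+/2.2.29
# blastn can use several threads on one node;
# keep -num_threads equal to the ppn value requested above.
time blastn -num_threads 4 [options] > outputfile
```

Requesting more cores than the application actually uses wastes your allocation, so match the two numbers deliberately.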

User Account

User account: UGAMyID@sapelo1.gacrc.uga.edu
A valid official UGA MyID is a MUST to create a user account!

To get a user account:
1. Computing Lab Registration: http://help.gacrc.uga.edu/labacct.php (for the PI of a new group)
2. User Account Request: http://help.gacrc.uga.edu/useracct.php (for the PI of an existing group)
3. New User Training: http://gacrc.uga.edu/help/training/
4. Welcome letter with a full package of information about your Sapelo user account

Useful Links
- GACRC Web: http://gacrc.uga.edu/
- GACRC Wiki: https://wiki.gacrc.uga.edu/wiki/main_page
- GACRC Help: http://gacrc.uga.edu/help/
- GACRC Training: https://wiki.gacrc.uga.edu/wiki/training
- GACRC User Accounts: https://wiki.gacrc.uga.edu/wiki/user_accounts
- GACRC Software: https://wiki.gacrc.uga.edu/wiki/software

Georgia Advanced Computing Resource Center
4098C Stegeman Coliseum
University of Georgia
Athens, GA 30602

Telephone Support, EITS Helpdesk: 706-542-3106
- Monday-Thursday: 8 AM - 10 PM
- Friday: 8 AM - 6 PM
- Saturday-Sunday: 1 PM - 7 PM

Thank You!