Introduction to Pacman

1 Introduction to Pacman
Don Bahls, User Consultant (Significant Slide Content from Tom Logan)

2 Overview
Connecting to Pacman
Hardware
Programming Environment
Compilers
Queuing System
Interactive Work
Software Stack
Image 1: Picture of Pacman

3 Connecting to Pacman
ARSC supports ssh, sftp and scp clients.
Linux and OS X come with native command-line versions of ssh, sftp and scp.
Windows requires you to download a terminal and file transfer program (and optionally an X11 server). We have the most experience supporting PuTTY, FileZilla, and Xming.
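As a quick sketch, a typical command-line session from Linux or OS X might look like the following; the username and login node hostname below are placeholders, so substitute the account and hostname you were given.
# Log in to a pacman login node (username and hostname are placeholders)
ssh username@pacman1.arsc.edu
# Copy a local file to your pacman home directory with scp
scp input.dat username@pacman1.arsc.edu: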

4 Other File Transfer Alternatives
Mac and Linux have GUI-based file transfer programs (e.g. Fetch and FileZilla) which can be convenient.
Other programs should work so long as they support either sftp or scp.
Plain ftp will not work for transferring files to pacman.
Rsync works too when started via ssh.
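A minimal sketch of the rsync-over-ssh approach, with a placeholder directory name and hostname:
# Mirror a local directory to pacman over ssh with rsync
rsync -av -e ssh myproject/ username@pacman1.arsc.edu:myproject/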

5 Image 2: Filezilla on a Mac

6 Other Windows Connection Options
If you prefer the command line environment, Cygwin offers command line ssh, sftp and scp commands as well as an X11 server.
For more information on Cygwin see: http://

7 PACMAN Penguin Computing Cluster
88 Sixteen-Core Compute Nodes: 2 eight-core 2.3 GHz AMD Opteron processors, 64 GB of memory per node (4 GB per core)
14 Twelve-Core Compute Nodes: 2 six-core 2.2 GHz AMD Opteron processors, 32 GB of memory per node (2.6 GB per core)
3 Large Memory Nodes: 4 eight-core 2.3 GHz AMD Opteron processors, 256 GB of memory per node (8 GB per core)
2 GPU Nodes: 2 quad-core 2.4 GHz Intel processors, 64 GB of memory per node (8 GB per core), 2 M2050 NVIDIA Fermi GPU cards
256 Four-Core Compute Nodes (NEW): 2 dual-core 2.6 GHz AMD Opteron processors, 16 GB of memory per node (4 GB per core)
Infiniband interconnects (QLogic and Voltaire)
278 TB center-wide Lustre file system
Operating System: Red Hat Enterprise Linux
Twelve-Core Login Nodes: 2 six-core 2.2 GHz AMD Opteron processors, 64 GB of memory per node (5.3 GB per core)
1 Large Memory Login Node: 4 eight-core 2.3 GHz AMD Opteron processors, 256 GB of memory per node (8 GB per core)

8 Pacman Storage
Pacman supports the ARSC standard storage locations:
$HOME: small, backed up, not purged
$CENTER: large, not backed up, purged, available on fish and workstations
$ARCHIVE: no quota, backed up, not purged. NOTE: $ARCHIVE is not available from compute nodes! $ARCHIVE is only accessible to the pacman login nodes.
ARSC recommends that you avoid accessing $HOME in parallel jobs.
Purging is not currently enabled on $CENTER but may be enabled at a future date (with ample warning).
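These locations are available as environment variables in your shell and in job scripts, so a common pattern (sketched below with a hypothetical project directory) is to stage parallel work in $CENTER rather than $HOME:
# Create a working directory on the large, job-friendly file system
mkdir -p $CENTER/myproject
cd $CENTER/myproject
# Copy inputs out of $HOME before submitting a parallel job
cp $HOME/inputs/config.dat .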

9 Monitoring Storage Usage
Quotas are enabled on $HOME and $CENTER. Use the show_storage command to list your current usage:
pacman1 % show_storage
Current Filesystem Quotas as of Thu Jan 10 17:33:06 AKST 2013:
Filesystem  Used_GB  Soft_GB  Hard_GB  Files  Soft Files  Hard Files
==========  =======  =======  =======  =====  ==========  ==========
$ARCHIVE
$CENTER
$HOME
/projects

10 Parallel Programming Models
Pacman supports:
Threads (OpenMP and pthreads)
MPI, using the OpenMPI distribution with Infiniband support
Serial applications
Autoparallelization (via the PGI compiler)
GPU acceleration (on several nodes)

11 Parallel Programming Models
Level | Model | Description
Node | Auto | Automatic parallelization of basic loops. Only available with PGI compilers (use the -Mconcur= option)
Node | OpenMP | Explicit shared memory model using directives to achieve loop-level parallelism (use the -mp option with PGI compilers)
System | MPI | Most common and portable method for scalable distributed memory parallelism
Node | GPU | Pacman has two nodes with Nvidia GPU accelerators. The CUDA toolkit 4.0 is available.

12 Modules
Pacman uses a package called modules which allows you to quickly and easily switch from one software version to another.
This makes it possible for us to provide multiple versions of a software package.
One person can access the newest version of a package while someone else has an unchanging environment.

13 Modules Modules do the following: Set your $PATH, $MANPATH, $LD_LIBRARY_PATH Set environment variables needed by packages (e.g. NCARG_ROOT for NCL, etc). Keep track of what they set so the environment variables and aliases can be unset when a module is unloaded.

14 Modules
By default all accounts have a compiler stack loaded, which is set using the modules package.
You can put alternate and additional module commands in shell login files.
Module Name | Description
PrgEnv-pgi | Programming environment using the PGI compilers & OpenMPI stack (default module).
PrgEnv-gnu | Programming environment using GNU compilers & OpenMPI stack.
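For instance, to make the GNU stack your default at login, a minimal sketch for a bash login file might look like the following (~/.bashrc is an assumption; use whichever login file your shell reads):
# Make module commands available in non-interactive shells, then swap stacks
source /etc/profile.d/modules.sh
module swap PrgEnv-pgi PrgEnv-gnu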

15 Sample Module Commands
Command | Example Use | Purpose
module avail | module avail | lists all available modules for the system
module load <module> | module load PrgEnv | loads a module file from the environment
module unload <module> | module unload PrgEnv | unloads a module file from the environment
module list | module list | displays the modules which are currently loaded
module switch old new | module switch PrgEnv-pgi PrgEnv-gnu | replaces the module old with module new in the environment
module purge | module purge | unloads all module settings, restoring the environment to the state before any modules were loaded

16 Module Use
Currently available modules include: abaqus, cmake, ferret, git, grads, idl, jdk, matlab, nco, ncl, ncview, octave, petsc, python, totalview
More continue to be added to the system constantly.
If the default version of a package is not what you want, check to see if a newer version is available.
The modules command lets you see all versions of a package (e.g. module avail ncl lists all ncl modules).

17 Module Command Examples
pacman3 % module avail PrgEnv-pgi
/usr/local/pkg/modulefiles
PrgEnv-pgi/10.5   PrgEnv-pgi/12.5(default)   PrgEnv-pgi/11.2   PrgEnv-pgi/9.0.4
pacman3 % module list
No Modulefiles Currently Loaded.
pacman3 % module load PrgEnv-pgi
pacman3 % module list
Currently Loaded Modulefiles:
  1) pgi/12.5   2) openmpi-pgi/   3) PrgEnv-pgi/12.5
pacman3 % module swap PrgEnv-pgi/12.5 PrgEnv-pgi/11.2
pacman3 % module list
Currently Loaded Modulefiles:
  1) PrgEnv-pgi/11.2   2) openmpi-pgi/   3) pgi/11.2
pacman3 % which pgf90
/usr/local/pkg/pgi/pgi-11.2/linux86-64/11.2/bin/pgf90

18 Modules in jobs
If you use an alternate module to compile, be sure to load that module in your job script so the correct shared libraries are available.
#!/bin/bash
#PBS -q standard_4
...
source /etc/profile.d/modules.sh
module purge
module load PrgEnv-pgi/11.2
mpirun -np 128 ./wrf.exe

19 Modules in jobs
You could also use module switch or module swap:
#!/bin/bash
#PBS -q standard_4
...
source /etc/profile.d/modules.sh
module swap PrgEnv-pgi PrgEnv-pgi/11.2
mpirun -np 128 ./wrf.exe

20 Last Words on Modules
You can create your own modules if you want to maintain multiple versions of programs you build yourself.
If you have questions about how to do it, let us know.
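As a rough sketch of what that looks like in practice, you can keep personal modulefiles in a directory of your own and add it to the module search path; the directory and module names below are hypothetical.
# Add a personal modulefile directory to the module search path
module use $HOME/modulefiles
# List and load your own module alongside the system ones
module avail
module load mytool/2.0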

21 Compilers on Pacman
Item | MPI | PGI | GNU
Fortran 77 | mpif77 | pgf77 | g77
Fortran 90/95 | mpif90 | pgf90 | gfortran
C | mpicc | pgcc | gcc / cc
C++ | mpiCC | pgCC | g++ / c++
Serial Debugger | - | pgdbg | gdb
Parallel Debugger | totalview | totalview | totalview
Performance | - | pgprof | gprof
Default Module | PrgEnv-pgi, PrgEnv-gnu | PrgEnv-pgi | PrgEnv-gnu
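A minimal sketch of how these commands are used; the source and program names are placeholders:
# Serial Fortran 90 build with the PGI compiler
pgf90 -O3 -o myprog myprog.f90
# MPI C build with the OpenMPI wrapper from the loaded PrgEnv module
mpicc -O3 -o hello hello.c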

22 A few PGI compiler options
-c | Generate object file but don't link
-g | Add debugging information
-O3 | Higher level of optimization (default is -O2)
-fast | Higher level of optimization than -O3
-Mipa | Perform interprocedural analysis
-Mconcur | Enable autoparallelization
-mp | Enables parallelization via OpenMP directives
-Minfo | Provides additional information on optimizations done by the compiler. This flag has numerous options (e.g. -Minfo=mp,par)
-Mneginfo | Provides additional information on why optimizations are not done by the compiler.
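For example, a build that combines several of these options might look like the following sketch (the source file name is a placeholder):
# Aggressive optimization plus OpenMP, with feedback on what the compiler parallelized
pgf90 -fast -mp -Minfo=mp,par -o myprog myprog.f90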

23 Batch System
Batch queuing:
Allows job scheduling on shared compute nodes
Requires users to specify resource requests
Ensures that nodes are shared fairly
Manages resources to maximize throughput
Pacman uses the Torque/Moab batch queuing system, which is based on PBS, the Portable Batch System.

24 Batch Script Contents
1. Queuing Commands
Shell dialect (e.g. bash, ksh)
Execution queue to use
Job walltime
Number of nodes required
2. Commands to Execute
File/Directory manipulations
Code to execute

25 Torque Script - MPI Example
#!/bin/bash
#PBS -q standard_16
#PBS -l walltime=8:00:00
#PBS -l nodes=4:ppn=16
#PBS -j oe

cd $PBS_O_WORKDIR
mpirun ./myprog
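Once a script like this is saved to a file, submitting and checking on it takes only a couple of commands; the script file name below is a placeholder.
# Submit the script and then check its status
qsub mpi_job.pbs
qstat -u $USER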

26 Torque Script - MPI Example
#!/bin/bash
#PBS -q standard
#PBS -l walltime=8:00:00
#PBS -l nodes=6:ppn=16
#PBS -j oe

cd $PBS_O_WORKDIR
# Use only 2 PEs per node
mpirun -np 12 -npernode 2 ./myprog

27 Torque Script - OpenMP
#!/bin/bash
#PBS -q standard
#PBS -l walltime=8:00:00
#PBS -l nodes=1:ppn=16
#PBS -j oe

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=16
./myprog

28 Torque Script - Shared Node
#!/bin/bash
#PBS -q shared
#PBS -l walltime=8:00:00
#PBS -l nodes=1:ppn=1
#PBS -j oe

cd $PBS_O_WORKDIR
./myprog

29 Torque Script - Data Staging
#!/bin/bash
#PBS -q transfer
#PBS -l walltime=8:00:00
#PBS -l nodes=1:ppn=1
#PBS -j oe

cd $PBS_O_WORKDIR
batch_stage $ARCHIVE/mydataset/*
cp -r $ARCHIVE/mydataset/* . || exit 1
qsub mpi_job.pbs

30 Torque Script - MPI Example (4 core nodes)
#!/bin/bash
#PBS -q standard
#PBS -l walltime=8:00:00
#PBS -l nodes=4:ppn=4
#PBS -j oe

cd $PBS_O_WORKDIR
mpirun -np 16 --mca mpi_paffinity_alone 1 ./myprog
# --mca mpi_paffinity_alone 1 enables processor affinity.

31 Common PBS commands
qsub job.pbs - queue the script job.pbs to be run by PBS.
qstat - list jobs which are queued, running or recently completed.
qdel jobid - delete the job with ID=jobid. The jobid is shown by the qstat command and is also provided when qsub is run.
qmap - show a graphical list of the current work on nodes.

32 Common Queues
standard_4 - regular work. Uses 4-core nodes. 240 hour max walltime.
standard (standard_16) - regular work. Uses 16-core nodes. 48 hour max walltime.
standard_12 - regular work. Uses 12-core nodes. 240 hour max walltime.
debug - for quick turnaround debugging work. Runs on 16-core nodes.
background - lowest priority queue, but doesn't require that you have an allocation.
shared - shared node jobs, for running serial codes on shared nodes.
transfer - for data transfer to and from $ARCHIVE. Be sure to bring all $ARCHIVE files online using batch_stage prior to the file copy.
Use "news queues" to find out more details on the number of nodes and walltime allowed per queue.

33 Which Queue Should I Use?
General recommendations:
standard_16 - for larger parallel jobs that don't need long run times.
standard_4 - for jobs that need fewer than 16 cores, or larger jobs that need long run times.
standard_12 - for single node jobs that need more than 4 cores and long run times.

34 Example Batch Queue Use
# Submit job
pacman1 % qsub run_hello.cmd
206782.scyld
# Check job status
pacman1 % qstat 206782.scyld
Job id          Name           User   Time Use  S  Queue
206782.scyld    run_hello.cmd  bahls  0         Q  debug
# Check job status a bit later
pacman1 % qstat 206782.scyld
Job id          Name           User   Time Use  S  Queue
206782.scyld    run_hello.cmd  bahls  00:00:01  C  debug
# Output from job is returned to the working directory by default.
pacman1 % ls run_hello.cmd.o*
run_hello.cmd.o206782

35 Select Torque Environment Variables
Torque sets environment variables for use within job scripts:
$PBS_JOBID - Job id assigned by Torque.
$PBS_NP - Number of tasks assigned to the job.
$PBS_O_WORKDIR - Original working directory on job submission.
$PBS_ARRAYID - Assigned to the array index when -t is used.

36 Example Torque Environment Variable Use
#!/bin/bash
#PBS -l nodes=4:ppn=16
#PBS -q standard_16
#PBS -j oe
#PBS -l walltime=1:00:00

cd $PBS_O_WORKDIR
mpirun -np ${PBS_NP} ./a.out > job.${PBS_JOBID}.out 2>&1

37 More Torque Features
There are a number of other features that we aren't going to cover today, but may be useful:
Job dependencies - make one job run after another.
Job arrays - let you submit an array of jobs.
Command line options - you can override settings in a Torque script when a job is submitted.
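A quick sketch of the first two features, with placeholder script names:
# Make a second job wait for the first to finish successfully (qsub prints the new job id)
JOBID=$(qsub pre_process.pbs)
qsub -W depend=afterok:${JOBID} main_run.pbs
# Submit a 10-element job array; each element sees its index in $PBS_ARRAYID
qsub -t 1-10 array_job.pbs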

38 Interactive Work
We have a number of login nodes on pacman.
The login nodes pacman3 through pacman9 have higher memory and CPU limits for interactive work.
Check who or top to see if someone else is doing something that might not work well with what you want to do.

39 Example Interactive Work
# Load a Programming Environment
pacman4 % module load PrgEnv-gnu
# Compile the code
pacman4 % make
mpicc hello.c -O3 -o hello.exe
# Run it interactively (don't use -np greater than 12)
pacman4 % mpirun -np 4 ./hello.exe
pacman4: hello world- 0
pacman4: hello world- 3
pacman4: hello world- 1
pacman4: hello world- 2

40 Interactive Work Continued
If your code uses significant amounts of memory or you are interested in timing measurements, you can get dedicated interactive nodes. (Subject to node availability.)
# request a 2 node debug job.
pacman4 % qsub -l nodes=2:ppn=16 -q debug -I
qsub: waiting for job scyld to start
qsub: job scyld ready
n118 %
n118 % cd $PBS_O_WORKDIR
n118 % module load PrgEnv-gnu
n118 % mpirun -np 32 ./hello.exe
...
n118 % exit

41 Software Stack
Pacman has a software stack based on requests from people on campus.
If we have a package installed, there's likely already someone on campus using that package.
When possible we put example job scripts and test cases in $SAMPLES_HOME.
If you want a test case for a package, let us know and we will make one!

42 Meteorology, Earth Sciences, Oceanography Related Software
Netcdf library (version 3, 4)
pnetcdf
NCO - Netcdf Operators
NCL - NCAR Command Language
HDF4
HDF5
GMT
GRADS
Python with add-ons
Octave with Netcdf support
Matlab*
IDL**
PyNGL
Ferret
NCView

43 Engineering & Chemistry Software
Engineering: OpenFOAM CFD, Abaqus**, COMSOL**, Matlab, Octave
Chemistry: Gaussian, NWChem, NAMD

44 Visualization Software
Ferret, GNUPlot, Grace, NCL, NCView, Python matplotlib, ImageMagick, Octave, VisIt

45 Other Software
Python with many add-ons, R, CUDA

46 Additional Information
Watch for upcoming training and additional information about ARSC.
More info at:
ARSC How to: http://
PGI Compilers: http://
GNU Compilers: http://gcc.gnu.org/

47 Exercises
If you would like some practice on things covered today, see: $SAMPLES_HOME/training/introToPacman
This covers modules, interactive work and batch jobs.

48 Questions and Feedback
If you have additional questions, feel free to contact us:
ARSC Help Desk
Phone: (907)
Web: http://
We welcome your feedback on this class!
