Using the Yale HPC Clusters


1 Using the Yale HPC Clusters Robert Bjornson Yale Center for Research Computing Yale University Feb 2017

2 What is the Yale Center for Research Computing?
- Independent center under the Provost's office
- Created to support your research computing needs
- Focus is on high performance computing and storage
- 15 staff, including applications specialists and system engineers
- Available to consult with and educate users
- Manage compute clusters and support users
- Located at 160 St. Ronan St., at the corner of Edwards and St. Ronan

3 What is a cluster?
A cluster usually consists of a hundred to a thousand rack-mounted computers, called nodes. It has one or two login nodes that are externally accessible, but most of the nodes are compute nodes, which are accessed only from a login node via a batch queueing system (BQS). The CPUs used in clusters may be similar to the CPU in your desktop computer, but in other respects cluster nodes are rather different:
- Linux operating system
- Command-line oriented
- Many cores (cpus) per node
- No monitors, no CD/DVD drives, no audio or video cards
- Very large distributed file system(s)
- Connected internally by a fast network

4 Why use a cluster?
Clusters are very powerful and useful, but it may take some time to get used to them. Here are some reasons to go to that effort:
- Don't want to tie up your own machine for many hours or days
- Have many long-running jobs to run
- Want to run in parallel to get results quicker
- Need more disk space
- Need more memory
- Want to use software installed on the cluster
- Want to access data stored on the cluster

5 Limitations of clusters
Clusters are not the answer to all large-scale computing problems. Some of the limitations of clusters are:
- Cannot run Windows or Mac programs
- Not for persistent services (DBs or web servers)
- Not ideal for interactive, graphical tasks
- Jobs that run for weeks can be a problem (unless checkpointed)

6 Summary of Yale Clusters (Dec 2016)

                 Omega         Grace    Louise/Farnam   Ruddle
Role             FAS           FAS      LS/Med          YCGA
Total nodes
Cores/node       8             20       mostly 20
Mem/node         36 GB/48 GB   128 GB   GB
Network          QDR IB        FDR IB   10 Gb EN
File system      Lustre        GPFS     GPFS            NAS + GPFS
Batch queueing   Torque        LSF      Torque/Slurm    Torque
Duo MFA?         No            No       No/Yes          Yes

Details on each cluster here:

7 Setting up an account
Accounts are free of charge to Yale researchers. Request an account at:
After your account has been approved and created, you will receive an email describing how to access your account. This may involve setting up ssh keys or using a secure login application. Details vary by cluster.
If you need help setting up or using ssh, send an email to: hpc@yale.edu.

8 Ssh to a login node
To access any of the clusters, you must use ssh. From a Mac or Linux machine, you simply use the ssh command:
laptop$ ssh netid@omega.hpc.yale.edu
laptop$ ssh netid@grace.hpc.yale.edu
laptop$ ssh netid@louise.hpc.yale.edu
laptop$ ssh netid@farnam.hpc.yale.edu
laptop$ ssh netid@ruddle.hpc.yale.edu
From a Windows machine we recommend using PuTTY:
For more information on using PuTTY see: secure-shell-for-microsoft-windows

9 Ssh key pairs
- We use key pairs instead of passwords to log into clusters
- Keys are generated as a pair: public (id_rsa.pub) and private (id_rsa)
- You should always use a pass phrase to protect the private key!
- You can freely give the public key to anyone
- NEVER give the private key to anyone!
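For reference, a key pair can be generated on your own machine with ssh-keygen. This is a minimal sketch assuming OpenSSH; the exact steps for registering the public key with the cluster are described in your account-creation email:
laptop$ ssh-keygen -t rsa        # accept the default file location and enter a pass phrase when prompted
laptop$ cat ~/.ssh/id_rsa.pub    # the public key: this is the part you share
laptop$ ls -l ~/.ssh/id_rsa      # the private key: never share this file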

10 Sshing to Farnam and Ruddle
Farnam and Ruddle have an additional level of ssh security, using Multi-Factor Authentication (MFA). We use Duo, the same MFA as other secure Yale sites.
Example:
ssh
Enter passphrase for key '/home/bjornson/.ssh/id_rsa':
Duo two-factor login for rdb9
Enter a passcode or select one of the following options:
1. Duo Push to XXX-XXX
2. Phone call to XXX-XXX
3. SMS passcodes to XXX-XXX-9022
Passcode or option (1-3): 1
Success. Logging you in...

11 More about MFA
- Don't have a smartphone? Get a hardware device from the ITS Helpdesk
- Register another phone, e.g. your home or office phone, as a backup
- See

12 Running jobs on a cluster
Two ways to run jobs on a cluster:
Interactive:
- you request an allocation
- the system grants you one or more nodes
- you are logged onto one of those nodes
- you run commands
- you exit, and the system automatically releases the nodes
Batch:
- you write a job script containing commands
- you submit the script
- the system grants you one or more nodes
- your script is automatically run on one of the nodes
- your script terminates, and the system releases the nodes
- the system sends a notification via email

13 Interactive vs. Batch
Interactive jobs:
- like a remote session
- require an active connection
- for development, debugging, or interactive environments like R and Matlab
Batch jobs:
- non-interactive
- can run many jobs simultaneously
- your best choice for production computing

14 General overview of BQS
Each BQS supports:
- Interactive allocation
- Batch submission
- Specifying the resources you need for your job
- Listing running and pending jobs
- Cancelling jobs
- Different node partitions/queues
- Priority rules
For information specific to each cluster and BQS:

15 Interactive allocations
Torque (omega):  qsub -I -q fas_devel
Torque (louise): qsub -I -q general
Torque (ruddle): qsub -I -q interactive
LSF (grace):     bsub -Is -q interactive bash
Slurm (farnam):  srun -p interactive --pty bash
You'll be logged into a compute node and can run your commands. To exit, type exit or ctrl-d.

16 Torque: Example of an interactive job
laptop$ ssh
Last login: Thu Nov 6 10:48 from akw105.cs.yale.internal
[ snipped ascii art, etc ]
login-0-0$ qsub -I -q fas_devel
qsub: waiting for job rocks.omega.hpc.yale.internal to start
qsub: job rocks.omega.hpc.yale.internal ready
compute-34-15$ cd ~/workdir
compute-34-15$ module load Apps/R/3.0.3
compute-34-15$ R --slave -f compute.r
compute-34-15$ exit
login-0-0$

17 LSF: Example of an interactive job
laptop$ ssh
Last login: Thu Nov 6 10:47 from akw105.cs.yale.internal
[ snipped ascii art, etc ]
grace1$ bsub -Is -q interactive bash
Job <358314> is submitted to queue <interactive>.
<<Waiting for dispatch...>>
<<Starting on c01n01>>
c01n01$ cd ~/workdir
c01n01$ module load Apps/R/3.0.3
c01n01$ R --slave -f compute.r
c01n01$ exit
grace1$

18 Slurm: Example of an interactive job
farnam-0:~ $ srun -p interactive --pty bash
c01n01$ module load Apps/R
c01n01$ R --slave -f compute.r
c01n01$ exit
farnam-0:~ $

19 Batch jobs
Create a script wrapping your job. The script:
- declares the resources required
- contains the command(s) to run

20 Example batch script (Torque)
#!/bin/bash
#PBS -q fas_normal
#PBS -m abe -M
cd $PBS_O_WORKDIR
module load Apps/R/3.0.3
R --slave -f compute.r

21 Example of a non-interactive/batch job (Torque)
login-0-0$ qsub batch.sh
rocks.omega.hpc.yale.internal
-bash-4.1$ qstat -1 -n
rocks.omega.hpc.yale.internal:
Job ID   Username   Queue      Jobname    SessID  NDS  TSK  Req'd Memory  Req'd Time  S  Elap Time
         sw464      fas_norm   batch.sh                                   :00:00      R  00:00:34   compute-34-15/2

22 Example batch script (LSF)
The same script in LSF:
#!/bin/bash
#BSUB -q shared
# automatically in submission dir
module load Apps/R/3.0.3
R --slave -f compute.r

23 Example of a batch job (LSF)
[rdb9@grace0 ~]$ bsub < batch.sh
Job <113409> is submitted to queue <shared>.
[rdb9@grace0 ~]$ bjobs
JOBID    USER  STAT  QUEUE   FROM_HOST  EXEC_HOST  JOB_NAME    SUBMIT_TIME
113409   rdb9  RUN   shared  grace0     c13n12     *name;date  Sep 7 11:18
Note the use of <, required for LSF.

24 Slurm: Example of a batch script
The same script in Slurm:
#!/bin/bash
#SBATCH -p debug
#SBATCH -t 00:30:00
#SBATCH -J testjob
module load Apps/R
R --slave -f compute.r

25 Slurm: Example of a batch job
$ sbatch test.sh
Submitted batch job 42
$ squeue -j 42
JOBID  PARTITION  NAME    USER  ST  TIME  NODES  NODELIST(REASON)
42     general    my_job  rdb9  R   0:03  1      c13n10
The script runs in the current directory. Output goes to slurm-jobid.out by default, or use -o.

26 Copy scripts and data to the cluster
On Linux and Mac, use scp and rsync to copy scripts and data files between your local computer and the cluster:
laptop$ scp -r ~/workdir
laptop$ rsync -av ~/workdir
Cyberduck is a good graphical tool for Mac or Windows. For Windows, WinSCP is available from the Yale Software Library:
Other Windows options include:
- Bitvise SSH
- Cygwin (Linux emulator in Windows)
Graphical copy tools usually need configuration when used with Duo. See:
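As an illustration of the scp and rsync commands above (a sketch assuming the Grace login node from the ssh slide and a hypothetical ~/workdir; replace netid with your own NetID):
laptop$ scp -r ~/workdir netid@grace.hpc.yale.edu:~/
laptop$ rsync -av ~/workdir netid@grace.hpc.yale.edu:~/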

27 Summary of Torque commands (Omega/Louise/Ruddle)
Description                        Command
Submit a job                       qsub [opts] SCRIPT
Status of a job                    qstat JOBID
Status of a user's jobs            qstat -u NETID
Detailed status of a user's jobs   qstat -1 -n -u NETID
Cancel a job                       qdel JOBID
Get queue info                     qstat -Q
Get node info                      pbsnodes NODE

28 Summary of qsub options
Description           Option
Queue                 -q QUEUE
Process count         -l nodes=n:ppn=m
Wall clock limit      -l walltime=dd:hh:mm:ss
Memory limit          -l mem=jgb
Interactive job       -I
Interactive/X11 job   -I -X
Job name              -N NAME
Examples:
login-0-0$ qsub -I -q fas_normal -l mem=32gb,walltime=24:00:00
login-0-0$ qsub -q fas_long -l mem=32gb,walltime=3:00:00:00 job.sh

29 Summary of LSF commands (Grace)
Description               Command
Submit a job              bsub [opts] <SCRIPT
Status of a job           bjobs JOBID
Status of a user's jobs   bjobs -u NETID
Cancel a job              bkill JOBID
Get queue info            bqueues
Get node info             bhosts

30 Summary of LSF bsub options (Grace)
Description           Option
Queue                 -q QUEUE
Process count         -n P
Process count         -R span[hosts=1] -n P
Wall clock limit      -W HH:MM
Memory limit          -M M
Interactive job       -Is bash
Interactive/X11 job   -Is -XF bash
Job name              -J NAME
Examples:
grace1$ bsub -Is -q shared -M 32768 -W 24:00       # 32gb for 1 day
grace1$ bsub -q long -M 32768 -W 72:00 < job.sh    # 32gb for 3 days

31 Summary of Slurm commands (Farnam)
Description                 Command
Submit a batch job          sbatch [opts] SCRIPT
Submit an interactive job   srun -p interactive --pty [opts] bash
Status of a job             squeue -j JOBID
Status of a user's jobs     squeue -u NETID
Cancel a job                scancel JOBID
Get queue info              squeue -p PART
Get node info               sinfo -N

32 Summary of Slurm sbatch/srun options (Farnam)
Description        Option
Partition          -p QUEUE
Process count      -c Cpus
Node count         -N Nodes
Wall clock limit   -t HH:MM or DD-HH
Memory limit       --mem-per-cpu M (MB)
Interactive job    srun --pty bash
Job name           -J NAME
Examples:
farnam1$ srun --pty -p general --mem-per-cpu 32g -t 24:00 bash    # 32gb for 1 day
farnam1$ sbatch -p general --mem-per-cpu 32g -t 72:00 job.sh      # 32gb for 3 days

33 Controlling memory usage
It's crucial to understand the memory required by your program.
- Nodes (and thus memory) are often shared
- Jobs have default memory limits that you can override
- Each BQS specifies this differently
To specify 8 cores on one node with 8 GB RAM for each core:
Torque: qsub -lnodes=1:ppn=8 -lmem=64gb t.sh
LSF:    bsub -n 8 -M 8192 t.sh
Slurm:  sbatch -c 8 --mem-per-cpu=8g t.sh

34 Controlling walltime
- Each job has a default maximum walltime
- The job is killed if that is exceeded
- You can specify a longer walltime to prevent being killed
- You can specify a shorter walltime to get resources faster
To specify a walltime limit of 2 days:
Torque: qsub -lwalltime=2:00:00:00 t.sh
LSF:    bsub -W 48:00 t.sh
Slurm:  sbatch -t 2- t.sh

35 Submission Queues
Regardless of which cluster you are on, you will submit jobs to a particular queue, depending on:
- the job's requirements
- access rights for particular queues
- queue rules
Each cluster has its own queues, with specific rules, such as maximum walltime, cores, nodes, jobs, etc.

36 Modules
Software is set up with module:
$ module avail                  find modules
$ module avail name             find a particular module
$ module load name              use a module
$ module list                   show loaded modules
$ module unload name            unload a module
$ module purge                  unload all
$ module save collection        save loaded modules as a collection
$ module restore collection     restore a collection
$ module describe collection    list modules in a collection
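A typical sequence, using the R module that appears elsewhere in these slides (module names vary by cluster; use module avail to see what is installed):
$ module avail R                # list available R modules
$ module load Apps/R/3.0.3      # load a specific version
$ module list                   # confirm what is loaded
$ module unload Apps/R/3.0.3    # unload it when done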

37 Run your program/script
When you're finally ready to run your script, you may have some trouble determining the correct command line, especially if you want to pass arguments to the script. Here are some examples:
compute-20-1$ python compute.py input.dat
compute-20-1$ R --slave -f compute.r --args input.dat
compute-20-1$ matlab -nodisplay -nosplash -nojvm < compute.m
compute-20-1$ math -script compute.m
compute-20-1$ MathematicaScript -script compute.m input.dat
You can often get help from the command itself:
compute-20-1$ matlab -help
compute-20-1$ python -h
compute-20-1$ R --help

38 Running graphical programs on compute nodes
Two different ways:
X11 forwarding
- easy setup
- ssh -Y to the cluster, then qsub -X / bsub -XF / srun --x11
- works fine for most applications
- bogs down for very rich graphics
Remote desktop (VNC)
- more setup
- allocate a node, start a VNC server there, connect via ssh tunnels
- works very well for rich graphics
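A minimal X11 sketch on Grace, assuming an X server is running on your laptop and using the X11 flags from the option tables above (the srun --x11 form is an assumption and may depend on the Slurm installation):
laptop$ ssh -Y netid@grace.hpc.yale.edu
grace1$ bsub -Is -q interactive -XF bash
c01n01$ xterm &    # any graphical program now displays on your laptop; xterm is just an example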

39 Cluster Filesystems
Each cluster has a number of different filesystems for you to use, with different rules and performance characteristics. It is very important to understand the differences. Generally, each cluster will have:
Home:         backed up, small quota; for scripts, programs, documents, etc.
Scratch:      not backed up, automatically purged; for temporary files.
Project:      not backed up; for longer-term storage.
Local HD:     /tmp; for local scratch files.
RAMDISK:      /dev/shm; for local scratch files.
Storage@Yale: university-wide storage (active and archive).
Consider using the local HD or ramdisk for intermediate files. Also consider avoiding intermediate files altogether by using pipes.
For more info:
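As an illustration of the pipe suggestion (a sketch; the filter program and file names are hypothetical), a compressed file can be streamed through a filter without ever writing an uncompressed copy to disk:
$ gunzip -c reads.fastq.gz | myfilter | gzip > filtered.fastq.gz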

40 Large datasets (e.g. Genomes)
Please do not install your own copies of popular files (e.g. genome references). We have a number of references installed, and can install others.
For example, on Ruddle: /home/bioinfo/genomes
If you don't find what you need, please ask us, and we will install them.

41 Wait, where is the Parallelism?
Qsub/Bsub/Sbatch can allocate multiple cores and nodes, but the script runs sequentially on one core of one node. Simply allocating more nodes or cores DOES NOT make jobs faster.
How do we use multiple cores to increase speed? Two classes of parallelism:
- a single job parallelized (somehow)
- lots of independent sequential jobs
Some options:
- Submit many batch jobs simultaneously (not good)
- Use job arrays or SimpleQueue (much better)
- Submit a parallel version of your program (great if you have one)

42 Job Arrays
Useful when you have many nearly identical, independent jobs to run. Starts many copies of your script, distinguished by a task id.
Submit jobs like this:
Torque: qsub -t 1-100 ...
Slurm:  sbatch --array=1-100 ...
LSF:    bsub -J "job[1-100]" ...
Your batch script uses an environment variable:
./mycommand -i input.${PBS_ARRAYID} -o output.${PBS_ARRAYID}
Other BQS are similar; a Slurm version is sketched below.
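For example, a minimal Slurm job-array script (a sketch; mycommand and the input/output names are hypothetical, and Slurm exposes the task id as SLURM_ARRAY_TASK_ID):
#!/bin/bash
#SBATCH -p general
#SBATCH --array=1-100
# each of the 100 copies of this script receives its own task id
./mycommand -i input.${SLURM_ARRAY_TASK_ID} -o output.${SLURM_ARRAY_TASK_ID}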

43 SimpleQueue
Useful when you have many similar, independent jobs to run. Automatically schedules the jobs onto a single PBS allocation.
Advantages:
- Handles startup, shutdown, errors
- Only one batch job to keep track of
- Keeps track of the status of individual jobs
- More flexible than job arrays

44 Using SimpleQueue
1. Create a file containing the list of commands to run (jobs.list):
cd ~/mydir/myrun; prog arg1 arg2 -o job1.out
cd ~/mydir/myrun; prog arg1 arg2 -o job2.out
...
2. Create the launch script:
module load Tools/SimpleQueue
sqcreatescript -q queuename -n 4 jobs.list > run.sh
3. Submit the launch script:
qsub run.sh       # Omega, Ruddle
bsub < run.sh     # Grace
sbatch run.sh     # Farnam
For more info, see

45 Parallel-enabled programs
Many modern programs are able to use multiple cpus on one node. You typically specify something like -p 20 or -t 20 (see the man page).
You must allocate a matching number of cores:
Torque: -lnodes=1:ppn=20
LSF:    -n 20
Slurm:  -c 20
#!/bin/bash
#SBATCH -c 20
myapp -t 20 ...

46 MPI-enabled programs
- Some programs use MPI to run on many nodes
- Impressive speedups are possible
- MPI programs are compiled and run under MPI
- MPI must cooperate with the BQS
#!/bin/bash
#SBATCH -N 4 -n 20 --mem-per-cpu 4G
module load MPI/OpenMPI
mpirun ./mpiprog ...

47 Best Practices
Start slowly
- Run your program on a small dataset interactively
- In another ssh session, watch the program with top. Track memory usage.
- Check outputs
- Only then scale up the dataset and convert to a batch run
Input/Output
- Think about input and output files: number and size
- Should you use the local or RAM filesystem?
Memory
- Use top or /usr/bin/time -a to monitor usage
- Specify memory and walltime requirements
Be considerate! Clusters are shared resources.
- Don't run programs on the login nodes. Use a compute node.
- Don't submit a huge number of jobs. Use SimpleQueue or a job array.
- Don't do heavy IO to /home. Keep an eye on quotas. Don't fill filesystems.
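For example, to check a program's peak memory use on a small test run before scaling up (a sketch assuming GNU time, whose -v option reports "Maximum resident set size"; myprog and input.dat are hypothetical):
compute-34-15$ /usr/bin/time -v ./myprog input.dat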

48 Plug for scripting languages
Learning the basics of a scripting language is a great investment. It is very useful for automating lots of day-to-day activities:
- Parsing data files
- Converting file formats
- Verifying collections of files
- Creating processing pipelines
- Summarizing, etc.

- Python (strongly recommended)
- Bash
- Perl (if your lab uses it)
- R (if you do a lot of statistics or graphing)

49 To get help
Send an email to: hpc@yale.edu
Read documentation at:
Come to office hours: office-hours-getting-help

50 Resources
Very useful table of equivalent BQS commands:
Recommended books:
- Learning Python: Mark Lutz
- Python Cookbook: Alex Martelli
- Bioinformatics Programming Using Python: Mitchell Model
- Learning Perl: Randal Schwartz
