User Guide of High Performance Computing Cluster in School of Physics
Prepared by Sue Yang

This document aims to help users quickly log into the cluster, set up their software environment and get their jobs running. It does not cover the cluster's hardware or operating system (e.g. the Linux shell, file systems, etc.), nor does it cover the use of software development tools or parallel programming. If required, those topics may be included in the future.

Connecting to the cluster

The full host name of the cluster is headnode.physics.usyd.edu.au. You can use secure shell (ssh) to connect to the cluster (using either the short name or the full name):

~ > ssh headnode

Software environments

You often need to set up the software environment for a program you wish to use on a computer system, for example adding PBS to your search PATH. This can be specified in your login shell script file, .cshrc or .bashrc. If you have already done so, you can keep using that. Otherwise, you are encouraged to use the Environment Modules package for this purpose. The package provides a convenient way to customise your shell environment, especially on the fly. To find the list of software for which a module can set up your environment, enter module avail:

headnode: ~ > module avail
/usr/physics/modules/3.2.8/modulefiles
IntelCompilerSuite  PBS  ROOT-v5.28  openmpi gnu  openmpi intel

headnode: ~ > module whatis PBS
PBS : Sets up torque and maui in your environment

You can set up an environment on the fly, e.g. for PBS:

headnode: ~ > module load PBS

or add this line to your .cshrc permanently for PBS:

module load PBS

so that the configuration for PBS is done for you each time you log in. You can then run PBS commands such as qsub, qstat etc., or man qsub to get information about qsub.

You may need to unload a package before loading another one to avoid conflicts. For example, if you have already set up openmpi with the Intel compilers and now want to use it with the GNU compilers, do this:

headnode: ~ > module unload openmpi intel
headnode: ~ > module load openmpi gnu

The command module help or man module will give you more information about how to use the Environment Modules package.
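Two other standard Environment Modules subcommands are often useful when switching packages: module list shows what is currently loaded in your shell, and module purge unloads everything so that you can start from a clean environment. A minimal example session (the module names are simply those from the module avail listing above):

headnode: ~ > module list
headnode: ~ > module purge
headnode: ~ > module load PBS IntelCompilerSuite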
Workload Management System (PBS)

This section covers the topics: What is PBS; Basic PBS user commands; Available queues on the cluster; Job submission and job script template; Tips for specifying resources; Additional job script templates; Monitoring jobs; Interactive jobs; Array jobs.

What is PBS

PBS is a distributed workload management system. As such, PBS handles the management of computational workload on a set of compute nodes. PBS plays three primary roles: queuing, scheduling and monitoring jobs.

From the user's perspective, PBS allows you to make more efficient use of your time. You specify the tasks you need executed; the system takes care of running these tasks and returning the results to you. If the available compute nodes are full, PBS holds your work and runs it when the resources become available.

To use PBS, you create a batch job which you then submit to PBS. A batch job is a file (a shell script) containing a set of commands you want to run on a set of execution machines. It also contains directives which specify the characteristics (attributes) of the job and the resource requirements (e.g. number of processors, amount of memory and length of time) that your job needs. Once you create your PBS job, you can reuse it if you wish, or modify it for subsequent runs.

Basic PBS user commands

headnode: ~ > qsub run-job.csh
    submit a job with the job script file run-job.csh
headnode: ~ > qstat -u uid
    display job status for user uid only
headnode: ~ > qstat -n
    display the status of all jobs
headnode: ~ > qstat -Q
    show the available queues
headnode: ~ > qstat -f job-id
    display detailed status for the specified running job
headnode: ~ > qdel job-id
    delete the job job-id

All PBS client commands are in headnode:/usr/physics/torque/bin. Use the man page of each command for detailed usage.
Available queues on the cluster

Queue name for all physics users (jobs will run on node and 31-35): physics
Queue name for Complex Systems users (jobs will run on nodes 21-23): yossarian
Queue name for Medical Physics users (jobs will run on node and 31-35): hippocrates
Queue name for Condensed Matter Theory users (jobs will run on nodes 41-45): cmt
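The queue is normally selected with a #PBS -q directive inside the job script (as in the template below), but qsub also accepts the same option on the command line. A small illustrative example, using the run-job.csh script introduced in the next section:

headnode: ~ > qsub -q physics run-job.csh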
Job submission and Job Script Template

Job submission is done by running the PBS command qsub:

headnode: ~ > qsub run-job.csh

where run-job.csh is a batch job script which contains qsub options and the commands/programs that you want to run. Here is an example run-job.csh:

#!/bin/csh
#PBS -N MyJobName
#PBS -o demo.txt
#PBS -j oe
#PBS -q yossarian
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:01:00
#PBS -m ea
#PBS -M username@physics.usyd.edu.au
#PBS -V
cd "$PBS_O_WORKDIR"
# your commands/programs start here, for example:
hostname
exit

If you submit this job, it will generate a file demo.txt containing the hostname of the node it ran on. The output may also contain harmless TTY warnings related to using tcsh rather than bash.

Notes on the example run-job.csh:

#!/bin/csh
    Indicates that the script will run under the C shell.
Lines starting with #PBS are options of the PBS command qsub:
-N MyJobName
    The name for your job.
-o demo.txt
    The filename to which standard output from your job is written.
-j oe (optional)
    Merges stdout and stderr into the output file. Otherwise, PBS will automatically create a separate error log.
-q yossarian
    Selects which PBS queue to use. Use the queue corresponding to your group.
-l nodes=1:ppn=4
    Specifies the CPU resources required; 4 processors on 1 node are requested here.
-l walltime=00:01:00
    The maximum wall time requested to run the job; 1 minute is requested here. Warning: if the job hasn't finished when it reaches this walltime, your job will be killed.
-m ea
    Sends an email notification when the job ends or aborts.
-M username@physics.usyd.edu.au
    Your email address.
-V
    Declares that all environment variables in the qsub command's environment are to be exported to the batch job. If this directive is omitted, your job may be terminated because e.g. $TERM is not set.
cd "$PBS_O_WORKDIR"
    Change to the directory from which you submitted the job (the variable $PBS_O_WORKDIR contains that path).
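A typical sequence after writing the script is to submit it, check that it is queued or running, and then look at the output file once it has finished. A minimal sketch using only the commands already introduced above:

headnode: ~ > qsub run-job.csh
headnode: ~ > qstat -u uid
headnode: ~ > cat demo.txt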
Tips for specifying resources

The main cluster resources are compute nodes, processors, memory and execution time. Multiple users share the resources on the cluster. The general advice is to request resources as accurately as your job needs them.

As seen above, you specify the number of nodes and the number of processors with the option -l nodes=*:ppn=*. nodes designates how many nodes your job should be executed on, and ppn specifies the number of processors that will be allocated on each node. For example,

-l nodes=1:ppn=1
    1 processor on 1 node. This is what you should use for a non-parallel program.
-l nodes=2:ppn=1
    1 processor per node, for a total of 2 processors.
-l nodes=1:ppn=14
    14 processors on 1 node. This request will cause the queue to reject your job because no node has that many processors.

PBS will reserve the number of nodes and processors you have specified for your job, no matter how many processors your job actually runs on. These nodes and processors will not be given any new tasks while your job is running. On the other hand, if you request -l nodes=1:ppn=1 for a Matlab job which uses a matlabpool of size 8 (so it will run on 8 processors), PBS won't know that your Matlab program uses 8 processors and may assign some processors on the node to other jobs. Your job and the other jobs will share 7 processors, and this will cause all of the jobs to slow down. It is therefore important that you request the correct number of nodes and processors for your job.

Each node has about 32GB of swap space, which means that when jobs use up all the physical memory, memory swapping will occur to keep the jobs running. Memory swapping slows down all jobs running on the node, too. You can reserve a certain amount of physical memory by specifying -l mem=??mb or -l mem=??gb (the maximum amount of physical memory used by the job) to avoid using swap space. For example,

-l mem=3gb
    reserves 3GB of physical memory for your job.

A little trial and error may be required to find out how much memory your job is using. Your job will only run when there is sufficient free memory (more than 3GB in the above example), so making a sensible memory request will allow your job to start sooner. If your job needs more memory than you have specified, it will be terminated when it reaches the mem limit. A user may reserve more memory for a job by simply requesting all (or more) processors on a node instead of specifying the required memory size. This works, but it blocks jobs of other users with smaller memory usage from running.

It is recommended that you use -l walltime=* rather than -l cput=* to specify how much time your program is allowed to run for. walltime literally refers to wall time, the amount of time that a clock on the wall shows (as opposed to CPU time, the time all processors actually spend on a task). Once it reaches the walltime, your job will be terminated by PBS. It is always best to make this request as accurate as possible.
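Putting these together, a job script can carry one -l line per resource; the values below are only placeholders, not recommendations:

#PBS -l nodes=1:ppn=8
#PBS -l mem=16gb
#PBS -l walltime=12:00:00

For a running job, the qstat -f output from Torque typically includes resources_used fields (memory, CPU time, walltime), which can help you calibrate future requests:

headnode: ~ > qstat -f job-id | grep resources_used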
Additional Job Script Templates

Job script for running a Matlab program:

#!/bin/csh
#PBS -q physics
#PBS -l walltime=1:00:00
#PBS -l nodes=1:ppn=4
#PBS -V
#PBS -N test-matlab
#PBS -m ea
#PBS -M
#PBS -j oe
#PBS -o output.txt
cd ${PBS_O_WORKDIR}
# run the matlab file yourmatlabscript.m:
matlab -nodisplay -r "yourmatlabscript, exit"

Job script for running an MPI program:

#!/bin/csh
#PBS -q physics
#PBS -l walltime=10:00:00
#PBS -l nodes=4:ppn=2
#PBS -V
#PBS -N test-mpi
#PBS -m ea
#PBS -M firstname.lastname@sydney.edu.au
#PBS -j oe
#PBS -o output.txt
cd "$PBS_O_WORKDIR"
mpirun -n 8 yourmpicode   # n = nodes x ppn (see the resource request)
exit
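Before submitting the MPI script, the program needs to be built against the MPI installation that mpirun will use. A minimal sketch, assuming the openmpi/Intel module listed by module avail provides the usual OpenMPI compiler wrapper mpicc, and that yourmpicode.c is your (illustrative) source file:

headnode: ~ > module load openmpi intel
headnode: ~ > mpicc -o yourmpicode yourmpicode.c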
Monitoring jobs

Use the command qstat -n to view the status of all submitted jobs. Alternatively, you can monitor the execution of your jobs with qload or qtop. By default, qload shows a list of all jobs currently in the queue, a summary of which users are using the system, and information on the workload over the cluster. For example,

headnode: ~ > qload
Job ID  Job name  Owner  Queue  N/CPU  Time remaining  Status
...     Sensor    sxy    cmt    5/25   1h 00m 00s      Running

USER LOAD
1- SXY/XUE YANG (25 CORES)  1h 00m 00s remaining

AVAILABILITY: Medical 0/0, Complex 0/0, CMT 0/0
[followed by a per-node availability listing showing, for each node, the processors in use and the free memory]

Your jobs are coloured red in the node availability report, so you can see which nodes your jobs are running on. Several switches are available for qload:

-a        view jobs from all users, not just yourself
-u USER   view jobs from a different user, and highlight their jobs instead of yours. If you combine -u and -a, it will show jobs from all users, with highlights for the user specified with -u.
-s        only show a summary of node availability (to quickly check available resources)

If you want to delete your job before it finishes, use the qdel command and provide your Job ID from qload. To remove the job Sensor owned by sxy as shown above, user sxy would run qdel followed by that job's ID.

Interactive jobs

You can start an interactive session via PBS by using qsub -I. This creates an interactive job, and you will be given a shell on a compute node as though you had used ssh. For example:

headnode: ~ > qsub -I -q physics
qsub: waiting for job 2945.headnode.physics.usyd.edu.au to start
qsub: job 2945.headnode.physics.usyd.edu.au ready
node02: ~ >

This is ideal for compiling code and testing. When using an interactive job, you can specify the number of nodes and CPUs to lock out (although requesting more than one node for an interactive job is only useful if you are going to be using mpirun). For example,

user@headnode: ~ > qsub -I -l nodes=1:ppn=8

would start an interactive job that locks out an entire node. Interactive jobs also appear in qload.

Please do not use interactive jobs to perform unattended runs (e.g. with batch or screen). Interactive jobs are ONLY for attended interactive use. By default, interactive jobs will terminate after 1 hour. You can increase this by setting walltime with the -l flag, just as in a PBS script file. Please do not start interactive jobs with excessive walltime requests.
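For example, to ask for an attended interactive session of up to four hours on two processors (the values are only illustrative), combine the options already described:

headnode: ~ > qsub -I -q physics -l nodes=1:ppn=2 -l walltime=4:00:00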
Array jobs

Array jobs are one of the most powerful features of PBS for single-CPU jobs, and a very compelling reason for many users to learn and switch to the PBS system. They are useful when you want to run the same program many times, operating on different input files or with different input arguments. Array jobs allow you to quickly submit all of the jobs at once, and several instances of your job will run at the same time.

For example, suppose I had a directory with files data1.csv, data2.csv and data3.csv, and I wanted to run my program myprog FILE on each of them. I can do this very easily using the -t option:

#!/bin/csh
#PBS -N MyJobName
#PBS -o demo.txt
#PBS -q yossarian
#PBS -l nodes=1:ppn=4
#PBS -l walltime=00:01:00
#PBS -m ea
#PBS -M username@physics.usyd.edu.au
#PBS -V
#PBS -t 1-3
cd "$PBS_O_WORKDIR"
myprog data${PBS_ARRAYID}.csv

The -t switch instructs PBS to submit this as an array job. You can specify a range of indices (1-3) or individual indices (1,3,5). For each index, PBS creates a separate job, so submitting this script will cause 3 jobs to be created, each of them requesting 4 CPUs on 1 node. The variable $PBS_ARRAYID stores the value of the array index in each submitted job, so each of the 3 jobs will run with a different value of $PBS_ARRAYID. In this way, myprog will run on each of the 3 data files, even though only one script was submitted to PBS.

You can of course do fancier things with the index, such as using more sophisticated scripting to operate on the array ID before calling your program; see the sketch after this section. Another useful way to use the array ID is as an argument to a Matlab function. For example, if the command in the PBS script was

matlab -nodisplay -r "mymatlabscript(${PBS_ARRAYID});exit"

then mymatlabscript.m would be run for each of the different array ID values. You can then write code in Matlab to decide what each of the array ID values will do.
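One simple way to do such scripting in csh is to use the array ID to pick an entry from a list of parameter values inside the job script. A minimal sketch of the relevant fragment of a job script (the parameter values and the call to myprog are only placeholders):

#PBS -t 1-4
cd "$PBS_O_WORKDIR"
# one parameter value per array index; csh lists are indexed from 1
set params = ( 0.1 0.5 1.0 2.0 )
set p = $params[$PBS_ARRAYID]
myprog $p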