Guillimin HPC Users Meeting March 16, 2017
1 Guillimin HPC Users Meeting - March 16, 2017
McGill University / Calcul Québec / Compute Canada
Montréal, QC, Canada
2 Please be kind to your fellow user meeting attendees: limit yourself to two slices of pizza per person to start, please, and please recycle your pop cans. Thank you!
3 Outline
- Compute Canada News
- System Status
- Software Updates
- Training News
- Special Topic: Best Practices for Job Submission
4 Compute Canada News
2017 Resource Allocation Competitions
- Scientific reviews completed
- Announcement of awards: soon!
- Implementation of awards: mid-April 2017
Important notes:
- There will be a small number of migrations of allocations, either partial or full, from Guillimin to other Compute Canada systems (such as the new Cedar and Graham)
- Once the awards are announced, we can help answer any questions regarding migration if your allocation is on a different system
5 Compute Canada News
2017 High Performance Computing Symposium (HPCS), June 5-9, Queen's University, Kingston (Ontario)
- Call for papers and posters: submissions due April 17
6 System Status
March 6 - GPFS unresponsive on login nodes
- Caused by long waiters in GPFS: GPFS communications were unable to complete their actions
- Source: Infiniband communication issues between some worker nodes and the rest of the cluster, which can have adverse effects on general GPFS functions
- Problematic nodes were identified and removed from the cluster network
- GPFS waiters were cleared and regular access was restored in the afternoon of March 6
7 System Status
Upcoming scheduled power maintenance: April (precise dates within the week still to be confirmed)
- Major power maintenance by ETS to upgrade the 25kV feeds to campus and therefore to the HPC Centre
Impact:
- Significantly reduced or no access to worker nodes
- To be confirmed: Guillimin storage and login nodes may be placed on generator power so as to enable access to data during that week
Recommendations:
- Attempt to complete any important project work beforehand
- If you need to work on any code or data during that week, make sure to keep a copy at another site, when feasible
More details to be announced soon
8 New Software Installations
Please use "module spider modulename" for load instructions (see the example below).
- Java/1.6.0_24 - Programming language (old version for compatibility)
- Python/{2.7.12, 3.5.2} - Programming language
- NAMD/2.9-PACE - Molecular dynamics code with PACE force field support
- Stacks/ - Pipeline for building loci from short-read sequences
- SAMtools/ - Manipulates alignments in the SAM format
- HDF5/ serial - Library for storing and managing data (no-MPI version)
- Bazel/ - Build tool for Tensorflow
- Tensorflow/{Python , Python , Python , Python-3.5.2} - Package for machine learning
- Vim/8.0 - The ubiquitous text editor (Vi IMproved)
- tmux/2.3 - Terminal multiplexer
- RandomLib/ - Library for random numbers
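A minimal sketch of looking up and loading one of the modules above (the exact version strings and any prerequisite modules reported by module spider may differ; Python/3.5.2 is used here only as an example from the list above):

    # Show available versions and how to load them
    module spider Python

    # Show details for a specific version, including any prerequisite modules
    module spider Python/3.5.2

    # Load the module as instructed, then verify
    module load Python/3.5.2
    python --version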
9 Training News
All upcoming events: calculquebec.eventbrite.ca
- March 23 - Introduction to OpenMP (McGill)
- Apr. 4 - Analyse de données massives avec Spark (U. Laval)
- Apr. 6 - Programmation en R intermédiaire (UdeM)
- May
Recently completed:
- Feb. - Data Analysis in Ecology, R/Python (UQAM)
- Mar. 2, 9 - Software Carpentry, Python (McGill)
- Mar. 7, 8 - Software Carpentry, Python (U. Laval)
- Mar. - Introduction à OpenMP (U. Sherbrooke)
All materials from previous workshops are available online: wiki.calculquebec.ca/w/formations/en
All user meeting presentations are also available online.
10 User Feedback and Discussion
Questions? Comments? We value your feedback.
Contact us at: guillimin@calculquebec.ca
Guillimin Operational News for Users: see the status pages (all CQ systems)
Follow us on Twitter
11 Best Practices for Job Submission
March 16, 2017
McGill University / Calcul Québec / Compute Canada
Montréal, QC, Canada
12 The Scheduler is Playing Tetris
[Figure: jobs of different priorities placed across nodes over time; legend: high priority (reservation), low priority, lower priority, unused cores]
13 The Scheduler is Playing Tetris
Backfill: a small, low-priority job can run when higher-priority jobs can't.
[Figure: a backfill job filling otherwise unused cores in the nodes-versus-time schedule]
14 Hardware Resources Available on Guillimin
Partition (purpose) | Nodes (Westmere, Sandy Bridge) | Mem/core Westmere (ppn=12) | Mem/core Sandy Bridge (ppn=16)
Debug (SW2) - for short test jobs | 0, 3 | n/a | 4 GB
Serial Workload (SW, SW2) - for serial jobs and "light" parallel jobs | 576, ? | 3 GB | 4 GB
High Bandwidth (HB) - for massively parallel jobs | 384, 0 | 2 GB | n/a
Large Memory (LM, LM2) - for jobs requiring a large memory footprint | 192, ? | 6 GB | 8 GB
Extra Large Memory (XLM2) - limited selection of extra large memory nodes | 0, 12 | n/a | 12, 16 or 32 GB
Accelerated Workload (AW) - nodes with GPUs and Xeon Phis | 0, ? | n/a | ? or 8 GB
15 Let the Scheduler Choose the Right Queue
Node partitions reached for each request type (columns: nodes=1:ppn<12 | ppn=12 | ppn=16 | procs=n, n >= 12):
Default (metaq) | SW, SW2, AW | SW, HB, LM | SW2, LM2, XLM2 | SW, HB, LM, SW2, LM2, XLM2
High Bandwidth (hb) | NOT ALLOWED | HB | SW2 | HB
Serial Workload (sw) | SW, SW2 | SW | SW2 | SW, SW2
Large Memory (lm) | NOT ALLOWED | LM | LM2 | LM, LM2
Accelerated Workload (k20, phi) | AW | AW | AW | AW
Debug (debug) | SW2 | SW2 | SW2 | SW2
16 Let the Default Queue Route Your Serial Job
Routing by #PBS -l value (where n < 12), for walltime <= 36h (serial-short) and walltime > 36h (sw-serial):
#PBS -l nodes=1:ppn=n | SW, SW2, AW | SW
#PBS -l nodes=1:ppn=n:westmere | SW | SW
#PBS -l nodes=1:ppn=n:sandybridge | SW2, AW | SW2
Serial: default memory pmem=2700m (2.7G per core). Recommended: n <= 6, or n=12 otherwise (full node).
Serial (Sandy Bridge): optional memory pmem=3700m (3.7G per core). Recommended: n <= 8, or n=16 otherwise (full node).
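Putting these serial recommendations into a job script, a minimal sketch (the account string, directory, and program are placeholders; the explicit pmem line is optional since 2700m is already the serial default):

    #!/bin/bash
    #PBS -l walltime=24:00:00
    # Serial job: one node, n <= 6 cores recommended (or ppn=12 for a full node)
    #PBS -l nodes=1:ppn=1
    # Optional: memory per core (2700m is already the serial default)
    #PBS -l pmem=2700m
    # Accounting group (placeholder, as on the showq slide later): replace with your own
    #PBS -A abc-123-ax

    cd $SCRATCH/my_serial_run    # placeholder working directory
    ./my_program > output.log    # placeholder serial program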
17 How to Pack Serial Jobs
The Linux operating system can run your process in the background so that your script continues without waiting for it to finish. Use the ampersand symbol, &. The wait command waits for all background processes to finish.

    #!/bin/bash
    #PBS -l walltime=30:00:00
    #PBS -l nodes=1:ppn=12
    SRC=$HOME/program_dir
    cd $SCRATCH/dir1 ; $SRC/prog > output &
    cd $SCRATCH/dir2 ; $SRC/prog > output &
    cd $SCRATCH/dir3 ; $SRC/prog > output &
    ...
    cd $SCRATCH/dir12 ; $SRC/prog > output &
    wait

The same job written as a loop:

    #!/bin/bash
    #PBS -l walltime=30:00:00
    #PBS -l nodes=1:ppn=12
    SRC=$HOME/program_dir
    for i in $(seq 12)
    do
        cd $SCRATCH/dir$i
        $SRC/prog > output &
    done
    wait
18 How to Pack Thousands of Serial Tasks
GNU Parallel is an easy-to-use tool for launching processes in parallel.
Example: testing all combinations of two parameters, {1, 2, 3} x {94, 95, 96}:

    $ parallel echo {1} x {2} ::: $(seq 1 3) ::: $(seq 94 96)
    1 x 94
    1 x 95
    1 x 96
    2 x 94
    2 x 95
    2 x 96
    3 x 94
    3 x 95
    3 x 96
19 GNU Parallel
Run different commands in parallel:
    parallel ::: hostname date 'echo hello world'
Input sources from a file:
    parallel -a input-file echo
Input sources from the command line:
    parallel echo ::: A B C
Input sources from STDIN:
    cat input-file | parallel echo
Input from multiple sources:
    parallel -a abc-file -a def-file echo
    cat abc-file | parallel -a - -a def-file echo    # will operate on each pair of inputs
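A minimal sketch of combining GNU Parallel with a packed serial job on one node (the module name, working directory, program, and parameter range are placeholders; GNU Parallel may be available by default or under a different module name):

    #!/bin/bash
    #PBS -l walltime=12:00:00
    #PBS -l nodes=1:ppn=12

    # Load GNU Parallel if it is provided as a module (name is an assumption)
    module load parallel

    cd $SCRATCH/param_study    # placeholder working directory

    # Run one task per parameter value, at most 12 at a time (one per core)
    parallel -j 12 "./my_program --param {} > out.{}" ::: $(seq 1 1000)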
20 Let the Default Queue Route Your Parallel Job
Node partitions reached for each pmem value; columns are walltime <= 72h (ppn=12 | ppn=16 | procs) followed by walltime > 72h (ppn=12 | ppn=16 | procs):
1700m (*)   | HB, SW, LM | SW2, LM2 | HB, SW, LM, SW2, LM2 | HB | SW2 | HB
2700m (**)  | SW, LM | SW2, LM2 | SW, LM, SW2, LM2 | SW | SW2 | SW
3700m (***) | LM | SW2, LM2 | LM, SW2, LM2 | LM | SW2 | SW2
5700m       | LM | LM2 | LM, LM2 | LM | LM2 | LM, LM2
7700m       | N.A. | LM2 | LM2 | N.A. | LM2 | LM2
>7800m      | XLM2 | XLM2 | XLM2 | XLM2 | XLM2 | XLM2
(*) pmem=1700m is the default if procs>12 or nodes>1
(**) pmem=2700m is the default if procs=12 or nodes=1:ppn=12
(***) pmem=3700m is the default if ppn=16
21 Let the Default Queue Route Your Parallel Job
Parallel (ppn=12, Westmere): #PBS -l nodes=n:ppn=12
- Default pmem: 2700m if n=1, 1700m otherwise
Parallel (ppn=16, Sandy Bridge): #PBS -l nodes=n:ppn=16
- Default pmem: 3700m
Parallel (procs=k, k>11, multiples of 48 are best): #PBS -l procs=k
- Default pmem: 2700m if k=12, 1700m otherwise
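A minimal sketch of a complete parallel (MPI) job built from these directives (the MPI module name and program are placeholders; check module spider for the MPI environments actually installed):

    #!/bin/bash
    #PBS -l walltime=48:00:00
    # 4 Sandy Bridge nodes x 16 cores = 64 MPI ranks; default pmem is 3700m per core
    #PBS -l nodes=4:ppn=16

    # Load an MPI environment (module name is an assumption)
    module load openmpi

    cd $PBS_O_WORKDIR
    mpiexec -n 64 ./my_mpi_program    # placeholder MPI executable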
22 Submission Styles (Accelerators, Debug)
GPUs:
    #PBS -l nodes=2:ppn=16:gpus=2
    #PBS -l pmem=123200m
Reserves two full nodes with 2 GPUs each. Note: pmem is per node for GPU jobs!
Xeon Phi:
    #PBS -l nodes=1:ppn=8:mics=1,pmem=29600m
Public queues:
- Default queue: metaq; generally no need to specify a queue name
- Exception: the debug queue (#PBS -q debug), for test jobs (default walltime 30 mins, max 2 hours)
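A minimal sketch of a full GPU job script using the directives above (the CUDA module name and executable are placeholders):

    #!/bin/bash
    #PBS -l walltime=06:00:00
    # Two full nodes with 16 cores and 2 GPUs each; pmem is per node for GPU jobs
    #PBS -l nodes=2:ppn=16:gpus=2
    #PBS -l pmem=123200m

    # Load a CUDA toolkit module (name is an assumption; check "module spider CUDA")
    module load CUDA

    cd $PBS_O_WORKDIR
    ./my_gpu_program    # placeholder GPU-enabled executable

For a short test run, the same script can instead be sent to the debug queue from the command line with qsub -q debug script.sh.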
23 Private Queues
Queue for each pmem value (columns: walltime <= 72h | walltime > 72h, ppn=12 | walltime > 72h, ppn=16):
1700m  | hbplus | hb | sw2-parallel
2700m  | swplus | sw-parallel | sw2-parallel
3700m  | sw2plus | sw2-parallel | sw2-parallel
5700m  | lm | lm | lm
7700m  | lm2 | N.A. | lm2
>7800m | xlm2 | xlm2 | xlm2
Other queues: k20, phi, debug
24 How to Monitor Your Job in the Queue
Idle queues with partitions, for accurate priority:
    showq -i -p gm-1r16-n04
    showq -i -p k20
    showq -i -p phi
Idle queue for your account:
    showq -i -w acct=abc-123-ax -v
Idle queue for serial jobs:
    showq -i -p gm-1r16-n04 -w qos=serial
Idle queue for any queue, for example debug:
    showq -i -w class=debug
Note: priority ranking goes by QOS (Quality of Service): serial, normal, avx (sw2), lm, xlm2, aw. This way, lm jobs get priority over normal jobs on LM nodes.
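A few related commands for following your own jobs after submission (a sketch; the job ID is a placeholder, and checkjob is part of the Moab tools alongside showq):

    # Submit the job script and note the job ID that is printed
    qsub job_script.sh

    # List your own queued and running jobs
    showq -w user=$USER
    qstat -u $USER

    # Detailed state of one job, including why it is still idle
    checkjob 1234567    # placeholder job ID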
25 Conclusion
Any questions?
For other questions, contact us at: guillimin@calculquebec.ca