Introduction to HPC @ W&M. Feb 12, 2019. Eric J. Walter, Manager of HPC
1 Introduction to HPC @ W&M
Feb 12, 2019
Laura Hild, System Engineer
Eric J. Walter, Manager of HPC
Jay Kanukurthy, Applications Analyst
2 Using HPC
https://hpc.wm.edu OR https://www.wm.edu/offices/it/services/hpc/atwm/index.php
Important topics:
- Getting an account
- Linux command line / text editors
- Logging into the clusters
- Selecting software
- How to use filesystems efficiently
- How to use the batch system
- Compiling / installing your own applications
- HPC ticket system mail: hpc-help@wm.edu
3 Cluster Diagram
[Slide shows a diagram of the SciClone cluster: the front-ends sciclone.wm.edu and stat.wm.edu, reachable from W&M or via the W&M VPN; sub-clusters with their nodes and CPU types, including bora (bo01-bo55, Intel Broadwell) and hi01-hi07, vortex (vx01-vx36, Opteron Seoul), hurricane (hu01-hu12, Intel Nehalem), wi01-wi26, ha01-ha38 (Opteron Magny-Cours), wh01-wh52 (Opteron Shanghai), ra115-ra186 (Opteron Santa Rosa), ice01/ice02, and meltemi (mlt001-mlt100, Intel Knights Landing / Xeon Phi, 64 nodes / 6400 cores); file servers (storm, rain, breeze, twister, tornado, tempest, mistral, gale) serving /sciclone/home10 (27T), /sciclone/home20 (11T), /sciclone/scr10 (86T), /sciclone/scr20 (73T), /sciclone/scr30 (17T), /sciclone/pscr, /sciclone/data10 (360T), and /sciclone/scr-mlt (74T).]
4 Logging In
Must use a Secure Shell (SSH) client:
- Linux / Mac: built-in (terminal)
- Windows: SSH Secure Shell Client / PuTTY

[ewalter@particle ~]$ ssh hurricane.sciclone.wm.edu
Password:
Last login: Tue Feb 2 13:57 from particle.hpc.wm.edu
William and Mary Information Technology / SciClone Cluster
1 [hurricane]

Questions to ask yourself:
- Am I on or off campus? If you are off campus, log into stat.wm.edu first using your W&M username and password.
- Is my username the same as on my current machine? If it is different, use: ssh <username>@<host>.<domain>
- Do I need graphics? If yes, then log in with ssh -X.
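Putting those three questions together, a minimal off-campus login with X11 forwarding might look like this (the username walter is a placeholder; substitute your own):

    ssh -X walter@stat.wm.edu            # off campus: hop through stat.wm.edu first
    ssh -X hurricane.sciclone.wm.edu     # then continue on to the cluster front-end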
5 Start-up Files & Modules
Start-up files control global and cluster-specific settings.
Using start-up files is the recommended way to select software for a particular cluster.
The $PLATFORM variable:
11 [vortex] echo $PLATFORM
rhel6-opteron
This means that startup on vortex is controlled by .cshrc.rhel6-opteron.
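A minimal sketch of selecting software this way, assuming the tcsh start-up file sits in your home directory and using a module name from the vortex example on the next slide:

    echo $PLATFORM                                         # e.g. rhel6-opteron
    echo 'module load espresso/6.1/intel' >> ~/.cshrc.rhel6-opteron
    # Log out and back in for the new start-up modules to take effect.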
6 Modules and Start-up
12 [vortex] module avail
/usr/local/modules/modulefiles
acml/5.3.1/gcc               mongo/2.6.12
acml/5.3.1/open64            mpfr/3.1.3
acml/5.3.1/pgi               mpi4py/2.0.0/gcc-5.2.0
acml-int64/5.3.1/gcc         mpi4py/3.0.0/gcc-5.2.0
acml-int64/5.3.1/open64      multiwell-2017/gcc
acml-int64/5.3.1/pgi         mvapich2-ib/2.2/gcc-5.2.0
acml-mp/5.3.1/gcc            mvapich2-ib/2.2/intel
acml-mp/5.3.1/open64         mvapich2-ib/2.2/pgi-17.7
acml-mp/5.3.1/pgi            mvapich2-tv-ib/2.2/gcc-5.2.0
acml-mp-int64/5.3.1/gcc      mvapich2-tv-ib/2.2/intel
acml-mp-int64/5.3.1/open64   mvapich2-tv-ib/2.2/pgi

13 [vortex] echo $PLATFORM
rhel6-opteron
14 [vortex] less .cshrc.rhel6-opteron
#
# Environment configuration for RHEL 6.x / Opteron environment
#
# VORTEX
module load isa/seoul
module load espresso/6.1/intel

The isa module controls which software stack is used.

16 [vortex] ml
Currently Loaded Modulefiles:
1) modules           4) isa/seoul
2) maui/r156-gres    5) intel/2017
3) torque/6.1.1.1    6) openmpi/2.1.1/intel
7) espresso/6.1/intel

You must start a new session (log out and back in) to load new start-up modules.
See the online module help for more information.
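The module commands themselves work in any session; a brief sketch using names from the listing above:

    module avail                         # list all software installed on this cluster
    module load mvapich2-ib/2.2/intel    # load an MPI stack built for the Intel compiler
    ml                                   # short for 'module list' -- show loaded modules
    module unload mvapich2-ib/2.2/intel  # remove it again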
7 Files and I/O
There are multiple file-systems available:
- Some are for ongoing / project storage: data, homexx
- Some are for running jobs: scrxx, pscr
- Only data / homexx are backed up
- Use local scratch when possible (every node has some)
Users are responsible for using disk space responsibly!!
Misuse of file-systems can disturb other jobs and can result in administrative action!
- Don't use data10 for job writes or large job reads
- Use scratch space for jobs (90-day purge)
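A minimal sketch of the intended pattern — run out of scratch, then copy results back to backed-up storage (the directory names under scr10 and data10 are placeholders):

    mkdir -p /sciclone/scr10/$USER/myrun    # placeholder scratch run directory
    cd /sciclone/scr10/$USER/myrun
    ./a.out > run.log                       # job reads and writes happen on scratch
    cp run.log /sciclone/data10/$USER/      # keep results on backed-up storage;
                                            # scratch is purged after 90 days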
8 Transferring Files
See hpc.wm.edu -> Using HPC -> Files & Filesystems -> Transferring Files.
Each file-system has a server that runs it.
For direct access you are STRONGLY encouraged to use the recommended node, e.g.:
- Bad (2 hops): logged into bora, cd'd into data10, transferring off-site — files have to hop through bora to get off-site.
- Good (1 hop): do this from tempest instead.
/sciclone/home20 (11T) is served by bora; /sciclone/data10 (360T) is served by tempest.
Globus: we have direct endpoints for most file-systems.
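A minimal transfer sketch along those lines (the full hostname tempest.sciclone.wm.edu, the username, and the paths are placeholders):

    # From your local machine, pull from the server that owns the file-system (1 hop):
    scp walter@tempest.sciclone.wm.edu:/sciclone/data10/walter/results.tar.gz .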
9 Sharing Files & Folders: Permissions I
See the HPC webpage for more information.
33 [hurricane] pwd              (print working directory)
/sciclone/home04/ewalter
34 [hurricane] ls -ld results   (list directory)
drwx------ ewalter hpcf 512 Jun 16 14:37 results

The first character marks a directory (d); the next three groups of three are the permissions for the associated user, the associated group, and other: r = read, w = write, x = execute.
** Directories need to be x to be entered/passed through.

35 [hurricane] ls -l results    (list files in directory)
total 3
-rw------- ewalter hpcf 194 Jun 16 14:37 ww.dat
-rw------- ewalter hpcf 194 Jun 16 14:37 yy.dat
-rw------- ewalter hpcf 194 Jun 16 14:37 zz.dat
10 Common Permissions Tasks I
Change the permissions of a directory — chmod:
24 [hurricane] ls -ld VASP
drwx------ 4 ewalter hpcf 512 Apr ... VASP
25 [hurricane] chmod go+rx VASP
26 [hurricane] ls -ld VASP
drwxr-xr-x 4 ewalter hpcf 512 Apr ... VASP
27 [hurricane] chmod o-rx VASP
28 [hurricane] ls -ld VASP
drwxr-x--- 4 ewalter hpcf 512 Apr ... VASP
29 [hurricane] chmod g-rx VASP
30 [hurricane] ls -ld VASP
drwx------ 4 ewalter hpcf 512 Apr ... VASP

Change the permissions of a directory and everything under it — chmod -R:
32 [hurricane] ls -ld VASP
drwx------ 4 ewalter hpcf 512 Apr ... VASP
33 [hurricane] ls -l VASP
total ...
-rw------- 1 ewalter hpcf ... Apr ... 2014 potpaw_lda.52.tar.gz
-rw------- 1 ewalter hpcf ... Apr ... 2014 potpaw_pbe.52.tar.gz
drwx------ 2 ewalter hpcf ... Aug  6 00:01 vasp.5.3
34 [hurricane] chmod -R g+rx VASP
35 [hurricane] ls -ld VASP
drwxr-x--- 4 ewalter hpcf 512 Apr ... VASP
36 [hurricane] ls -l VASP
total ...
-rw-r----- 1 ewalter hpcf ... Apr ... 2014 potpaw_lda.52.tar.gz
-rw-r----- 1 ewalter hpcf ... Apr ... 2014 potpaw_pbe.52.tar.gz
drwxr-x--- 2 ewalter hpcf ... Aug  6 00:01 vasp.5.3

Use this command if you want to allow group access to your home, scrxx, and dataxx directories.
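The same changes can be written with numeric modes, and a capital X avoids accidentally making plain files executable; a brief sketch:

    chmod 750 VASP        # same as drwxr-x---: owner rwx, group r-x, other nothing
    chmod -R g+rX VASP    # X adds execute/search only to directories
                          # (and to files that are already executable)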
11 Common Permissions Tasks II
How do I change the initial permissions that files and folders are given when created?
Edit your .cshrc file in your home directory and add a umask:
- umask 077 : files get -rw-------, folders get drwx------
- umask 027 : files get -rw-r-----, folders get drwxr-x---
- umask 022 : files get -rw-r--r--, folders get drwxr-xr-x
The HPC default umask is 077.

What groups am I in? groups:
52 [hurricane] groups ewalter
ewalter : hpcf wmall hpcstaff www seadas vasp sysadmin wm hpcadmin wheel hpsmh
My primary (default) group is hpcf and the rest are secondary.

Change the group associated with a file or directory — chgrp:
54 [hurricane] ls -ld project
drwx------ ewalter hpcf 512 Aug 18 20:47 project
55 [hurricane] chgrp hpcstaff project
56 [hurricane] ls -ld project
drwx------ ewalter hpcstaff 512 Aug 18 20:47 project
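A quick way to check what a umask will do is to create a throwaway file and folder; a brief sketch:

    umask 027               # new files: -rw-r-----, new folders: drwxr-x---
    touch newfile
    mkdir newdir
    ls -ld newfile newdir   # verify the resulting permissions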
12 Software and Compilers
There are many software packages available on the HPC systems!
Ways to find out whether a package is available:
- Check the modules on a particular cluster with: module avail
- Look at the software web page
- Install it yourself
- Mail hpc-help@wm.edu
We encourage users to install their own software in their home directory if possible.
We will also do it for you, or at least help, but we get LOTS of requests, so try not to abuse this.
Compilers: Intel, PGI, GNU
MPI libraries: Intel, mvapich2, openmpi
mvp2run: wrapper for all three with extra functionality (node load checking)
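To check for a package from the command line, a brief sketch (fftw is just an example name; module avail prints to stderr, which |& pipes along in tcsh and bash):

    module avail |& grep -i fftw    # search the module list for a package
    module load intel/2017          # then load a toolchain from the list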
13 Using the Batch System
HPC uses Torque (PBS) to schedule and run jobs.
Nodes are selected via the node type; qsub submits the job to the batch system.
27 [vortex] qsub -I -l walltime=30:00 -l nodes=1:vortex:ppn=12
qsub: waiting for job (JobID) to start
qsub: job (JobID) ready
1 [vx01] python prog.py

An interactive job (-I) puts you on a node ready to work.
There are many node types; the default node type is simply the sub-cluster name, e.g. vortex.
It is also possible to select certain subsets within a cluster, or a collection of sub-clusters:
- x5672 : any hurricane or whirlwind node
- c18b : only large-memory vortex nodes
See the online documentation or send mail to hpc-help@wm.edu for more information.
DO NOT RUN JOBS ON FRONT-END/LOGIN MACHINES!
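For example, a node-type request that spans sub-clusters might look like this (the walltime and node count are placeholders):

    qsub -I -l walltime=1:00:00 -l nodes=2:x5672:ppn=8   # two hurricane/whirlwind-class nodes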
14 Using the Batch System II
You can also submit a batch job, which does not run interactively.
First you must write a batch script:

34 [hurricane] cat run
#!/bin/tcsh
#PBS -N test
#PBS -l nodes=1:x5672:ppn=8
#PBS -l walltime=0:10:00
#PBS -j oe
cd $PBS_O_WORKDIR
python prog.py >& prog.out

Line by line:
- #!/bin/tcsh : interpret the following in tcsh syntax
- -N : name of the job
- -l : job specifications (walltime; nodespec)
- -j oe : combine stderr and stdout
- cd $PBS_O_WORKDIR : cd to where I submitted the job
- python prog.py >& prog.out : run the job

35 [hurricane] qsub run

148 [vortex] more test.o(JobID)
Warning: no access to tty (Bad file descriptor)
Thus no job control in this shell
tput: No value for $TERM and no -T specified

The most widely used batch commands:
- qsub : submit job
- qdel : delete job
- qstat : list jobs
- qsu : list my jobs
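The submit/monitor/cancel cycle, as a brief sketch (the job id 12345 is a placeholder for whatever qsub prints):

    qsub run          # submit the script; prints the job id
    qstat -u $USER    # list your queued and running jobs
    qdel 12345        # delete a job by its id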
15 Using the Batch System III
MATLAB example:
107 [hurricane] more run
#!/bin/tcsh
#PBS -N test
#PBS -l nodes=1:x5672:ppn=8
#PBS -l walltime=12:00:00
#PBS -j oe
#PBS -q matlab
cd $PBS_O_WORKDIR
module load matlab
matlab -nodisplay -r "readmatrix" >& OUT

Notes:
- Must add -q matlab for MATLAB jobs
- Load the matlab module (if needed)
- -r runs the named MATLAB file; >& OUT redirects stdout and stderr to the file OUT

108 [hurricane] head readmatrix.m
tic
%parpool(8)
syms a b c d;
meshpoints = meshgenerator();
eigfile = fopen('eigfile.txt', 'wt');
count = 1;
count2 = 1;
%set(0, 'CurrentFigure', 1);
%plot3(0,0,0,'');
%grid on
16 Where to get help?
HPC webpage: hpc.wm.edu
HPC ticket system mail: hpc-help@wm.edu
Using the ticket system is useful since it is monitored by three of us.
WE'RE HERE TO HELP!
17 Q&A