Introduction to HPCC at MSU


1 Introduction to HPCC at MSU Chun-Min Chang Research Consultant Institute for Cyber-Enabled Research Download this presentation:

2 How this workshop works We are going to cover some basics with hands-on examples. Exercises are denoted by the following icon in this presentation: Bold italic means commands which I expect you to type in your terminal in most cases.

3 Green and Red Sticky Use the sticky notes provided to help me help you. - No sticky = I am working - Green = I am done and ready to move on (yea!) - Red = I am stuck and need more time and/or some help If you have not followed the set-up instructions: please do so now

4 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

5 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

6 icer Overview What does icer stand for? Institute for Cyber-Enabled Research. Established in 2009 to encourage and support the application of advanced computing resources and techniques by MSU researchers. Mission: reducing the mean time to science (latency and throughput); icer's mission is to help researchers with the computational components of their research

7 icer Overview icer is a research unit at MSU. We: maintain the university's supercomputer, organize training/workshops, provide 1-on-1 consulting, and help with grant proposals

8 Funding From the Office of the Vice President for Research and Graduate Studies (VPRGS), the College of Engineering, the College of Natural Science, and the College of Social Science. This allows us to provide services and resources for FREE!!!

9 Our Fleet

10 Why use the HPCC? (Latency and Throughput) Your own computer is too slow. You have large amounts of data to process (hundreds of gigabytes). Your program requires massive amounts of memory (64 GB - 6 TB). Your work could be done in parallel. You need specialized software (in a Linux environment). You need licensed software (Stata, MATLAB, Gaussian, etc.). Your work takes a long time (days, weeks). You want to collaborate using shared programs and data. You use software that can take advantage of accelerator cards (graphics processors).

11 HPC vs PC Each computing node in a cluster is a single computer. Your computer vs. the 2014 cluster:
Processors: 1 vs. 2 per node (each processor has multiple cores)
Speed: 2.7 GHz on the cluster, comparable to a typical desktop
Connection to other computers: campus Ethernet, 1,000 Mbit/sec vs. InfiniBand, 50,000 Mbit/sec
Computers (nodes): 1 (8 cores total) vs. 223 (4,460 cores total)
Users: 1 vs. ~1,200
Schedule: on demand vs. 24/7 through a queue
High performance => running work in parallel, for long periods of time

12 MSU HPC System Large-memory nodes (up to 6 TB!), GPU-accelerated cluster (K20, M1060), Phi-accelerated clusters (5110P), over 550 nodes and 7,000 computing cores, access to a high-throughput Condor cluster, new 2 PB high-speed parallel scratch file space, 50 GB replicated file spaces (upgradable to 1 TB), access to a large open-source software stack. Shared!!! (& more nodes coming in 2016!!!)

13 Cluster Resources (for Computing Nodes): year, name, description, ppn, memory
2007 intel07: quad-core 2.3 GHz Intel Xeon
2010 gfx10: NVIDIA CUDA node (no IB), ppn 8, 18 GB
2010 intel10: Intel Xeon E5620 (2.40 GHz), ppn 8, 24 GB
2011 intel11: Intel Xeon 2.66 GHz, including large-memory configurations (TB range)
2014 intel14: Intel Xeon E5 v2 (2.6 GHz), ppn 20, 64 GB; plus nodes with NVIDIA K20 GPUs, nodes with Xeon Phi 5110P cards, and large-memory nodes (48 or more cores, up to 6 TB)

14 Available Software Compilers, debuggers, and profilers: Intel compilers, Intel MPI, OpenMPI, MVAPICH, TotalView, DDD, DDT. Libraries: BLAS, FFTW, LAPACK, MKL, PETSc, Trilinos. Commercial software: Mathematica, FLUENT, Abaqus, MATLAB. Note: this does not include software installed privately under users' own space.

15 Online Resources icer.msu.edu: icer home. hpcc.msu.edu: HPCC home. wiki.hpcc.msu.edu: HPCC user wiki, documentation and user manual. Contact HPCC for: reporting system problems, HPC program writing/debugging consultation, help with HPC grant writing, system requests, and other general questions.

16 Training For workshops and registration details, visit:

17 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

18 Get an Account PIs can request accounts (for each group member) via the HPCC website. Each account has access to up to: 50 GB of replicated file space (/mnt/home/userid), 520 processing cores, and 363 TB of high-speed scratch space (/mnt/scratch/userid). Also available: a shared group folder upon request. Three types of space: home space /mnt/home/$USER, group space /mnt/research/<group>, scratch space /mnt/scratch/$USER and /mnt/ls15/scratch/groups/<group>

19 Connecting to the HPCC using ssh Windows: we recommend MobaXterm as the connection client. Insert the thumb drive, open the appropriate folder, look for MobaXterm, and install it. Server: hpcc.msu.edu or rdp.hpcc.msu.edu; username = your NetID; password = your NetID password. OSX: open Spotlight from the menu bar icon (or Cmd+Space), type terminal, and in the terminal type: ssh netid@gateway.hpcc.msu.edu

20 Log in to gateway Our supercomputer is a remote service

21 Successful connection HPCC Gateway: the HPCC message is shown, the hostname is gateway, and the command prompt is at the bottom. (Note: the file list on the left side is not present in the Mac terminal.)

22 Gateway Nodes Shared: accessible by anyone with an MSU-HPCC account; a shared resource with hundreds of users on the gateway nodes. The only means of accessing HPCC compute resources. Useful information (status, file space, messages). ** DO NOT RUN ANYTHING ON THESE NODES! **

23 ssh to a dev node Our supercomputer is a remote service

24 All aboard! If you're not connected to the HPCC, please do so now. From the gateway, please connect to a development node: ssh dev-intel10 or ssh dev-gfx10. ssh == secure shell (allows you to securely connect to other computers). Take a look at what you have in your account (ls). Take a look at what is available on the system (ml). Jump to another dev node and exit.

25 MobaXterm (ssh to a dev node)

26 Development Nodes Shared: accessible by anyone with an MSU-HPCC account. Meant for testing / short jobs; currently, up to 2 hours of CPU time. Development nodes are identical to compute nodes in the cluster. Evaluation nodes are unique nodes. Node names are descriptive (name = feature, number = year). (HPCC Quick Reference Sheet)

27 Compute Nodes Use them by job submission (reservation). Dedicated: you request # of cores, memory, and walltime (advanced users can also request accelerators, temporary file space, and licenses). Queuing system: when those resources are available for your job, they are assigned to you. Two modes: batch mode, where you write a script that runs a series of commands (a PBS job script), and interactive mode (qsub -I; a sketch follows below).
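As a rough sketch of the interactive mode mentioned above (the resource values here are placeholders, not recommendations; adjust them to your job):
qsub -I -l walltime=01:00:00 -l nodes=1:ppn=1 -l mem=2gb   # request an interactive session: 1 core, 1 hour, 2 GB
# when the prompt returns you are on a compute node; load modules and run commands as usual, e.g.
module load MATLAB
exit   # end the session and release the resources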

28 Nodes & Clusters A node is a single computer, usually with a few processors and many cores. Each node has it s own host name/address like 'dev-intel14' A cluster is composed of dozens (or hundreds) of nodes, tied together with high speed network Types of Nodes: gateway : computer available to outside world for you to login ; doorware only; not enough to run programs. From here, one can connect to other nodes inside the HPCC network dev-node : computer you choose and log-in to do setup, development, compile, and testing your programs using similar configuration as cluster nodes for the most accurate testing. dev-nodes includes dev-intel10, dev-intel14, dev-intel14- k20 (which includes k20 GPU add-in card). compute-nodes: run actual computation; accessible via the scheduler/queue; can t log in directly eval : unique nodes purchased to evaluated a specific kind of hardware prior to cluster investment. gpu, phi: nodes within a cluster that have special-purpose hardware. fat-nodes have large RAM

29 Basic navigation commands: ls Use the command with the options to look into your home directory. ls: list files and directories. Some options for the ls command: -a list all files and directories (including hidden ones); -F append an indicator (one of */=>@|) to entries; -h print sizes in human-readable format (e.g., 1K, 2.5M, 3.1G); -l use a long listing format; -t sort by modification time.
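For example, the options can be combined (a typical invocation, nothing HPCC-specific):
ls -alht ~   # long listing of everything in your home directory, hidden files included, human-readable sizes, newest first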

30 Basic navigation commands: cd Use the command with the options to try:
cd directory_name    change to the named directory
cd                   change to your home directory
cd ~                 change to your home directory
cd ..                change to the parent directory
cd -                 change to the previous directory
pwd                  display the path of the current directory
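A short round trip to see these in action (any directories will do; /mnt/scratch/$USER is used because it appears later in this workshop):
cd /mnt/scratch/$USER   # go to your scratch directory
pwd                     # confirm where you are
cd -                    # jump back to wherever you were before
cd                      # return to your home directory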

31 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

32 The old-fashioned way: scp source destination (scp == secure copy). SFTP: use hostname rsync.hpcc.msu.edu
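A minimal scp sketch, run from your own machine; greeting.txt is just an illustrative filename (the hands-on example later in this deck creates it in ~/class):
scp netid@rsync.hpcc.msu.edu:class/greeting.txt .   # pull a file from your HPCC class directory to the current local directory
scp greeting.txt netid@rsync.hpcc.msu.edu:class/    # push a local file back up into the same directory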

33 Load files Load to/from your computer MobaXterm Cyberduck (thumb drive) Mapping to HPCC Load to/from HPCC Module Installation From shared space

34 MobaXterm (Sessions -> New session -> SFTP)

35 Transferring Files SFTP program: download and install Cyberduck. Hostname: rsync.hpcc.msu.edu or rdp.hpcc.msu.edu. Username: your NetID. Password: your NetID password. Port: 22

36 Cyberduck

37 Mapping HPC drives to a campus computer You can connect your laptop to the HPCC home & research directories. This can be very convenient and works on Windows 10! For more information see: 1. Determine your file system using the command df -h ~ (the Filesystem column shows something like ufs-10-b.i:/b10/u/home/billspat). 2. Server path: Mac: smb://ufs-10-b.hpcc.msu.edu/billspat; Windows: \\ufs-10-b.hpcc.msu.edu\billspat. 3. Connect: Mac: Finder -> Connect to Server, put in the server path, NetID and password. Windows: My Computer -> Map Network Drive, select a drive letter. Windows fix: user name = hpcc\netid (same password).

38 Summary of MSU HPCC File Systems (function / path (mount point) / environment variable)
GLOBAL:
Home directories (your personal files): /mnt/home/$USER ($HOME)
Research space (shared with your working group): /mnt/research/<group>
Scratch (temporary computational workspace): /mnt/scratch/$USER and /mnt/ls15/scratch/groups/<group> ($SCRATCH)
Software (installed software): /opt/software ($PATH)
LOCAL:
Operating system (don't use these): /etc, /usr/lib, /usr/bin
Temp or local (local disk, fast but in each single node only; some programs use /tmp): /tmp or /mnt/local

39 File Systems and symbolic links Files can have aliases or shortcuts, so one file or folder can be accessed in multiple ways; e.g. /mnt/scratch is actually a symbolic link to /mnt/ls15/scratch/users. Try this: ls -l /mnt/scratch; cd /mnt/scratch/$USER # or type your own netid in place of $USER; pwd. What do you see? Note that "ls15" stands for "Lustre Scratch 2015", meaning the Lustre scratch file system installed in 2015.

40 Private and Shared files Home directories are private; research folders are shared by the group.
cd ~; ls -al   # -rw------- 1 billspat staff-np Apr 9 10:35 myfiles.sam
Group example (you may not have a research group):
ls -al /mnt/research/holekamplab   # this won't work for you
-rw-rw-r-- 1 billspat holekamplab Apr 9 10:35 eg1.sam
The difference is read and write (rw) by the group. # /mnt/scratch/netid may be readable by others on the system unless you make it private
To share a file using your scratch directory:
touch ~/testfile
cp ~/testfile /mnt/scratch/<netid>/testfile
chmod g+r /mnt/scratch/<netid>/testfile
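If you would rather keep your scratch directory private, one simple approach (a sketch, not a site requirement) is to remove group and other permissions:
chmod 700 /mnt/scratch/$USER   # only your own account may read, write, or enter the directory
ls -ld /mnt/scratch/$USER      # verify: the mode should now show drwx------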

41 Copying files to/from HPCC You can copy files between the system and your laptop with secure FTP (sftp) or secure copy (scp). Example: 1. create a file to copy; please type:
cd ~/class
echo "Hello from $USER" > greeting.txt
echo "Created on `date`" >> greeting.txt
cat greeting.txt
# make sure you are in the right directory with the pwd command, and note everything is lower case
2. exit from the HPCC: exit; exit
3. in your terminal (or MobaXterm) issue the following command (scp => secure copy):
scp netid@rsync.hpcc.msu.edu:class/greeting.txt greeting.txt
open greeting.txt
Windows: MobaXterm is already connected; the files are listed on the side panel.

42 Load files Load to/from your computer MobaXterm Cyberduck Mapping to HPCC Load to/from HPCC Module Installation From shared space

43 Module System To make the many different types of software and system configurations available to users, the HPCC uses a module system to set paths and libraries. Key commands: module list : list currently loaded modules; module load modulename : load a module; module unload modulename : unload a module; module spider keyword : search modules for a keyword; module show modulename : show what is changed by a module; module purge : unload all modules.

44 Using Modules
module list    # show modules currently loaded in your profile
module purge   # clear all modules
module list    # none loaded
# which version of Python do you need?
module spider Python
module spider Python/2.7.2
# after purging (no modules), find the path to Python with the Linux "which" command
which python; python --version
module load python
which python

45 Some software requires other base programs to be loaded
module purge          # remove all modules
module load R/3.1.1   # error: why? base modules are not loaded
# to find out what is needed, please use the spider command for detailed output
module spider R/3.1.1
module load GNU
module load OpenMPI
module load R/3.1.1

46 Default modules re-load every time you connect
module purge   # remove all modules
module list
exit           # return to the gateway
ssh dev-intel10
module list    # what do you see? the default modules
module load Mathematica; module list
ssh dev-intel14
module list
You need to load all the modules you need for every session or job, or look into auto-loading modules using your profile (covered later; a minimal sketch follows below).
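One common way to auto-load modules, sketched here under the assumption that your login shell reads ~/.bashrc on the HPCC (check the HPCC wiki for the recommended profile file before relying on this):
echo 'module load GNU' >> ~/.bashrc      # append the module commands you always want
echo 'module load MATLAB' >> ~/.bashrc
# then log out, log back in, and run:
module list                              # the modules should now be loaded automatically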

47 Powertools Powertools is a collection of software tools and examples that allows researchers to better utilize High Performance Computing (HPC) systems.
module load powertools
quota             # list your home directory disk usage
poweruser         # set up your account to load powertools by default
priority_status   # if you are part of a buy-in group, shows the status of the individual's buy-in nodes
userinfo $USER    # get details of a user (from the public MSU directory)

48 Load supply from HPCC Please load the software modules; type (in your terminal):
module load powertools
getexample intro2hpcc
See the changes before and after module load. This will copy some example files (for this workshop) to your directory. Exercise: try to find the file youfoundit.txt in the hidden directory (a couple of hints follow below).
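If you get stuck on the exercise, two hints (assuming you are inside the intro2hpcc directory that getexample created):
ls -aR                        # list everything recursively, including hidden (dot) directories
find . -name youfoundit.txt   # or search for the file by name from the current directory down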

49 getexample You can obtain a lot of examples through getexample. Take advantage of it!

50 Useful Powertools commands getexample : Download user examples powertools : Display a list of current powertools or a long description licensecheck : Check software licenses avail : Show currently available node resources For more information, refer to this webpage (HPCC Powertools)

51 Short Exercise Unload all modules and load these modules: GNU, powertools. Check which modules are loaded. Several versions of MATLAB are installed on the HPC. Find what versions are available. Load the latest version. (What commands did you use to accomplish the above? One possible answer follows below.)
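One possible set of answers (a sketch; the exact version strings reported by spider change over time):
module purge                 # unload all modules
module load GNU powertools   # load the requested modules
module list                  # check which modules are loaded
module spider MATLAB         # see which MATLAB versions are installed
module load MATLAB           # with no version given, the default is loaded; append /<version> from the spider output for a specific one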

52 getexample If you have not loaded powertools, please do so now. Download the helloworld example using getexample. Check what you downloaded. What is the biggest file?

53 Using the MSU HPCC 'getexample' tool We will test a simple program from the HPCC examples. So, after connecting to the HPCC, please type:
ssh dev-intel10          # connect to a dev node
mkdir class              # create a directory
cd class                 # change directory
module load powertools   # load the HPCC scripts
getexample               # list all examples
getexample helloworld    # copy one example
cd helloworld            # change directory
ls                       # list files
cat README               # show the README file
cat hello.c              # show the C program
cat hello.qsub           # show the qsub script

54 Compile, Test, and Queue The README file has instructions for compiling and running:
gcc hello.c -o hello   # compile and output to a new program named 'hello'
./hello                # test-run this hello program. Does it work?
qsub hello.qsub        # submit it to the queue to run on the cluster
This 'job' was assigned a job id and put into the 'queue'. Now we wait for it to be scheduled and run, and for the output files to be created...

55 Great job! Let's take a 5-minute break.

56 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

57 CLI vs. GUI CLI: Command Line Interface. So far, we have used the CLI. GUI: Graphical User Interface.

58 Run MATLAB on the HPCC Run MATLAB in command-line mode (a sketch follows below). Can you run MATLAB with its GUI? If you have problems opening the GUI, you may need some more tools.
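A minimal sketch of command-line (no GUI) MATLAB on a dev node, assuming the MATLAB module name used elsewhere in this deck:
module load MATLAB
matlab -nodisplay   # start MATLAB without the desktop; type exit or quit to leave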

59 What is X11, and what is needed for X11? A method for running a Graphical User Interface (GUI) across a network connection: SSH + X11.

60 What is needed for X11? X11 server running on your personal computer SSH connection with X11 enabled Fast network connection Preferably on campus

61 Run X11 on Windows Install MobaXterm; in MobaXterm, connect with ssh -X

62 Run X11 on Mac (OSX) Mac users: install XQuartz from the icer thumb drive, run XQuartz, quit Terminal, run Terminal again, then ssh -X

63 Test the GUI using X11 With X11 running, try one of the following commands: xeyes and/or firefox

64 Batch vs. Interactive Batch: send the job to the scheduler and run it in the batch queue. Interactive: run commands directly (e.g. on a dev node).

65 Running Jobs on the HPC The developer (dev) nodes are used to compile, test, and debug programs. There are two ways to run jobs: submission job scripts and the interactive way. Submission scripts are used to run (heavy/many) jobs on the cluster.

66 Advantages of running interactively You do not need to write a submission script. You do not need to wait in the queue. You can provide input to and get feedback from your programs as they are running.

67 Disadvantages of running interactively All the resources on the developer nodes are shared between all users. Any single process is limited to 2 hours of CPU time; if a process runs longer than 2 hours it will be killed. Programs that over-utilize the resources on a developer node (preventing others from using the system) can be killed without warning.

68 Developer Nodes (name, cores, memory (GB), accelerators, notes): dev-intel07; dev-gfx10 (NVIDIA M1060 graphics node); dev-intel10; dev-intel14; dev-intel14-phi (Xeon Phi node); dev-intel14-k20 (NVIDIA K20 graphics node). Core counts and memory sizes vary by node.

69 Programs that can use a GUI MATLAB; Mathematica; TotalView: a C/C++/Fortran debugger, especially for multiple processors; DDD (Data Display Debugger): a graphical front-end for a command-line debugger; etc.

70 Batch commands We've focused so far on working interactively (command line). Need dedicated resources for a computationally intensive task? List the resource requests in a job script, list the commands in the job script, submit the job to the scheduler, and check the results when the job is finished.

71 The Queue The system is busy and used by many people: you may want to run one program for a long time, or many programs at once on multiple nodes or cores. To share, we have a 'resource manager' that takes in requests to run programs and allocates resources to them. You write a special script that tells the resource manager what you need and how to run your program, then 'submit to the queue' (get in line) with the qsub command. We submitted our job to the queue using the command qsub hello.qsub, and the 'qsub file' had the instructions to the scheduler on how to run it.

72 Submission of a Job to the Queue A submission script is a mini program for the scheduler that has: 1. a list of required resources; 2. all command-line instructions needed to run the computation, including loading all modules; 3. the ability for the script to identify arguments; 4. collection of diagnostic output (qstat -f). The script tells the scheduler how to run your program on the cluster. Unless you specify otherwise, your program can run on any of the 7,500 cores available.

73 Submission Script List of required resources All command line instructions needed to run the computation

74 Typical Submission Script
#!/bin/bash -login
### define resources needed:
#PBS -l walltime=00:01:00
#PBS -l nodes=5:ppn=1
#PBS -l mem=2gb
### you can give your job a name for easier identification
#PBS -N name_of_job
### reserve a license for commercial software
#PBS -W x=gres:matlab
### load necessary modules, e.g.
module load MATLAB
### change to the working directory where your code is located
cd ${PBS_O_WORKDIR}
### call your executable
matlab -nodisplay -r "test(10)"
### record job execution information
qstat -f ${PBS_JOBID}

75 Submit a job Go to the helloworld directory cd ~/helloworld Create a simple submission script nano hello.qsub See next slide to edit the file

76 hello.qsub
#!/bin/bash -login
#PBS -l walltime=00:05:00
#PBS -l nodes=1:ppn=1
cd ${PBS_O_WORKDIR}
./hello
qstat -f ${PBS_JOBID}

77 Details about the job script # normally starts a comment, except for #! (a special system command, e.g. #!/bin/bash) and #PBS (instructions to the scheduler): #PBS -l nodes=1:ppn=1 #PBS -l walltime=hh:mm:ss #PBS -l mem=2gb (!!! not per core but total) (Scheduling Jobs)

78 Submitting and monitoring Once the job script is created, submit the file to the queue: qsub hello.qsub. Record the job id number (######) and wait around 30 seconds. Check jobs in the queue with: qstat -u netid & showq -u netid. What about using the commands qs & sq? Status of a job: qstat -f jobid. Delete a job in the queue: qdel jobid.

79 Common Commands qsub <submission script> : submit a job to the queue; qdel <job id> : delete a job from the queue; showq -u <user id> : show the current job queue of the user; checkjob -v <job id> : check the status of the current job; showstart -e all <job id> : show the estimated start time of the job

80 Scheduling Priorities NOT first come, first served! Jobs that use more resources get higher priority (because these are hard to schedule). Smaller jobs are backfilled to fit in the holes created by the bigger jobs. Eligible jobs acquire more priority as they sit in the queue. Jobs can be in three basic states (check states with showq): blocked, eligible, or running. A job can also be placed in BatchHold.

81 Scheduling Tips Requesting more resources does not make a job run faster unless you are running a parallel program. The more resources you request, the harder it is for the scheduler to reserve those resources. The first time, over-estimate how many resources you need, and then adjust appropriately. (qstat -f ${PBS_JOBID} at the bottom of your script will give you resource information when the job is done.) Job finish time = job waiting time + job running time.

82 Advanced Scheduling Tips Resources: a large proportion of the cluster can only run jobs that are four hours or less. Most nodes have at least 24 GB of memory; half have at least 64 GB; few have more than 64 GB. Maximum running time of jobs: 7 days (168 hours). Maximum memory that can be requested: 6 TB. Scheduling limits: a total of 520 cores in use on the HPCC at a time, 15 eligible jobs at a time, 1000 submitted jobs.

83 Cluster Resources (HPCC Computing Nodes): year, name, description, ppn, memory
2007 intel07: quad-core 2.3 GHz Intel Xeon
2010 gfx10: NVIDIA CUDA node (no IB), ppn 8, 18 GB
2010 intel10: Intel Xeon E5620 (2.40 GHz), ppn 8, 24 GB
2011 intel11: Intel Xeon 2.66 GHz, including large-memory configurations (TB range)
2014 intel14: Intel Xeon E5 v2 (2.6 GHz), ppn 20, 64 GB; plus nodes with NVIDIA K20 GPUs, nodes with Xeon Phi 5110P cards, and large-memory nodes (48 or more cores, up to 6 TB)

84 Job Completion By default the job will automatically generate two files when it completes: standard output (e.g. jobname.o<jobid>) and standard error (e.g. jobname.e<jobid>). You can combine these files if you add the join option in your submission script: #PBS -j oe. You can change the output file name with: #PBS -o /mnt/home/netid/myoutputfile.txt (a short example follows below).
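For example, the top of a submission script that merges the two streams and writes them to a named file might look like this (the walltime and output path are illustrative):
#!/bin/bash -login
#PBS -l walltime=00:10:00
#PBS -l nodes=1:ppn=1
### join standard error into standard output
#PBS -j oe
### write the combined output to this file
#PBS -o /mnt/home/netid/myoutputfile.txt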

85 Other Job Properties Resources (-l): walltime, memory, nodes, processors, network, etc. #PBS -l feature=gpgpu,gbe #PBS -l nodes=2:ppn=8:gpu=2 #PBS -l mem=16gb Email address (-M): #PBS -M yournetid@msu.edu Email options (-m): #PBS -m abe Many others; see the wiki:

86 Advanced Environment Variables The scheduler adds a number of environment variables that you can use in your script: PBS_JOBID: the job number for the current job. PBS_O_WORKDIR: the original working directory from which the job was submitted. Example: mkdir ${PBS_O_WORKDIR}/${PBS_JOBID}

87 Software (Modules) again icer has over 2,500 software titles installed. Not all titles are available by default. We use modules to set up the software. Some modules are loaded by default: module list. To see different versions: module spider MATLAB. To search, also use the module spider command.

88 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

89 Getting Help Documentation and User Manual wiki.hpcc.msu.edu Contact HPCC and icer Staff for: Reporting System Problems HPC Program writing/debugging Consultation Help with HPC grant writing System Requests Other General Questions Primary form of contact - HPCC Request tracking system rt.hpcc.msu.edu Open Office Hours 1-2 pm Monday/Thursday (PBS 1440)

90 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

91 Terms ICER, HPC; Account: login, directory (folder); Node: laptop, gateway, dev-nodes, comp-nodes; Cores: nodes=2:ppn=4; Memory: total, per node, mem=2gb; Storage: home, research, scratch, disk; Module: list, load, spider, unload, purge, show; Files: software, data, job script; Path; Queue: qsub, qstat, qdel; Jobs: JobID

92 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

93 Commands ssh; cd, ls, pwd; module sub_command; rm, mv, cp, scp; more, less, cat; nano, vi; qsub, qstat, qdel, showq; echo, export; mkdir, tar, unzip, zip

94 Agenda Introduction icer & MSU HPC system How to Use the HPC Get on Load your supply Work on board Available Services Summary Terms Commands Websites

95 Websites HPCC user wiki: Contact icer, Training information Command manual An exhaustive list A useful cheatsheet Explain a command given to you

96 Q & A

97 Thanks!
