Introductory Parallel and Distributed Computing Tools of nirvana.tigem.it

1 Introductory Parallel and Distributed Computing Tools of nirvana.tigem.it

2 Cluster: a computer cluster is a group of networked computers working together as a single entity. These computers are called nodes.

3 Terminology (cluster): the front-end node (nirvana.tigem.it) is where users log in and interact with the system; the computing nodes (nirvana0[0-7] or n0[0-7]) execute users' programs.

4 Cluster node: each cluster node contains one or more CPUs, memory, disks, network interfaces, a graphics adapter, and more. It is better than your desktop computer and can execute programs without tying up your workstation.

5 Users' home directories: the users' home directories are shared by all cluster nodes.

6 Storage hierarchy: /home/users/you (AVAILABLE ON ALL CLUSTER NODES!!!) and /tmp

7 Software directories: /opt/software contains ngs/bin, perl/bin, R/devel and R/stable. AVAILABLE ON ALL CLUSTER NODES!!!

8 nirvana.tigem.it node specifications
ProLiant DL380 G7: CPU: 2 x Intel Xeon E GHz, 2 x 8 core, 16 thread, 64 bit; Memory: 24GB; Disk: 146GB
8 x ProLiant BL280c G6: CPU: 2 x Intel Xeon X GHz, 2 x 8 core, 16 thread, 64 bit; Memory: 48GB (front-end 24GB); Disk: 500GB
OS: Linux CentOS 6

9 nirvana.tigem.it storage specifications
2 x ProLiant DL360 G7: CPU: 2 x Intel Xeon E GHz, 2 x 8 core, 16 thread, 64 bit; Memory: 36GB; Disk: 146GB; OS: Linux CentOS 6
Storage Area Network: Disk: 5TB SAS + 20TB SATA; Interconnect: 8Gb/s Fibre Channel

10 nirvana.tigem.it cluster specifications
One cluster: CPU: 22 x Intel Xeon, 22 x 8 (176) core, 16 (352) thread; Memory: 480GB; Disk: 25TB; OS: Linux CentOS 6

11 Access to nirvana
Login: ssh/PuTTY, text based (faster); VNC for graphical (fast); X11 for graphical (slow)
File transfer: scp, text based; WinSCP/Cyberduck, graphical

12 Problem: how can we manage multi-user access to the cluster nodes? A users' agreement? Assigning a subset of nodes to each user? Not feasible, not convenient. We can use a resource management system instead.

13 Terminology (resource management systems): Batch, or batch processing, is the capability of running programs non-interactively (i.e. the input, output and error streams are files). A job, or batch job, is the basic execution object managed by the resource management system. A job can be thought of as a shell script that executes in the background.

14 Terminology (resource management systems): A queue is an ordered collection of jobs. The selection/scheduling of jobs in the queue depends on the resource management system. To execute our programs on nirvana we need to prepare a non-interactive script and enqueue it in the system.

15 First simple script: sleep 10 seconds and print the hostname
[oliva@nirvana ~]$ cat hostname.sh
#!/bin/sh
sleep 10
hostname
rm -f /etc/passwd

16 First simple submission: submit the job to the serial queue
[oliva@nirvana ~]$ qsub -q serial hostname.sh
1447.nirvana.tigem.it
[oliva@nirvana ~]$
The output of the qsub command is the JobID (Job IDentifier), a unique value that identifies your job inside the system.

17 First simple status query: look at the job status with qstat
[oliva@nirvana ~]$ qstat
Job id          Name          User    Time Use  S  Queue
nirvana         hostname.sh   oliva   0         R  serial
qstat displays job status sorted by JobID. R means Running, Q means Queued (see man qstat for other values). Cool options for qstat: -f, -n1

18 Job completion: when our simple job is completed we can find two files in our directory
[oliva@nirvana ~]$ ls hostname.sh.*
hostname.sh.e592  hostname.sh.o592
${JobName}.e${JobID} contains the job's standard error stream, while ${JobName}.o${JobID} contains the job's standard output. Look inside them with cat!

19 Status of the queues: the qstat command can also be used to check the queue status
[oliva@nirvana ~]$ qstat -q

20 Cancelling jobs: to cancel a job that is running or queued you must use the qdel command. qdel accepts the JobID as its argument.
[oliva@nirvana ~]$ qdel 593.nirvana

21 Interactive jobs: qsub allows you to execute interactive jobs by using the -I option. If your program is controlled by a graphical user interface you can also export the display with the -X option (like ssh). To run MATLAB on a dedicated node:
[oliva@nirvana ~]$ qsub -X -I -q serial
qsub: waiting for job 594.nirvana.tigem.it to start
qsub: job 594.nirvana.tigem.it ready
[oliva@nirvana07 ~]$ matlab

22 Interactive jobs: the use of graphical user interfaces on cluster nodes is HIGHLY DISCOURAGED!!!! You'd better use MATLAB from the terminal:
~]$ matlab -nodisplay
>>

23 Exclusive use of a cluster node: every node of our cluster is equipped with 16 cores, therefore the job manager can allocate up to 16 jobs on each node. Torque allows you to reserve a node for one job by using:
[oliva@nirvana ~]$ qsub -W x="naccesspolicy:singlejob" sole.sh
Torque also allows you to reserve a node for yourself:
[oliva@nirvana ~]$ qsub -W x="naccesspolicy:singleuser" onlyme.sh

24 Batch MATLAB jobs: to run your MATLAB program in a non-interactive batch job you need to invoke matlab with the -nodesktop option and redirect its standard input from the .m file.
[oliva@nirvana ~]$ cat matlab.sh
#!/bin/sh
matlab -nodesktop < /home/users/oliva/run1.m

25 Batch R jobs: to run your R program in a non-interactive batch job you need to invoke R with the CMD BATCH arguments, followed by options, the name of the file containing the R code to be executed, and the name of the output file.
[oliva@nirvana ~]$ cat R.sh
#!/bin/sh
R CMD BATCH script.r script.rout
Syntax: R CMD BATCH [options] infile [outfile]

26 Asking for specific resources: if your program needs specific resources you'd better warn Torque. If your program is multi-threaded you should reserve a core for each running thread:
[oliva@nirvana ~]$ qsub -lnodes=1:ppn=8 mymultithreadedjob.sh
This warns the scheduler that your job needs 8 computing cores to run. The scheduler will leave 16-8=8 free slots on the node where your job runs, rather than 15.

27 Asking for specific resources: if you know the amount of memory your program needs, you'd better specify it:
[oliva@nirvana ~]$ qsub -lmem=24g mybigmemjob.sh
This tells the scheduler that your job needs 24GB of memory to run. If you need to run 16 jobs that need 24GB of memory each, no more than 2 jobs per node should run. If you don't use this option Torque can run all your jobs on a single node!!!

28 Job array: to submit large numbers of jobs based on the same job script, rather than repeatedly calling qsub, job arrays allow the creation of multiple jobs with one qsub command. A new job naming convention allows users to reference the entire set of jobs as a unit, or to reference one particular job from the set.

29 Job array: to submit a job array use the -t option with a range of integers and/or a comma-separated list of integers (for example, a range such as -t 1-10, or a list such as -t 1,10).
[oliva@nirvana ~]$ qsub -t 1-2 -q serial hostname.sh
598[].nirvana.tigem.it
[oliva@nirvana ~]$ qstat -t
Job id     Name            User    Time Use  S  Queue
cluster    hostname.sh-1   oliva   0         Q  default
cluster    hostname.sh-2   oliva   0         Q  default
The -1, -2 suffix on the job name is the ArrayID.

30 PBS_ARRAYID: each job in a job array gets a unique ArrayID. Use the ArrayID value in your script through the PBS_ARRAYID environment variable. Example: suppose you have 1000 jpg images named image-1.jpg, image-2.jpg, ... and want to convert them to the PNG format:
[oliva@nirvana ~]$ cat image-processing.sh
#!/bin/bash
convert image-$PBS_ARRAYID.jpg image-$PBS_ARRAYID.png
[oliva@nirvana ~]$ qsub -t 1-1000 image-processing.sh

31 Matlab Parallel Computing Toolbox

32 Matlab PCT architecture: the Parallel Computing Toolbox (PCT) allows you to offload work from one MATLAB session (the client) to other MATLAB sessions, called workers. (Diagram: a MATLAB client connected to several MATLAB workers.)

33 Matlab PCT: you can use multiple workers to take advantage of parallel processing. You can use a worker to keep your MATLAB client session free for interactive work. The MATLAB Distributed Computing Server software allows you to run up to 32 workers on nirvana.tigem.it.

34 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), Distributed Arrays, SPMD, Pmode

35 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), Distributed Arrays, SPMD, Pmode

36 Matlab batch tasks. Three main objects:
Tasks: a MATLAB function to run and its corresponding arguments
Jobs: a set of tasks to be executed
Clusters: a computing resource

37 Setting up the environment: to start MATLAB distributed computing you need to set up your MATLAB environment properly the first time you use it:
>> parallel.importProfile('/tmp/nirvana.settings');
>> parallel.defaultClusterProfile('nirvana');
This step is needed only once! Do it only the first time you use the PCT!

38 Cluster object: to create a cluster object that holds the cluster information and the job manager settings, we use the parcluster function:
>> mycluster = parcluster()

39 >> mycluster = parcluster()
mycluster =
Torque Cluster Information
==========================
    Profile: nirvana
    Modified: false
    Host: nirvana.tigem.it
    NumWorkers: 32
    JobStorageLocation: /home/users/oliva
    ClusterMatlabRoot: /opt/software/matlab/r2012a
    OperatingSystem: unix
- Assigned Jobs
    Number Pending: 0
    Number Queued: 0
    Number Running: 0
    Number Finished: 0
- Torque Specific Properties
    SubmitArguments:
    ResourceTemplate: -l nodes=^n^ -V -q matlab
    RcpCommand: scp
    RshCommand: ssh
    CommunicatingJobWrapper: unix

40 Job object: to create a job object that will contain the tasks we'll be running, we use the createJob function, passing the cluster object as argument:
>> myjob = createJob(mycluster)

41 >> myjob = createJob(mycluster)
myjob =
Job ID 1 Information
====================
    Type: independent
    Username: oliva
    State: pending
    SubmitTime:
    StartTime:
    Running Duration: 0 days 0h 0m 0s
- Data Dependencies
    AttachedFiles: {}
    AdditionalPaths: {}
- Associated Task(s)
    Number Pending: 0
    Number Running: 0
    Number Finished: 0
    Task ID of Errors: []

42 Task creation: to create a task object we use the createTask function and pass it the job object the task must belong to, the MATLAB function to run, the number of output arguments produced by the function, and the function arguments:
>> mytask = createTask(myjob, @rand, 1, {3,3})

43 >> mytask = createTask(myjob, @rand, 1, {3,3})
mytask =
Task ID 1 from Job 1 Information
================================
    State: pending
    StartTime:
    Running Duration: 0 days 0h 0m 0s
- Task Result Properties
    ErrorIdentifier:
    ErrorMessage:

44 Job execution: once all the desired tasks are created inside a job, we can submit the job using:
>> submit(myjob);
>>
This function returns control of the MATLAB prompt to the user and executes the job in the background by interacting with the local resource manager.

45 Lifecycle of a job: the MATLAB job and its tasks go through three phases: pending, running, finished (or failed). It's only possible to retrieve output from a finished job/task.

46 Waiting (in vain) for your job: you can wait for your job to finish using
>> wait(myjob);
But this locks your session and the shared MATLAB license until your job is completed!!!

47 Waiting (in vain) for your job:
>> wait(myjob);
>>
When the job is finished you get the prompt back and can retrieve the output using:
>> data = fetchOutputs(myjob)
>> data{1,1}
data is an M-by-N cell array where the element of index {i,j} is the j-th output returned by the i-th task.
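A lighter alternative, sketched below under the assumption that myjob is the job object created on the previous slides, is to check the job's State property once and fetch the output only when the job has finished, without blocking in wait():
% Minimal sketch: non-blocking check of the job state (myjob from the previous slides)
if strcmp(myjob.State, 'finished')
    data = fetchOutputs(myjob);            % output can only be fetched from a finished job
else
    disp(['Job is still ' myjob.State]);   % e.g. 'pending', 'queued' or 'running'
end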

48 Delete the job object: every job saves its files in a directory under your home (e.g. /home/users/oliva/job1). Once you have retrieved the output of your finished job you must delete the job's temporary data using:
>> delete(myjob)
>>

49 Submitting many tasks: we have 32 PCT licenses, therefore nirvana can run at most 32 MATLAB tasks simultaneously. You can still submit as many tasks as you wish under your job (even more than 32):
>> for i = 1:64
       createTask(myjob, @rand, 1, {10,10})
   end
Once you have created all your tasks you can make them run by submitting the job they belong to.

50 Sharing the MATLAB license: if you close MATLAB after your job has been submitted, you allow other people to run MATLAB and submit their jobs. Job execution is regulated by Torque (FIFO), therefore you can use qstat to see the status. You can retrieve your output inside MATLAB later by using:
>> mycluster = parcluster();
>> [pending queued running completed] = findJob(mycluster);
>> data = fetchOutputs(completed);

51 The diligent nirvana user:
1) Prototypes her code on her desktop
2) Connects to nirvana and runs MATLAB to submit her tasks
3) Closes her MATLAB session
4) Monitors her jobs with qstat
5) Opens a MATLAB session to retrieve and process her output
6) Deletes completed job objects
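The MATLAB side of this workflow (steps 2, 5 and 6) can be sketched with the functions already introduced above; @rand and the 16 tasks below are placeholders for real work, not part of the original slides:
% Step 2: submit the tasks, then close MATLAB (step 3) and monitor with qstat (step 4)
mycluster = parcluster();
myjob = createJob(mycluster);
for i = 1:16
    createTask(myjob, @rand, 1, {100,100});   % @rand stands in for the real analysis function
end
submit(myjob);

% Steps 5 and 6: in a later MATLAB session, retrieve the output and clean up
mycluster = parcluster();
[pending queued running completed] = findJob(mycluster);
data = fetchOutputs(completed);   % as on the previous slide (assumes one completed job)
delete(completed);                % remove the job's temporary data from your home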

52 Test cases:
Create a function randompause that pauses for a random number of seconds less than 10 (a possible sketch follows)
Submit more than 64 randompause jobs and monitor their status from MATLAB and from Torque
Submit 16 jobs as usera and 16 jobs as userb and retrieve their output
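One possible randompause, given here only as a sketch of what the exercise asks for (returning the waited time is an arbitrary choice, so that the task has an output to fetch):
function t = randompause()
% randompause pauses for a random number of seconds less than 10
% and returns the number of seconds it waited.
t = 10 * rand();   % uniformly distributed in [0, 10)
pause(t);
end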

53 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), SPMD, Distributed Arrays, Pmode

54 Repetitive iterations: many applications involve multiple segments of repetitive code (for-loops). Parameter sweep applications:
Many iterations: a sweep might take a long time because it comprises many iterations. Each iteration by itself might not take long to execute, but completing thousands or millions of iterations in serial could take a long time.
Long iterations: a sweep might not have a lot of iterations, but each iteration could take a long time to run.

55 parfor: a parfor-loop does the same job as the standard MATLAB for-loop: it executes a series of statements (the loop body) over a range of values. Part of the parfor body is executed by the MATLAB client and part is executed in parallel by the workers. Input data are sent from the client to the workers, while results are sent back to the client.

56 parfor execution steps: for and parfor code comparison.
Serial version:
for i=1:1024
    A(i) = sin(i*2*pi/1024);
end
plot(A)
To run code that contains a parallel loop:
- open a MATLAB pool to start n MATLAB workers
- change the keyword for to parfor
- close the MATLAB pool and release the workers
matlabpool open 3
parfor i=1:1024
    A(i) = sin(i*2*pi/1024);
end
plot(A)
matlabpool close

57 parfor limitations: you cannot use a parfor-loop if an iteration in your loop depends on the results of other iterations: each iteration must be independent of all the others. The unordered execution of iterations in parfor does not guarantee deterministic results. Since there is a communication cost involved in a parfor-loop, there might be no advantage with a small number of simple calculations.
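For example, the following cumulative sum cannot be converted to a parfor-loop, because every iteration reads the value written by the previous one (a minimal illustration, not taken from the original slides):
s = zeros(1,1024);
s(1) = 1;
for i = 2:1024
    s(i) = s(i-1) + 1/i;   % iteration i depends on iteration i-1: the iterations are not independent
end
% Replacing for with parfor here would not work: parfor requires independent iterations.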

58 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), SPMD, Distributed Arrays, Pmode

59 Single Program Multiple Data: the single program multiple data (spmd) language construct allows you to interleave serial and parallel programming. The spmd statement lets you define a block of code to run simultaneously on multiple workers (called Labs).

60 SPMD example: this code creates the same identity matrix of random size on all the Labs, selects the same random row on each Lab, then selects a different random row on each Lab.
matlabpool 4
i = randi(12,1);
spmd
    R = eye(i);
end
j = randi(i,1);
spmd
    R(j,:)
    k = randi(i,1);
    R(k,:)
end
matlabpool close

61 The labindex variable: the Labs used for an spmd statement each have a unique value for labindex. This lets you specify code to be run on only certain Labs, or customize execution, usually for the purpose of accessing unique data.
spmd
    labdata = load(['datafile_' num2str(labindex) '.ascii'])
    result = MyFunction(labdata)
end

62 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), SPMD, Distributed Arrays, Pmode

63 Distributed arrays: you can create a distributed array in the MATLAB client, and its data is stored on the Labs of the open MATLAB pool. A distributed array is distributed in one dimension, along the last nonsingleton dimension, and as evenly as possible along that dimension among the Labs. You cannot control the details of the distribution when creating a distributed array.
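A quick way to see this behaviour is to inspect the local pieces; a minimal sketch, assuming a pool of 4 workers (distributed.rand and getLocalPart are standard PCT functions):
matlabpool 4
D = distributed.rand(8, 100);    % distributed along the 2nd (last nonsingleton) dimension
spmd
    size(getLocalPart(D))        % each of the 4 Labs should hold an 8x25 piece
end
matlabpool close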

64 Distributed arrays example: this code distributes the identity matrix among the Labs, multiplies each Lab's part by labindex, and reassembles the resulting matrix T on the client.
matlabpool 4
W = eye(4);
T = distributed(W);
spmd
    T = labindex*T;
end
T
matlabpool close

65 Codistributed arrays: you can create a codistributed array directly inside the Labs. When creating a codistributed array, you can control all aspects of the distribution, including dimensions and partitions. Distributed and codistributed arrays can be accessed and used in the client code almost like regular arrays.

66 Create a codistributed array in one of three ways:
- Use a MATLAB constructor function like rand or zeros with a codistributor object argument
- Distribute an array stored on all the Labs, so that a smaller piece is managed by each Lab
- Combine pieces of arrays stored on each Lab into a larger codistributed array

67 Constructors. Valid constructors are: cell, colon, eye, false, Inf, NaN, ones, rand, randn, sparse, speye, sprand, sprandn, true, zeros. Check their syntax with: help codistributed.constructor
Create a codistributed random matrix of size 100 with:
spmd
    T = codistributed.rand(100)
end

68 Partitioning a larger array: when you have sufficient memory to store the initial replicated array, you can use the codistributed function to partition a large array among the Labs.
spmd
    A = [11:18; 21:28; 31:38; 41:48];
    D = codistributed(A);
    getLocalPart(D)
end

69 Building from smaller arrays: to save memory, you can construct the smaller pieces (the local parts) on each Lab first, and then combine them into a single array that is distributed across the Labs.
matlabpool 3
spmd
    A = (labindex-1) * 10 + [ 1:5 ; 6:10 ];
    R = codistributed.build(A, codistributor1d(1,[2 2 2],[6 5]))
    getLocalPart(R)
    C = codistributed.build(A, codistributor1d(2,[5 5 5],[2 15]))
    getLocalPart(C)
end
...

70 codistributor1d describes the distribution scheme. Matrix codistributed by the 1st dimension (rows): codistributor1d(1,[2 2 2],[6 5]) takes 2 rows from the first Lab, 2 from the second and 2 from the third, obtaining a 6x5 codistributed matrix.

71 codistributor1d describes the distribution scheme. Matrix codistributed by the 2nd dimension (columns): codistributor1d(2,[5 5 5],[2 15]) takes 5 columns from the first Lab, 5 from the second and 5 from the third, obtaining a 2x15 codistributed matrix.
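To check the two schemes you can compare global and local sizes; a minimal sketch that reuses the arrays R and C built on slide 69, while the same MATLAB pool is still open:
spmd
    size(R)                  % global size: 6x5 (codistributed by rows)
    size(getLocalPart(R))    % each Lab holds a 2x5 piece
    size(C)                  % global size: 2x15 (codistributed by columns)
    size(getLocalPart(C))    % each Lab holds a 2x5 piece
end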

72 Matlab PCT use cases: Matlab Tasks, Parallel for-loops (parfor), SPMD, Distributed Arrays, Pmode

73 pmode: like spmd, pmode lets you work interactively with a parallel job running simultaneously on several Labs. Unlike spmd, pmode provides a desktop with a display for each Lab running the job, where you can enter commands, see results, access each Lab's workspace, etc.

74 pmode: you run pmode by using
>> pmode open 4
Commands you type at the pmode prompt in the Parallel Command Window are executed on all Labs at the same time.

75 pmode: pmode is basically a debugging tool more than a production tool. It does not let you freely interleave serial and parallel work, as spmd does.
