XSEDE New User Training. Ritu Arora November 14, 2014


1 XSEDE New User Training Ritu Arora November 14, 2014 1

2 Objectives
Provide a brief overview of XSEDE:
- Computational, visualization and storage resources
- Extended Collaborative Support and Services
- Training, education and outreach activities
- Science Gateways
Provide information on using XSEDE resources:
- Finding the user guide
- Connecting to XSEDE resources remotely
- Linux OS on remote machines
- Transferring data
- Working in a remote computing environment: user environment, module system
- Running computational jobs in batch and interactive mode
- Accessing help through the ticket system
Provide information on requesting additional computational time: supplemental request, extension request 2

3 XSEDE is a single virtual system that researchers can use to interactively share computing resources, data and expertise. People around the world use these resources and services - things like supercomputers, collections of data and new tools - to improve our planet. 3

4 XSEDE Offerings
- Access to resources that include HPC machines, High Throughput Computing (HTC) machines, visualization, data storage, test-beds, & services
- Extended Collaborative Support Service (ECSS), through which researchers can request to be paired with expert staff members for an extended period (weeks up to a year). ECSS staff provide expertise in many areas of advanced CI and can work with a research team to advance their work through that knowledge
- ECSS staff engaged in Extended Support for Training, Education and Outreach
- Science Gateways, which enable entire communities of users associated with a common discipline to use national resources through a common interface that is configured for optimal use 4


6 How to Get Started? 6

7 Getting an Allocation
In order to use XSEDE resources:
1. Create an XSEDE portal account.
2. The PI should then submit a request for a start-up allocation (computing hours) for his or her project.
   - PI eligibility: a U.S.-based scientist, engineer, or educator who has a joint appointment with a university or non-profit research institution, or research staff from federal and state agencies or federally funded research and development centers.
3. The project PI can add his or her group members (having active portal accounts) to the allocation by logging in to the XSEDE portal.
4. Once the project team has their accounts and the allocation is approved, they can use their XSEDE credentials (and/or the credentials from the Service Providers) to log into the requested resources.
   - Additional steps might be involved depending upon the Service Provider (SP); for example, to use TACC resources through direct SSH, you will need to sign into the TACC portal account and complete the verification process. 7

8 What does a supercomputer look like? 8

9 This is Gordon One of the XSEDE Resources 9

10 This is Stampede One of the XSEDE Resources 10

11 An example of a node
1. InfiniBand HCA card
2. Intel Xeon processors
3. Memory
4. Storage/file system
5. Space for future expandability using coprocessors or accelerators
6. Intel Xeon Phi coprocessor 11

12 Connecting to a Remote Resource 12

13 Local Access vs. Remote Access (diagram): for local access, a program runs directly on your desktop/laptop; for remote access, a client on your machine connects over the Internet to a server. 13

14 Accessing a Computational Resource like Stampede or Gordon (oversimplified diagram): you SSH to one of several login nodes (login1, login2, login3, login4); the resource manager & job scheduler dispatch work across the interconnect to the compute nodes - typical compute nodes plus specialized nodes (e.g., large-memory nodes, visualization nodes) - all of which share the file systems ($HOME, $WORK, $SCRATCH). 14

15 For Connecting to Remote Servers
For secure (encrypted) communication, including data transfer across networks, you need an SSH (Secure Shell) client.
You can also use the XSEDE Single Sign-On (SSO) login hub; this is also accessed through an SSH client.
The next few slides show how to use SSO and an SSH client from a Windows or Mac computer 15

16 How to access Linux systems remotely from a Windows machine?
Using client programs on Windows machines: SSH Secure Shell Client, PuTTY
Other options: install Linux on a USB stick; use Cygwin/VMware (runs as a Windows process) 16

17 Using SSH Secure Shell Client - Step 1 On Windows, double-click on the SSH Secure Shell Client and the following window will appear 17

18 Using SSH Secure Shell Client - Step 2 Click on Quick Connect, enter Host Name and Username 18

19 Using SSH Secure Shell Client - Step 3 Click on Quick Connect, enter Host Name, Username, click Connect, enter password, click on OK for Enter Authentication 19

20 Using SSH Secure Shell Client - Step 4 Enter commands at the command prompt 20

21 For Mac Users
You can have remote access to servers through the Terminal application.
After opening the terminal, type an SSH command like the one below, replacing username and the resource hostname with the ones provided to you; you will be prompted for a password after that:
staff$ ssh username@<resource hostname> 21

22 Steps for SSO via SSH Login Hub (1)
Log into the hub with the username and password of your XSEDE portal account.
A 12-hour proxy certificate is automatically generated, allowing the user to access XSEDE resources for the duration of the proxy.
Users may then gsissh to any XSEDE compute resource without the need for a resource-specific username and password.
More information on this topic is available at the following link: 22

23 SSO via SSH Login Hub (2)
staff$ ssh
Welcome to the XSEDE Single Sign-On (SSO) Hub!
You may connect from here to any XSEDE resource on which you have an account.
Here are the login commands for common XSEDE resources:
  Blacklight: gsissh blacklight.psc.xsede.org
  Gordon Compute Cluster: gsissh gordon.sdsc.xsede.org
  Stampede: gsissh -p 2222 stampede.tacc.xsede.org
~]$ gsissh -p 2222 stampede.tacc.xsede.org
login1$ exit
logout
Connection to stampede.tacc.xsede.org closed.
~]$

24 After you log into a remote XSEDE resource Welcome to the Linux World! For a quick Linux tutorial click the following Link: 24

25 After you are connected
Once you are connected to an XSEDE resource remotely, you will need to understand the user environment on those resources:
- Linux OS (already mentioned)
- Usage policies: the resources are shared with other users, so understand the resource usage policies (a slide on this later)
- Bring your data
- Install software in your account if what you need is not already available on a resource
- Use the ticket system for assistance
- Do the processing and/or post-processing
- Move the results to secondary or tertiary storage media to which you have access through XSEDE, or to your institution 25

26 Before you proceed, know the file systems and usage policies 26

27 Know the File Systems on XSEDE Resources (1)
User-owned storage on a system could be available in different directories. On Stampede, three directories are identified by the $HOME, $WORK and $SCRATCH environment variables. These directories are separate file systems, and accessible from any node in the system.
- $HOME: 5 GB quota, maximum 150K files allowed; backed up; no purge policy; store your source code and build your software here.
- $WORK: 400 GB quota, maximum 3M files allowed; not backed up; no purge policy; store large files here.
- $SCRATCH: no quota restriction; not backed up; files with access times of greater than 10 days can be purged; store large files here.
A parallel file system named Lustre makes hundreds of spinning disks act like a single disk. 27
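To see where these file systems live on a given resource, you can simply echo the corresponding environment variables from a login node. A minimal sketch is shown below; the paths in the comments follow the /home1, /work and /scratch layout used in the Stampede examples later in this document and are illustrative only:
login1$ echo $HOME      # e.g., /home1/01698/username
login1$ echo $WORK      # e.g., /work/01698/username
login1$ echo $SCRATCH   # e.g., /scratch/01698/username
login1$ cd $WORK        # keep large files and build big data sets here
login1$ cd $SCRATCH     # run large jobs from here; remember the purge policy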

28 Know the File Systems on XSEDE Resources (2)
XSEDE Wide File System (XWFS) is a wide-area file system based on IBM's General Parallel File System (GPFS) technology and is currently mounted on some of the XSEDE computational resources - for example, Stampede, Gordon, Blacklight.
It presents a single file system view of data across multiple systems and is available only on the login nodes; data needs to be staged to $WORK/$SCRATCH to be used on compute nodes.
In all cases the XWFS will be mounted as /xwfs, and your project path under /xwfs will be accessible in the same location on all resources.
It is available only via allocation request 28
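Because XWFS is visible only on the login nodes, data must be copied into $WORK or $SCRATCH before a compute job can read it. A minimal sketch of that staging step is below; the project directory name myproject and the file name input.dat are hypothetical:
login1$ cp /xwfs/myproject/input.dat $WORK/    # stage input from XWFS to $WORK on a login node
login1$ ls -lh $WORK/input.dat                 # confirm the copy before submitting the job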

29 Know that XSEDE Resources are Shared Amongst Multiple Users
- Avoid running time-consuming jobs (programs or scripts) on the login nodes; all such jobs should be run on the compute nodes. Compute nodes can be accessed via a batch job or interactively. Use the development queue for quick testing rather than running on a login node.
- Avoid running large jobs from $HOME (small quota); run such jobs from $SCRATCH.
- Avoid running more than 2 (or 3) rsync processes simultaneously for data transfer.
- Avoid parking your data for months on $SCRATCH without accessing it periodically. 29

30 Data Transfer 30

31 Protocols for Data Transfer Different protocols exist for data transfer to (and between) remote sites, e.g., 1. Linux command-line utilities scp & rsync 2. Globus' globus-url-copy command-line utility 3. Globus Connect 31

32 Data Transfer Using scp
If your local computer is a Mac or a Linux laptop, you can use the scp command to transfer data to and from a remote resource like Stampede:
localhost% scp filename username@stampede.tacc.utexas.edu:/path/to/project/directory
If you are using a Windows computer, you can download and use the WinSCP application (GUI-based), or download and use Cygwin (command-line based, can run the aforementioned commands).
For small amounts of data, you may also use the File Transfer window available in the SSH client - drag and drop files between the local laptop and a remote resource 32

33 Data Transfer Using rsync (1)
The rsync command is another way to transfer data and to keep the data at the source and destination in sync.
If transferring the data for the first time to a remote resource, rsync and scp might show similar performance, except when the connection drops. If a connection drops, upon restart of the data transfer rsync will automatically transfer only the remaining files to the destination; it will skip the already transferred files.
rsync transfers only the actual changed parts of a file (instead of transferring an entire file); this selective method of data transfer can be much more efficient than scp because it reduces the amount of data sent over the network 33
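As a small illustration of the restart behavior described above (the host and directory paths follow the Stampede examples in this document; the results directory is hypothetical), the same rsync command can simply be re-run after an interruption. Adding -P (short for --partial --progress) also keeps partially transferred files, so a large file does not have to start over from the beginning:
# first attempt - suppose the connection drops partway through
localhost% rsync -avP ./results username@stampede.tacc.utexas.edu:/work/01698/username/data
# re-run the identical command - files already at the destination are skipped,
# and the kept partial file is reused rather than resent from scratch
localhost% rsync -avP ./results username@stampede.tacc.utexas.edu:/work/01698/username/data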

34 Data Transfer Using rsync (2)
The following example demonstrates the usage of the rsync command for transferring a file named myfile.c from the current location of a user to $WORK on Stampede at TACC:
login1$ rsync myfile.c username@stampede.tacc.utexas.edu:/work/01698/username/data 34

35 Data Transfer Using rsync (3)
Transferring an entire directory from a resource to Stampede:
- To preserve the modification times, use the -t option
- To preserve symbolic links, devices, attributes, permissions, ownerships, etc., transfer in archive mode using the -a option
- To increase the amount of information displayed during transfer, use the -v option (verbose mode)
- To compress the data for transfer, use the -z option
The following example demonstrates the usage of the -avtz options for transferring a directory named gauss from the present working directory of a user to a directory named data in the $WORK file system on Stampede:
login1$ rsync -avtz ./gauss username@stampede.tacc.utexas.edu:/work/01698/username/data 35

36 Using globus-url-copy
globus-url-copy is a command-line implementation of the GridFTP protocol, providing high-speed transport between GridFTP servers at XSEDE sites; use this command to transfer large files.
Steps for using globus-url-copy through an example (use your XSEDE username in place of rauta):
staff$ module load CTSSV4
staff$ myproxy-logon -T -l rauta
staff$ globus-url-copy -tcp-bs 11M -vb gsiftp://gridftp.stampede.tacc.xsede.org:2811/scratch/01698/rauta/training/trainingmpi/example1.c gsiftp://oasisdm.sdsc.xsede.org:2811/home/rauta/example1_transferred.c
See the following link for more: 36

37 Using Globus Connect
Globus Connect provides fast, secure transport via an easy-to-use web interface using pre-defined and user-created "endpoints". XSEDE users automatically have access to Globus Connect via their XUP username/password. Other users may sign up for a free Globus Connect Personal account.
Globus Connect makes it possible to create a transfer endpoint on any machine (including campus servers and home laptops) with a few clicks.
For more information on Globus Connect: 37

38 Data Transfer Issues - Real World Scenario
During one project, transferring 4.3 TB of data from the Stampede supercomputer in Austin to the Gordon supercomputer in San Diego took approx. 210 hours.
The transfer was restarted about 14 times between June 3 and June 18, about 15 days. If the data transfer had proceeded without any interruptions, it would have completed in about 9 days at the given speed.
There were multiple reasons for interruption - sometimes maintenance on Stampede or Gordon, some other file-system issue, network traffic/available bandwidth - all are factors affecting the data transfer rate 38

39 Working in a remote computing environment - understand your user environment, module system, and job submission system 39

40 User Environment
- An important component of a user's environment is the login shell, as it interprets text on the command line and statements in shell scripts; the echo $SHELL command tells you which shell you are using.
- There are environment variables for defining values used by the shell (e.g., bash, tcsh) and by programs executed on the command line. E.g., the PATH environment variable defines a list of directories that the shell should search to find an executable program that you have referred to on the command line; this allows you to execute that program without having to type the entire directory path to the executable file.
- An environment management package provides a command-line interface to manage the collection of environment variables associated with various software packages, and to automatically modify environment variables as needed (e.g., modules).
- The user environment can be customized via startup scripts 40
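A minimal sketch of inspecting and adjusting this environment from a login node is shown below (bash syntax; the $HOME/bin directory added to PATH is hypothetical):
login1$ echo $SHELL                    # which login shell interprets your commands
login1$ echo $PATH                     # colon-separated list of directories searched for executables
login1$ export PATH=$HOME/bin:$PATH    # prepend a personal bin directory for this session only
login1$ which mpicc                    # show which executable the shell would actually run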

41 Modules - how to use them?
Environment variables make many tasks easy, including:
- Creating scripts, porting code, running compilers and software
- Viewing and managing applications, tools and libraries
The preferred environment management package for XSEDE systems is Modules. Some of the module commands are:
- To see what modules have been loaded: module list
- To see what modules are available: module avail
- To swap one MPI library module for another: module swap mvapich2 impi
- To get help on a module (named foo): module help foo 41
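A short example session is sketched below; the gsl module name is illustrative, so run module avail first to see what is actually installed on your resource:
login1$ module list                  # modules currently loaded in your environment
login1$ module avail                 # everything available on this resource
login1$ module load gsl              # load a library before compiling/linking against it
login1$ module swap mvapich2 impi    # switch from one MPI stack to another
login1$ module help impi             # usage notes for a specific module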

42 Batch Mode and Interactive Mode
A sequence of commands to be executed on the compute nodes is listed in a file (often called a batch file, command file, or shell script) and submitted for execution as a single unit - this is the batch mode of job submission.
Various resource managers and job schedulers are used across different XSEDE sites. For example, Gordon uses the TORQUE resource manager and PBS job scheduler, whereas Stampede uses SLURM as both resource manager and job scheduler.
Interactive mode is the opposite of batch mode - the commands to be run are typed individually at the command prompt. Interactive access to compute nodes is allowed on some XSEDE resources like Stampede - see the user guide for more information.
Do not run your programs on the login nodes - only do installation and compiling of code there 42
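A minimal sketch of requesting an interactive session on a SLURM-based resource such as Stampede is shown below; the queue name, task count and time limit are illustrative, and TACC also provides its own idev tool for this, so check the resource's user guide for the recommended method:
# request 16 tasks on 1 node in the development queue for 30 minutes,
# then open a login shell on the allocated compute node
login1$ srun -p development -n 16 -N 1 -t 00:30:00 --pty /bin/bash -l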

43 Submitting a Batch Job (1)
Refer to the user guide of the XSEDE resource that you are using to find a sample batch script. A sample SLURM job script, named myjob.sh, that can be used on Stampede is shown below:
#!/bin/bash
#SBATCH -J mympi           # Job name
#SBATCH -o mympi.o%j       # Name of the output file
#SBATCH -n 32              # Requests 16 tasks/node, 32 cores total
#SBATCH -p normal          # Queue name - normal
#SBATCH -t 00:10:00        # Run time (hh:mm:ss)
#SBATCH -A A-ccsc          # Mention your account/allocation name
set -x                     # Echo commands
ibrun ./example1

44 Submitting, Monitoring and Cancelling a SLURM Job
staff$ sbatch myjob.sh
Welcome to the Stampede Supercomputer
--> Verifying valid submit host (staff)...ok
--> Verifying valid jobname...ok
--> Enforcing max jobs per user...ok
--> Verifying availability of your home dir (/home1/01698/rauta)...ok
--> Verifying availability of your work dir (/work/01698/rauta)...ok
--> Verifying availability of your scratch dir (/scratch/01698/rauta)...ok
--> Verifying valid ssh keys...ok
--> Verifying access to desired queue (development)...ok
--> Verifying job request is within current queue limits...ok
--> Checking available allocation (A-ccsc)...OK
Submitted batch job
staff$ squeue -u rauta
JOBID  PARTITION    NAME   USER   ST  TIME  NODES  NODELIST(REASON)
       development  mympi  rauta  R   0:04  1      c
staff$ scancel

45 Ticket System for Assistance 45



48 When you have exhausted your compute time 48

49 If your allocation is over
Submit whichever request is applicable:
- Supplement request: A supplement is a request for additional resources during an existing allocation's one-year time frame. Its purpose is to support changes in the original computational research plan that are required to achieve the scientific goals of the project. This may include altered or new projects, or support for projects proceeding more rapidly than anticipated or that require more resources than anticipated. Supplement awards are highly dependent upon availability of resources and are limited when allocation awards at the previous XRAC meeting have been reduced to eliminate oversubscription. Supplements are not a mechanism to acquire additional resources for awards that were granted for less than the amount originally requested (see Justification for information on appealing reduced awards).
- Renewal request: A submission should be a "renewal" if a PI (a) has had an allocation of the same request type, which was active within the past two years, (b) is continuing the same or similar line of research, and (c) is in the same field of science. A renewal request must address the progress of the prior allocation's research. 49

50 More Information on Allocation Policies
Projects that encounter problems in consuming their allocations, such as unexpected staffing changes, can request an extension to their allocation end date.
In such instances, PIs may request a single extension of an allocation, extending the expiration date by a maximum of six months.
The PI will be asked to specify the project number and the length of the extension (1-6 months), along with a brief reason for the extension.
For more information: 50

51 For Further Information
XSEDE website
XSEDE Allocation Policies
Submitting tickets through the XSEDE portal 51

52 Thanks for listening! Any questions or comments? 52
