Logging in to the CRAY


Logging in to the CRAY

Cray hostname: cray2.colostate.edu
Cray IP address: 129.82.103.183

On a Mac:
1. Open Terminal
2. Type ssh username@cray2.colostate.edu, where username is your account name
3. Enter your password
4. To change your password, type passwd and press Enter

On a PC:
1. Open PuTTY
2. ssh to cray2.colostate.edu with your username
3. Enter your password

Basic Unix commands for the Cray - Moving around in Unix

ls - lists the contents of the directory of interest
cd path - changes directory to path
cd .. - takes you one level up in the directory structure
cd (or cd ~) - takes you to your home directory
pwd - gives the filepath of the current directory
name* - wildcard; matches all files in the current directory that start with name...
*name - wildcard; matches all files in the current directory that end with name...
*name* - wildcard; matches all files in the current directory that contain name...
logout - log out of the Cray

Basic Unix commands for the CRAY - Copying, Moving, and Removing Files in Unix

cp file1 file2 - copies the file named file1 to the file named file2
cp file1 /path/file2 - copies the file named file1 to the file named file2 in the directory /path
mv file1 file2 - renames the file named file1 to file2 (note: this removes file1)
mv file1 /path/file2 - moves the file named file1 to the file named file2 in the directory /path
rm filename - permanently deletes the file filename
mkdir directoryname - makes the directory named directoryname
rmdir directoryname - removes the empty directory named directoryname
rm -r directoryname - removes the directory named directoryname and all the files in it

Text Editing - Emacs

There are two primary Unix editors, vi and Emacs. I know Emacs and will focus on it. There are also many Emacs ports that include links to R, LaTeX, Sweave, knitr, and other tools to improve workflow.

In Unix:
emacs - opens the text editor emacs
emacs filename - opens the file filename in emacs

In Emacs:
There are two primary command keys in Emacs, Control (C-) and Meta (M-). C- is the CTRL key on a Mac; M- is the ESC (or ALT) key on a Mac.

Emacs Commands

C-x C-c (CTRL-x CTRL-c) - closes the emacs document, asking whether to save if there are unsaved changes
C-x C-s - saves the current emacs document
C-v - page down one screen
M-v (ESC-v or ALT-v) - page up one screen
C-a - goes to the beginning of the current line
C-e - goes to the end of the current line
C-k - deletes the current line
C-y - restores the deleted line

Editing your .bash_profile - to set up the Cray at each login and link the R program to your home directory

When logged into your home directory on the Cray, type emacs .bash_profile

In Emacs, enter the text:

    export PATH=/apps/R-2.14.2/bin:$PATH
    export LD_LIBRARY_PATH=/opt/gcc/4.1.2/cnos/lib64:/opt/gcc/4.4.4/snos/lib64/:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH=/apps/openmpi-1.6.2/lib/:$LD_LIBRARY_PATH
    export TMP=$HOME/lustrefs/tmp/
    export MPI_LIBPATH=/apps/openmpi-1.6.2/lib
    export MPI_INCLUDE_PATH=/apps/openmpi-1.6.2/include/
    export RMPI_TYPE="OPENMPI"

then enter C-x C-s (CTRL-x CTRL-s) to save and C-x C-c to exit emacs. If you copy and paste this, check that the quotation marks pasted correctly. Type logout to exit the Cray system and ssh back into the Cray to start a new session with the new .bash_profile. Test that this has been successful by typing R in your home directory.

Transferring files to and from the CRAY

Windows: use WinSCP; this is a GUI interface and self-explanatory.

Mac:
sftp username@cray2.colostate.edu - log in to the Cray using sftp
put /path1/filename1 /path2/filename2 - uploads filename1 from /path1/ on the local system to /path2/filename2 on the Cray
get /path1/filename1 /path2/filename2 - downloads filename1 from /path1/ on the Cray to /path2/filename2 on the local system
quit - log out of sftp

Setting up the Cray for R version 2.14.2

Copy RMPISNOW into your lustrefs directory by typing

    cp /apps/R-2.14.2/lib64/R/library/snow/RMPISNOW /home/username/lustrefs/RMPISNOW

where username is your username; note the single spaces between cp and /apps/... and between ...RMPISNOW and /home/.... If you are unsure about your username, type whoami.

Create an R temporary directory tmp under lustrefs by typing mkdir tmp while in the lustrefs directory. Then type

    export TMP=$HOME/lustrefs/tmp/

Useful Cray commands

qsub filename - submits the job filename to the queue using batch commands
qstat - shows the status of jobs in queues
qstat -Q - shows all available queues (brief)
qstat -Qf - shows all available queues (full)
qstat -u username - shows the status of jobs that belong to you
xtnodestat - shows the status of compute nodes
qdel jobid - deletes the job with job ID jobid from the batch queues, as long as you are the owner of the job

Batch Script - to submit parallel jobs

To create a batch script, type emacs and then enter:

    #!/bin/bash
    #PBS -N job.name
    #PBS -j oe
    #PBS -l mppwidth=24
    #PBS -l walltime=01:00:00
    #PBS -q small
    cd $PBS_O_WORKDIR
    date
    aprun -n 24 RMPISNOW < script.r > out.txt
    date

Then type C-x C-s (CTRL-x CTRL-s), save the file as run.txt, and type C-x C-c to close emacs.

Batch Script description

#!/bin/bash - specifies the Unix shell environment for the batch job
#PBS -N jobname - renames the job and its output file to jobname
#PBS -j oe - specifies that output and errors will be combined into one file
#PBS -l mppwidth - specifies the number of cores to allocate to your job; the number of cores should be a multiple of 24 and not exceed 1,344
#PBS -l walltime - specifies the maximum amount of time (hours:minutes:seconds) that the job may run
#PBS -q small - specifies which queue the job should run in; there are four options: small, medium, large, and ccm
$PBS_O_WORKDIR - contains the absolute path to the directory from which you submitted your job

Parallel Code Example - using the snowfall, parallel, and rlecuyer libraries

Let's try some parallel code on our desktop/laptop machines and see how this works. See the code parallel.example.r. Any for loop where each iteration of the loop is independent of the other iterations (so no MCMC) can be written as an apply command (apply, sapply, or lapply), and this can be parallelized by using sfApply, sfSapply, or sfLapply from the snowfall library.
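For example, here is a minimal sketch of that conversion (the function slowSquare and the input range are made-up placeholders for illustration; parallel.example.r itself may differ):

    library(snowfall)

    # A stand-in for an expensive, iteration-independent computation
    slowSquare <- function(x) { Sys.sleep(0.1); x^2 }

    # Serial: a for loop and its sapply equivalent
    res1 <- numeric(100)
    for (i in 1:100) res1[i] <- slowSquare(i)
    res2 <- sapply(1:100, slowSquare)

    # Parallel: the same call through snowfall
    sfInit(parallel = TRUE, cpus = 4)
    sfExport("slowSquare")
    res3 <- sfSapply(1:100, slowSquare)
    sfStop()

All three result vectors are identical; only the sfSapply version spreads the work over the 4 cpus.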

Parallel Code Example - using the snowfall, parallel, and rlecuyer libraries

- Detect the number of cores on your system using cps <- detectCores()
- Set up the snowfall cluster using sfInit(parallel = TRUE, cpus = cps)
- Make sure to set up the cluster random number generator using sfClusterSetupRNG()
- Make sure to export the functions, variables, and libraries needed for the parallel calculation by using sfExport("function"), sfExport("variable"), sfExportAll(), and sfLibrary(library)
- End the snowfall cluster using sfStop()

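Putting those steps together, a minimal end-to-end sketch might look like the following (simulateOnce and n are hypothetical placeholders for your own function and data):

    library(snowfall)
    library(parallel)   # provides detectCores()
    library(rlecuyer)   # RNG streams used by sfClusterSetupRNG()

    cps <- detectCores()                 # number of cores on this machine
    sfInit(parallel = TRUE, cpus = cps)  # start the snowfall cluster
    sfClusterSetupRNG()                  # independent random number stream per worker

    simulateOnce <- function(i, n) mean(rnorm(n))  # hypothetical worker function
    n <- 1000
    sfExport("simulateOnce", "n")        # or sfExportAll() to export the whole workspace
    results <- sfSapply(1:200, simulateOnce, n = n)
    sfStop()                             # shut the cluster down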

Let's put it all together - running code on the Cray

- sftp the files script.r and run.txt into your lustrefs directory on the Cray
- Submit the job to the Cray compute nodes by typing qsub run.txt
- Check the status of the job on the Cray by typing qstat
- Review the files in your lustrefs directory by typing ls
- Once the job is finished, check the results by typing emacs out.txt to see the R session output, and emacs my.first.run.oxxxxxx to see the Cray output and errors, where xxxxxx is the job ID number

How to parallelize your own code for the Cray

- Set up the cluster: cl <- makeCluster()
- Export your workspace to the cluster using clusterExport(cl, ls())
- Set up the cluster random number generator using clusterSetupRNG(cl)
- Parallelize your code using parApply, parSapply, and parLapply
- Stop the cluster with stopCluster(cl)
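As a sketch, a script.r along these lines could be submitted through the batch script above (aprun -n 24 RMPISNOW < script.r > out.txt); the worker function simulateOnce is a made-up placeholder:

    # script.r - executed on the compute nodes via RMPISNOW
    library(snow)
    library(rlecuyer)

    cl <- makeCluster()        # under RMPISNOW this attaches to the MPI workers
    clusterSetupRNG(cl)        # independent random number stream per worker

    simulateOnce <- function(i, n) mean(rnorm(n))  # hypothetical worker function
    n <- 1000
    clusterExport(cl, ls())    # ship everything in the workspace to the workers

    results <- parSapply(cl, 1:200, simulateOnce, n = n)
    print(summary(results))
    stopCluster(cl)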

Some notes about the Cray

- There are 24 cores on each node of the Cray; make sure you don't waste these resources by leaving cores unused
- It appears that you can have 23 slaves with one master, and the master sits idle (at least in my experience)
- Small jobs have higher priority
- Jobs that need fewer cores are easier to schedule (and get run more quickly)
- Make sure you request enough runtime to complete the job; otherwise it will be cut off
- Small queue: less than 1 hour runtime, highest priority, up to 20 jobs at once
- Medium queue: less than 24 hours runtime, medium priority, up to two jobs at once
- Large queue: less than 7 days runtime, lowest priority, only one job at a time