HPC Course Session 3 Running Applications
- Harvey Allison
Checkpointing long jobs on Iceberg

1.1 Checkpointing long jobs to safeguard intermediate results

For long-running jobs we recommend using checkpointing. This allows you to run a job for a shorter period and save the entire state of the application at the end of the job. The job can then be resubmitted, and will continue from where it was saved. This protects you against losing days or weeks of calculations if a single, long-running job were to fail through a power cut or by timing out before the analysis completed.

1.2 Submitting your checkpointing jobs to ICEBERG

Job submission scripts must include two extra options that tell the scheduler to save intermediate results:

#$ -ckpt blcr
#$ -c sx

Additionally, the executable command inside your job script must be prefixed with cr_run, e.g. cr_run [normal command to execute], so that BLCR can snapshot the application. The job itself is then submitted as normal:

qsub(space)colony_molecol_job.sh

The scheduler will run your job as usual, but after the time requested (i.e. h_rt) it will create a snapshot of the running process before terminating it. The checkpoint file (a complete copy of the application's in-memory state) will be saved in the current working directory under the file name checkpoint.[jobid].[pid]. The amount of memory in use by the application determines the size of the checkpoint file, so if you are using a large amount of memory make sure you have sufficient free disk space in your working directory to save the file.

1.3 Restarting a job that has timed out using the intermediate checkpoint file

To restart a checkpointed job you need to create a new job submission script with the same options, but add the cr_restart command and the name of the intermediate checkpoint file, as below:

#!/bin/bash
#$ -l h_rt=168:00:00
[... any other normal options ...]
#$ -ckpt blcr
#$ -c sx
cr_restart checkpoint.[jobid].[pid]
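For reference, an initial (non-restart) submission script follows the same pattern, with cr_run prefixing the application itself. This is a minimal sketch only; the executable name is illustrative and not taken from this guide:

```shell
#!/bin/bash
# Request the maximum Iceberg run time; the job is snapshotted when it expires.
#$ -l h_rt=168:00:00
# Checkpointing options from section 1.2.
#$ -ckpt blcr
#$ -c sx
# cr_run wraps the application so BLCR can capture its in-memory state.
# "colony2.serial.out" is a hypothetical executable name.
cr_run ./colony2.serial.out
```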
Replace [jobid] and [pid] with the values from the generated checkpoint file. To restart the job, resubmit it normally using the qsub command, i.e.:

qsub(space)colony_molecol_job.sh

Each time the job ends, a new checkpoint file will be generated, which you can then use to resubmit the job (until such time that it completes).

2. How to run COLONY on the ICEBERG platform using checkpointing

2.1 Colony description and analysis on a desktop PC versus Iceberg

Colony is a computer program implementing a maximum-likelihood method to assign/infer parentage and sibship among individuals using their multi-locus genotypes. Colony can be used, among other things, to estimate full- and half-sib relationships, assign parentage, infer mating systems (polygamous/monogamous, selfing rate) and reproductive skew in both diploid and haplo-diploid species. It can also be used to simulate genotype data with a particular sibship and parentage structure. The method is formally described in Wang (2004, 2012, 2013) and Wang & Santure (2009).

Several factors determine how fast Colony runs and thus how much time it takes to analyse your data; these are discussed in detail in the software manual. In general, the analysis method dictates computational time: the full-likelihood (FL) method and the pair-likelihood-score (PLS) method incur very different computational loads, with full likelihood taking longest. Depending on the method and dataset, a Colony analysis can be quite computationally intensive; a single FL job may require most of the CPU on a desktop and may take several days or weeks. If you wish to run a quick preliminary analysis using PLS, a desktop is sufficient. For your analysis you may wish to run several replicate jobs using different numbers of markers and different genotyping error rates, which would require you to run several jobs.
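Because a new checkpoint.[jobid].[pid] file appears after each timed-out run, it can help to pick out the most recent one automatically. A small sketch — the file names below are made up for illustration:

```shell
# Work in a scratch directory with two fake checkpoint files standing in
# for real ones; on Iceberg you would run this in the job's working directory.
cd "$(mktemp -d)"
touch checkpoint.1001.2002 checkpoint.1005.2099

# Sort numerically on the [jobid] field and take the last (highest) entry.
latest=$(ls checkpoint.* | sort -t. -k2,2n | tail -n 1)
echo "$latest"   # the newest checkpoint file name
```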
The advantage of using High Performance Computing and the Iceberg computer cluster is that you can run multiple jobs simultaneously. This protocol describes how to submit a job to the serial version of Colony on the Iceberg computer cluster. Comparative tests of run times in a Linux environment indicated that the serial version was faster than the parallel one. The Colony software is downloadable from its website.
2.2 molecol ICEBERG access

molecol ICEBERG is a private computer cluster belonging to the NERC Biomolecular Analysis Facility (NBAF), with exclusive access for NBAF and Molecular Ecology group members. To run short and test jobs you can use the Iceberg computer cluster; the maximum run time on Iceberg is 168 hours (7 days). If you have long jobs to run, you may wish to use molecol ICEBERG, which has a maximum run time of 672 hours (28 days).

2.3 Input files for Colony analysis software: Example 2

Example 2 is an ant (Leptothorax acervorum) dataset. The original dataset comprises 377 ant workers (diploid) sampled from 10 known colonies, each headed by a single monogamous queen (diploid). Males are also monogamous; therefore the sampled workers are either full sibs from the same colony or non-sibs from different colonies. Candidate males and females are not available. Ant workers are genotyped at up to 6 microsatellite loci, with the number of observed alleles per locus varying between 3 and 22.

In the HPC exercise we will use a subset of the original Example 2 dataset comprising the first 25 ant workers (diploid), genotyped at up to 6 microsatellite loci. Using the Colony software, we will reconstruct the sibships amongst individuals to identify how many colonies the individuals belong to. The rates of genotyping error are assumed in the analysis to be 5% for both allelic dropout and other kinds of error at each locus. This modified data input file requires 5-7 minutes to complete analysis. In the HPC exercise we will simulate a job timing out after three minutes, prior to analysis completion; we will then restart the job from the intermediate checkpoint file and run it until completion.
2.4 Prepare your input files for Colony analysis on the ICEBERG platform

To run COLONY on the molecol ICEBERG platform you need the following files:

1. Your COLONY project data input file in .dat format (colony2.dat), as well as the other accessory files. Using the desktop version of COLONY, create a project, choosing the settings according to your species, data, etc. Find the created Colony2.dat and accessory files in the program folder, e.g. at C:\Program Files\ZSL\Colony\...(projectname).

2. The job submission file (colony_molecol_job.sh) to submit jobs to the molecol Iceberg cluster using checkpointing (see description below). The job submission file contains information on the job run length (e.g. 672 hours; note this is the maximum run length of the molecol cluster), the ICEBERG cluster for job submission (e.g. molecol), the checkpointing options (-ckpt blcr; -c sx), e-mail notification about job progress (e.g. bea: b = beginning, e = end, a = abort), the e-mail address for notification (e.g. c.mcinerney@sheffield.ac.uk), the directory path to the software executable file, and the directory path to the input file.
This job submission script is specific to submitting a job to the molecol Iceberg queue, denoted by the lines -P molecol and -q molecol.q. You must have permission to use this cluster (contact the administrator). Alternatively, test runs and short jobs should be submitted to the Iceberg short or long queues. A job submission script for the Iceberg queues: a) will not have the lines -P molecol and -q molecol.q; b) will have a run time of either 8 or 168 hours, depending on whether you submit to the short or the long queue, respectively (e.g. -l h_rt=8:00:00 or -l h_rt=168:00:00).

The job submission file (colony_molecol_job.sh) to submit jobs to Iceberg using checkpointing:

Prior to beginning a Colony analysis, both your colony project data input file (.dat) and your job submission script need to be modified. This must be done within ICEBERG using the nano text editor, as explained next in the HPC exercise.
3.1 HPC exercise to run Colony on ICEBERG using checkpointing

1. Log in to ICEBERG via the internet at the my APPS Portal, using the User ID and Password for your ICEBERG account (provided by CICS).

2. Select "Interactive job", which opens a command window (e.g. bo1cem@testnode02:~).

3. In the ICEBERG window, move to your data folder, identified by your account username (bo1cem in this example), by typing:
cd(space)/data/bo1cem/hpc_course/colony

4. Check the folder contents by typing the unix command to list files:

ls

Notice that data files are present but no job submission script. We will now copy the job submission script from the example job submission scripts available in the Genomics folder.

5. Copy the colony job submission script colony_molecol_job.sh into your folder by typing the cp command, followed by the path to the job submission script and then the path to your colony folder:

cp(space)/usr/local/extras/genomics/submit_scripts/colony_molecol_job.sh(space)/data/bo1cem/hpc_course/colony

6. Check you have successfully copied the file by typing ls (the list-files unix command):

ls

The job submission script colony_molecol_job.sh should now be listed as a file in your folder.

3.2 Modify your job submission script file

1. Open the job submission script in the nano text editor by typing:

nano(space)colony_molecol_job.sh

This opens the job submission script, which must now be edited.
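Steps 3-6 above can be run end-to-end as a single sequence. The sketch below simulates them with temporary stand-in directories, since /data/<username>/... and /usr/local/extras/... exist only on Iceberg:

```shell
# Stand-in directories replacing the Iceberg paths used in the exercise.
base=$(mktemp -d)
mkdir -p "$base/submit_scripts" "$base/hpc_course/colony"
echo '#!/bin/bash' > "$base/submit_scripts/colony_molecol_job.sh"

# Step 3: move to the colony data folder.
cd "$base/hpc_course/colony"
# Step 5: copy the example job submission script into the folder.
cp "$base/submit_scripts/colony_molecol_job.sh" .
# Step 6: confirm the copy succeeded.
ls
```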
2. Modify the time: in a real analysis the run-time line will have either 8 or 168 hours, depending on which Iceberg queue you wish to submit your job to (short or long, respectively; e.g. -l h_rt=8:00:00 or -l h_rt=168:00:00). For the molecol Iceberg cluster the maximum time is 672 hours (28 days). In this exercise we will simulate a job timing out prior to completion, so modify the run time to 3 minutes:

#$ -l h_rt=00:03:00

3. Modify the job queue: this script is specific to the molecol Iceberg cluster job queue. As you do not have permission to submit jobs to that queue, you must now delete the lines:

#$ -P molecol
#$ -q molecol.q

4. Modify the e-mail address for notifications about job progress: change the e-mail address to your own (e.g. c.mcinerney@sheffield.ac.uk) to receive notifications at the job beginning (b), end (e) or on failure/abort (a):

#$ -m bea
#$ -M c.mcinerney@sheffield.ac.uk

5. Modify the directory path to your colony data input file: specify the folder containing your colony files and your data input file:

/data/bo1cem/hpc_course/colony/colony2.dat

6. To save the changes and exit the nano text editor: press Control-O to write the file (i.e. save the changes), press Enter to save it under the same name, then press Control-X to exit.

3.3 Modify your Colony data input file

Newer versions of Colony save the data input file as Colony2.Dat. The source code in the Linux environment is an older version, however, and the file name required is colony2.dat. We need to modify the file name and some other parameters in the data input file.

1. Open Colony2.Dat in nano by typing:

nano(space)Colony2.Dat

2. Change the directory location in the first two lines of the file to match the directory location of the colony folder where you wish to carry out your analysis:

/data/bo1cem/hpc_course/colony/
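The nano edits in steps 2-3 can also be made non-interactively with sed — a sketch that builds a stand-in script containing only the lines the exercise edits, then applies the same two changes:

```shell
# Create a stand-in job script containing the lines the exercise edits.
script=$(mktemp)
printf '%s\n' '#$ -l h_rt=168:00:00' '#$ -P molecol' '#$ -q molecol.q' > "$script"

# Step 2: shorten the run time to 3 minutes.
sed -i 's/h_rt=168:00:00/h_rt=00:03:00/' "$script"
# Step 3: delete the molecol-specific queue lines.
sed -i '/-P molecol/d; /-q molecol.q/d' "$script"
cat "$script"
```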
3. To save the changes and exit the nano text editor: press Control-O to write the file, save it under the new name colony2.dat, then press Control-X to exit.

4. Remove the old Colony2.Dat file by typing:

rm(space)Colony2.Dat

5. Convert your colony2.dat file to a unix-compatible file by typing, in the ICEBERG window:

dos2unix(space)colony2.dat

or alternatively:

/usr/bin/dos2unix(space)colony2.dat

Now you're ready to start submitting your COLONY job(s) to ICEBERG.

3.4 Submitting your COLONY jobs using checkpointing on the ICEBERG platform

1. To submit your COLONY job(s) to ICEBERG, type:

qsub(space)colony_molecol_job.sh

(The executable inside the script is already prefixed with cr_run, so the checkpoint snapshot can be taken automatically.)

2. Check the status of your jobs submitted to Iceberg by typing:
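dos2unix simply strips the carriage-return characters that Windows adds to line endings. If it were unavailable, tr can do the same job — a sketch using a throwaway file rather than your real input:

```shell
# Make a file with Windows (CRLF) line endings, as saved by desktop software.
f=$(mktemp)
printf 'line1\r\nline2\r\n' > "$f"

# Strip the carriage returns, leaving plain unix (LF) endings.
tr -d '\r' < "$f" > "$f.unix" && mv "$f.unix" "$f"
wc -l < "$f"   # prints 2: both lines survive, now unix-formatted
```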
qstat

3. Alternatively, to check the status of jobs submitted to the molecol ICEBERG cluster, type:

qstat(space)-q(space)molecol.q

Running jobs are indicated by an r, while queued jobs are indicated by qw.

3.5 Restarting a Colony job that has timed out using the intermediate checkpoint file

In this exercise we are simulating a job that times out. After the job has timed out you should have received an e-mail notification, and a checkpoint file (a complete copy of the application's in-memory state) was saved in the current working directory under the file name checkpoint.[jobid].[pid]. This intermediate checkpoint file may be used to restart the Colony job.

1. Identify the intermediate checkpoint file for your Colony job by listing the folder files:

ls

2. Make a note of the checkpoint file name.

3. To restart a checkpointed job you need to create a new job submission script. Open the job submission script colony_molecol_job.sh in the nano text editor by typing:

nano(space)colony_molecol_job.sh

4. Modify the job submission script to change the run time to 10 minutes:

#$ -l h_rt=00:10:00

5. Modify the job submission script to include a line with the cr_restart command and the name of the intermediate checkpoint file, as below:

#!/bin/bash
#$ -l h_rt=00:10:00
[... any other normal options ...]
#$ -ckpt blcr
#$ -c sx
cr_restart checkpoint.[jobid].[pid]

6. To save the changes and exit the nano text editor: press Control-O to write the file, save it under the same name, then press Control-X to exit.

7. Restart your job using the normal job submission command:

qsub(space)colony_molecol_job.sh

Your job should complete and you should receive an e-mail notification!
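When many replicate jobs are queued, the r/qw state column can be counted rather than read by eye. A sketch against fabricated qstat-style output (real qstat listings have more columns and vary between sites):

```shell
# Fabricated two-column excerpt of a qstat listing: job name, then state.
qstat_out='job1 r
job2 qw
job3 qw'

# Count lines ending in the running (r) and queued (qw) state codes.
running=$(printf '%s\n' "$qstat_out" | grep -c ' r$')
queued=$(printf '%s\n' "$qstat_out" | grep -c ' qw$')
echo "$running running, $queued queued"   # prints: 1 running, 2 queued
```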
4. Examining and copying your results files using WinSCP

WinSCP is a program for transferring files between your desktop and ICEBERG.

1. Download and install WinSCP. If possible, choose the option to download the stand-alone version (4.37) from a local copy.

2. Open WinSCP. To log in, insert the host name for ICEBERG and the user name and password for your ICEBERG account, as shown in the figure below, and select Login. On the left side of the window you can browse for your input files, while on the right side you can create folders and import your input files to ICEBERG from the left side.

3. Within WinSCP, use the "Go to" option to navigate to your file directories.
After the Colony job has finished you will have the same 17 output files as those created by COLONY in Windows. Some of these are not visible in the Linux environment but can be viewed using WinSCP. To ensure that your job has completed, check that you have the Bestconfig file. Result files can be transferred and saved to a desktop using WinSCP.

References:

(1) Wang J (2004) Sibship reconstruction from genetic data with typing errors. Genetics 166.
(2) Wang J & Santure AW (2009) Parentage and sibship inference from multi-locus genotype data under polygamy. Genetics 181.
(3) Jones OR & Wang J (2010) COLONY: a program for parentage and sibship inference from multilocus genotype data. Molecular Ecology Resources 10.
(4) Wang J (2012) Computationally efficient sibship and parentage assignment from multilocus marker data. Genetics 191.
(5) Wang J (2013) A simulation module in the computer program COLONY for sibship and parentage analysis. Molecular Ecology Resources 13.
Capacitated Clustering Problem in Computational Biology: Combinatorial and Statistical Approach for Sibling Reconstruction Chun-An Chou Department of Industrial and Systems Engineering, Rutgers University,
More informationHPC Introductory Course - Exercises
HPC Introductory Course - Exercises The exercises in the following sections will guide you understand and become more familiar with how to use the Balena HPC service. Lines which start with $ are commands
More informationMinnesota Supercomputing Institute Regents of the University of Minnesota. All rights reserved.
Minnesota Supercomputing Institute Introduction to Job Submission and Scheduling Andrew Gustafson Interacting with MSI Systems Connecting to MSI SSH is the most reliable connection method Linux and Mac
More informationThis guide shows you how to set up Data Director to replicate Data from Head Office to Store.
Install Data Director 3 This guide shows you how to set up Data Director to replicate Data from Head Office to Store. Installation Run the setup file LS.DataDirector.3.02.xx.Setup.exe and set the location
More informationX Grid Engine. Where X stands for Oracle Univa Open Son of more to come...?!?
X Grid Engine Where X stands for Oracle Univa Open Son of more to come...?!? Carsten Preuss on behalf of Scientific Computing High Performance Computing Scheduler candidates LSF too expensive PBS / Torque
More informationNew User Seminar: Part 2 (best practices)
New User Seminar: Part 2 (best practices) General Interest Seminar January 2015 Hugh Merz merz@sharcnet.ca Session Outline Submitting Jobs Minimizing queue waits Investigating jobs Checkpointing Efficiency
More informationUsing Sapelo2 Cluster at the GACRC
Using Sapelo2 Cluster at the GACRC New User Training Workshop Georgia Advanced Computing Resource Center (GACRC) EITS/University of Georgia Zhuofei Hou zhuofei@uga.edu 1 Outline GACRC Sapelo2 Cluster Diagram
More informationbwunicluster Tutorial Access, Data Transfer, Compiling, Modulefiles, Batch Jobs
bwunicluster Tutorial Access, Data Transfer, Compiling, Modulefiles, Batch Jobs Frauke Bösert, SCC, KIT 1 Material: Slides & Scripts https://indico.scc.kit.edu/indico/event/263/ @bwunicluster/forhlr I/ForHLR
More informationReal-Time Monitoring Configuration
CHAPTER 7 This chapter contains the following information for configuring the Cisco Unified Presence Server Real-Time Monitoring Tool (RTMT). Some options that are available in the current version of the
More informationRemote Support Web Rep Console
Remote Support Web Rep Console 2017 Bomgar Corporation. All rights reserved worldwide. BOMGAR and the BOMGAR logo are trademarks of Bomgar Corporation; other trademarks shown are the property of their
More informationJHU Economics August 24, Galaxy How To SSH and RDP
Galaxy How To SSH and RDP The host name for the Econ Linux server is galaxy.econ.jhu.edu. It is running Ubuntu 14.04 LTS. Please NOTE: you need to be connected to the Hopkins VPN before attempting a connection
More informationBitnami Apache Solr for Huawei Enterprise Cloud
Bitnami Apache Solr for Huawei Enterprise Cloud Description Apache Solr is an open source enterprise search platform from the Apache Lucene project. It includes powerful full-text search, highlighting,
More informationLab 1 Introduction to UNIX and C
Name: Lab 1 Introduction to UNIX and C This first lab is meant to be an introduction to computer environments we will be using this term. You must have a Pitt username to complete this lab. NOTE: Text
More informationXton Access Manager GETTING STARTED GUIDE
Xton Access Manager GETTING STARTED GUIDE XTON TECHNOLOGIES, LLC PHILADELPHIA Copyright 2017. Xton Technologies LLC. Contents Introduction... 2 Technical Support... 2 What is Xton Access Manager?... 3
More informationInstalling, Migrating, and Uninstalling HCM Dashboard
CHAPTER 2 Installing, Migrating, and Uninstalling HCM Dashboard This chapter describes how to install, migrate data from HCM 1.0, and uninstall HCM Dashboard. It includes: HCM Dashboard Server Requirements,
More informationQuick Start Guide. by Burak Himmetoglu. Supercomputing Consultant. Enterprise Technology Services & Center for Scientific Computing
Quick Start Guide by Burak Himmetoglu Supercomputing Consultant Enterprise Technology Services & Center for Scientific Computing E-mail: bhimmetoglu@ucsb.edu Contents User access, logging in Linux/Unix
More informationRM Assessor guide to completing the standardisation process (Marker guide)
RM Assessor guide to completing the standardisation process (Marker guide) As an examiner, this takes you through the standardisation process which begins with accessing the mark scheme, completing the
More informationPBS Pro Documentation
Introduction Most jobs will require greater resources than are available on individual nodes. All jobs must be scheduled via the batch job system. The batch job system in use is PBS Pro. Jobs are submitted
More informationIntroduction to Linux and Cluster Computing Environments for Bioinformatics
Introduction to Linux and Cluster Computing Environments for Bioinformatics Doug Crabill Senior Academic IT Specialist Department of Statistics Purdue University dgc@purdue.edu What you will learn Linux
More informationSlurm and Abel job scripts. Katerina Michalickova The Research Computing Services Group SUF/USIT October 23, 2012
Slurm and Abel job scripts Katerina Michalickova The Research Computing Services Group SUF/USIT October 23, 2012 Abel in numbers Nodes - 600+ Cores - 10000+ (1 node->2 processors->16 cores) Total memory
More informationBatch Systems. Running your jobs on an HPC machine
Batch Systems Running your jobs on an HPC machine Reusing this material This work is licensed under a Creative Commons Attribution- NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_us
More informationIntroduction to GACRC Teaching Cluster
Introduction to GACRC Teaching Cluster Georgia Advanced Computing Resource Center (GACRC) EITS/University of Georgia Zhuofei Hou zhuofei@uga.edu 1 Outline GACRC Overview Computing Resources Three Folders
More informationWorking with Basic Linux. Daniel Balagué
Working with Basic Linux Daniel Balagué How Linux Works? Everything in Linux is either a file or a process. A process is an executing program identified with a PID number. It runs in short or long duration
More informationHow to create a System Logon Account in Backup Exec for Windows Servers
How to create a System Logon Account in Backup Exec for Windows Servers Problem How to create a System Logon Account in Backup Exec for Windows Servers Solution The Backup Exec System Logon Account (SLA)
More informationSUBMITTING JOBS TO ARTEMIS FROM MATLAB
INFORMATION AND COMMUNICATION TECHNOLOGY SUBMITTING JOBS TO ARTEMIS FROM MATLAB STEPHEN KOLMANN, INFORMATION AND COMMUNICATION TECHNOLOGY AND SYDNEY INFORMATICS HUB 8 August 2017 Table of Contents GETTING
More informationFor Dr Landau s PHYS8602 course
For Dr Landau s PHYS8602 course Shan-Ho Tsai (shtsai@uga.edu) Georgia Advanced Computing Resource Center - GACRC January 7, 2019 You will be given a student account on the GACRC s Teaching cluster. Your
More informationKB How to upload large files to a JTAC Case
KB23337 - How to upload large files to a JTAC Case SUMMARY: This article explains how to attach/upload files larger than 10GB to a JTAC case. It also and describes what files can be attached/uploaded to
More informationbwunicluster Tutorial Access, Data Transfer, Compiling, Modulefiles, Batch Jobs
bwunicluster Tutorial Access, Data Transfer, Compiling, Modulefiles, Batch Jobs Frauke Bösert, SCC, KIT 1 Material: Slides & Scripts https://indico.scc.kit.edu/indico/event/263/ @bwunicluster/forhlr I/ForHLR
More informationThe DTU HPC system. and how to use TopOpt in PETSc on a HPC system, visualize and 3D print results.
The DTU HPC system and how to use TopOpt in PETSc on a HPC system, visualize and 3D print results. Niels Aage Department of Mechanical Engineering Technical University of Denmark Email: naage@mek.dtu.dk
More informationHow to run computations using GaussView on PC and Gaussian03 on Hamilton with ITS machines
How to run computations using GaussView on PC and Gaussian03 on Hamilton with ITS machines Set up of the FTP(file transfer protocol) Program: Use winscp Start > Durham Network > winscp404 Log into hamilton.
More informationGuillimin HPC Users Meeting April 13, 2017
Guillimin HPC Users Meeting April 13, 2017 guillimin@calculquebec.ca McGill University / Calcul Québec / Compute Canada Montréal, QC Canada Please be kind to your fellow user meeting attendees Limit to
More informationAvida Checkpoint/Restart Implementation
Avida Checkpoint/Restart Implementation Nilab Mohammad Mousa: McNair Scholar Dirk Colbry, Ph.D.: Mentor Computer Science Abstract As high performance computing centers (HPCC) continue to grow in popularity,
More informationApplication Guide. Connection Broker. Advanced Connection and Capacity Management For Hybrid Clouds
Application Guide Connection Broker Advanced Connection and Capacity Management For Hybrid Clouds Version 9.0 June 2018 Contacting Leostream Leostream Corporation 271 Waverley Oaks Rd Suite 206 Waltham,
More informationJetVote User Guide. Table of Contents
User Guide English Table of Contents 1 General Information... 3 Minimum System Requirements... 3 2 Getting Started... 4 Software Installation... 4 Installing the Server... 4 Installing Quiz Packets (Optional)...
More informationParallel Programming Pre-Assignment. Setting up the Software Environment
Parallel Programming Pre-Assignment Setting up the Software Environment Authors: B. Wilkinson and C. Ferner. Modification date: Aug 21, 2014 (Minor correction Aug 27, 2014.) Software The purpose of this
More informationConsensus Methods for Reconstruction of Sibling Relationships from Genetic Data
Consensus Methods for Reconstruction of Sibling Relationships from Genetic Data Saad I. Sheikh and Tanya Y. Berger-Wolf and Ashfaq A. Khokhar and Bhaskar DasGupta {ssheikh,tanyabw,ashfaq,dasgupta}@cs.uic.edu
More information