Track 3: Molecular Visualization and Virtual Screening
NBCR Summer Institute Session: NBCR clusters introduction
August 11, 2006
Nadya Williams
8/11/2006 UC Regents

Where to start
National Biochemical Computational Research (NBCR)
How to get an account:
- Familiarize yourself with the account policy
- Subscribe to the NBCR-support mailing list
- Subscribe to the NBCR-announce mailing list
Where to get help
For support, go to the User Services web page:
- Access to training sessions on the Wiki
- Tools/downloads
- Documentation
- Cluster monitoring access
- User guides on the Wiki

Remote login
For login use ssh (not rsh or telnet):
  % ssh accname@clustername
or
  % ssh clustername -l accname
At first login, set up your ssh keys. If you are not asked and there is no ~/.ssh/, run:
  % ssh-keygen -t rsa
Add your local ssh key (id_rsa.pub, for the RSA key generated above) to the cluster:
  % scp -p ~/.ssh/id_rsa.pub user@kryptonite.nbcr.net:local.pub
  % ssh user@kryptonite.nbcr.net
  % cat local.pub >> ~/.ssh/authorized_keys
  % rm local.pub
Available clusters:
- kryptonite.nbcr.net
- athena.nbcr.net
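The key-installation step above boils down to appending the copied public key to authorized_keys. A minimal local sketch (throwaway files and a fake key string stand in for the real ~/.ssh on the cluster) shows why the append redirection `>>` matters: overwriting with `>` would wipe any keys already installed.

```shell
#!/bin/sh
# Local sketch of the append step: a fake key and a throwaway directory
# stand in for the real ~/.ssh on the cluster.
DEMO=$(mktemp -d)
echo "ssh-rsa AAAA...fake-key-1 user@laptop" > "$DEMO/local.pub"
touch "$DEMO/authorized_keys"
cat "$DEMO/local.pub" >> "$DEMO/authorized_keys"   # append, never overwrite
rm "$DEMO/local.pub"
wc -l < "$DEMO/authorized_keys"                    # one key installed
```

Repeating the append with a second key leaves both keys in place, which is exactly what you want when several machines log in to the same cluster account.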
Key management with ssh-agent
Log in to a cluster:
  % ssh user@kryptonite.nbcr.net
Start an agent:
  % ssh-agent $SHELL
Add identities (private keys) to your agent:
  % ssh-add
or
  % ssh-add ~/.ssh/mykeys/my_special_key
Verify that identities are added:
  % ssh-add -l
  1024 e9:a6:59:89:f0:f1:87:8e:88:54 /Users/nadya/.ssh/id_dsa (DSA)   - OK
  Could not open connection to your authentication agent              - ERROR!
You can now execute any command on any node:
  % cluster-fork ps -u$USER
  % ssh c0-0

Introduction to Sun Grid Engine
What is a grid?
- A collection of computing resources that perform tasks
- A grid node can be a compute server, data collector, visualization terminal, ...
SGE is resource management software:
- Accepts jobs submitted by users
- Schedules them for execution on appropriate systems based on resource management policies
- You can submit hundreds of jobs without worrying about where they will run
What is SGE?
Two versions of SGE:
- Sun Grid Engine (on Rocks clusters)
  - Distributed under an open source license
  - From sunsource.net
- Sun N1 Grid Engine
  - The N1 stack is available at no cost
  - Paid support from Sun

Job Management
It is not recommended to run jobs directly! Use the installed load scheduler:
- Sun Grid Engine: a load management tool for a HETEROGENEOUS distributed computing environment
- PBS/Torque: more sophisticated scheduling
Why?
- You can submit multiple jobs and have them queued (and go home!)
- Fair share: allows other people to use the cluster too (for Myrinet MPI jobs)
Host Roles
Master Host
- Controls overall cluster activity
- The frontend, or head node
- Runs the master daemon sge_qmaster, controlling queues, jobs, status, and user access permissions
- Also runs the scheduler: sge_schedd
Execution Host
- Executes SGE jobs
- Runs the execution daemon sge_execd, which runs jobs on its host and forwards system status/info to sge_qmaster

Host Roles (continued)
Submit Host
- Allowed to submit and control batch jobs only
- No daemon is required to run on this type of host
Administration Host
- Usually the SGE administrator console
Job Management
Your administrator must set up a global default queue (all.q). More fine-tuned queues can be set up depending on the cluster/user community: short.q, long.q, weekend.q, fluent.0.q, fluent.1.q
As a user, you only need to know how to:
- Submit your jobs (serial or MPI)
- Monitor your jobs
- Get the results

Some SGE Commands
  qconf     SGE cluster, queue, etc. configuration
  qmod      Modify queue states: enabled or suspended
  qacct     Extract accounting information from the cluster
  qalter    Change the attributes of submitted but pending jobs
  qdel      Job deletion
  qhold     Hold back submitted jobs from execution
  qhost     Show status information about SGE hosts
  qmon      X-windows Motif interface
  qrsh      SGE queue-based rsh facility
  qselect   List queues matching selection criteria
  qsh       Open an interactive shell on a low-loaded host
  qstat     Status listing of jobs and queues
  qsub      Command-line interface to submit jobs to SGE
  qtcsh     SGE queue-based TCSH facility
qtcsh and qsh are extended command shells that can transparently distribute execution of programs/applications to the least loaded hosts via SGE.
$ qhost
Sample output, one row per node (numeric columns abbreviated):
  HOSTNAME     ARCH        NPROC  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
  global       -           -      -     -       -       -       -
  compute-0-1  lx26-amd64  ...    ...   ...G    585.3M  ...M    0.0
  compute-0-2  lx26-amd64  ...    ...   ...G    560.9M  ...M    0.0
  compute-0-3  lx26-amd64  ...    ...   ...G    534.3M  ...M    0.0
  compute-0-4  lx26-amd64  ...    ...   ...G    555.0M  ...M    0.0
  compute-0-5  lx26-amd64  ...    ...   ...G    559.8M  ...M    0.0
  compute-0-6  lx26-amd64  ...    ...   ...G    561.5M  ...M    0.0
  compute-0-7  lx26-amd64  ...    ...   ...G    551.6M  ...M    0.0
  compute-0-8  lx26-amd64  ...    ...   ...G    557.9M  ...M    0.0
  compute-0-9  lx26-amd64  ...    ...   ...G    541.8M  ...M    0.0

QMON
GUI interface for SGE administration/submission.
Requires you to run either Linux/Unix on your desktop or have an X-emulator (Hummingbird) on your Windows PC.
Submitting Jobs
Command line (qsub) and graphical (qmon).
Job types: standard, batch, array, interactive, parallel.
SGE schedules jobs based on:
- Job priorities
  - User -> FIFO
  - Admin -> can affect ordering with priority settings
- Equal-share scheduling
  - Scheduler -> user_sort setting
  - Prevents a single user from hogging the queues
  - Recommended!!!

$ qsub
Output/error files go to your home directory by default.
Look in /opt/gridengine/examples/jobs for examples.
A simple job script, simple.sh:
  #!/bin/sh
  date
  sleep 10
  hostname
Submit it and check the results:
  % qsub simple.sh
  your job 224 ("simple.sh") has been submitted
  % cd ~
  % more simple.sh.e224
  % more simple.sh.o224
  Wed Aug 9 14:56:16 PDT 2006
  Wed Aug 9 14:56:36 PDT 2006
Use qstat to check job status.
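Because the job script is an ordinary shell script, it can be tried locally before submitting it with qsub. A runnable sketch of simple.sh (sleep shortened from 10 seconds to 1 to keep the demo quick):

```shell
#!/bin/sh
# simple.sh from the slide: prints a timestamp, waits, prints the host name.
# Under SGE, stdout lands in simple.sh.o<jobid> and stderr in simple.sh.e<jobid>.
date       # timestamp when the job started
sleep 1    # stand-in for real work (the slide uses 10 seconds)
hostname   # which execution host ran the job
```

Running it locally prints two lines: the date and your machine's hostname, the same output you would later find in the .o file.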
Submit an autodock job
  % qsub adsub.sh
  your job 225 ("adsub.sh") has been submitted
The adsub.sh script:
  #!/bin/sh
  # request Bourne shell as shell for job
  #$ -S /bin/sh
  # work from current dir and put stderr/stdout here
  #$ -cwd
  ulimit -s unlimited
  autodock3 -p test.dpf -l test.dlg
  status=$?
  if [ "$status" = "0" ] ; then
      echo "successful completion $status"
  else
      echo "error running autodock3"
  fi

GUI: Submit / Monitor / Control
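The exit-status check in adsub.sh works for any command, not just autodock3. A generic sketch, with a hypothetical placeholder function run_step standing in for the real computation (autodock3 is not assumed to be installed):

```shell
#!/bin/sh
# Generic form of the adsub.sh status check. run_step is a hypothetical
# placeholder for the real computation (autodock3 in the slide).
run_step() {
    true    # replace with the real command
}

run_step
status=$?
if [ "$status" = "0" ] ; then
    echo "successful completion $status"     # prints: successful completion 0
else
    echo "error running step (exit $status)"
fi
```

Capturing $? immediately after the command matters: any command run in between (even echo) overwrites it.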
$ qconf
Show all the queues:
  % qconf -sql
Show a given queue:
  % qconf -sq all.q
Show command usage:
  % qconf -help
Show complex attributes:
  % qconf -sc

Advanced Submit
Advanced or batch jobs == shell scripts. They can be as complicated as you want, or even an application!
  #!/bin/bash
  #
  # compiles my program every time, creates the executable, and runs it!
  #
  # change to my working directory
  cd TEST
  # compile the job
  f77 flow.f -o flow -lm -latlas
  # run the job
  ./flow myinput.dat
Requestable Attributes
Users submit jobs by specifying a job requirement profile of the hosts or of the queues. SGE matches the job requirements and runs the job on suitable hosts.
Attributes:
- Disk space
- CPU
- Memory
- Software (e.g. a Fluent license)
- OS

Attributes (continued)
- Relop: relational operation used to compute whether a queue meets a user request
- Requestable: whether the attribute can be specified by the user (e.g. in qsub)
- Consumable: manages limited resources, e.g. licenses or CPUs

  #name     shortcut  type    value  relop  requestable  consumable  default
  arch      a         STRING  none   ==     YES          NO          none
  num_proc  p         INT     1      ==     YES          NO          0
  load_avg  la        DOUBLE         >=     NO           NO          0
  slots     s         INT     0      <=     YES          YES         1

  % qsub -l arch=glinux,load_avg=0.01 myjob.sh
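Resource requests like the one above are often composed inside scripts. A harmless way to experiment is a dry run that echoes the qsub command instead of executing it, so no SGE installation is needed (the variable names here are illustrative):

```shell
#!/bin/sh
# Dry run: build a qsub resource request from shell variables and print the
# command rather than submitting it (qsub itself is not assumed installed).
ARCH=glinux
MAXLOAD=0.01
echo qsub -l "arch=${ARCH},load_avg=${MAXLOAD}" myjob.sh
# prints: qsub -l arch=glinux,load_avg=0.01 myjob.sh
```

Once the printed line looks right, drop the echo to submit for real. Note that multiple attributes after a single -l are comma separated.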
Attributes (continued)
By default, all requests are hard. Hard requests are checked first, followed by soft. If a hard request is not satisfied, the job is not run. For soft requests, SGE attempts to run the job on a best-fit host.
Important resources:
- mt - memory total
- mf - memory free
- s  - processor slots
- st - total swap
How to request specific memory/swap space/CPU:
  % qsub -soft -l mt=250k,st=100k,mf=300g simple.sh
  % qsub -hard -l mt=250k,st=100k,mf=300g simple.sh

Array Jobs
Parameterized and repeated execution of the same program (in a script) is ideal for the array job facility. SGE provides an efficient implementation of array jobs:
- Computations are handled as an array of independent tasks joined into a single job
- They can be monitored and controlled as a whole, by individual tasks, or by subsets of tasks
$ qsub: Submitting an Array Job from the command line
- The -l option requests a hard CPU time limit of 45 minutes
- The -t option defines the task index range: 2-10:2 specifies 2, 4, 6, 8, 10
- Each task uses $SGE_TASK_ID to find out whether it is task 2, 4, 6, 8, or 10:
  - To find its input record
  - As a seed for a random number generator

  % qsub -l h_cpu=0:45:0 -t 2-10:2 render.sh data.in

Job cleanup
Use the SGE command:
  % qdel <job_id>
Or use the Rocks command:
  % cluster-fork killall <your_executable_name>
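The $SGE_TASK_ID mechanism from the array-job example can be tried outside SGE by setting the variable by hand. This sketch (with hypothetical data.in contents) selects one input record per task, the way render.sh might:

```shell
#!/bin/sh
# Array-task sketch: each task selects its own record from a shared input
# file via $SGE_TASK_ID. SGE sets the variable for real array jobs; here it
# defaults to 2 so the script also runs standalone.
DATA=$(mktemp)
printf 'record-one\nrecord-two\nrecord-three\nrecord-four\n' > "$DATA"
: "${SGE_TASK_ID:=2}"
sed -n "${SGE_TASK_ID}p" "$DATA"    # prints record-two when the variable is unset
```

Because each task reads only its own record, the tasks are fully independent, which is exactly what makes them schedulable in any order on any mix of nodes.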
SGE submit script for MPI
Script contents (note: tcsh's setenv takes no "=" sign):
  #!/bin/tcsh
  #$ -S /bin/tcsh
  setenv MPI /opt/mpich/gnu/bin
  $MPI/mpirun -machinefile machines -np $NSLOTS appname
Make it executable:
  $ chmod +x runprog.sh

Submit file options
  #$ -l h_rt=600       # meet given resource request
  #$ -S /bin/sh        # specify interpreting shell for the job
  #$ -o /your/path     # use path for standard output of the job
  #$ -cwd              # execute from current dir
  #$ -pe mpich 32      # run on 32 processes in the mpich PE
  #$ -V                # export all environment variables
  #$ -v MPI_ROOT,FOOBAR=BAR   # export these environment variables
See man qsub for more options.
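Because the #$ lines are ordinary comments to the shell, a submit script with embedded options runs unchanged both locally and via qsub. A minimal sketch combining several of the options above:

```shell
#!/bin/sh
# The #$ lines below are read by qsub at submission time but are plain
# comments to /bin/sh, so this script can also be tested locally.
#$ -S /bin/sh        # interpreting shell for the job
#$ -cwd              # execute from the current directory
#$ -l h_rt=600       # request 600 seconds of wall-clock run time
#$ -V                # export all environment variables to the job
echo "job body runs here"
# prints: job body runs here
```

Running it directly with sh exercises the job body; submitting the same file with qsub additionally activates the embedded directives.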
Online resources
- Rocks documentation:
- Rocks SGE roll documentation:
More informationProgramming Environment on Ranger Cluster
Programming Environment on Ranger Cluster Cornell Center for Advanced Computing October 12, 2009 10/12/2009 www.cac.cornell.edu 1 User Guides TACC Ranger (http://services.tacc.utexas.edu/index.php/ranger-user-guide)
More informationImage Sharpening. Practical Introduction to HPC Exercise. Instructions for Cirrus Tier-2 System
Image Sharpening Practical Introduction to HPC Exercise Instructions for Cirrus Tier-2 System 2 1. Aims The aim of this exercise is to get you used to logging into an HPC resource, using the command line
More informationKISTI TACHYON2 SYSTEM Quick User Guide
KISTI TACHYON2 SYSTEM Quick User Guide Ver. 2.4 2017. Feb. SupercomputingCenter 1. TACHYON 2 System Overview Section Specs Model SUN Blade 6275 CPU Intel Xeon X5570 2.93GHz(Nehalem) Nodes 3,200 total Cores
More informationParallelism. Wolfgang Kastaun. May 9, 2008
Parallelism Wolfgang Kastaun May 9, 2008 Outline Parallel computing Frameworks MPI and the batch system Running MPI code at TAT The CACTUS framework Overview Mesh refinement Writing Cactus modules Links
More informationSlurm basics. Summer Kickstart June slide 1 of 49
Slurm basics Summer Kickstart 2017 June 2017 slide 1 of 49 Triton layers Triton is a powerful but complex machine. You have to consider: Connecting (ssh) Data storage (filesystems and Lustre) Resource
More informationExercise 1: Connecting to BW using ssh: NOTE: $ = command starts here, =means one space between words/characters.
Exercise 1: Connecting to BW using ssh: NOTE: $ = command starts here, =means one space between words/characters. Before you login to the Blue Waters system, make sure you have the following information
More informationSGI Altix Running Batch Jobs With PBSPro Reiner Vogelsang SGI GmbH
SGI Altix Running Batch Jobs With PBSPro Reiner Vogelsang SGI GmbH reiner@sgi.com Module Objectives After completion of this module you should be able to Submit batch jobs Create job chains Monitor your
More informationThe DTU HPC system. and how to use TopOpt in PETSc on a HPC system, visualize and 3D print results.
The DTU HPC system and how to use TopOpt in PETSc on a HPC system, visualize and 3D print results. Niels Aage Department of Mechanical Engineering Technical University of Denmark Email: naage@mek.dtu.dk
More informationA Brief Introduction to The Center for Advanced Computing
A Brief Introduction to The Center for Advanced Computing May 1, 2006 Hardware 324 Opteron nodes, over 700 cores 105 Athlon nodes, 210 cores 64 Apple nodes, 128 cores Gigabit networking, Myrinet networking,
More informationHPCC New User Training
High Performance Computing Center HPCC New User Training Getting Started on HPCC Resources Eric Rees, Ph.D. High Performance Computing Center Fall 2018 HPCC User Training Agenda HPCC User Training Agenda
More informationTutorial on MPI: part I
Workshop on High Performance Computing (HPC08) School of Physics, IPM February 16-21, 2008 Tutorial on MPI: part I Stefano Cozzini CNR/INFM Democritos and SISSA/eLab Agenda first part WRAP UP of the yesterday's
More informationPBS Pro Documentation
Introduction Most jobs will require greater resources than are available on individual nodes. All jobs must be scheduled via the batch job system. The batch job system in use is PBS Pro. Jobs are submitted
More information