Viglen NPACI Rocks. Getting Started and FAQ

Size: px
Start display at page:

Download "Viglen NPACI Rocks. Getting Started and FAQ"

Transcription

1 Viglen NPACI Rocks Getting Started and FAQ

Table of Contents

Getting Started
    Powering up the machines
    Checking node status
        Through web interface
    Adding users
    Job Submission
FAQ
    Reinstalling a compute node
    Adding applications to the cluster
    Enabling web access to the cluster
    Synchronising files across the cluster

Getting Started

Powering up the machines:

Power on the headnode first and allow it to boot to the login screen. The compute nodes rely on the headnode when starting their client daemons, such as the queue client. Once the headnode is up, power on the rest of the compute nodes.
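If the compute nodes have IPMI modules fitted (see the power-cycling notes in the FAQ below), they can also be powered on from the headnode rather than at the front panel. A minimal sketch, assuming the standard compute-0-N naming and an eight-node rack; [user] and [passwd] are placeholders for your IPMI credentials:

# Hypothetical example: power on compute-0-0 through compute-0-7 from the headnode
for n in $(seq 0 7); do
    ipmitool -U [user] -P [passwd] -H compute-0-$n chassis power on
done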

Checking node status

Once all the nodes are powered on, you can check for any dead nodes using the CLI or the web interface.

Through CLI:

Login as root and run the following:

[root@headnode:~]$ tentakel uptime
### compute-0-0 (stat:0, dur(s): 0.32)
down

Any nodes that failed to boot will be reported as down.

Through web interface:

The Ganglia scalable distributed monitoring system is configured on the cluster and can be used to discover dead nodes (along with cluster usage metrics). To access the Ganglia web interface, users must launch the Firefox browser from the headnode. (Please note that external web access is disabled by default through the iptables configuration on the headnode; only ssh access is permitted.) When logging into the headnode, make sure to specify the -X argument for ssh to enable X forwarding.

[user@host:~]$ ssh -X user@headnode
[user@headnode:~]$ firefox

Point the browser at the headnode and you should see a welcome screen as below.

Click on Cluster Status at the right side of the screen to display the Ganglia monitoring screen.

Any nodes that are down or in use are reported through this interface. Please note that a node reported as dead here is not necessarily dead; it may simply mean that the Ganglia monitoring daemon on that node has died. It is recommended that users confirm the node status using tools such as ssh and ping.
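A minimal sketch of such a check from the headnode, assuming the standard compute-0-N host names and that the Ganglia daemon runs as the gmond service (adjust the node name as required):

# Confirm whether compute-0-0 is really dead or just not reporting to Ganglia
ping -c 3 compute-0-0                    # is the node reachable on the network?
ssh compute-0-0 uptime                   # is it accepting logins and running?
ssh compute-0-0 service gmond status     # is the Ganglia monitoring daemon alive?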

Adding users:

To add a user, login as root, issue the useradd command, set a password and synchronise the account across the cluster with the rocks sync users command.

[root@headnode:~]$ useradd <username>
[root@headnode:~]$ passwd <username>
[root@headnode:~]$ rocks sync users

It is also recommended that the cluster administrator logs in as the new user and sets up the ssh keys to allow passwordless access across the cluster. Whenever a user account is created, this initial login will prompt the user for the path to the RSA key pair and an ssh passphrase. Leave all values blank for normal cluster operation.
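If the keys ever need to be (re)generated by hand instead of through the interactive first-login prompts, the following sketch achieves the same passwordless setup (this is an illustration, not part of the original procedure):

# As root, switch to the user's account...
su - <username>
# ...then, in the user's shell, generate a key pair with an empty passphrase
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys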

Job Submission:

Jobs are submitted to the queue in the form of a (bash) script. This script should contain the commands and arguments that you want to run, exactly as if you were running them from the command line.

Sample Job Submit Script:

#!/bin/bash
#
# SGE/PBS options can be specified here
# e.g. set the job to run in the current working directory
#$ -cwd

$HOME/myjobdir/myjobexecutable arg1 arg2

Save this file as ~/myjob.sh. To submit the job to the queue, use the qsub command, passing the job script file as an argument, e.g.:

[user@headnode:~]$ qsub myjob.sh

Job status can be checked using qstat and jobs can be removed using qdel <jobid>. For more detailed usage, please refer to the SGE/PBS manuals supplied with the cluster.
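A short example of monitoring and removing a submitted job (the job ID 42 below is purely illustrative, and the exact qstat output format differs between SGE and PBS):

[user@headnode:~]$ qstat          # list your queued and running jobs
[user@headnode:~]$ qstat -f       # full listing with per-job details
[user@headnode:~]$ qdel 42        # remove job 42 from the queue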

Sample MPICH Job (64 core MPICH Job):

#!/bin/bash
#PBS -S /bin/bash
#PBS -V
# Request job to run on 8 nodes with 8 processes per node
#PBS -l nodes=8:ppn=8

# Change to the working directory
cd ${PBS_O_WORKDIR}

# Set global mem size
export P4_GLOBMEMSIZE=

# USERS MODIFY HERE
# Set file name and arguments
MPIEXEC=/opt/mpiexec/bin/mpiexec
EXECUTABLE=/opt/hpl/mpich-hpl/bin/xhpl
ARGS="-v"

# Load the mpich modules environment
module load mpich/1.2.7

##########################################
#                                        #
#  Output some useful job information.   #
#                                        #
##########################################
NPROCS=`wc -l < $PBS_NODEFILE`
echo
echo 'This job is allocated on '${NPROCS}' cpu(s)'
echo 'Job is running on node(s): '
cat $PBS_NODEFILE
echo
echo PBS: qsub is running on $PBS_O_HOST
echo PBS: originating queue is $PBS_O_QUEUE
echo PBS: executing queue is $PBS_QUEUE
echo PBS: working directory is $PBS_O_WORKDIR
echo PBS: execution mode is $PBS_ENVIRONMENT
echo PBS: job identifier is $PBS_JOBID
echo PBS: job name is $PBS_JOBNAME
echo PBS: node file is $PBS_NODEFILE
echo PBS: current home directory is $PBS_O_HOME
echo PBS: PATH = $PBS_O_PATH
echo

# Launch the Job
${MPIEXEC} ${ARGS} ${EXECUTABLE}

Viglen provide sample job submission scripts for all the above jobs in ~/qsub-scripts/. This directory, containing the sample job submission scripts, is created whenever a new user is added. Feel free to edit these for use with local applications/codes.
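Assuming the script above is saved as ~/qsub-scripts/mpich-64core.sh (an illustrative file name), it is submitted and monitored like any other job:

[user@headnode:~]$ qsub ~/qsub-scripts/mpich-64core.sh
[user@headnode:~]$ qstat                 # wait for the job state to change from Q (queued) to R (running)
[user@headnode:~]$ qstat -n <jobid>      # show the compute nodes allocated to the job (PBS/Torque)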

FAQ

Reinstalling a compute node

The recommended way of dealing with any node failure is to reinstall the problematic compute node (unless a hardware fault is present). The compute nodes are all set to boot from the network first, then from the local disk. The headnode can be configured to force any compute node to reinstall by setting its pxeboot flag.

To see what pxeboot flags are currently set on your cluster, use the rocks command:

[root@headnode:~]$ rocks list host pxeboot
HOST          ACTION
headnode:     -----
compute-0-0:  os
compute-0-1:  os
...

To flag a node for reinstallation, use the rocks command to set the pxeboot flag to install:

[..]$ rocks set host pxeboot compute-0-0 action=install

Then power cycle the node (or reboot it if the node is still alive). The node can also be set back to booting from the local disk by setting the pxeboot flag back to os:

[..]$ rocks set host pxeboot compute-0-0 action=os

Power cycle the node:

If IPMI modules are installed:

ipmitool -U [user] -P [passwd] -H compute-0-0 chassis power off
ipmitool -U [user] -P [passwd] -H compute-0-0 chassis power on

If APC PDUs are used:

apc off compute-0-0
apc on compute-0-0

Note: If power cycling using the APC PDU, the BIOS power setting (Advanced Boot Features > Restore on AC Power Off) needs to be set to "Last state" on the compute node.
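Putting the two steps together, a reinstall of a single node via IPMI might look like this (a sketch; [user] and [passwd] are placeholders for your IPMI credentials):

# Flag compute-0-0 for reinstallation and power cycle it through IPMI
rocks set host pxeboot compute-0-0 action=install
ipmitool -U [user] -P [passwd] -H compute-0-0 chassis power cycle
# The node reinstalls over the network; once it completes, confirm the flag
# has returned to os with: rocks list host pxeboot compute-0-0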

Adding applications to the cluster

This topic is covered in detail in the Rocks user guide and in the online Rocks documentation.

Enabling web access to the cluster

By default, the firewall on the headnode will only allow ssh access. If you want to enable web access to the cluster (e.g. for monitoring through Ganglia), you can allow this through the iptables configuration file /etc/sysconfig/iptables.

In that file, lines 18 and 19 are commented out. To enable web access, uncomment them and restart iptables (/etc/init.d/iptables restart):

# Uncomment the lines below to activate web access to the cluster.
#-A INPUT -m state --state NEW -p tcp --dport https -j ACCEPT
#-A INPUT -m state --state NEW -p tcp --dport www -j ACCEPT
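The complete sequence on the headnode might look like the following (a sketch; the sed line numbers assume the two rules really are on lines 18 and 19 of your iptables file, so check with a text editor first):

# Uncomment the web-access rules and restart the firewall
sed -i '18,19 s/^#//' /etc/sysconfig/iptables
/etc/init.d/iptables restart
# Verify that the headnode now answers HTTP requests
curl -I http://localhost/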

Synchronising files across the cluster

If there is a configuration file in /etc (or anywhere that is not an NFS share) that you want to synchronise across all nodes, there are two ways to do this.

Option 1: Through the extend-compute.xml file (note: changes to the file on the headnode will not automatically be synchronised, so this is more useful for configurations that need to be consistent on the compute nodes only).

In the <post> section of this file, create a <file> tag:

<post>
<file name="/path/to/file">
# contents of file
</file>
</post>

When finished, verify the additions are syntactically correct:

[user@host:site-profiles]$ xmllint extend-compute.xml

Rebuild the distribution:

[user@host:site-profiles]$ cd /home/install
[user@host:install]$ rocks create distro

Verify the kickstart files are being generated correctly (from the /home/install directory):

[user@host:install]$ ./sbin/kickstart.cgi --client=compute-0-0

This command should dump the kickstart file to the screen (provided the node has already been installed and exists in the database). If not, repeat the above steps and try again. If the command is successful, set the pxeboot flag for the nodes (see "Reinstalling a compute node") and reboot the nodes.

Option 2: Through the 411 subsystem

Edit the file /var/411/files.mk. The last line should read:

#FILE += /path/to/file

Uncomment this line, update it with the path to the file you want to synchronise, and run the rocks sync users command to apply the change, e.g.:

FILE += /etc/hosts
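Putting Option 2 together, synchronising /etc/hosts across the cluster might look like this (a sketch; appending to files.mk is equivalent to uncommenting the existing template line, and the final grep is just one way to confirm the file reached a compute node):

# Add /etc/hosts to the list of files managed by the 411 subsystem
echo 'FILE += /etc/hosts' >> /var/411/files.mk

# Push the change out to the compute nodes
rocks sync users

# Spot-check that a compute node received the updated file
ssh compute-0-0 grep headnode /etc/hosts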
