PBS Pro and Ansys Examples

Introduction

This document contains a number of examples of using Ansys on the HPC, listed below.

1. Single-node Ansys Job
2. Single-node CFX Job
3. Single-node Fluent Job
4. Multi-node CFX Job
5. Parallel Fluent Job

Single-node Ansys Job

#!/bin/bash -l
# set shell
#PBS -S /bin/bash
# set job name
#PBS -N AnsysTest
# set default resource requirements for the job (4 processors on 1 node,
# requesting 3 hours 30 minutes of runtime) - these can be overridden on the qsub command line
#PBS -l select=1:ncpus=4
#PBS -l walltime=03:30:00

# change to the directory from which the job was submitted
cd $PBS_O_WORKDIR
# set number of processors to run on (the list of node names is in file $PBS_NODEFILE)
nprocs=`cat $PBS_NODEFILE | wc -l`
# load the ansys module so that we find the ansys162 command
module load ansys
# run the ansys commands in file plate in batch mode
# (the -m and -db flags increase the total amount of memory usable by ansys)
ansys162 -b -m 20000 -db 10000 -np $nprocs -j plate < plate > plate.out
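
The resource requests in the script above are defaults and can be overridden when the job is submitted. As a minimal sketch (ansys_test.pbs is a hypothetical name for the script above), submission and a command-line override could look like this:

# submit the script as written
qsub ansys_test.pbs

# override the default resources at submission time,
# e.g. request 8 CPUs and 6 hours instead
qsub -l select=1:ncpus=8 -l walltime=06:00:00 ansys_test.pbs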

Single-node CFX Job

#!/bin/bash -l
# set shell
#PBS -S /bin/bash
# set job name
#PBS -N CFXTest
# set default resource requirements for the job (4 processors on 1 node,
# requesting 120 hours of runtime and 4 GB of memory) - these can be overridden on the qsub command line
#PBS -l select=1:ncpus=4
#PBS -l mem=4gb
#PBS -l walltime=120:00:00
# request that regular output (stdout) and error output (stderr) go to the same file
#PBS -j oe
# mail options
#PBS -m abe
#PBS -M username@usq.edu.au
# set the queue to run the job on
#PBS -q default

# load the ansys module so that we find the cfx5solve command
module load ansys
# go to the directory from which you submitted the job
cd $PBS_O_WORKDIR
# set number of processors to run on
nprocs=`cat $PBS_NODEFILE | wc -l`
# start the calculation
cfx5solve -batch -def cfxtest.def -fullname cfxtest -monitor cfxtest.res -part $nprocs -par-local -solverdouble
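
Once submitted, the job can be watched with standard PBS Pro commands. The sketch below assumes the script is saved as cfx_test.pbs (a hypothetical name); because of the -j oe directive, the joined stdout/stderr normally ends up in a file named after the job, e.g. CFXTest.o<jobid>, once the job finishes:

# submit the job and note the job ID that qsub prints
qsub cfx_test.pbs

# list your queued and running jobs
qstat -u $USER

# after completion, inspect the joined stdout/stderr file
ls CFXTest.o*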

Single-node Fluent Job

#!/bin/bash -l
# set shell
#PBS -S /bin/bash
# set job name
#PBS -N FluentTest
# set default resource requirements for the job (1 processor on 1 node,
# requesting 4 hours 30 minutes of runtime) - these can be overridden on the qsub command line
#PBS -l select=1:ncpus=1
#PBS -l walltime=04:30:00
# request that regular output (stdout) and error output (stderr) go to the same file
#PBS -j oe
# set mail options to send job notifications
#PBS -m abe
#PBS -M username@usq.edu.au
# set the queue to run the job on
#PBS -q default

# set number of processors to run on (the list of node names is in file $PBS_NODEFILE)
nprocs=`cat $PBS_NODEFILE | wc -l`
# load the ansys module so that we find the fluent command
module load ansys
# specify the version of ANSYS FLUENT to run
version=3d
# specify the journal file to use
journal=elbow1_journal
# change to the directory from which you submitted the job
cd $PBS_O_WORKDIR
# start the computation
fluent $version -t$nprocs -cnf=$PBS_NODEFILE -g -i $journal
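
Because the journal file name is held in a shell variable, a small guard placed after the cd line can catch a missing or misspelled journal before Fluent is launched. This is an optional sketch, not part of the original script:

# abort early if the journal file cannot be found in the working directory
if [ ! -f "$journal" ]; then
    echo "Journal file $journal not found in $PWD" >&2
    exit 1
fi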

Multi-node CFX Job

#!/bin/bash
# set shell
#PBS -S /bin/bash
# set job name
#PBS -N CFXParallelTest
# job script to run the CFX solver in parallel over multiple nodes
# set default resource requirements for the job - these can be overridden on
# the qsub command line (this is for a 16 CPU job, requesting 6 hours)
#PBS -l select=2:ncpus=8
#PBS -l walltime=6:00:00

# change to the directory from which the job was submitted
cd $PBS_O_WORKDIR
# set the number of processors per host listing
# (set to 1 as $PBS_NODEFILE lists each node twice if :ppn=2)
procs_per_host=1
# create the host list
hl=""
for host in `cat $PBS_NODEFILE`
do
    if [ "$hl" = "" ]
    then
        hl="$host*$procs_per_host"
    else
        hl="${hl},$host*$procs_per_host"
    fi
done
# load the ansys module so that we find the cfx5solve command
module load ansys
# run using PVM for message-passing between nodes by default
cfx5solve -double -def BluntBody.def -parallel -par-dist $hl
# use the following line to specify MPI for message-passing instead
# cfx5solve -double -def BluntBody.def -parallel -par-dist $hl -parallel-mode mpi
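
The loop above builds a comma-separated "host*count" list with one entry per line of $PBS_NODEFILE. A more compact sketch that should produce an equivalent distribution string (assuming cfx5solve accepts the usual hostname*partitions form) groups the node file entries first:

# build the host list by counting how often each host appears in $PBS_NODEFILE
hl=$(sort $PBS_NODEFILE | uniq -c | awk '{printf "%s%s*%s", sep, $2, $1; sep=","}')
echo "CFX host list: $hl"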

Parallel Fluent Job

#!/bin/bash
# set shell
#PBS -S /bin/bash
# set job name
#PBS -N FluentParallelTest
# run a simple fluent job; works on a single node or multiple nodes
# set default resource requirements for the job - these can be overridden on
# the qsub command line (this is for a 4 processor job requesting 1 hr 30 minutes)
#PBS -l select=1:ncpus=4
#PBS -l walltime=01:30:00
# request that regular output (stdout) and error output (stderr) go to the same file
#PBS -j oe
# set mail options to send job notifications
#PBS -m abe
#PBS -M r.user@usq.edu.au
# set the queue to run the job on
#PBS -q default

# change to the directory from which the job was submitted
cd $PBS_O_WORKDIR
# set number of processors to run on (the list of node names is in file $PBS_NODEFILE)
nprocs=`cat $PBS_NODEFILE | wc -l`
# specify the version of ANSYS FLUENT to run
version=3d
# load the ansys module so that we find the fluent command
module load ansys
# run the default version of Fluent in 3d mode in parallel over $nprocs processors;
# fluent commands are in file Fluent_par.jou, output messages go to output_file
fluent $version -t$nprocs -cnf=$PBS_NODEFILE -g -i Fluent_par.jou > output_file
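
Because the Fluent messages are redirected to output_file, progress can be followed while the job is running, and the job can be cancelled if something looks wrong. A short sketch (qdel is a standard PBS Pro command; it needs the job ID reported by qsub or qstat):

# follow the solver output while the job runs
tail -f output_file

# cancel the job if required, using the job ID reported by qsub/qstat
qdel <jobid>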