Running LAMMPS on CC servers at IITM
Srihari Sundar
September 9, 2016

This tutorial assumes prior knowledge of LAMMPS [2, 1] and deals with running LAMMPS scripts on the compute servers at the computer center in IIT Madras. The procedure for building and using the serial and parallel versions should, however, work on any system with small modifications here and there. The tutorial also assumes basic experience with a command-line interface, especially bash on a Linux terminal.

Installing LAMMPS

As a first step, download the LAMMPS tarball from the downloads page on the website and transfer it to the home directory of your account on the server, using WinSCP from Windows or sftp from a Linux terminal.
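From a Linux terminal, the transfer might look like the following sketch; the server address is a placeholder, so use the one given to you by the computer center.

    # Run on your local machine, from the directory holding the tarball.
    sftp user@<server>            # <server>: the GNR/VIRGO address from the CC
    sftp> put lammps-stable.tar   # uploads into your remote home directory
    sftp> bye                     # close the session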
Once this is done, log in to your account. You should find the file you just transferred if you do an ls. Next, extract the contents of the tarball using the following command:

    tar xvf lammps-stable.tar

The stable version at the time of writing is from 30th July, so you should see a new folder named lammps-30jul16 (or, in general, something like lammps-<date>). Move into the src folder within this folder:

    cd lammps-30jul16/src

The src folder has the source code for LAMMPS, written (mostly) in C++. The following command builds a bare-minimum serial version of the code:

    make serial

make itself can also run in parallel, using multiple cores on the CPU:

    make -j 16 serial

To get the parallel version of LAMMPS with MPI, execute this command in the same folder:

    make -j 16 mpi

This requires OpenMPI or MPICH to be already installed and configured for your compiler. On GNR/VIRGO the above works out of the box. Next, create soft links to the executables in a bin folder in your home directory, and add the bin folder to your bash search path ($PATH). Make this permanent by adding a line to the .bashrc file in the home directory, which sets the environment for bash. Execute the following commands one by one on the terminal to do this:

    mkdir $HOME/bin/
    ln -s `readlink -f lmp_serial` $HOME/bin/
    ln -s `readlink -f lmp_mpi` $HOME/bin/
    echo "export PATH=\$PATH:\$HOME/bin/" >> $HOME/.bashrc
    source $HOME/.bashrc

NOTE
1. The readlink -f ... part of the command should be within backticks (the unshifted symbol on the tilde key, below Esc on standard keyboards). In the shell, a part of a command within backticks is executed first, and its output is substituted into the surrounding command. If the backtick does not get copied from the pdf file, type the command out.
2. Run all of this from the src folder.

Now, executing lmp_serial from any folder should print this on the terminal:

    LAMMPS (30 Jul 2016)

This shows that the build process was successful. Kill (ctrl-c) this process for now.
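If lmp_serial is not found at this point, a quick sanity check of the links and the search path is a sketch like this (assuming the default locations used above):

    which lmp_serial   # should print something like /home/<user>/bin/lmp_serial
    which lmp_mpi      # same, for the MPI executable
    echo $PATH         # $HOME/bin should appear in this colon-separated list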
Executing LAMMPS scripts on the head node

From the src directory we move to a directory with an example file for simulating crack propagation. You can use any LAMMPS input file in any directory (within your home folder) for this; just replace/modify the commands appropriately.

    cd ../examples/crack/

Listing the files should show the presence of a file named in.crack. Run this simulation using the serial version of LAMMPS:

    lmp_serial < in.crack

This should run and output some details of the simulation, with the last line being:

    Total wall time: 0:00:06

The time may vary according to the load on the system. This shows that the simulation completed successfully. Now, in order to use the parallel version of LAMMPS, we need to set up some environment variables so that the libraries are found. For this, open the .bashrc file from the home directory in vi:

    vi $HOME/.bashrc

If using GNR, paste the following line at the end of the file and save it:

    source /Apps/intel-2016-up3/bin/compilervars.sh intel64

If using VIRGO, paste the following line at the end of the file and save it:

    source /IITM_GPFS_FS1/sware/intel2016/bin/compilervars.sh intel64

Source the environment file so that the changes are reflected in the current session too:

    source $HOME/.bashrc

Now executing the next statement will give a similar output as lmp_serial, but there should be a significant reduction in the total wall time:

    mpirun -np 16 lmp_mpi < in.crack

The -np 16 option indicates that 16 MPI processes should be used to run the simulation.
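To quantify the speedup, you can prefix either command with the shell's time keyword, a minimal sketch being:

    time lmp_serial < in.crack              # serial baseline
    time mpirun -np 16 lmp_mpi < in.crack   # parallel run

Compare the "real" entries printed for the two runs; the reduction in wall time mentioned above should show up there.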
Job submission to a compute node on GNR

While you can use the head node to get your input script error-free, it is never advisable to actually run your jobs on it. Instead, you should queue your jobs with the scheduler. For this purpose GNR uses the PBS job scheduling software, so you should write a job submission script and hand it to PBS.

Serial execution: Create and open a file called job_serial.cmd using vim:

    vi job_serial.cmd

Paste the following in that file and save it:

    #! /bin/bash
    #PBS -o logfile.log
    #PBS -e errorfile.err
    #PBS -l cput=10:00:00
    #PBS -l select=1:ncpus=1
    tpdir=`echo $PBS_JOBID | cut -f 1 -d.`
    tempdir=$HOME/work/job$tpdir
    mkdir -p $tempdir
    cd $tempdir
    cp -R $PBS_O_WORKDIR/* .
    lmp_serial < in.crack
    mv ../job$tpdir $PBS_O_WORKDIR/.

From the shell execute this:

    qsub job_serial.cmd

This should give you an output of the form <number>.gnr. Running qstat should show that the job is running against that number. Once the job finishes, the system moves the directory where the job executed, named job<number>, to the current folder, along with two other files. The errorfile.err shows any errors during execution, and the logfile.log shows the stdout from the LAMMPS run. The folder should have all your outputs.

Parallel execution: For parallel execution the procedure is very similar, except for small modifications to the job submission script. Create a file named job_parallel.cmd and paste the following in the file:

    #! /bin/bash
    #PBS -o logfile.log
    #PBS -e errorfile.err
    #PBS -l cput=10:00:00
    #PBS -l select=1:ncpus=16
    tpdir=`echo $PBS_JOBID | cut -f 1 -d.`
    tempdir=$HOME/work/job$tpdir
    mkdir -p $tempdir
    cd $tempdir
    cp -R $PBS_O_WORKDIR/* .
    mpirun -np 16 lmp_mpi < in.crack
    mv ../job$tpdir $PBS_O_WORKDIR/.

Execute with:

    qsub job_parallel.cmd

You should see nearly the same output, but in a shorter time.
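While the job waits or runs, a few generic PBS utilities are handy (these are standard PBS commands, not specific to GNR; replace <number> with the job number printed by qsub):

    qstat -u $USER      # list only your jobs
    qstat -f <number>   # full details of a single job
    qdel <number>       # remove the job from the queue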
Job submission to a compute node on VIRGO

While GNR uses PBS for job scheduling, VIRGO uses LoadLeveler, and we need to write LL batch job scripts.

Serial execution: Create and open a file called job_serial.cmd using vim:

    vi job_serial.cmd

Paste the following in that file and save it:

    #!/bin/bash
    #@ output = test.out
    #@ error = test.err
    #@ job_type = serial
    #@ class = Medium
    #@ environment = COPY_ALL
    #@ queue
    jobid=`echo $LOADL_STEP_ID | cut -f 6 -d.`
    tmpdir=$HOME/scratch/job$jobid
    mkdir -p $tmpdir; cd $tmpdir
    cp -R $LOADL_STEP_INITDIR/* $tmpdir
    lmp_serial < in.crack
    mv ../job$jobid $LOADL_STEP_INITDIR

From the shell execute this:

    llsubmit job_serial.cmd

This should give you an output of the form c1hn1.<job-id>. Running llq <job-id> should show that the job is running against that number. Once the job finishes, the system moves the directory where the job executed, named job<job-id>, to the current folder.

Parallel execution: For parallel execution the procedure is very similar, except for small modifications to the job submission script. Create a file named job_parallel.cmd and paste the following in the file:

    #!/bin/bash
    #@ output = test.out
    #@ error = test.err
    #@ job_type = MPICH
    #@ class = Medium
    #@ node = 1
    #@ tasks_per_node = 16
    #@ environment = COPY_ALL
    #@ queue
    jobid=`echo $LOADL_STEP_ID | cut -f 6 -d.`
    tmpdir=$HOME/scratch/job$jobid
    mkdir -p $tmpdir; cd $tmpdir
    cp -R $LOADL_STEP_INITDIR/* $tmpdir
    mpirun -np 16 lmp_mpi < in.crack
    mv ../job$jobid $LOADL_STEP_INITDIR

Execute with:

    llsubmit job_parallel.cmd

Logging into the CC website and looking at the Virgo info under the HPCE tab in the right sidebar will give you more options you can use with LoadLeveler.

Useful tools

Some tools which will make life easier for you (and others):

1. Google: Straightforward. Any error you see, google it. Someone will have encountered the same error and posted it on Stack Overflow or on the forums for the software, and a person with more experience, or the software developers themselves, will have solved it. Google indexes all of this, so you have your solution then and there. This works most of the time.
2. GNU Screen: lets you detach from a terminal session and leave it running after you log out.
3. Bash scripting.

There are numerous tutorials for items 2 and 3 online, easily accessible with minimal Linux experience. Bash scripting especially will help your productivity: you can have one script that creates input scripts, executes them, and post-processes the data, as sketched below. Go figure!
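As an illustration, here is a minimal, hypothetical driver script: it assumes a template input file in.template containing the placeholder TEMP, sweeps it over three values, runs each case, and pulls the wall time from each log.

    #!/bin/bash
    # Hypothetical parameter sweep over temperature.
    for T in 100 200 300; do
        sed "s/TEMP/$T/" in.template > in.T$T   # create the input script
        lmp_serial < in.T$T > log.T$T           # run the simulation
        echo "T=$T -> $(tail -n 1 log.T$T)"     # crude post-processing: last log line
    done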
Acknowledgment

While I had wanted to make this tutorial to make life easier for myself (as a teaching assistant), I was spurred on to get it done by Hari Haran. A big portion of this tutorial comes from a chat I had with him, and he also provided me access to his GNR account as mine had expired. Thanks also to Darshan for letting me use his VIRGO account.

References

[1] LAMMPS Molecular Dynamics Simulator.
[2] S. Plimpton, P. Crozier, and A. Thompson. LAMMPS: large-scale atomic/molecular massively parallel simulator. Sandia National Laboratories, 18, 2007.

Disclaimer: The author takes no responsibility for any part of this tutorial failing to work for you. Do not try this the day before a project deadline.

Corrections: I'll be grateful if you can mail any corrections and suggestions to sriharisundar95 at gmail dot com.