Introduction Workshop, 11th/12th November 2013
1 Introduction Workshop, 11th/12th November 2013
Lecture II: Access and Batch System
Dr. Andreas Wolf
Group Leader High-Performance Computing (Gruppenleiter Hochleistungsrechnen), University Computing Centre (Hochschulrechenzentrum)
2 Overview
- Access and requirements
- Software packages
- Module system
- Batch system
- Queueing rules
- Commands
- Batch script examples for MPI and OpenMP
3 Section: Access and Requirements
4 Access Requirements for the New Cluster
Because of the size of the system:
- Each potential user must first be checked against the export restrictions of the Bundesamt für Wirtschaft und Ausfuhrkontrolle (BAFA) - export control/embargo (contact: HHLR@hrz.tu-darmstadt.de)
- The new user rules document (Nutzungsordnung) requires:
  - Name, TU-ID, address, institute and institution affiliation, citizenship
  - Project title
  - Reports
- No private data (e-mail, pictures, etc.)
- No commercial use
- Limited data storage lifetime
5 (Figure slide: screenshot only, no recoverable text)
6 Get Access and Be Connected
Registration:
- Send the filled-in user rules document to us
- Wait for the automated answer granting access
- Follow the instructions to set up your password via ANDO
- Accept the additional e-mail for registering to the HPC-Beta list; here we will inform you about news and trouble
Get connected:
- Windows: download an SSH client, e.g. PuTTY
- Linux: SSH is typically already installed
7 Access to the Lichtenberg Cluster (lcluster) via SSH
Login nodes:
- lcluster1.hrz.tu-darmstadt.de
- lcluster2.hrz.tu-darmstadt.de
- lcluster3.hrz.tu-darmstadt.de
- lcluster4.hrz.tu-darmstadt.de

  <somewhere>:~> ssh <user name>@lcluster1.hrz.tu-darmstadt.de
  <user name>@lcluster1.hrz.tu-darmstadt.de's password:
  <user name>@hla0001:~>
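For convenience, the login can be shortened with a host alias in the OpenSSH client configuration; a hypothetical sketch (the alias lcluster and the user name are placeholders, not part of the slides):

  # ~/.ssh/config - convenience alias for the first login node
  Host lcluster
      HostName lcluster1.hrz.tu-darmstadt.de
      User <user name>

Afterwards, ssh lcluster suffices to reach the login node.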
8 Data Transfer between Your Client and the Cluster
Via the SCP command (part of the SSH tools); transfer goes over the login nodes.

  <somewhere>:~> scp <file name> <user name>@lcluster1.hrz.tu-darmstadt.de:
  Password:
  <file name> 100% <Bytes> <1.0>KB/s <01:00>
  <somewhere>:~>
  <somewhere>:~> scp <user name>@lcluster1.hrz.tu-darmstadt.de:<file name> <dir>
  Password:
  <file name> 100% <Bytes> <1.0>KB/s <01:00>
  <somewhere>:~>
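Whole directories can be copied recursively with the standard -r option of scp; a short hedged sketch (directory and user names are placeholders):

  # copy a complete project directory to the cluster home directory
  <somewhere>:~> scp -r <project dir> <user name>@lcluster1.hrz.tu-darmstadt.de:~/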
9 HPC Newsletter
Newsletter subscription: information about
- Planned events, user meetings
- Planned lectures, workshops, etc.
- General information about the system / news
10 Section: Software Packages and Module System
11 Software Available / Installable
Operating system: SLES (SP2), x86-64, 64-bit
System tools:
- GCC 4.4.6, 4.7.2; Intel compilers (incl. Intel Cluster Studio XE, see the last workshop); PGI 13.1
- ACML, Intel MKL, ScaLAPACK, ...
- OpenMPI 1.6.5, Intel MPI, ...
- TotalView (contact: Dr. S. Boldyrev), Vampir (contact: C. Iwainsky)
Applications:
- ANSYS 14.0, Abaqus
- MATLAB 2012a
- COMSOL 4.3
12 Modular Load and Unload (1)
> module list
  Shows all currently loaded software environments (the packages loaded by the user)
> module load <module name>
  Loads a specific software environment module. Only when the module has been loaded successfully is the software really usable!

  <user name>@hla0001:~> module load ansys
  Modules: loading ansys/14.5
  <user name>@hla0001:~>

> module unload <module name>
  Unloads a software module
13 Modular Load and Unload (2)
> module avail
  Shows all available software packages currently installed

  <user name>@hla0001:~> module avail
  /shared/modulefiles
  abaqus/              gcc/4.6.4            openmpi/gcc/1.6.5(default)
  ansys/13.0           gcc/4.7.3(default)   openmpi/intel/1.6.5
  ansys/14.5(default)  gcc/4.8.1            openspeedshop/2.0.2(default)
  <user name>@hla0001:~>
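In job scripts it is safest to start from a clean module environment and load exactly what is needed; a minimal sketch using module names from the listing above (assuming the standard module purge command is available on the cluster):

  module purge                      # unload everything inherited from the login shell
  module load gcc/4.7.3             # compiler
  module load openmpi/gcc/1.6.5     # MPI library built for that compiler
  module list                       # verify the loaded environment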
14 Section: Batch System, Queueing Rules and Commands
15 Queueing / Scheduling
Different queues for different purposes:
- deflt: limited to a maximum of 24 hours; main part of all compute nodes
- long: limited to a maximum of 7 days; very small part (~8 nodes) of all compute nodes
- short: limited to a maximum of 30 minutes
Advantages:
- Depending on demand, the short queue has reserved nodes and runs with the highest scheduling priority on all other nodes, so small test jobs (30 minutes) are scheduled promptly
- Because the main focus is on 24-hour jobs, most compute nodes can be taken down for maintenance within 24 hours
16 Additional Queues
Special queues (may be changed in the future):
- multi: limited to a maximum of 24 hours; inter-island computing only; MPI section
- testmem: limited to a maximum of 24 hours; 1 TByte memory nodes; MEM section
- testnvd: limited to a maximum of 24 hours; Nvidia Tesla accelerators; ACC-G section
- testphi: limited to a maximum of 24 hours; Intel Xeon Phi accelerators; ACC-M section
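A job is directed to one of these queues with the standard LSF -q option, either on the bsub command line or as a directive inside the batch script; a short hedged sketch (the script name is a placeholder):

  # submit a quick test to the 30-minute queue
  bsub -q short < test-script.sh

  # or select a queue inside the script itself, e.g. the Nvidia test queue
  #BSUB -q testnvd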
17 Batch System: LSF
Why use LSF?
- Scalability for a large number of nodes
- Professional support (for fine tuning)
- WebGUI: a graphical front-end for creating, submitting and monitoring batch jobs (at present unfortunately not ready for use; will come later)
- Usable also from a Windows client
18 Batch System Web GUI (1)
Simply connect to the first login node 'lcluster1' via web browser (available later).
19 Batch System Web GUI (screenshot of the web interface; no recoverable text)
20 Batch System Commands (LSF)
> bsub < <batch script>
  Submits a new batch job to the queueing system
> bqueues
  Shows all queues with their status and job counts
> bkill <batch-id>
  Deletes your own batch job (with the given ID)
> bjobs <batch-id>
  Shows specific configuration or runtime information for a batch job; without an ID, lists all of your presently submitted or active batch jobs and their batch ID numbers
21 LSF: bsub & bkill

  <user name>@hla0001:~> bsub < sample-script.sh
  Job <12345> is submitted to default queue <deflt>.
  <user name>@hla0001:~> bkill 12345
  Job <12345> is being terminated
  <user name>@hla0001:~>

The redirection sign '<' is important when submitting a script: it feeds the script to bsub's standard input so that the #BSUB directives are evaluated.
22 LSF: bqueues

  <user name>@hla0001:~> bqueues
  QUEUE_NAME  PRIO  STATUS       MAX  JL/U  JL/P  JL/H  NJOBS  PEND  RUN  SUSP
  short        100  Open:Active
  multi         11  Open:Active
  deflt         10  Open:Active
  testnvd       10  Open:Active
  testphi       10  Open:Active
  testmem       10  Open:Active
  long           1  Open:Active
  <user name>@hla0001:~>
23 LSF: bjobs

  <user name>@hla0001:~> bjobs
  JOBID  USER    STAT  QUEUE    FROM_HOST  EXEC_HOST  JOB_NAME  SUBMIT_TIME  APS
         <user>  RUN   testmem  hla0001    *mem       <title>   Nov 4 14:
         <user>  PEND  testmem  hla0001               <title>   Nov 4 14:
  <user name>@hla0001:~>

Absolute priority scheduling (APS) depends on the user's computing history, a queue factor, and the job size.
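More detail on a single job than the bjobs summary is available with the standard LSF long format; a brief sketch (the job ID is a placeholder):

  bjobs -l 12345     # full configuration and runtime information for job <12345>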
24 Section: Batch Script Examples for MPI and OpenMP
25 Batch Script for Running an MPI Program (1)

  #Job name
  #BSUB -J MPItest
  #File / path where STDOUT will be written; %J is the job ID
  #BSUB -o /home/<tu-id>/mpitest.out%J
  #(suppress the full path to write into the current directory)
  #File / path where STDERR will be written; %J is the job ID
  #BSUB -e /home/<tu-id>/mpitest.err%J
  #Request the time you need for execution in minutes
  #The format for the parameter is: [hour:]minute;
  #i.e. for 80 minutes you could also use 1:20
  #BSUB -W 10
  #Request the virtual memory you need for your job in MB
  #(a hard limit, but not used for job scheduling)
  #BSUB -M <MB>
26 Batch Script for Running an MPI Program (2)

  #Request the number of compute slots / MPI tasks you want to use
  #BSUB -n 64
  #Specify the MPI support
  #BSUB -a openmpi
  #Specify your mail address
  #BSUB -u <mail address>
  #Send a mail when the job is done
  #BSUB -N
27 Batch Script for Running an MPI Program (3)

  module load openmpi/gcc/1.6.5
  #Check the loaded modules
  module list
  #Loading modules explicitly is safer than relying on the submit environment

  cd ~/<working path>

  #Common use, without any parameters (hosts are given by LSF)
  mpirun <program>

  #Otherwise, generate an explicit hostfile
  echo "$LSB_HOSTS" | sed -e "s/ /\n/g" > hostfile.$LSB_JOBID
  #Specify the number of tasks and the hosts
  mpirun -n 64 -hostfile hostfile.$LSB_JOBID <program>
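Putting the three parts together, a complete submit script looks roughly like the following minimal consolidated sketch (job name, paths and program name are placeholders; the #!/bin/bash line is our addition):

  #!/bin/bash
  #BSUB -J MPItest                  # job name
  #BSUB -o mpitest.out%J            # STDOUT file, %J is the job ID
  #BSUB -e mpitest.err%J            # STDERR file
  #BSUB -W 10                       # wall-clock limit in minutes
  #BSUB -n 64                       # number of compute slots / MPI tasks
  #BSUB -a openmpi                  # OpenMPI support
  #BSUB -N                          # send a mail when the job is done

  module load openmpi/gcc/1.6.5     # same MPI module as used for compiling
  cd ~/<working path>
  mpirun <program>                  # LSF hands the allocated slots to mpirun

It would be submitted as shown on slide 21: bsub < mpitest-script.sh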
28 Batch Script for Running an OpenMP Program

  ...
  #Specify the OpenMP support
  #BSUB -a openmp
  ...
  #Set the number of OpenMP threads
  export OMP_NUM_THREADS=16

  <OpenMP program call>
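Embedded in a full script, the OpenMP case looks roughly like this minimal sketch (job name, thread count and program call are placeholders; the span[hosts=1] resource request is our addition to keep all slots on a single node, since OpenMP threads must share memory):

  #!/bin/bash
  #BSUB -J OMPtest                  # job name
  #BSUB -o omptest.out%J            # STDOUT file, %J is the job ID
  #BSUB -W 10                       # wall-clock limit in minutes
  #BSUB -n 16                       # one compute slot per OpenMP thread
  #BSUB -R "span[hosts=1]"          # assumption: place all slots on one node
  #BSUB -a openmp                   # OpenMP support

  export OMP_NUM_THREADS=16         # number of threads for the OpenMP runtime
  cd ~/<working path>
  <OpenMP program call>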
29 Thank You for Your Attention
Questions?
More information