KISTI TACHYON2 SYSTEM Quick User Guide
KISTI Supercomputing Center (Ver. Feb.)
1. TACHYON 2 System Overview

Section              | Specs
Model                | SUN Blade 6275
CPU                  | Intel Xeon (Nehalem)
Nodes                | 3,200 total
Cores                | 25,408 cores (8 cores/node)
Rpeak                | 300 TFlops (3,200 nodes)
Memory               | DDR3/1333 MHz, 76.8 TB (24 GB/node, 3 GB/core)
Storage              | 234 TB (disk), 2.3 PB (disk), 2,112 TB (tape)
Interconnect Network | Infiniband 4X QDR (40G)

2. USER Environment

a. Access

Node Type (# of nodes) | Hostname (IP)                                                    | Access methods | Interactive session time limit
login (4)              | tachyon2.ksc.re.kr, (tachyon2a ~ tachyon2d).ksc.re.kr (IP: ~104) | ssh, sftp, X11 | 20 min.
datamover (3)          | dm.ksc.re.kr, (dm02 ~ dm04).ksc.re.kr (IP: ~184)                 | ssh, sftp, ftp | -
compute (3,176)        | tachyon0001 ~ tachyon3176                                        | ssh            | 120 min.
debug (24)             | tachyon3177 ~ tachyon3200                                        | ssh            | 120 min.

To access the Tachyon2 system, open an ssh connection to a login node, for example:
"ssh -l [user_id] tachyon2.ksc.re.kr"

To change your shell, use the "ldapchsh" command.
To change your password, use the "passwd" command.
To check a process's resource limits on a compute node, use the "ulimit -a" command.
To check the amount of SRU time used per user account, use the "isam" command.

b. Storage

Section                | Directory | Quota              | Purge Policy
Home Directory         | /home01   | 64 GB per account  | -
Scratch Directory      | /scratch2 | 100 TB per account | Files not accessed for the last 15 days are erased.
Applications Directory | /applic   | -                  | -

To check a directory's quota and usage, use a command like this:
# lfs quota /home01   (or lfs quota /scratch2)

(The old /scratch file system was taken out of service after May.)
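For reference, a minimal first-login session using the commands above might look like this (a sketch; user01 is a hypothetical account name):

ssh -l user01 tachyon2.ksc.re.kr   # log in to a login node
passwd                             # change the password
ldapchsh                           # change the login shell
isam                               # check the SRU time used by this account
lfs quota /home01                  # check home directory quota and usage
lfs quota /scratch2                # check scratch quota and usage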
c. Programming Environment

Name (Path)                                                         | Specification
Compilers (/applic/compilers/<compiler>)                            | PGI CDK / 2014; Intel Compiler 11.1 / 2013 / 2015; gcc 4.4.6 (/usr)
Debuggers (/applic/debuggers/toolworks)                             | TotalView
MPI Libraries (/applic/compilers/<compiler>/<compiler_version>/mpi) | MVAPICH2 1.4 / 1.5 / 2.0 / 2.1; OpenMPI 1.3.3 / 1.4.2 / 1.4.3 / 1.6.3 / 1.8.2 / 1.8.5
Math Libraries (<COMPILER DIR>/applib1, <MPI DIR>/applib2)          | FFTW (incl. 3.3.4); LAPACK; ScaLAPACK
Other Libraries (<COMPILER DIR>/applib1)                            | HDF; NCARG 4.4.2 / 5.2.1 / 6.0.0; NetCDF
Common Libraries (/applic/common)                                   | CAIRO, CURL, EXPAT, FONTCONFIG, FREETYPE, JASPER, JPEG 9a, LIBPNG, PIXMAN, SZIP 2.1, ZLIB
Applications (/applic/applications)                                 | Amber 10, ncview, NAMD 2.9, PYTHON, gromacs (incl. 5.1.4), lammps 5Sep14 / 10Aug15, siesta 3.2-p1-5 / 4.0
Commercial Software (/applic/applications)                          | Gaussian 09-b01, 09-d

S/W information reference:
Libraries Path

Pattern                                                               | Contents
/applic/compilers/pgi/<version>/{applib1,mpi/<mpi_version>/applib2}   | Libraries compiled with PGI
/applic/compilers/intel/<version>/{applib1,mpi/<mpi_version>/applib2} | Libraries compiled with Intel
/applic/compilers/gcc/4.1.2/{applib1,mpi/<mpi_version>/applib2}       | Libraries compiled with gcc

Compiler Example

Compiler | Example
GNU      | gcc -o test.exe -O3 test.c
PGI      | pgcc/pgcpp/pgf95 -o test.exe -fast test.{c,cc,f90}
Intel    | icc/ifort -o test.exe -O3 -xsse4.2 -m64 test.{c,cc,f90}

d. Modules Environment

How to use:
module purge
module avail
module load compiler/<compiler-version> mpi/<mpi-version>
module load applic/<prog_name>
module list
module unload <list>
module add <list>
module rm <list>
module help <list>
module whatis <list>

Example:
module purge
module avail
module load compiler/intel-11.1 mpi/openmpi
module rm mpi
module help mpi/openmpi
module whatis mpi/openmpi
module add mpi/openmpi

For the available MPI and compiler modules, see the output of "module avail".
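Putting the module and compiler examples together, a typical MPI build might look like this (a sketch; the mpicc wrapper is the standard compiler driver shipped with MVAPICH2 and OpenMPI, and mpi_test.c is a hypothetical source file):

module purge
module load compiler/intel-11.1 mpi/openmpi
mpicc -o mpi.exe -O3 mpi_test.c   # builds the mpi.exe used in the job scripts below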
e. Running Jobs using Sun Grid Engine (SGE)

Section           | Command            | Comment
Job submission    | qsub job_script    |
Queue information | showq              |
Job monitoring    | qstat              | own jobs only
                  | qstat -u '*'       | all users' jobs ('*' or a username)
Host monitoring   | showhost           |
Job deletion      | qdel <jobid>       | the job with <jobid> is deleted
                  | qdel -u <username> | all jobs submitted by <username> are deleted

f. Queue Information [As of February 2017]

Queue Name | Wall Clock Limit (hours) | Host Range                 | Max CPU                        | Comment
normal     | 48                       | tachyon[...] (2,970 nodes) | 23,760 (2,970 nodes x 8 cores) | public queue
exclusive  | unlimited                | tachyon[...] (134 nodes)   | 1,072 (134 nodes x 8 cores)    | special queue

To check the latest queue configuration, use a command like this:
# showq
(The queue configuration can change flexibly.)

Serial program job script example (serial.sh)

#$ -cwd          # use the current directory as the job's working directory
#$ -N serial_job # job name
#$ -q normal     # queue name
##$ -wd /scratch2/<user01>/serialtest  # job working directory; generally not necessary
# For multi-threaded programs, change the OMP_NUM_THREADS value accordingly.
# For example, for Gaussian, OMP_NUM_THREADS must match the %Nproc or %NprocShared value.
#$ -l OMP_NUM_THREADS=1
export OMP_NUM_THREADS=1
./serial.exe
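A minimal submit-and-monitor session for the script above (a sketch; 123456 stands in for the job ID that SGE prints at submission):

qsub serial.sh   # submit; SGE replies with the assigned job ID
qstat            # monitor your own jobs
showq            # check the overall queue status
qdel 123456      # delete the job if needed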
MPI program job script example (mpi_mvapich2.sh)

#$ -cwd           # use the current directory as the job's working directory
#$ -N mvapich_job # job name
#$ -pe mpi_fu 32  # parallel environment and number of CPUs
#$ -q normal      # queue name
##$ -wd /scratch2/<user01>/mvapich  # job working directory; generally not necessary
#$ -M my_email_address  # register an e-mail address
#$ -m e                 # send mail when the job completes
mpirun_rsh -hostfile $TMPDIR/machines -np $NSLOTS ./mpi.exe

MPI program job script example (mpi_openmpi.sh)

#$ -cwd           # use the current directory as the job's working directory
#$ -N openmpi_job # job name
#$ -pe mpi_fu 32  # parallel environment and number of CPUs
#$ -q normal      # queue name
##$ -wd /scratch2/<user01>/openmpi  # job working directory; generally not necessary
#$ -M my_email_address  # register an e-mail address
#$ -m e                 # send mail when the job completes
MCAArgs="-mca btl self,openib -mca plm_rsh_num_concurrent 400"
MCAArgs="$MCAArgs -mca oob_tcp_listen_mode listen_thread"
mpirun $MCAArgs -np $NSLOTS ./mpi.exe
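Inside a running job, the values handed to the launchers above can be inspected directly (a sketch; SGE sets $TMPDIR and $NSLOTS for each job, and the machines file lists whatever nodes were allocated):

echo $NSLOTS          # 32 for "-pe mpi_fu 32"
cat $TMPDIR/machines  # the host list that mpirun_rsh reads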
MPI program that uses lots of memory - job script example (mpi_mem.sh)

#$ -V
#$ -cwd
#$ -N mvapich_job
#$ -pe mpi_4cpu 32
#$ -q normal
#$ -R yes
#$ -l h_rt=01:00:00
##$ -M my_email_address
##$ -m e
# unset existing MPI affinities
export MV2_USE_AFFINITY=0
export MV2_ENABLE_AFFINITY=0
export VIADEV_USE_AFFINITY=0
export VIADEV_ENABLE_AFFINITY=0
mpirun_rsh -hostfile $TMPDIR/machines -np $NSLOTS ./numa.sh

numa.sh

#!/bin/bash
# number of sockets in a compute node
SPN=2
# get my MPI rank (different MPI stacks export different rank variables)
[ "x$PMI_RANK" != "x" ] && RANK=$PMI_RANK
[ "x$MPI_RANK" != "x" ] && RANK=$MPI_RANK
[ "x$MPIRUN_RANK" != "x" ] && RANK=$MPIRUN_RANK
[ "x$OMPI_MCA_ns_nds_vpid" != "x" ] && RANK=$OMPI_MCA_ns_nds_vpid
[ "x$PMI_ID" != "x" ] && RANK=$PMI_ID
[ "x$OMPI_COMM_WORLD_RANK" != "x" ] && RANK=$OMPI_COMM_WORLD_RANK
# bind each MPI rank to one socket (both CPU and memory)
socket=$(( ($RANK + 1) % $SPN ))
echo "myrank: $RANK, mysocket: $socket, hostname: $(hostname)"
/usr/bin/numactl --cpunodebind=$socket --membind=$socket ./mpi.exe  # mpi.exe is the user's executable
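To inspect the socket layout that numa.sh binds against, numactl itself can report the node's NUMA topology (a sketch; run on a compute node, e.g. from a debug-queue job):

/usr/bin/numactl --hardware  # list NUMA nodes (sockets) with their cores and memory
/usr/bin/numactl --show      # show the binding of the current process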
MPI program that uses redirection - job script example

#$ -V
#$ -cwd
#$ -N mvapich_job
#$ -pe mpi_4cpu 32
#$ -q normal
#$ -R yes
#$ -l h_rt=01:00:00
##$ -M my_email_address
##$ -m e
mpirun_rsh -hostfile $TMPDIR/machines -np $NSLOTS ./run.sh

run.sh

#!/bin/bash
./mpi.exe < input.in > std.out
OpenMP program job script example (openmp.sh)

#$ -cwd          # use the current directory as the job's working directory
#$ -N openmp_job # job name
#$ -pe openmp 4  # number of OpenMP threads
#$ -q normal     # queue name
##$ -wd /scratch2/<user01>/openmp  # job working directory; generally not necessary
#$ -l OMP_NUM_THREADS=4
export OMP_NUM_THREADS=4
./omp.exe

Hybrid (MPI + OpenMP) job script example (hybrid.sh)

#$ -cwd           # use the current directory as the job's working directory
#$ -N hybrid_job  # job name
#$ -pe mpi_4cpu 8 # number of MPI tasks
#$ -q normal      # queue name
##$ -wd /scratch2/<user01>/hybrid  # job working directory; generally not necessary
#$ -l OMP_NUM_THREADS=2  # number of OpenMP threads per MPI task; must match the
                         # OMP_NUM_THREADS value exported below, or the job is
                         # forcibly terminated
export OMP_NUM_THREADS=2
mpirun_rsh -hostfile $TMPDIR/machines -np $NSLOTS ./hybrid.exe

User Support
Technical support: NISN Helpdesk
Accounting related: account@ksc.re.kr
Parallelization / optimization: parallel@ksc.re.kr
Education staff: JinWoo Sung
Homepage:
NISN Education:
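Putting it all together, a typical end-to-end session - log in, load a toolchain, build, and submit - might look like this (a sketch using only commands shown above; user01 and mpi_test.c are illustrative):

ssh -l user01 tachyon2.ksc.re.kr
module purge
module load compiler/intel-11.1 mpi/openmpi
mpicc -o mpi.exe -O3 mpi_test.c
qsub mpi_openmpi.sh
qstat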
More information