Genius - introduction


1 Genius - introduction HPC team ICTS, Leuven 5th June 2018

2 VSC HPC environment GENIUS 2

3 VSC HPC environment (overview diagram)
ThinKing: Ivy Bridge nodes, 4160 cores (2x Intel Ivy Bridge, 10 cores each, 64 GB or 128 GB RAM, IB QDR) and Haswell nodes, 3456 cores (2x Intel Haswell, 12 cores each, 64 GB or 128 GB RAM, IB FDR).
Cerebro (shared memory, NUMAlink6 / IB FDR): 1 node with 480 cores (48x Intel Ivy Bridge, 10 cores each, 12 TB RAM, 20 TB scratch) and 1 node with 160 cores (16x Intel Ivy Bridge, 10 cores each, 2 TB RAM).
Accelerators: 8 nodes with 2x NVIDIA Tesla K20X (2688 GPGPU cores, 6 GB RAM per GPU), 5 nodes with 2x NVIDIA Tesla K40 (12 GB RAM per GPU), 8 nodes with Intel Xeon Phi 5110P (120 coprocessor cores, 8 GB RAM).
Genius (2018, IB EDR): Skylake nodes, 3456 cores (2x Intel Skylake, 18 cores each, 192 GB or 768 GB RAM) and 20 GPU nodes, 720 cores (2x Intel Skylake, 18 cores each, 4x NVIDIA P100 per node).
Login and visualisation nodes: 8 nodes with 160 cores (2x Intel Ivy Bridge, 10 cores each, 64 GB RAM), 2 Haswell nodes with 2x NVIDIA Quadro K-series (64 GB RAM), 2 nodes with 72 cores (2x Intel Skylake, 18 cores each, 384 GB RAM) and 2 nodes with 72 cores (2x Intel Skylake, 18 cores each, 384 GB RAM, 1x NVIDIA P6000).
Storage: NAS 70 TB (HOME, DATA), GPFS scratch (DDN, 1.2 PB), GPFS archive (DDN, 600 TB).
The clusters are connected to the KU Leuven network and Belnet over Ethernet.

4 Genius overview (rack layout)
The nodes are spread over three racks (r22, r23, r24), with a number of nodes per chassis/enclosure. The GPU nodes (r22g.., r23g.., r24g..) are distributed over the 3 racks; the compute nodes and large memory nodes sit in the chassis r22i13n.., r22i27n.., r23i13n.., r23i27n.., ...

5 Genius overview

Type of node       CPU type                    Interconnect  # cores  Installed mem  Local disk  # nodes
skylake            Xeon 6140                   IB-EDR        36       192 GB         800 GB      86
skylake large mem  Xeon 6140                   IB-EDR        36       768 GB         800 GB      10
skylake GPU        Xeon 6140 + 4x P100 SXM2    IB-EDR        36       192 GB         800 GB      20

6 System comparison: Tier-2 ThinKing vs. Genius (2018)
(values given as ThinKing Ivy Bridge | ThinKing Haswell | Genius Skylake)

Total nodes: 176 / ... | ... | 86 / 10
Processor type: Ivy Bridge | Haswell | Skylake
Base clock speed: 2.8 GHz | 2.5 GHz | 2.3 GHz
Cores per node: 20 | 24 | 36
Total cores: 4,160 | 3,456 | 3,456
Memory per node (GB): 64 / 128 | 64 / 128 | 192 / 768
Memory per core (GB): 3.2 / 6.4 | 2.7 / 5.3 | 5.3 / 21.3
Peak performance: 4 DP FLOPs/cycle (4-wide AVX addition OR 4-wide AVX multiplication) | 8 DP FLOPs/cycle (4-wide FMA, fused multiply-add, instructions with AVX2) | 16 DP FLOPs/cycle (8-wide FMA, fused multiply-add, instructions with AVX-512)
Network: InfiniBand QDR 2:1 | InfiniBand FDR | InfiniBand EDR
Cache (L1 / L2 / L3): 10x(32i+32d) KB / 10x256 KB / 25 MB | 12x(32i+32d) KB / 12x256 KB / 30 MB | 18x(32i+32d) KB / 18x1024 KB / 25 MB

7 Skylake compute node (diagram)
Each node has two sockets, each a separate NUMA node: socket 0 (NUMA node 0) holds cores 0-17, socket 1 (NUMA node 1) holds cores 18-35. Every socket has its own L3 cache and its own DDR4 memory channels; the two sockets are connected through QPI links, and the InfiniBand adapter and I/O are attached to socket 0.

8 GPU comparison
(values given as K20Xm | K40c | P100 (2018))

Total number of nodes: 8 | 5 | 20
GPUs per node: 2 | 2 | 4
Total CUDA cores: 2688 | 2880 | 3584
Memory: 6 GB | 12 GB | 16 GB
Base clock speed (cores): 732 MHz | 745 MHz | 1328 MHz
Max clock speed (cores): 784 MHz | 874 MHz | 1480 MHz
Memory bandwidth: 249.6 GB/s | 288 GB/s | 732 GB/s
Peak double precision floating point performance: 1.31 Tflops | 1.43 Tflops | 5.3 Tflops
Peak single precision floating point performance: 3.95 Tflops | 4.29 Tflops | 10.6 Tflops
Features: SMX, Dynamic Parallelism, Hyper-Q, GPU Boost | SMX, Dynamic Parallelism, Hyper-Q, GPU Boost | NVLink, GPU Boost

9 Production phase
During the pilot phase Genius runs with its own, new MOAB/Torque and MAM (accounting) instance next to the existing MOAB/Torque and MAM serving the production systems ThinKing and Cerebro; Viewpoint is added as a web portal on top. The systems involved:
ThinKing: Ivy Bridge nodes (4160 cores, 2x Intel Ivy Bridge 10 cores, 64/128 GB RAM) and Haswell nodes (3456 cores, 2x Intel Haswell 12 cores, 64/128 GB RAM).
Cerebro: 1 node with 480 cores (48x Intel Ivy Bridge 10 cores, 12 TB RAM, 20 TB scratch) and 1 node with 160 cores (16x Intel Ivy Bridge 10 cores, 2 TB RAM).
Genius: Skylake nodes (3,456 cores, 2x Intel Skylake 18 cores, 192 GB/768 GB RAM) and 20 GPU nodes (720 cores, 2x Intel Skylake 18 cores, 4x NVIDIA P100 per node).
Shared GPFS storage on DDN 14K; separate login nodes.

10 Login nodes
ssh login nodes - different purpose, different limits:
2 login nodes (different from the ThinKing login nodes): login1-tier2.hpc.kuleuven.be and login2-tier2.hpc.kuleuven.be - basic command line login.
2 login nodes with visualisation capabilities (NVIDIA Quadro P6000 GPU): login3-tier2.hpc.kuleuven.be and login4-tier2.hpc.kuleuven.be - basic command line login + GPU rendering.
2 NX nodes (nx1, nx2), access through the NX server - GUI login to ThinKing, terminal to Genius; use these to open Viewpoint.
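As an illustration (vsc30xxx is a placeholder for your own VSC account name), logging in from a terminal could look like:

$ ssh vsc30xxx@login1-tier2.hpc.kuleuven.be       # basic command line login
$ ssh -X vsc30xxx@login3-tier2.hpc.kuleuven.be    # visualisation login node, with X forwarding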

11 Storage areas (same as on ThinKing)

Name                          Variable                         Type    Access            Backup       Quota
/user/leuven/30x/vsc30xxx     $VSC_HOME                        NFS     Global            YES          3 GB
/data/leuven/30x/vsc30xxx     $VSC_DATA                        NFS     Global            YES          75 GB
/scratch/leuven/30x/vsc30xxx  $VSC_SCRATCH, $VSC_SCRATCH_SITE  GPFS    Global            NO           100 GB
/node_scratch (ThinKing)      $VSC_SCRATCH_NODE                ext4    Local             NO           ... GB
/node_scratch (Cerebro)       $VSC_SCRATCH_NODE                xfs     Local             NO           10 TB
/node_scratch (Genius)        $VSC_SCRATCH_NODE                ext4    Local             NO           100 GB
/staging/leuven/stg_xxxxx     n/a                              GPFS    Global            NO           Minimum 1 TB
/archive/leuven/arc_xxxxx     n/a                              Object  Global            NO (Mirror)  Minimum 1 TB
/mnt/beeond/ (Genius)         $VSC_SCRATCH_JOB                 BeeGFS  Nodes in the job  NO           300 GB

To check available space:
$ quota -s ($VSC_HOME and $VSC_DATA)
$ mmlsquota vol_ddn2:leuven_scratch --block-size auto ($VSC_SCRATCH)

12 Available GPUs at KU Leuven/UHasselt
(values given as Tesla K20 | Tesla K40 | Pascal P100)

SP cores: 14x192=2,688 | 15x192=2,880 | 56x64=3,584
DP cores: 14x64=896 | 15x64=960 | 56x32=1,792
Clock freq. (MHz): 732 | 745 | 1328
DRAM (GB): 6 | 12 | 16
DRAM freq. (GHz): 2.6 (384-bit) | 3.0 (384-bit) | ... (4096-bit)
Compute capability: 3.5 | 3.5 | 6.0
L2 cache (MB): 1.5 | 1.5 | 4
Constant mem. (KB): 64 | 64 | 64
Shared mem. per block (KB): 48 | 48 | 48
Registers per block (x1024): 64 | 64 | 64

13 PCI-e vs. NVLink peer-to-peer bandwidth (diagram)
Bi-directional P2P bandwidth matrices between GPU 0, 1, 2 and 3 within a node, comparing PCIe (high/medium/low bandwidth depending on which GPU pair communicates) with NVLink on the P100 nodes at Leuven.

14 How to Start-to-GPU? Approach 1: Users
Does your software already use GPUs? Check the Nvidia Application Catalog:
Machine Learning: TensorFlow, Keras, PyTorch, Caffe2, ...
Chemistry: Abinit, BigDFT, CP2K, Gaussian, Quantum ESPRESSO, BEAGLE-lib, VASP, ...
Physics & Engineering: OpenFOAM, Fluent, COSMO, ...
Biophysics: NAMD, CHARMM, GROMACS, ...
Tools: Allinea Forge, CMake, MAGMA, ...

15 How to Start-to-GPU? Approach 2: Porting
Incrementally port your code to use GPUs! Check the Nvidia libraries: cuBLAS, cuFFT, cuSPARSE, cuRAND, Thrust, ... Replace function calls in your application with the corresponding call from a CUDA library, e.g. SGEMM( ) -> cublasSgemm( ). (Image taken from the Nvidia CUDA 9.2 libraries overview.)
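As a minimal sketch of what this looks like on the cluster (the CUDA module name and the source file name are assumptions, not taken from the slides; check module av CUDA for what is actually installed), compiling and linking a code that now calls cublasSgemm could be done as:

$ module load CUDA                               # assumed module name, pick the installed version
$ nvcc -c my_blas_code.c                         # compile the file that calls cublasSgemm( )
$ nvcc my_blas_code.o -lcublas -o my_blas_code   # link against the cuBLAS library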

16 How to Start-to-GPU? Approach 3: Developer
Tailor your software development to the GPU hardware!
High-level APIs - Python: Numba, NumbaPro, PyCUDA, Quasar; Matlab: overloaded functions and gpuArrays; R: rcuda, rpud.
Language directives: OpenACC, CUF kernels.
Low-level APIs / programming models: CUDA (C/C++/Fortran), OpenCL.

17 Torque/Moab
Jobs have to be submitted from the new (Genius) login nodes. Some commands:
$ qsub : submit a job, returns a job ID, e.g.
$ qsub test.sh
<job-id>.tier2-p-moab-2.icts.hpc.kuleuven.be
$ qdel <job-id> : delete a queued or running job
$ qsub -A lpt2_pilot_2018 : project credits during the pilot phase
Later, a project passed with -A (even default_project for introductory credits) will be required.
CPU nodes: SINGLE user policy (only 1 user per node); single core jobs can end up on the same node, but are accounted on a per-job basis.
GPU nodes: SHARED user policy - MULTIPLE users per node are allowed.
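To put these options together, a minimal job script (a sketch: the job name, toolchain module and program are placeholders; the project name is the pilot-phase project from this slide) could look like:

#!/bin/bash -l
#PBS -N test_job                 # job name
#PBS -l nodes=1:ppn=36           # one full node, 36 cores
#PBS -l walltime=01:00:00        # one hour of walltime
#PBS -A lpt2_pilot_2018          # project to charge
cd $PBS_O_WORKDIR                # go to the directory the job was submitted from
module load foss/2018a           # load a toolchain (example)
./my_program                     # placeholder for your own executable

Submit it from a Genius login node with: $ qsub test.sh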

18 Moab Allocation Manager
# credits = walltime (in seconds) x nodes x f_type x 1/3600
Project credits are valid for all Tier-2 clusters (ThinKing, Cerebro, GPU, Genius) after the pilot phase.
f_type depends on the node type: ThinKing Ivy Bridge, ThinKing Haswell, ThinKing GPU, Cerebro, Genius CPU and Genius GPU (full node, 4x P100) each have their own factor; for a Genius CPU node f_type = 10.
Example:
-l nodes=1:ppn=1,walltime=1:00:00  : # credits = (3600 x 1 x 1/3600) x 10 = 10
-l nodes=1:ppn=36,walltime=1:00:00 : # credits = (3600 x 1 x 1/3600) x 10 = 10
Because of the single-user policy the accounting is per node, so both jobs cost the same.

19 BeeOND
BeeOND ("BeeGFS On Demand") was developed to enable easy creation of one or multiple BeeGFS instances on the fly. BeeOND is typically used to aggregate the performance and capacity of internal SSDs or hard disks in compute nodes for the duration of a compute job. This provides additional performance and a very elegant way of burst buffering.
Temporary fast storage (only during the job execution), dedicated to the user (not shared); SSDs are fast for I/O operations.
To schedule a job with a BeeOND file system:
$ qsub -l nodes=2:ppn=36:beeond
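A hedged sketch of how a job could use it (relying on the $VSC_SCRATCH_JOB mount point from the storage table; directory and program names are placeholders):

#PBS -l nodes=2:ppn=36:beeond
#PBS -l walltime=04:00:00
#PBS -A lpt2_pilot_2018
cd $PBS_O_WORKDIR
cp -r input_data $VSC_SCRATCH_JOB/        # stage input onto the on-demand BeeGFS file system
cd $VSC_SCRATCH_JOB
my_io_heavy_program input_data            # placeholder for an I/O-intensive application
cp -r results $PBS_O_WORKDIR/             # copy results back before the job (and BeeOND) ends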

20 Single island
The compute nodes are bundled into several domains (islands). Within one island the network topology is a 'fat tree', for highly efficient communication; the connection between the islands is much weaker. You can choose to request that a job runs within one island (maximum number of nodes = 24):
$ qsub -l nodes=24:ppn=36:singleisland

21 Queues
The currently available queues on Genius are q1h, q24h, q72h and q7d. There will be no 21-day queue during the pilot phase. As before, we strongly recommend that instead of specifying queue names in your batch scripts you use the PBS -l options to define your needs. Some useful -l options for resource usage:
-l walltime=4:30:00 (job will last 4 h 30 min)
-l nodes=2:ppn=36 (job needs 2 nodes and 36 cores per node)
-l pmem=5gb (job requests 5 GB of memory per core; the default for the thin nodes)
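Combining these into one submission (a sketch; the script name is a placeholder and the project name is the pilot-phase project from earlier slides):

$ qsub -l nodes=2:ppn=36 -l walltime=4:30:00 -l pmem=5gb \
       -A lpt2_pilot_2018 myprogram.pbs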

22 Extra submission options
GPUs:
$ qsub -l nodes=1:ppn=1:gpus=1 -l partition=gpu
$ qsub -l nodes=1:ppn=36:gpus=4 -l partition=gpu
Large memory nodes:
$ qsub -l partition=bigmem
Debugging nodes:
$ qsub -l qos=debugging -l partition=gpu
$ qsub -l nodes=1:ppn=36 -l walltime=30:00 \
  -l qos=debugging -l partition=gpu -A lpt2_pilot_2018 \
  myprogram.pbs

23 Credits (after pilot phase)
Credit card concept:
Preauthorization: holding part of the balance as unavailable until the merchant clears the transaction.
Balance held as unavailable: based on the requested resources (walltime, nodes).
Actual charge based on what was really used: the used walltime (you only pay for what you use, e.g. when a job crashes). See the job output file:
Resource List: neednodes=2:ppn=6,nodes=2:ppn=6,pmem=1gb,walltime=01:00:00
Resources Used: cput=00:00:00,mem=0kb,vmem=0kb,walltime=00:00:02
How to check the available credits? (no module for accounting needed)
$ mam-balance

24 Viewpoint portal
Ease-of-use job submission and management. Viewpoint is a rich, easy-to-use portal for end users and administrators, designed to increase productivity through its visual web-based interface, powerful job management features, and other workload functions. It speeds up the submission process and reduces errors by automating best practices, expands the HPC user base to include even non-IT-skilled users, and helps admins gain insight into workload and resource utilization for better management and troubleshooting.

25 Viewpoint portal
Who should use it?
Researchers that like a GUI or are not very familiar with the Linux command line
Researchers that work in the NX environment
Group administrators, who can create templates/workflows for the whole group
Group members that share data
Researchers for whom a template exists (but defining your own templates is also possible)

26 Viewpoint portal
Interested? Contact us for the initial login procedure.
Setup: access from ThinKing NX (Firefox); later it will be moved outside HPC.

27 Viewpoint portal 27

28 Viewpoint portal 28

29 Viewpoint portal 29

30 Viewpoint portal 30

31 Viewpoint portal - Home 31

32 Viewpoint portal - Workload 32

33 Viewpoint portal - Templates Contact us for help 33

34 Viewpoint portal File Manager 34

35 Viewpoint portal - Home 35

36 Viewpoint portal Create Job 36

37 Viewpoint portal R with Worker 37

38 Viewpoint portal Free form 38

39 Software
Operating system: CentOS 7, 64 bit (kernel ...el7.x86_64).
Applications.
For development: compilers & basic libraries (toolchains), libraries, tools (debuggers, profilers).
Use modules - these are different from the ThinKing modules!

40 Available toolchains

Name            intel (version 2018a)              foss (version 2018a)
Compilers       Intel compilers: icc, icpc, ifort  GNU compilers: gcc, g++, gfortran
MPI library     Intel MPI                          OpenMPI
Math libraries  Intel MKL                          OpenBLAS, LAPACK, FFTW, ScaLAPACK
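As a hedged illustration (module names follow the name/version pattern from the table; the source file name and the use of the standard MPI compiler wrappers are assumptions), building an MPI program with either toolchain could look like:

$ module load intel/2018a
$ mpiicc my_mpi_prog.c -o my_mpi_prog     # Intel MPI wrapper around icc

$ module load foss/2018a
$ mpicc my_mpi_prog.c -o my_mpi_prog      # Open MPI wrapper around gcc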

41 Software
The most commonly used software is installed. Your own builds need to be rebuilt for Genius. If something is missing, please contact us!

42 Software
By default the 2018a software is listed ($ module available). The module software manager is now Lmod. Lmod is a Lua-based module system, but it is fully compatible with the TCL modulefiles we have used in the past. All the module commands that you are used to will work, but Lmod is somewhat faster and adds a few additional features on top of the old implementation. To (re)compile, ask for an interactive job. The default module is whatever is the default at the time of loading, and is subject to change.
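For example (a sketch; the walltime is arbitrary and the project name is the pilot-phase project from earlier slides), an interactive job for compiling can be requested as:

$ qsub -I -l nodes=1:ppn=36 -l walltime=2:00:00 -A lpt2_pilot_2018
# wait until the prompt appears on the compute node, then load modules and compile there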

43 Modules
$ module available (or: module av) : lists all installed software packages
$ module av |& grep -i python : shows only the modules that have the string 'python' in their name, regardless of case
$ module load foss : loads the module and adds its commands and libraries to your environment (e.g. to your PATH)
$ module list : lists all modules loaded in the current session
$ module unload R/3.4.4-intel-2018a-X : removes only the selected module; other loaded modules and dependencies stay loaded
$ module purge : removes all loaded modules from your environment

44 Modules
$ module swap foss intel : equivalent to module unload foss; module load intel
$ module try-load packagexyz : tries to load a module, with no error message if it does not exist
$ module keyword word1 word2 ... : keyword search; searches any help message or whatis description for the word(s) given on the command line
$ module help foss : prints the help message from the modulefile
$ module spider foss : describes the module

45 Modules - the convenient ml tool
$ ml = module list
$ ml foss = module load foss
$ ml -foss = module unload foss (not purge!)
$ ml show foss : info about the module
Possible to create user collections:
module save <collection-name>
module restore <collection-name>
module describe <collection-name>
module savelist
module disable <collection-name>
More info:
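A short usage sketch (my_setup is a placeholder collection name) of how collections restore a working set of modules in a later session or job:

$ module load intel/2018a
$ module save my_setup        # store the currently loaded modules as a collection
$ module purge
$ module restore my_setup     # bring the saved set back with one command
$ module savelist             # list your saved collections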

46 Questions
Now
Helpdesk: or
VSC web site:
VSC documentation
Genius Quick Start Guide:
Slides from the session are available on the session webpage
VSC agenda: training sessions, events
Systems status page: or
