Genius Quick Start Guide


Overview of the system

Genius consists of a total of 116 nodes with 2 Skylake Xeon Gold 6140 processors each (18 cores per processor), at least 192 GB of memory and 800 GB of local SSD disk per node. There are 3 sections: a thin node cluster with 86 nodes, 10 large memory nodes and a GPU section with 20 nodes. Table 1 shows the hardware details of the Genius cluster compared to ThinKing:

Total nodes: 176/32 (ThinKing Ivy Bridge), 48 (ThinKing Haswell); Genius: 86 (thin), 10 (large memory), 20 (GPU)
Processor type: Intel Xeon E5-2680v2 (Ivy Bridge), Intel Xeon E5-2680v3 (Haswell), Skylake Intel Xeon 6140 (all Genius sections)
Base clock speed (GHz): 2.8 (Ivy Bridge), 2.5 (Haswell), 2.3 (Genius)
Cores per node: 20 (Ivy Bridge), 24 (Haswell), 36 (Genius)
Memory per node (GB): 64/128 @ 1866 MHz (Ivy Bridge), 64/128 @ 2133 MHz (Haswell), 192 @ 2666 MHz (thin), 768 @ 2666 MHz (large memory), 192 @ 2666 MHz (GPU)
Network: IB QDR 2:1 (Ivy Bridge), IB FDR (Haswell), Infiniband EDR (Genius)
L1 cache: 10x (32i+32d) KB (Ivy Bridge), 12x (32i+32d) KB (Haswell), 18x (32i+32d) KB (Genius)
Local disk: 200 GB HDD (Ivy Bridge), 100 GB SSD (Haswell), 800 GB SSD (Genius)
Table 1 Hardware overview

The nodes have 2 IB connections; the network for the storage traffic is separated from the network for the MPI communication.

The Genius GPUs compare to the ThinKing accelerators as follows:

GPU type: K20Xm (ThinKing), K40c (ThinKing), P100 (Genius)
GPUs per node: 4 (Genius)
Memory: 6 GB, 12 GB, 16 GB
Base clock speed (cores): 732 MHz, 745 MHz, 1328 MHz
Max clock speed (cores): 784 MHz, 874 MHz, 1480 MHz
Memory bandwidth: 249.6 GB/s, 288 GB/s, 732 GB/s
Peak double precision floating point performance: 1.31 TFlops, 1.43 TFlops, 5.3 TFlops
Peak single precision floating point performance: 3.95 TFlops, 4.29 TFlops, 10.6 TFlops
Features: SMX, Dynamic Parallelism, Hyper-Q, GPUBoost (K20Xm and K40c); NVLink, GPUBoost (P100)

Connecting to Genius during the pilot phase

Genius has 4 dedicated login nodes. In the closed pilot phase only invited users have access to the login nodes. All users with an active VSC account can connect to a login node with the same credentials using the command:

$ ssh vscxxxxx@nodename

where nodename can be one of the following:

Normal login nodes:
login1-tier2.hpc.kuleuven.be
login2-tier2.hpc.kuleuven.be

Login nodes with visualization capabilities (NVIDIA Quadro P6000 GPU):
login3-tier2.hpc.kuleuven.be
login4-tier2.hpc.kuleuven.be

Accessing your Data

All global storage areas available on ThinKing are also available on Genius, so no data migration is required. Table 2 summarizes the available storage areas and their characteristics:

Name | Variable | Type | Access | Backup | Default quota
/user/leuven/30x/vsc30xxx | $VSC_HOME | NFS | Global | YES | 3 GB
/data/leuven/30x/vsc30xxx | $VSC_DATA | NFS | Global | YES | 75 GB
/scratch/leuven/30x/vsc30xxx | $VSC_SCRATCH, $VSC_SCRATCH_SITE | GPFS | Global | NO | 100 GB
/node_scratch | $VSC_SCRATCH_NODE | ext4 | Local | NO | 100 GB
/mnt/beeond/ | $VSC_SCRATCH_JOB | BeeGFS | Nodes in the job | NO | 300 GB
Table 2 Storage areas overview
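As a minimal sketch (vsc30xxx and login1-tier2 are placeholders for your own account and a login node of your choice), logging in and checking where the storage areas point could look like this:

$ ssh vsc30xxx@login1-tier2.hpc.kuleuven.be
$ echo $VSC_HOME        # home directory, 3 GB quota
$ echo $VSC_DATA        # data directory, 75 GB default quota
$ echo $VSC_SCRATCH     # shared scratch, 100 GB default quota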

$VSC_HOME: A regular home directory which contains all files that a user might need to log on to the system, and small 'utility' scripts/programs/source code/... The capacity that can be used is restricted by quota and this directory should not be used for I/O intensive programs. Regular backups are performed.

$VSC_DATA: A data directory which can be used to store programs and their results. Regular backups are performed. This area should not be used for I/O intensive programs. There is a default quota of 75 GB, but it can be enlarged. You can find more information about the price and conditions here:

$VSC_SCRATCH/$VSC_SCRATCH_SITE: On each cluster you have access to a scratch directory that is shared by all nodes of the cluster. This directory is also accessible from the login nodes, so it is accessible while your jobs run, and after they finish. No backups are made for that area and files can be removed automatically if they have not been accessed for 21 days.

$VSC_SCRATCH_NODE: A scratch space local to each compute node. On each node this directory points to a different physical location, and the content is only accessible from that particular worknode, and only during the runtime of your job.

Software

The software stack for Genius is still under construction, but all the basic software packages and a number of application packages are already available. Everything is built with the 2018a toolchain. We recommend compiling your software on the debugging nodes or on an interactive node, not on the login nodes, as the OS and node configuration of the compute nodes differ slightly from those of the login nodes.

The modules software manager tool is available on Genius as it was on ThinKing. There is a small difference since it is now Lmod. Lmod is a Lua-based module system, but it is fully compatible with the TCL modulefiles we've used in the past. All the module commands that you are used to will work, but Lmod is somewhat faster and adds a few additional features on top of the old implementation. The switch to Lmod should be mostly transparent, i.e. you should not have to change your existing job scripts, but of course you need to take into account the new toolchain. The default MODULEPATH is 2018a. Existing module commands should keep working as they were. The naming scheme for modules remains the same:

PackageName/version-ToolchainName-ToolchainVersion

where PackageName is the official name of the software, keeping capital and lower case letters. On Genius:

$ module av Python

/apps/leuven/skylake/2018a/modules/all

Boost/ foss-2018a-Python
GDAL/2.2.3-intel-2018a-Python
GEOS/3.6.2-intel-2018a-Python
Mako/1.0.7-intel-2018a-Python
Mesa/ foss-2018a-Python
Python/ foss-2018a
Python/ GCCcore bare
Python/ intel-2018a
Python/3.6.4-foss-2018a
Python/3.6.4-intel-2018a
SWIG/ intel-2018a-Python
Tkinter/3.6.4-foss-2018a-Python
VTK/8.0.1-foss-2018a-Python
matplotlib/2.1.2-foss-2018a-Python
wheel/ foss-2018a-Python

TIP: Revise your job scripts to ensure the appropriate software package name is used. Always use the complete name of the package (name and version) and do not rely on defaults. On Genius you will need to use the 2018a toolchain version.

Compiling and Running your Code

Several compilers and libraries are available on Genius, as well as two toolchain flavors: intel (based on Intel software components) and foss (based on free and open source software). A toolchain is a collection of tools to build (HPC) software consistently. It consists of:

compilers for C/C++ and Fortran
a communications library (MPI)
mathematical libraries (linear algebra, FFT)

Toolchains are versioned and refreshed twice a year. All software available on the cluster is rebuilt when a new version of a toolchain is defined to ensure consistency. Version numbers consist of the year of their definition, followed by either a or b, e.g. 2018a. Note that the software components are not necessarily the most recent releases; rather they are selected for stability and reliability. The 2018a toolchain gives the best support for the new generation of Intel CPUs. Older toolchains will not be ported to Genius. Table 3 summarizes the toolchains available on Genius and their components:

Name: intel (Intel compilers) | foss (Open Source compilers)
Version: 2018a | 2018a
Compilers: Intel compilers (icc, icpc, ifort) | GNU compilers (gcc, g++, gfortran)
MPI library: Intel MPI | OpenMPI
Math libraries: MKL | OpenBLAS, ScaLAPACK
Table 3 Toolchains on Genius
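For example (a minimal sketch, using the intel build of Python 3.6.4 from the listing above), loading software by its full name looks like this:

$ module purge                           # start from a clean environment
$ module load Python/3.6.4-intel-2018a   # always give the full name and version
$ module list                            # verify what is loaded

To build your own code against a toolchain you can load the toolchain module itself and use its compiler wrappers; the snippet below assumes a foss/2018a toolchain module exists, and hello_mpi.c is a placeholder source file:

$ module load foss/2018a
$ mpicc hello_mpi.c -o hello_mpi         # OpenMPI compiler wrapper, GCC underneath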

TIP: When recompiling your codes for use on Genius, check the results of the recompiled codes before starting production runs, and use the available toolchains for compiling whenever possible. To compile programs we recommend starting an interactive job on the machine.

Running Jobs

Torque/Moab is used for scheduling jobs on Genius, so the same commands and scripts used on ThinKing will work.

Credits

For the pilot phase everybody is added to the project lpt2_pilot_2018. There are credits available on this shared project, so to submit you should always specify this project (-A lpt2_pilot_2018). Later on the -A option will be obligatory (even for introductory credits).

A CPU node in Genius

A node unit in Genius is a physical server with 2 CPUs, which thus contains 36 cores. The scheduling policy is SINGLE_JOB, which means that only one user per node is allowed. Single core jobs can end up on the same node, but are accounted on a job basis. You should pack single core jobs, e.g. with the worker framework, to fill the node in order to be accounted only once per node.

A GPU node in Genius

A GPU node unit in Genius is a physical server with 2 CPUs and 4 P100 GPUs. The scheduling policy is SHARED, which means the node can be shared by different users. However, the users are separated by cgroups. A cgroup is created based on what was requested by the user, so if a user requests 18 cores and 2 GPUs he/she will only have access to 18 cores and 2 GPUs. If you want the complete node for yourself you will need to request the complete node: 36 cores and 4 GPUs.

Queues

The currently available queues on Genius are: q1h, q24h, q72h and q7d. There will be no 21 day queue during the pilot phase. As before, we strongly recommend that instead of specifying queue names in the batch scripts you use the PBS -l option to define your needs. Some useful -l options are:

Resource usage
-l walltime=4:30:00 (job will last 4h 30 min)
-l nodes=2:ppn=36 (job needs 2 nodes and 36 cores per node)

-l mem=40gb (job requests 40 GB of memory, summed over all processes)
-l pmem=5gb (job requests 5 GB of memory per core, which is the default for the thin nodes)

TIP: Don't forget that the CPU nodes have 36 cores. Revise your batch scripts to specify the correct ppn.

GPU Partition

As explained before, Genius is split into partitions with different numbers of cores and memory configurations. By default, jobs will be scheduled by the system in one of the partitions according to the resources requested and the availability. However, it is also possible to manually select one partition and have full control over where the jobs are executed. To specify a partition, use the following PBS option:

-l partition=partition_name

where partition_name can be, for example, gpu or bigmem. An example of a job submitted using such a resource request could be:

$ qsub -l nodes=10:ppn=36:gpus=4 -l walltime=1:00:00 \
  -l pmem=4gb -l partition=gpu -A lpt2_pilot_2018 \

This would request 10 nodes, each with 4 GPUs. In case you only need one GPU you can request:

$ qsub -l nodes=1:ppn=9:gpus=1 -l walltime=1:00:00 \
  -l pmem=4gb -l partition=gpu -A lpt2_pilot_2018 \

Should your program launch more than 1 process on the GPU, you need to add the :default option:

$ qsub -l nodes=1:ppn=9:gpus=1:default -l walltime=1:00:00 \
  -l pmem=4gb -l partition=gpu -A lpt2_pilot_2018 \

Note that you really need to explicitly request the number of GPUs you want to use, otherwise they will be invisible.

Large memory nodes

To submit to the large memory nodes you also need to explicitly specify -l partition=bigmem, together with the amount of memory you need for your job. For example:

$ qsub -l nodes=2:ppn=36 -l walltime=1:00:00 \
  -l pmem=20gb -l partition=bigmem -A lpt2_pilot_2018 \
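Putting these options together, a minimal job script sketch for a single-GPU job (the script name gpu_job.pbs, the module choice and the program my_gpu_app are placeholders, not part of the official documentation) could look like this:

#!/bin/bash -l
#PBS -l nodes=1:ppn=9:gpus=1
#PBS -l partition=gpu
#PBS -l pmem=4gb
#PBS -l walltime=1:00:00
#PBS -A lpt2_pilot_2018

cd $PBS_O_WORKDIR                        # run from the directory the job was submitted from
module purge
module load Python/3.6.4-intel-2018a     # load whatever your application needs
./my_gpu_app                             # your own program

Submit it with:

$ qsub gpu_job.pbs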

Debugging queue

At this moment Genius has 1 GPU node for compiling and debugging purposes. You can request the node for a maximum of 30 minutes by specifying the QOS:

$ qsub -l nodes=1:ppn=36 -l walltime=30:00 \
  -l qos=debugging -l partition=gpu -A lpt2_pilot_2018 \

So mind specifying both the qos and the partition. The GPU debug node is a shared node, so if you want the node exclusively for yourself, you will have to reserve all cores using -l nodes=1:ppn=36. The maximum walltime is 30 minutes, which is shorter than the default walltime, so it should be specified explicitly.

Running TensorFlow on GPUs

Install miniconda in your VSC_DATA (and never in your VSC_HOME). If you already have it, please update your conda to the latest release:

$> conda update conda

Start an interactive session on one of the GPU nodes:

$> qsub -I -l partition=gpu,nodes=1:ppn=1:gpus=1 -A <myproject-name>

where the 1st argument is a capital I to request an interactive node, the 2nd is a lowercase l to specify the resources, and the last one is your project name for debiting the credits (e.g. for the pilot phase please use lpt2_pilot_2018).

Create a new conda environment with the latest TF-gpu:

$> conda create -n py36-tf19 python=3.6 tensorflow-gpu=1.9.0

Currently, tensorflow-gpu 1.9.0 is the latest compatible version; newer ones require a higher version of the Nvidia driver than the one installed on the cluster.

Ensure that TF-gpu can be imported without error, and can identify the two devices attached to the node:

$> conda activate py36-tf19
$> python
>>> import tensorflow as tf
>>> sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

Running PyTorch on GPUs

It is very straightforward to manage a conda environment that includes PyTorch. Install miniconda in your VSC_DATA (and never in your VSC_HOME), then create a new conda environment:

$> conda create -n py36-torch python=3.6

Install the latest PyTorch from the pytorch channel and check that it imports:

$> conda activate py36-torch
$> conda install pytorch torchvision cuda91 -c pytorch
$> python -c "import torch; print(torch.__path__)"
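As a quick sanity check (a sketch to run inside the interactive GPU session, with one of the environments above activated), you can verify that the GPU is actually visible without opening a Python prompt:

$> python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"   # TensorFlow: prints True if a GPU is usable
$> python -c "import torch; print(torch.cuda.is_available())"               # PyTorch: prints True if CUDA is usable
$> nvidia-smi                                                               # lists the GPUs assigned to your job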

New Features

Using local disks as temporary scratch with BeeOnd

As an alternative to the GPFS scratch space, on Genius we now also provide the possibility to request a BeeOND (BeeGFS On Demand) filesystem when launching your job. This spawns, for the duration of your job, a parallel shared filesystem on the compute nodes of your job, using their local disks. During your job this filesystem is mounted on /mnt/beeond/. After your job has completed the filesystem is destroyed, so don't forget to include a file transfer to a safe place in your job script. To request this filesystem, submit for example like this:

$ qsub -l nodes=2:ppn=36:beeond -l walltime=1:00:00 \
  -l pmem=5gb -A lpt2_pilot_2018 \
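A minimal job script sketch using this feature (my_input, results and my_io_heavy_app are placeholders; $VSC_SCRATCH_JOB points to /mnt/beeond/ as listed in Table 2) might look like:

#!/bin/bash -l
#PBS -l nodes=2:ppn=36:beeond
#PBS -l walltime=1:00:00
#PBS -l pmem=5gb
#PBS -A lpt2_pilot_2018

cp -r $VSC_DATA/my_input $VSC_SCRATCH_JOB/    # stage input onto the BeeOND filesystem
cd $VSC_SCRATCH_JOB
$PBS_O_WORKDIR/my_io_heavy_app my_input       # do the I/O-intensive work on the node-local parallel scratch
cp -r results $VSC_DATA/                      # copy results back before the filesystem is destroyed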
