Our Workshop Environment

John Urbanic
Parallel Computing Scientist
Pittsburgh Supercomputing Center

Copyright 2016

Our Environment This Week

Your laptops or workstations: only used for portal access.
Bridges is our HPC platform.

Here we will briefly go through the steps to log in, edit, compile, and run before we get into the real materials. We want to get all of the distractions and local trivia out of the way now. Everything after this talk applies to any HPC environment you will encounter.

Purpose-built Intel Omni-Path topology for data-intensive HPC:

800 HPE Apollo 2000 (128 GB) compute nodes
42 HPE ProLiant DL580 (3 TB) compute nodes
4 HPE Integrity Superdome X (12 TB) compute nodes
16 RSM nodes with NVIDIA K80 GPUs
32 RSM nodes with NVIDIA next-generation GPUs
12 HPE ProLiant DL380 database nodes
6 HPE ProLiant DL360 web server nodes
20 Storage Building Blocks, implementing the parallel Pylon filesystem (~10 PB) using PSC's SLASH2 filesystem
4 MDS nodes
2 front-end nodes
2 boot nodes
8 management nodes
6 core Intel OPA edge switches: fully interconnected, 2 links per switch
20 leaf Intel OPA edge switches
Intel OPA cables

http://staff.psc.edu/nystrom/bvt
http://psc.edu/bridges
2015 Pittsburgh Supercomputing Center

Bridges Node Types

Type      RAM(a)   Phase  n    CPU / GPU / other                                        Server
ESM       12 TB    1      2    16 Intel Xeon E7-8880 v3 (18c, 2.3/3.1 GHz, 45 MB LLC)   HPE Integrity Superdome X
                   2      2    16 TBA
LSM       3 TB     1      8    4 Intel Xeon E7-8860 v3 (16c, 2.2/3.2 GHz, 40 MB LLC)    HPE ProLiant DL580
                   2      34   4 TBA
RSM       128 GB   1      752  2 Intel Xeon E5-2695 v3 (14c, 2.3/3.3 GHz, 35 MB LLC)    HPE Apollo 2000
RSM-GPU   128 GB   1      16   2 Intel Xeon E5-2695 v3 + 2 NVIDIA K80                   HPE Apollo 2000
                   2      32   2 Intel Xeon E5-2695 v3 + 2 NVIDIA next-generation GPU
DB-s      128 GB   1      6    2 Intel Xeon E5-2695 v3 + SSD                            HPE ProLiant DL360
DB-h      128 GB   1      6    2 Intel Xeon E5-2695 v3 + HDDs                           HPE ProLiant DL380
Web       128 GB   1      6    2 Intel Xeon E5-2695 v3                                  HPE ProLiant DL360
Other(b)  128 GB   1      14   2 Intel Xeon E5-2695 v3                                  HPE ProLiant DL360, DL380

a. All RAM in these nodes is DDR4-2133.
b. Other nodes = front end (2) + management/log (8) + boot (2) + MDS (4).

Getting Connected

The first time you use your account sheet, you must go to apr.psc.edu to set a password. We will take a minute to do this shortly.

We will be working on bridges.psc.edu. Use an ssh client (a PuTTY terminal, for example) to ssh to the machine.

At this point you are on a login node. It will have a name like br001 or br006. This is a fine place to edit and compile codes. However, we must be on compute nodes to do actual computing. We have designed Bridges to be the world's most interactive supercomputer. We generally only require you to use the batch system when you want to. Otherwise, you get your own personal piece of the machine.

To get a single node, use interact:

[urbanic@br006 ~]$ interact
[urbanic@r590 ~]$

You can tell you are on a regular-memory compute node because it has a name like r590 or r101. Do make sure you are working on a compute node. Otherwise your results will be confusing.
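A minimal sketch of that login sequence (trainXX stands in for the username on your account sheet; hostname is simply a standard way to check which node you are on):

ssh trainXX@bridges.psc.edu    # connect to a login node (br001, br006, ...)
hostname                       # confirm you are on a login node

interact                       # request a single interactive compute node
hostname                       # should now show a compute node name like r590 or r101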

Editors

For editors, we have several options:

emacs
vi
nano: use this if you aren't familiar with the others
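If none of these sounds familiar, a quick way to try nano on the test file used later in this exercise:

nano test.c    # edit the file, then Ctrl+O to save and Ctrl+X to exit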

Compiling

We will be using standard Fortran and C compilers this week. They should look familiar:

pgcc for C
pgf90 for Fortran

We will slightly prefer the PGI compilers (the Intel or gcc ones would also be fine for most of our work, but not so much for OpenACC).

There are also MPI wrappers for these, called mpicc and mpif90, that we will use.

Note that on Bridges you would normally have to enable this compiler with

module load pgi

I have put that in the .bashrc file that we will all start with.
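As a concrete sketch of these compile commands (the -o output names are illustrative choices, and laplace_mpi.c/f90 is a hypothetical MPI version of the exercise code; laplace_serial.c/f90 comes from the Exercises directory described later):

module load pgi                               # only needed if your .bashrc does not already do this

pgcc  -o laplace_serial laplace_serial.c      # C with the PGI compiler
pgf90 -o laplace_serial laplace_serial.f90    # Fortran with the PGI compiler

mpicc  -o laplace_mpi laplace_mpi.c           # MPI C code via the wrapper compiler
mpif90 -o laplace_mpi laplace_mpi.f90         # MPI Fortran code via the wrapper compiler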

Multiple Sessions

You are limited to one interactive compute node session for our workshop. However, there is no reason not to open other sessions (windows) to the login nodes for compiling and editing. You may find this convenient. Feel free to do so.
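For example (username again stands in for the account name on your sheet), from a second terminal window on your laptop or workstation:

ssh username@bridges.psc.edu   # a second login-node session for editing and compiling
                               # while your interact session does the computing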

Our Setup For This Workshop

After you copy the files from the training directory, you will have:

/Exercises
    /Test
    /OpenMP
    /OpenACC
    /MPI
        laplace_serial.f90/c
        laplace_template.f90/c
        /Solutions
        /Examples

Preliminary Exercise

Let's get the boring stuff out of the way now.

Log on to apr.psc.edu and set an initial password.

Log on to Bridges:
ssh username@bridges.psc.edu

Copy the exercise directory from the training directory to your home directory, and then copy the workshop shell script into your home directory:
cp -r ~training/Exercises .
cp ~training/.bashrc .
Log out and back on again to activate this script. You won't need to do that in the future.

Edit a file to make sure you can do so. Use emacs, vi or nano (if the first two don't sound familiar).

Start an interactive session:
interact

cd into your Exercises/Test directory and compile (C or Fortran):
cd Exercises/Test
pgcc test.c
pgf90 test.f90

Run your program:
a.out
(You should get back a message of "Congratulations!")
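For reference, the whole sequence as a single hedged sketch (username stands in for the account name on your sheet; a.out is the compiler's default output name, run here with an explicit ./ so it works regardless of your PATH):

# Set your initial password at apr.psc.edu in a web browser, then:
ssh username@bridges.psc.edu          # log on to a Bridges login node

cp -r ~training/Exercises .           # copy the exercise directory
cp ~training/.bashrc .                # copy the workshop shell script
exit                                  # log out ...
ssh username@bridges.psc.edu          # ... and back in to activate the new .bashrc

interact                              # get an interactive compute node
cd Exercises/Test
pgcc test.c                           # or: pgf90 test.f90
./a.out                               # should print "Congratulations!"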