Our Workshop Environment
John Urbanic, Parallel Computing Scientist, Pittsburgh Supercomputing Center
Copyright 2018

Our Environment This Week

Your laptops or workstations will only be used for portal access. Bridges is our HPC platform. Here we will briefly go through the steps to log in, edit, compile, and run before we get into the real materials. We want to get all of the distractions and local trivia out of the way here. Everything after this talk applies to any HPC environment you will encounter.

20 Storage Building Blocks, implementing the parallel Pylon filesystem (~10 PB) using PSC's SLASH2 filesystem
4 MDS nodes
2 front-end nodes
2 boot nodes
8 management nodes
6 core Intel OPA edge switches: fully interconnected, 2 links per switch
Intel OPA cables
4 HPE Integrity Superdome X (12 TB) compute nodes
42 HPE ProLiant DL580 (3 TB) compute nodes
12 HPE ProLiant DL380 database nodes
6 HPE ProLiant DL360 web server nodes
20 leaf Intel OPA edge switches
800 HPE Apollo 2000 (128 GB) compute nodes
32 RSM nodes with NVIDIA next-generation GPUs
16 RSM nodes with NVIDIA K80 GPUs
Purpose-built Intel Omni-Path topology for data-intensive HPC
http://staff.psc.edu/nystrom/bvt
http://psc.edu/bridges

Bridges Node Types

Getting Connected

The first time you use your account sheet, you must go to apr.psc.edu to set a password. You may already have done so; if not, we will take a minute to do this shortly.

We will be working on bridges.psc.edu. Use an ssh client (a PuTTY terminal, for example) to ssh to the machine. At this point you are on a login node. It will have a name like br001 or br006. This is a fine place to edit and compile codes. However, we must be on compute nodes to do actual computing. We have designed Bridges to be the world's most interactive supercomputer. We generally only require you to use the batch system when you want to. Otherwise, you get your own personal piece of the machine.

To get 8 processors on a node, use interact:

[urbanic@br006 ~]$ interact -n 8
[urbanic@r590 ~]$

You can tell you are on a regular memory compute node because it has a name like r590 or r101. Do make sure you are working on a compute node. Otherwise your results will be confusing.
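As a quick illustration, a typical first connection might look like the sketch below. The username and the specific node names are placeholders; your prompt will show whichever login and compute nodes you actually land on.

    # On your laptop: connect to a Bridges login node (replace "username")
    ssh username@bridges.psc.edu

    # On the login node (e.g. br006): request 8 cores on a compute node
    [urbanic@br006 ~]$ interact -n 8

    # The prompt changes once a compute node (e.g. r590) is allocated
    [urbanic@r590 ~]$ hostname
    r590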

Editors

For editors, we have several options: emacs, vi, and nano (use nano if you aren't familiar with the others).
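For example, to create or edit a file with nano (the filename here is just an example), open it, make your changes, then save and exit with the standard nano keystrokes:

    nano hello.c      # edit, then Ctrl-O to save and Ctrl-X to exit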

Compiling

We will be using standard Fortran and C compilers this week. They should look familiar:

pgcc (aka mpicc) for C
pgf90 (aka mpif90) for Fortran

We will slightly prefer the PGI compilers (the Intel or gcc ones would also be fine for most of our work, but not so much for OpenACC). There are also MPI wrappers for these, called mpicc and mpif90, that we will use.

Note that on Bridges you would normally have to enable this compiler with

module load pgi

I have put that in the .bashrc file that we will all start with.
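For instance, a serial build and an MPI build might look like the following. laplace_serial.c is one of the exercise files mentioned later; laplace_mpi.c is a hypothetical filename used only for illustration.

    # Serial build with the PGI C compiler
    pgcc laplace_serial.c -o laplace_serial

    # MPI build using the wrapper around the same compiler
    # (laplace_mpi.c is a placeholder name, not necessarily a workshop file)
    mpicc laplace_mpi.c -o laplace_mpi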

Multiple Sessions

You are limited to one interactive compute-node session for our workshop. However, there is no reason not to open other sessions (windows) to the login nodes for compiling and editing. You may find this convenient. Feel free to do so.
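For example, from a second terminal window on your laptop you can simply log in again and use that login-node session for editing while your interactive session keeps running (the username is a placeholder):

    # Second terminal window: another login-node session for editing/compiling
    ssh username@bridges.psc.edu
    nano laplace_serial.c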

Our Setup For This Workshop

After you copy the files from the training directory, you will have:

/Exercises
    /Test
    /OpenMP
    /OpenACC
    /MPI
        laplace_serial.f90/c
        laplace_template.f90/c
        /Solutions
        /Examples

Preliminary Exercise

Let's get the boring stuff out of the way now.

1. Log on to apr.psc.edu and set an initial password if you have not.

2. Log on to Bridges:
   ssh username@bridges.psc.edu

3. Copy the exercise directory from the training directory to your home directory, and then copy the workshop shell script into your home directory:
   cp -r ~training/Exercises .
   cp ~training/.bashrc .
   Log out and back on again to activate this script. You won't need to do that in the future.

4. Edit a file to make sure you can do so. Use emacs, vi, or nano (if the first two don't sound familiar).

5. Start an interactive session:
   interact -n 8

6. cd into your Exercises/Test directory and compile (C or Fortran):
   cd Exercises/Test
   pgcc test.c
   pgf90 test.f90

7. Run your program:
   a.out
   (You should get back a message of "Congratulations!")
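The contents of test.c are not shown in these slides; a minimal C program consistent with the expected output might look like the following. This is purely illustrative, not the actual workshop file.

    #include <stdio.h>

    int main(void)
    {
        /* The real test.c may do more; this simply prints the expected message. */
        printf("Congratulations!\n");
        return 0;
    }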