Minnesota Supercomputing Institute

Introduction to MSI Systems Andrew Gustafson

The Machines at MSI

Machine Type: Cluster Source: http://en.wikipedia.org/wiki/cluster_%28computing%29

Machine Type: Cluster Clusters at MSI
Itasca: 1,091 HP ProLiant blade servers, each with two quad-core 2.8 GHz Intel Xeon X5560 "Nehalem EP" processors sharing 24 GiB of system memory, connected by a 40-gigabit QDR InfiniBand (IB) interconnect. In total, Itasca consists of 8,728 compute cores and 24 TiB of main memory. itasca.msi.umn.edu
Calhoun: an SGI Altix XE 1300 cluster with 180 compute nodes, each with 16 GB of memory. The nodes are interconnected with a 20-gigabit DDR InfiniBand (IB) network. calhoun.msi.umn.edu

Machine Type: Shared Memory Source: http://en.wikipedia.org/wiki/shared_memory

Machine Type: Shared Memory Shared memory systems at MSI
Koronis: an Altix UV 1000 node with 1,140 compute cores (190 six-core Intel Xeon X7542 "Westmere" processors at 2.66 GHz) and 2.96 TiB of globally addressable shared memory in a single system image. Dedicated to research related to human health. koronis.msi.umn.edu

Machine Type: Heterogeneous Source: http://electronicdesign.com/digital-ics/gpu-architecture-improves-embedded-application-support

Machine Type: Heterogeneous Heterogeneous systems at MSI
Cascade: 32 Tesla M2070 GPUs (4 per node, 8 nodes), 8 Kepler K20 GPUs (2 per node, 4 nodes), and 2 Xeon Phi co-processors (1 per node, 2 nodes). cascade.msi.umn.edu

Machine Type: Heterogeneous Many Core NVIDIA GPGPU Programmed with CUDA http://www.nvidia.com/object/what-is-gpu-computing.html

Machine Type: Heterogeneous Many Core Intel Co-processor Programmed in C/C++/Fortran with special statements. Source: http://www.altera.com/technology/system-design/articles/2012/multicore-many-core.html

Machine Types: Programming Complexity General view:
Simplest: Shared Memory (OpenMP)
Medium: Cluster (MPI)
Complex: Heterogeneous (CUDA, special statements)

MSI Labs and Lab Servers https://www.msi.umn.edu/labs
Labs at 5 locations. Lab machines use the MSI software environment. Special high-definition and 3D visualization equipment.
Lab servers for small jobs: lab.msi.umn.edu. Many software packages, and some special hardware such as liquid-cooled overclocked nodes.

Interacting with MSI Systems

Connecting to MSI https://www.msi.umn.edu/remote-access SSH is the most reliable method. Some programs with SSH functionality on Windows include PuTTY and Cygwin. Other connection methods such as NX are available. Most systems require you to first ssh into: login.msi.umn.edu
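For example, a typical connection from a terminal looks like the following sketch (the username is a placeholder for your MSI username):
ssh username@login.msi.umn.edu    # log in to the MSI gateway first
ssh itasca.msi.umn.edu            # then hop to the system you want to use, e.g. Itasca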

MSI Computing Environment https://www.msi.umn.edu/resources/computing-environment MSI systems are mostly Linux compute clusters running CentOS. Software is managed via a module system. Jobs are scheduled via a queueing system.

High Performance Primary Storage Home directories are unified across all Linux systems (except for Koronis). Each group has a disk quota, which can be viewed with the command: groupquota
Panasas ActivStor 14: 1.28 PB of storage, capable of 30 GB/sec read/write and 270,000 IOPS

Software Management Software is managed via a module system
module load modulename - loads a software module
module show modulename - shows the actions of a module
module avail - shows all available modules
module list - shows a list of currently loaded modules
module unload modulename - unloads a software module
module purge - unloads all software modules
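For example, setting up the environment used by the MPI job script on the next slide might look like this (the module names are taken from that script; what is available can vary by system):
module avail               # see which modules are installed
module load intel          # load the Intel compiler suite
module load ompi/intel     # load Open MPI built with the Intel compilers
module list                # confirm what is currently loaded
module purge               # unload everything when finished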

Job Scheduling Jobs are scheduled using the Portable Batch System (PBS) queueing system. To schedule a job, first make a PBS job script:
#!/bin/bash -l
#PBS -l walltime=8:00:00,nodes=3:ppn=8,pmem=1000mb
#PBS -m abe
#PBS -M sample_email@umn.edu
cd ~/program_directory
module load intel
module load ompi/intel
mpirun -np 24 program_name < inputfile > outputfile
https://www.msi.umn.edu/resources/job-submission-and-scheduling-pbs-scripts https://www.msi.umn.edu/resources/job-queues

Job Scheduling To submit a job script use the command: qsub -q queuename scriptname
A list of queues available on different systems can be found here: https://www.msi.umn.edu/resources/job-queues
Submit jobs to a queue which is appropriate for the resources needed.
To view queued jobs use the commands:
qstat -u username
showq -w user=username
To cancel a submitted job use the command: qdel jobnumber
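Putting these commands together, a submission might look like the following sketch (the queue name, script name, and username are placeholders):
qsub -q queuename myjob.pbs     # submit the job script; qsub prints a job number, e.g. 123456
qstat -u username               # check the state of your queued and running jobs
showq -w user=username          # alternative view of your jobs in the scheduler
qdel 123456                     # cancel the job, using the number printed by qsub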

Interactive Jobs Nodes may be requested for interactive use with the command: qsub -I -l walltime=1:00:00,nodes=1:ppn=8
The job waits in the queue like all jobs; when it begins, control returns to your terminal with a shell on the assigned node.
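Once the interactive shell starts, you work on the compute node as you would at any prompt, for example (the program name is a placeholder):
module load intel        # set up the software environment on the compute node
./my_program             # run work interactively
exit                     # end the interactive job and release the node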

Service Units (SUs) Jobs on the high performance computing (HPC) systems consume Service Units (SUs), which roughly correspond to processor time. Each research group is given a service unit allocation at the beginning of the year. To view the number of service units remaining, use the command: acctinfo
If a group is using service units faster than the "fairshare target", then the group's jobs will have lower queue priority. https://www.msi.umn.edu/hpc/fairshare

Simple Parallelization: pbsdsh
#!/bin/bash -l
#PBS -l walltime=8:00:00,nodes=1:ppn=8,pmem=1000mb
#PBS -m abe
#PBS -M sample_email@umn.edu
cd ~/job_directory
pbsdsh -n 0 ~/job_directory/script0.sh &
pbsdsh -n 1 ~/job_directory/script1.sh &
pbsdsh -n 2 ~/job_directory/script2.sh &
pbsdsh -n 3 ~/job_directory/script3.sh &
pbsdsh -n 4 ~/job_directory/script4.sh &
pbsdsh -n 5 ~/job_directory/script5.sh &
pbsdsh -n 6 ~/job_directory/script6.sh &
pbsdsh -n 7 ~/job_directory/script7.sh &
wait
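Each scriptN.sh is an ordinary shell script containing one task's share of the work; the trailing & launches the eight tasks concurrently and wait keeps the job alive until they all finish. A minimal, hypothetical script0.sh might look like this (the program and file names are placeholders):
#!/bin/bash
cd ~/job_directory
./serial_program input0.dat > output0.log   # task 0 processes its own input file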

Job Problem Solving MSI has resources available to help solve job problems.
https://www.msi.umn.edu/services/user-support
https://www.msi.umn.edu/resources/job-problem-solving
https://www.msi.umn.edu/support/faq
https://www.msi.umn.edu/resources/debugging
Help can be requested by emailing: help@msi.umn.edu

MSI Services At-cost Consulting Complementing its core mission of providing supercomputing resources at no charge to the research communities of the University of Minnesota and other institutions of higher education in the state, MSI charges other units of the University at cost for the efforts of its staff when one of the following conditions applies:
Effort is at 10% of an FTE or greater for at least one month;
Effort is at 5% of an FTE or greater for at least three months; or
Effort is at any level and is grant funded.

MSI Services Data Storage For groups that have unusually large storage requirements or groups that require guaranteed space at all times, MSI offers high performance, dedicated space for a fee. Currently, the fee is $261 / (TB-year). MSI offers this storage in increments of 1 TB, and for periods of time that are multiples of 1 year. This fee guarantees the contracted space availability, backups, and general management and maintenance. Dedicated storage can be accessed from all MSI computing systems.

MSI Services Tape Storage For those users whose large data storage needs are largely archival in nature, MSI also provides tape storage at cost. We provide high-density tapes (LTO-5) for $100 per tape, plus an associated labor charge of $109 (two hours of staff time) to move data to the tape(s) and deliver the tape(s) to the researcher. Each tape has a native capacity of 1.5 TB (3 TB compressed). Since this is off-line storage, data volume cannot exceed the capacity of the tape.

Minnesota Supercomputing Institute
Web: www.msi.umn.edu
Email: help@msi.umn.edu
Telephone: (612) 626-0802
The University of Minnesota is an equal opportunity educator and employer. This PowerPoint is available in alternative formats upon request. Direct requests to Minnesota Supercomputing Institute, 599 Walter Library, 117 Pleasant St. SE, Minneapolis, Minnesota, 55455, 612-624-0528.