
Minnesota Supercomputing Institute

Introduction to MSI for Physical Scientists
Michael Milligan, MSI Scientific Computing Consultant

Goals
Introduction to MSI resources
Show you how to access our systems
Highlight resources of particular use to physical scientists
Point you to where to go for help
Ensure that you are not overwhelmed
Encourage you to ask questions of MSI staff to get what you need

MSI At a Glance
MSI is an academic unit of the University of Minnesota (under OVPR) with about 45 full-time employees.
Serves more than 500 PI groups with 3000+ users.
Operates 3 large and several smaller/experimental HPC (High Performance Computing) systems.
Dedicated resources for serial workflows, database management, and cloud access.
Manages approximately 400 software packages and 50 research databases.
Main data center in Walter Library.
MSI operates five laboratories on campus, open to all MSI users.

MSI Resources
HPC Resources: Koronis, Itasca, Calhoun, Cascade, GPUT
Laboratories: BMSDL, BSCL, CGL, SDVL, LMVL
Software: Chemical and Physical Sciences, Engineering, Graphics and Visualization, Life Sciences, Development Tools
User Services: Consulting, Tutorials, Code Porting, Parallelization, Visualization

The Machines at MSI

HPC Resources
Koronis: SGI Altix, 1140 Intel Nehalem cores, 2.96 TB of memory
Itasca: Hewlett-Packard 3000BL, 9480 Intel Nehalem cores, 31.3 TB of memory
Calhoun: SGI Altix XE 1300, 1440 Intel Xeon Clovertown cores, 2.8 TB of memory
Heterogeneous Computing
Cascade: 8 Dell compute nodes, 8 Kepler nodes, 32 Nvidia M2070 GPGPUs, 2 Phi nodes
GPUT: 16 Nvidia GeForce GTX 480 GPUs

Machine Type: Cluster Source: http://en.wikipedia.org/wiki/cluster_%28computing%29

Machine Type: Cluster
Clusters at MSI
o Itasca: 1,091 HP ProLiant blade servers, each with two quad-core 2.8 GHz Intel Xeon X5560 "Nehalem EP" processors sharing 24 GiB of system memory, connected by a 40-gigabit QDR InfiniBand (IB) interconnect. In total, Itasca consists of 8,728 compute cores and 24 TiB of main memory. itasca.msi.umn.edu
o Calhoun: SGI Altix XE 1300 cluster with 180 compute nodes, each with 16 GB of memory. The nodes are interconnected with a 20-gigabit DDR InfiniBand (IB) network. calhoun.msi.umn.edu

Machine Type: Heterogeneous Source: http://electronicdesign.com/digital-ics/gpu-architecture-improves-embedded-application-support

Machine Type: Heterogeneous
o Cascade: 32 M2070 Tesla GPUs (4 per node, 8 nodes), 8 K20 Kepler GPUs (2 per node, 4 nodes), 2 Xeon Phi co-processors (1 per node, 2 nodes). cascade.msi.umn.edu

Machine Types: Programming Complexity
General view:
Simplest: Shared Memory (OpenMP)
Medium: Cluster (MPI)
Complex: Heterogeneous (CUDA, special statements)
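To make these models concrete, here is a minimal sketch, not specific to any MSI machine, of how each model is typically built and launched on a Linux system; the compiler names and flags are standard, but the program and file names are generic placeholders:
# Shared memory (OpenMP): one node, many threads
gcc -fopenmp -o omp_prog omp_prog.c
export OMP_NUM_THREADS=8
./omp_prog
# Cluster (MPI): many nodes exchanging messages
mpicc -o mpi_prog mpi_prog.c
mpirun -np 24 ./mpi_prog
# Heterogeneous (CUDA): host code plus GPU kernels
nvcc -o cuda_prog cuda_prog.cu
./cuda_prog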

Laboratories
Scientific Development and Visualization: Application development, Computational Physics, Workshops
Basic Sciences Computing: Structural Biology, Batch processing, Stereo Projection system
Computational Genetics: Bioinformatics, Genomics, Proteomics
Biomedical Modeling, Simulation, and Design: Drug Design, Molecular Modeling, Computational chemistry
LCSE-MSI Visualization: Virtual reality, Large screen, Remote visualization

MSI Labs and Lab Servers https://www.msi.umn.edu/labs
Labs at 5 locations
o Lab machines use the MSI software environment
o Special high-definition and 3D visualization equipment
Lab servers for small jobs
o lab.msi.umn.edu
o Many software packages
o Some special hardware, such as liquid-cooled overclocked nodes

Interacting with MSI Systems

Connecting to MSI https://www.msi.umn.edu/remote-access
SSH is usually the best choice:
ssh -X <username>@login.msi.umn.edu
ssh -X itasca
Windows SSH options include PuTTY and Cygwin
Remote desktop options include NX and VNC clients
See the page above for details
Most systems require you to connect first to login.msi.umn.edu
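For example, a typical login session might look like the following two-step sequence (the username is a placeholder):
ssh -X myusername@login.msi.umn.edu    # connect to the MSI login host with X forwarding
ssh -X itasca                          # then hop from the login host to the cluster you want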

MSI Computing Environment https://www.msi.umn.edu/resources/computing-environment
MSI systems are mostly Linux compute clusters running CentOS
Software is managed via a system of environment modules
Jobs are run via a batch scheduling system

High Performance Primary Storage
Home directories are unified across all Linux systems (except for Koronis).
Each group has a disk quota, which can be viewed with the command: groupquota
Panasas ActivStor 14: 1.28 PB storage, capable of 30 GB/sec read/write and 270,000 IOPS

Software Management
Software is managed via a module system:
module load modulename - loads a software module
module show modulename - shows the actions of a module
module avail - shows all available modules
module list - shows a list of currently loaded modules
module unload modulename - unloads a software module
module purge - unloads all software modules
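As an illustration, a typical session might combine these commands as follows (intel and ompi/intel are the modules used in the job script on the next slide):
module avail              # see which software modules are available
module load intel         # load the Intel compilers
module load ompi/intel    # load the matching MPI module
module list               # confirm what is currently loaded
module unload intel       # unload a single module
module purge              # or unload everything and start clean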

Job Scheduling
Jobs are scheduled using the Portable Batch System (PBS) queueing system.
To schedule a job, first make a PBS job script:
#!/bin/bash -l
#PBS -l walltime=8:00:00,nodes=3:ppn=8,pmem=1000mb
#PBS -m abe
#PBS -M sample_email@umn.edu
cd ~/program_directory
module load intel
module load ompi/intel
mpirun -np 24 program_name < inputfile > outputfile
https://www.msi.umn.edu/resources/job-submission-and-scheduling-pbs-scripts
https://www.msi.umn.edu/resources/job-queues
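Once the script is saved (for example as myjob.pbs, a placeholder name), it can be submitted and monitored with the standard PBS commands, roughly as follows:
qsub myjob.pbs        # submit the script; PBS prints a job ID
qstat -u $USER        # list your queued and running jobs
qdel <jobid>          # cancel a job by its ID if needed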

Interactive Jobs
Nodes may be requested for interactive use with the command:
qsub -I -l walltime=1:00:00,nodes=1:ppn=8
The job waits in the queue like all jobs; when it begins, control returns to the terminal.
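As a rough sketch, an interactive session might then proceed like this (the program name is a placeholder):
qsub -I -l walltime=1:00:00,nodes=1:ppn=8   # request one 8-core node for up to an hour
# ...wait in the queue; a shell opens on the compute node when the job starts...
module load intel                           # set up the environment as usual
./my_program                                # run and interact with the program directly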

Service Units (SUs)
Jobs on the high performance computing (HPC) systems consume Service Units (SUs), which roughly correspond to processor time.
Each research group is given a service unit allocation at the beginning of the year.
To view the number of service units remaining, use the command: acctinfo
If a group is using service units faster than the "fairshare target", then the group's jobs will have lower queue priority.
https://www.msi.umn.edu/hpc/fairshare

Important Software

Problem Solving
MSI has resources available to help solve job problems:
https://www.msi.umn.edu/services/user-support
https://www.msi.umn.edu/resources/job-problem-solving
https://www.msi.umn.edu/support/faq
https://www.msi.umn.edu/resources/debugging
Help can be requested by emailing: help@msi.umn.edu

Minnesota Supercomputing Institute
Web: www.msi.umn.edu
Email: help@msi.umn.edu
Telephone: (612) 626-0802
The University of Minnesota is an equal opportunity educator and employer. This PowerPoint is available in alternative formats upon request. Direct requests to Minnesota Supercomputing Institute, 599 Walter Library, 117 Pleasant St. SE, Minneapolis, Minnesota 55455, 612-624-0528.