VIRTUAL INSTITUTE HIGH PRODUCTIVITY SUPERCOMPUTING. BSC Tools Hands-On. Germán Llort, Judit Giménez. Barcelona Supercomputing Center
2 Getting a trace with Extrae
3 Extrae features
  Platforms: Intel, Cray, BlueGene, Intel MIC, ARM, Android, Fujitsu Sparc
  Parallel programming models: MPI, OpenMP, pthreads, OmpSs, CUDA, OpenCL, Java, Python...
  Performance counters: using the PAPI interface
  Link to source code: callstack at MPI routines, OpenMP outlined routines, selected user functions
  No need to recompile / relink!
  Periodic samples
  User events (Extrae API)
4 Extrae overheads (average values)
                                   Average values   Archer
  Event                                        ns       ns
  Event + PAPI                                 ns       ns
  Event + callstack (1 level)              600 ns   540 ns
  Event + callstack (6 levels)             1.9 us   1.5 us
5 How does Extrae work?
  Symbol substitution through LD_PRELOAD (recommended)
    Specific libraries for each combination of runtimes: MPI, OpenMP, OpenMP+MPI...
  Dynamic instrumentation
    Based on DynInst (developed by U. Wisconsin / U. Maryland)
    Instrumentation in memory, binary rewriting
  Static link (i.e., PMPI, Extrae API)
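As an illustration of the LD_PRELOAD mechanism (paths and the application name below are placeholders, not part of the hands-on material): the dynamic linker loads the tracing library before the real MPI library, so Extrae's wrappers intercept every MPI call and then forward it to the actual implementation.

  # Minimal sketch of symbol substitution via LD_PRELOAD (hypothetical application)
  export EXTRAE_HOME=/path/to/extrae            # adjust to the local installation
  export EXTRAE_CONFIG_FILE=./extrae.xml        # select what to trace
  export LD_PRELOAD=${EXTRAE_HOME}/lib/libmpitrace.so
  mpirun -np 4 ./my_mpi_app                     # each rank now emits Extrae events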
6 Linking in Archer
  Cray compilers link statically by default. How to make it dynamic?
  Add the flag -dynamic: enables tracing with the LD_PRELOAD method
  archer> [ cc | CC | ftn ]... -dynamic
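As an illustration, a hedged sketch of rebuilding an MPI code dynamically with the Cray compiler wrappers (the source file names are hypothetical; -g is added so Extrae can later relate addresses to source lines):

  archer> cc  -g -dynamic -o my_app my_app.c     # C code (Cray wrapper around the MPI compiler)
  archer> ftn -g -dynamic -o my_app my_app.f90   # Fortran equivalent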
7 Problems with dynamic linking?
  Link statically against the tracing library (+ dependencies)
    Only supports MPI instrumentation
    Insert it before the actual MPI library, so Extrae will always intercept the MPI calls
    Don't set LD_PRELOAD
  LDFLAGS += \
    -L$EXTRAE_HOME/lib -lmpitrace \
    -L$BSCTOOLS_HOME/deps/binutils/2.24/lib -lbfd -liberty \
    -L$BSCTOOLS_HOME/deps/libunwind/1.1/lib -lunwind \
    -L/opt/cray/papi/ /lib -lpapi \
    -L/usr/lib64 -lxml2 \
    -lrt -lz -ldl
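A hedged sketch of the corresponding link step, assuming the LDFLAGS above have been exported as a shell variable (or added to the application's Makefile); the object and executable names are illustrative:

  # -lmpitrace must appear before the MPI library that the cc wrapper adds automatically
  archer> cc -g my_app.o -o my_app $LDFLAGS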
8 Using Extrae in 3 steps
  1. Adapt your job submission script
  2. Configure what to trace
     XML configuration file; example configurations at $EXTRAE_HOME/share/example
  3. Run it!
  For further reference check the Extrae User Guide, also distributed with Extrae at $EXTRAE_HOME/share/doc
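For step 2, a minimal sketch of starting from one of the shipped examples rather than writing the XML from scratch (the exact subdirectory layout under share/example may differ between Extrae versions):

  archer> ls $EXTRAE_HOME/share/example
  archer> cp $EXTRAE_HOME/share/example/MPI/ld-preload/extrae.xml .   # then edit to taste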
9 Login to Archer and copy the examples
  laptop> ssh -Y <USER>@login.archer.ac.uk
  archer> cp -r /work/y14/shared/bsctools/tools-material $WORK
  archer> ls $WORK/tools-material
    apps/  clustering/  extrae/  slides/  traces/
    (slides/ contains a copy of these slides)
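The copy command above assumes $WORK points at your work directory; if your environment does not define it, a hedged sketch following ARCHER's usual /work/<project>/<group>/<user> layout (the project and group codes here are assumptions based on the y14 budget used in this course):

  archer> export WORK=/work/y14/y14/$USER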
10 Step 1: Adapt the job script to load Extrae with LD_PRELOAD
  archer> vi $WORK/tools-material/extrae/run_lulesh_27p.sh
    #!/bin/bash --login
    #PBS -N LULESH2
    #PBS -l select=2                               # Request resources
    #PBS -l walltime=00:05:00
    #PBS -A y14
    module unload PrgEnv-cray PrgEnv-gnu
    module load PrgEnv-intel                       # Change MPI version
    export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
    cd ${PBS_O_WORKDIR}
    export OMP_NUM_THREADS=1
    aprun -n 27 -S 7 ../apps/lulesh                # Run the program
11 Step 1: Adapt the job script to load Extrae with LD_PRELOAD
  archer> vi $WORK/tools-material/extrae/run_lulesh_27p.sh
    #!/bin/bash --login
    #PBS -N LULESH2
    #PBS -l select=2
    #PBS -l walltime=00:05:00
    #PBS -A y14
    module unload PrgEnv-cray PrgEnv-gnu
    module load PrgEnv-intel
    export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)
    cd ${PBS_O_WORKDIR}
    export OMP_NUM_THREADS=1
    export TRACE_NAME=lulesh_27p.prv
    aprun -n 27 -S 7 ./trace.sh ../apps/lulesh     # Activate Extrae during the run
12 Step 1: Adapt the job script to load Extrae with LD_PRELOAD
  The trace.sh wrapper invoked by the job script:
  archer> vi $WORK/tools-material/extrae/trace.sh
    #!/bin/bash
    # Configure Extrae (same MPI version as the application)
    source /work/.../extrae/intel-mpich/etc/extrae.sh
    # Select what to trace
    export EXTRAE_CONFIG_FILE=./extrae.xml
    # Load the tracing library (choose depending on your type of application: C / Fortran)
    export LD_PRELOAD=${EXTRAE_HOME}/lib/libmpitrace.so
    #export LD_PRELOAD=${EXTRAE_HOME}/lib/libmpitracef.so
    # Run the program
    $*
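As an optional sanity check (a minimal sketch, not part of the original steps), confirm that the application binary is dynamically linked so the LD_PRELOAD interception can take effect:

  archer> ldd ../apps/lulesh | grep -i mpi   # should list a shared MPI library, not "not a dynamic executable"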
13 Step 1: Which tracing library?
  Choose depending on the application type:
    Serial           libseqtrace
    MPI              libmpitrace[f] (1)
    OpenMP           libomptrace
    pthread          libpttrace
    CUDA             libcudatrace
    MPI + OpenMP     libompitrace[f] (1)
    MPI + pthread    libptmpitrace[f] (1)
    MPI + CUDA       libcudampitrace[f] (1)
  (1) include suffix f in Fortran codes
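For example, for a hybrid MPI+OpenMP run, a hedged sketch of the corresponding change in trace.sh (library names as in the table above):

  # in trace.sh, replace the LD_PRELOAD line:
  export LD_PRELOAD=${EXTRAE_HOME}/lib/libompitrace.so     # MPI + OpenMP, C/C++ code
  #export LD_PRELOAD=${EXTRAE_HOME}/lib/libompitracef.so   # MPI + OpenMP, Fortran code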
14 Step 3: Run it!
  Submit your job:
    archer> cd $WORK/tools-material/extrae
    archer> qsub run_lulesh_27p.sh
  Check the status of your job with: qstat -u $USER
  Once finished, the trace will be in the same folder: lulesh_27p.{pcf,prv,row} (3 files)
  Any issue? Already-generated traces are available at $WORK/tools-material/traces
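If you want a quick sanity check before moving to Paraver, a minimal sketch (assuming the automatic merge succeeded; the .prv file is plain text and begins with a #Paraver header describing the resources and duration):

  archer> ls -lh lulesh_27p.*       # the three files: .prv, .pcf and .row
  archer> head -1 lulesh_27p.prv    # first line should start with "#Paraver"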
15 Step 2: Extrae XML configuration
  archer> vi $WORK/tools-material/extrae/extrae.xml

  Trace the MPI calls (What's the program doing?):
    <mpi enabled="yes">
      <counters enabled="yes" />
    </mpi>

    <openmp enabled="yes">
      <locks enabled="no" />
      <counters enabled="yes" />
    </openmp>

    <pthread enabled="no">
      <locks enabled="no" />
      <counters enabled="yes" />
    </pthread>

  Trace the call-stack (Where in my code?). Compile with debug! (-g)
    <callers enabled="yes">
      <mpi enabled="yes">1-3</mpi>
      <sampling enabled="no">1-5</sampling>
    </callers>
16 Step 2: Extrae XML configuration (II)
  Select which HW counters are measured (How's the machine doing?):
    <counters enabled="yes">
      <cpu enabled="yes" starting-set-distribution="cyclic">
        <set enabled="yes" changeat-time="500000us" domain="all">
          PAPI_TOT_INS, PAPI_TOT_CYC, PAPI_L1_DCM, PAPI_L3_TCM, PAPI_BR_INS, PAPI_L2_DCA
        </set>
        <set enabled="yes" changeat-time="500000us" domain="all">
          PAPI_TOT_INS, PAPI_TOT_CYC, PAPI_SR_INS, RESOURCE_STALLS:ROB, RESOURCE_STALLS:RS
        </set>
        <set> ... </set>
      </cpu>
      <network enabled="no" />
      <resource-usage enabled="no" />
      <memory-usage enabled="no" />
    </counters>
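Before editing these counter sets, it can help to see which PAPI presets the machine actually offers; a hedged sketch using PAPI's standard papi_avail utility (module name assumed; note that running it on a login node reports the login node's hardware):

  archer> module load papi
  archer> papi_avail -a                  # list only the presets available on this hardware
  archer> aprun -n 1 papi_avail -a       # same query, run on a compute node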
17 Step 2: Extrae XML configuration (III)
  Trace buffer size (flush/memory trade-off):
    <buffer enabled="yes">
      <size enabled="yes"> </size>
      <circular enabled="no" />
    </buffer>

  Enable sampling (Want more details?):
    <sampling enabled="no" type="default" period="50m" variability="10m" />

  Automatic post-processing to generate the Paraver trace:
    <merge enabled="yes" synchronization="default" tree-fan-out="16" max-memory="512"
           joint-states="yes" keep-mpits="yes" sort-addresses="yes" overwrite="yes">
      $TRACE_NAME$
    </merge>
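Because keep-mpits="yes" preserves the intermediate .mpit files, the merge can also be repeated by hand with Extrae's mpi2prv merger; a minimal sketch (the output name is illustrative):

  archer> ${EXTRAE_HOME}/bin/mpi2prv -f TRACE.mpits -o lulesh_27p.prv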
18 Installing Paraver & First analysis steps
19 Install Paraver on your laptop
  Download it from the BSC tools website (pick your version), also available at /work/y14/shared/bsctools/tools-packages:
    wxparaver-<version>-win.zip
    wxparaver-<version>-mac.zip
    wxparaver-<version>-linux_i686.tar.gz (32-bit)
    wxparaver-<version>-linux_x86_64.tar.gz (64-bit)
  laptop> scp <USER>@login.archer.ac.uk:/work/y14/shared/bsctools/tools-packages/<package> $HOME
20 Install Paraver (II)
  Download the tutorials: Documentation -> Tutorial guidelines (download links on the website), also available at /work/y14/shared/bsctools/tools-packages
  laptop> scp <USER>@login.archer.ac.uk:/work/y14/shared/bsctools/tools-packages/paraver-tutorials-<version>.tar.gz $HOME
21 Uncompress, rename & move
  Uncompress both packages, rename the folders to paraver and tutorials, and drag the tutorials folder into the paraver folder
  (On Mac the destination is: Right click > Show Package Contents > Contents > Resources)
  Command-line (Linux):
    laptop> tar xf wxparaver-<version>-linux-x86_64.tar.gz
    laptop> mv wxparaver-<version>-linux-x86_64 paraver
    laptop> tar xf paraver-tutorials-<version>.tar.gz
    laptop> mv paraver-tutorials paraver/tutorials
22 Check that everything works
  Start Paraver:
    laptop> $HOME/paraver/bin/wxparaver &
  Check that the tutorials are available: click on Help > Tutorials
  Paraver is also remotely available on Archer:
    laptop> ssh -Y <USER>@login.archer.ac.uk
    archer> /work/y14/shared/bsctools/wxparaver/latest/bin/wxparaver
23 First steps of analysis
  Copy the trace to your laptop (all 3 files: *.prv, *.pcf, *.row):
    laptop> scp <USER>@login.archer.ac.uk:$WORK/tools-material/extrae/lulesh_27p.* ./
  Load the trace: click on File > Load Trace and browse to the *.prv file
  Follow Tutorial #3 "Introduction to Paraver and Dimemas methodology": click on Help > Tutorials
24 Measure the parallel efficiency
  Open (drag & drop) the Control Window
  Zoom to skip the initialization / finalization phases, then Right click > Copy Time
  Click on the mpi_stats.cfg and Right click > Paste Time
  The table reports Parallel efficiency, Comm efficiency and Load balance
25 Computation time and work distribution
  Click on 2dh_usefulduration.cfg (2nd link): shows the time spent computing
    Performance imbalance (zig-zag pattern)
    Zoom to skip the large burst from the initialization (by drag-and-dropping)
  Click on 2dh_useful_instructions.cfg (3rd link): shows the amount of work
    Work imbalance (zig-zag pattern)
26 Where does this happen?
  Go from the table to the timeline: Right click > Open Filtered Control Window, then select this area (by drag-and-dropping)
  Zoom into 1 of the iterations (by drag-and-dropping)
  Right click > Copy, then Right click > Paste Time on the other timelines
  Hints > Callers > Caller function shows which routines run at each point
  Right click > Fit Semantic Scale > Fit both
  Hidden values (click to show): CommSend, CommMonoQ, TimeIncrement
  Slow and fast regions at the same time = Imbalance
27 Save CFGs (2 methods)
  Method 1: from the main Paraver window
  Method 2: right click on the timeline
  Then select the windows to save and save the configuration
28 CFGs distribution
  Paraver comes with many more CFGs included
29 Hints: a good place to start!
  Paraver suggests CFGs based on the information present in the trace
30 Cluster-based analysis
31 Use clustering analysis
  Run the clustering:
    laptop> ssh -Y <USER>@login.archer.ac.uk
    archer> cd $WORK/tools-material/clustering
    archer> /work/y14/shared/bsctools/clustering/2.6.6/bin/burstclustering -d cluster.xml -i ../extrae/lulesh_27p.prv -o lulesh_27p_clustered.prv
  If you didn't get your own trace, use a prepared one from:
    archer> ls $WORK/tools-material/traces/lulesh_27p.prv
32 Cluster-based analysis: check the resulting scatter plot
  archer> gnuplot lulesh_27p_clustered.ipc.papi_tot_ins.gnuplot
  Identify the main computing trends: Work (Y axis) vs. Speed (X axis)
  Look at the clusters' shape: variability in either axis (variable work, variable speed) indicates potential imbalances
33 Correlating scatter plot and time distribution
  Copy the clustered trace to your laptop and look at it:
    laptop> $HOME/paraver/bin/wxparaver <path-to>/lulesh_27p_clustered.prv
  Display the distribution of clusters over time: File > Load configuration > $HOME/paraver/cfgs/clustering/clusterID_window.cfg
  Variable work / speed + different processes = Imbalances