Practical Introduction to Message-Passing Interface (MPI)

Bart Oldeman, Calcul Québec McGill HPC
Bart.Oldeman@mcgill.ca

Outline of the workshop
- Theoretical / practical introduction
- Parallelizing your serial code
- What is MPI? Why do we need it?
- How do we run MPI codes (on the Guillimin cluster)?
- How to program with MPI: program structure, basic functions, examples
- Practical exercises on Guillimin: login, setting up the environment, launching MPI code
- Analyzing and running examples
- Modifying and tuning MPI codes

Parallelizing your serial code
Models for parallel computing (as an ordinary user sees it...):
- Implicit parallelization (minimum work for you): threaded libraries (MKL, ACML, GOTO, etc.), compiler directives (OpenMP); good for desktops and shared memory machines.
- Explicit parallelization (work is required!): you tell what should be done on what CPU; the solution for distributed clusters (shared nothing!).

Distributed memory approach
(Diagram: process 0 and process 1 each hold an array A(10) in their own address space; same name, different variables.)
- Separate processes with their own address spaces.
- You need to make them interact and exchange information.
- You tell what operation to do on what process.

MPI: Message Passing Interface
Model for the distributed memory approach:
- The programmer manages memory by placing data in a particular process.
- The programmer sends data between processes.
- The programmer performs collective operations on sets of processes.

What is MPI for a user?
- MPI is NOT a language!
- MPI is NOT a compiler or a specific product.
- MPI is a specification for a standardized library: you use its subroutines and you link it with your code.
- History: MPI-1 (1994), MPI-2 (1997), MPI-3 (2012).
- Different implementations: MPICH(2), MVAPICH(2), Open MPI, HP-MPI, ...

What MPI library do I link to?
(Diagram: your MPI code links against the MPI library, which sits on top of the interconnect library and network: InfiniBand at 10 Gbit/s or Gigabit Ethernet at 1 Gbit/s.)
- When working on the Guillimin cluster, please use our version of the MVAPICH2 or Open MPI library!
- The MPI library must match the compiler used (Intel, PGI, or GCC), both at compile time and at run time.

Basic features of an MPI program
- Include the basic definitions (#include <mpi.h>, INCLUDE 'mpif.h', or USE mpi).
- Initialize and terminate the MPI environment.
- Get information about processes.
- Send information between two specific processes (point-to-point communications).
- Send information between groups of processes (collective communications).

MPI is simple! (and complex...)
You need to know these 6 functions:
- MPI_Init: initialize the MPI environment
- MPI_Comm_size: how many CPUs do I have?
- MPI_Comm_rank: identify each CPU
- MPI_Send: send data to another CPU
- MPI_Recv: receive data from another CPU
- MPI_Finalize: close the MPI environment

Example: Hello from N cores

Fortran:

    PROGRAM hello
      INCLUDE 'mpif.h'
      INTEGER ierr, rank, size
      CALL MPI_Init(ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, size, ierr)
      WRITE(*,*) 'Hello from processor', rank, 'of', size
      CALL MPI_Finalize(ierr)
    END PROGRAM hello

C:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from processor %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

General program structure
- The header file (mpi.h or mpif.h) or the module mpi must be included; it contains the definitions of MPI constants, types and functions.
- All MPI programs start with MPI_Init and end with MPI_Finalize.
- MPI_COMM_WORLD is the default communicator (defined in mpi.h); it refers to the group of all processors in the job.
- Each statement executes independently in each process.

Compiling your MPI code
- NOT defined by the standard, but more or less similar for all implementations: you need to specify the include directory and the MPI library, but usually a compiler wrapper (mpif90, mpicc) does it for you automatically.
- On the Guillimin cluster:

    user@guillimin> module add openmpi
    user@guillimin> mpif90 hello.f90 -o hello

Running your MPI code
- NOT defined by the standard.
- A launching program (mpirun, mpiexec, mpirun_rsh, ...) is used to start your MPI code; the particular choice of launcher depends on the MPI implementation and on the machine used.
- On the Guillimin cluster:

    mpiexec -n 4 ./hello

Exercise 1: Log in to Guillimin, setting up the environment
1) Log in to Guillimin: ssh username@guillimin.clumeq.ca
2) Check for loaded software modules: guillimin> module list
3) See all available modules: guillimin> module av
4) Load the necessary modules: guillimin> module add ifort icc openmpi
5) Check the loaded modules again.
6) Verify that you have access to the correct MPI package:
   guillimin> which mpif90
   output: /software/tools/openmpi intel/bin/mpif90

Exercise 2: Hello program, job submission
1) Copy hello.f90 (or hello.c) and hello.pbs to your home directory:
   guillimin> cp /software/workshop/hello.f90 .
   guillimin> cp /software/workshop/hello.pbs .
2) Compile your code:
   guillimin> mpif90 hello.f90 -o hello
   guillimin> mpicc hello.c -o hello

Exercise 2: Hello program, job submission (continued)
3) Edit the file hello.pbs:

    #!/bin/bash
    #PBS -l nodes=1:ppn=2
    #PBS -l walltime=00:05:00
    #PBS -V
    #PBS -N hello
    cd $PBS_O_WORKDIR
    mpiexec -n 2 ./hello > hello.out

4) Submit your job: guillimin> msub -q class hello.pbs
5) Check the job status: guillimin> showq -u <username>
6) Check the output (hello.out).

Exercise 3: Modifying Hello
Let's ask each CPU to do its own computation (a C sketch of the same idea follows after the next slide):

    IF (rank == 0) THEN
      a = sqrt(2.0)
      b = 0.0
      WRITE(*,*) 'a,b=', a, b, 'on proc', rank
    END IF
    IF (rank == 1) THEN
      a = 0.0
      b = sqrt(3.0)
      WRITE(*,*) 'a,b=', a, b, 'on proc', rank
    END IF

Recompile as before and submit to the queue.

MPI is simple! (and complex...)
You need to know these 6 functions:
- MPI_Init: initialize the MPI environment
- MPI_Comm_size: how many CPUs do I have?
- MPI_Comm_rank: identify each CPU
- MPI_Send: send data to another CPU
- MPI_Recv: receive data from another CPU
- MPI_Finalize: close the MPI environment
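A C counterpart of the Exercise 3 snippet above: this is a minimal sketch (not the workshop handout) that starts from the hello.c skeleton and lets each rank do its own small computation. Link the math library if needed, e.g. mpicc ex3.c -o ex3 -lm.

    #include <stdio.h>
    #include <math.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int    rank, size;
        double a = 0.0, b = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {               /* rank 0 computes its own values */
            a = sqrt(2.0);
            b = 0.0;
            printf("a,b = %f %f on proc %d of %d\n", a, b, rank, size);
        }
        if (rank == 1) {               /* rank 1 computes different values */
            a = 0.0;
            b = sqrt(3.0);
            printf("a,b = %f %f on proc %d of %d\n", a, b, rank, size);
        }

        MPI_Finalize();
        return 0;
    }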

MPI_Send / MPI_Recv
- Passing a message between two different MPI processes (point-to-point communication).
- If one process sends, another process initiates the matching receive.
- The exchanged data types are predefined for portability (MPI has its own data types).
- MPI_Send / MPI_Recv are blocking! (There are also non-blocking versions.)

MPI basic datatypes (Fortran)

    MPI data type          Fortran data type
    MPI_INTEGER            INTEGER
    MPI_REAL               REAL
    MPI_DOUBLE_PRECISION   DOUBLE PRECISION
    MPI_COMPLEX            COMPLEX
    MPI_DOUBLE_COMPLEX     DOUBLE COMPLEX
    MPI_LOGICAL            LOGICAL
    MPI_CHARACTER          CHARACTER(1)
    MPI_PACKED             data packed with MPI_Pack
    MPI_BYTE               8 binary digits

MPI basic datatypes (C)

    MPI data type          C data type
    MPI_CHAR               signed char
    MPI_SHORT              signed short int
    MPI_INT                signed int
    MPI_LONG               signed long int
    MPI_UNSIGNED_CHAR      unsigned char
    MPI_UNSIGNED_SHORT     unsigned short
    MPI_UNSIGNED           unsigned int
    MPI_UNSIGNED_LONG      unsigned long int
    MPI_FLOAT              float
    MPI_DOUBLE             double
    MPI_LONG_DOUBLE        long double
    MPI_PACKED             data packed with MPI_Pack
    MPI_BYTE               8 binary digits

MPI: Sending a message
Fortran: MPI_Send(data, count, datatype, dest, tag, comm, ierr)
C:       MPI_Send(&data, count, datatype, dest, tag, comm)
- data:     variable to send
- count:    number of data elements to send
- datatype: type of the data to send
- dest:     rank of the receiving process
- tag:      the label of the message
- comm:     communicator (the set of involved processes)
- ierr:     error code (the return value in C)
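To tie the argument list and the datatype table together, here is a minimal C sketch (not from the slides) that sends an array of five doubles from rank 0 to rank 1. MPI_Recv, described on the next slide, is included so that the example is complete; the file name and values are purely illustrative.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int    rank, i;
        double data[5];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            for (i = 0; i < 5; i++)
                data[i] = 1.5 * i;
            /* count = 5, datatype = MPI_DOUBLE, dest = 1, tag = 10 */
            MPI_Send(data, 5, MPI_DOUBLE, 1, 10, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* source = 0, tag = 10; tag and datatype match the sender */
            MPI_Recv(data, 5, MPI_DOUBLE, 0, 10, MPI_COMM_WORLD, &status);
            printf("rank 1 received data[4] = %f\n", data[4]);
        }

        MPI_Finalize();
        return 0;
    }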

MPI: Receiving a message
Fortran: MPI_Recv(data, count, datatype, source, tag, comm, status, ierr)
C:       MPI_Recv(&data, count, datatype, source, tag, comm, &status)
- source is the rank of the sending process (or can be set to MPI_ANY_SOURCE).
- tag must match the label used by the sender (or can be set to MPI_ANY_TAG).
- status is an integer array with information in case of an error (source, tag, actual number of bytes received).
- MPI_Send / MPI_Recv are blocking!

MPI_Send / MPI_Recv: Example

    PROGRAM mpipoint2point
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER :: rank, buffer, error, status(MPI_STATUS_SIZE)
      CALL MPI_Init(error)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, rank, error)
      IF (rank == 0) THEN
        buffer = 33
        CALL MPI_Send(buffer, 1, MPI_INTEGER, 1, 10, &
                      MPI_COMM_WORLD, error)
      END IF
      IF (rank == 1) THEN
        CALL MPI_Recv(buffer, 1, MPI_INTEGER, 0, 10, &
                      MPI_COMM_WORLD, status, error)
        PRINT*, 'Rank', rank, 'buffer=', buffer
        IF (buffer /= 33) PRINT*, 'fail'
      END IF
      CALL MPI_Finalize(error)
    END PROGRAM mpipoint2point

MPI_Send / MPI_Recv: Example 2

    IF (rank == 0) THEN
      CALL MPI_Send(buffer1, 1, MPI_INTEGER, 1, 10, &
                    MPI_COMM_WORLD, error)
      CALL MPI_Recv(buffer2, 1, MPI_INTEGER, 1, 20, &
                    MPI_COMM_WORLD, status, error)
    ELSE IF (rank == 1) THEN
      CALL MPI_Send(buffer2, 1, MPI_INTEGER, 0, 20, &
                    MPI_COMM_WORLD, error)
      CALL MPI_Recv(buffer1, 1, MPI_INTEGER, 0, 10, &
                    MPI_COMM_WORLD, status, error)
    END IF

Will NOT work!!! Both processes block in MPI_Send, each waiting for a receive that the other has not yet posted.

Example 2: Solution A
Exchange the Send/Recv order on one processor:

    IF (rank == 0) THEN
      CALL MPI_Send(buffer1, 1, MPI_INTEGER, 1, 10, &
                    MPI_COMM_WORLD, error)
      CALL MPI_Recv(buffer2, 1, MPI_INTEGER, 1, 20, &
                    MPI_COMM_WORLD, status, error)
    ELSE IF (rank == 1) THEN
      CALL MPI_Recv(buffer2, 1, MPI_INTEGER, 0, 10, &
                    MPI_COMM_WORLD, status, error)
      CALL MPI_Send(buffer1, 1, MPI_INTEGER, 0, 20, &
                    MPI_COMM_WORLD, error)
    END IF
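The same deadlock-free ordering as a minimal C sketch (not from the slides; exactly two ranks are assumed): rank 0 sends first and then receives, while rank 1 receives first and then sends.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, mine, other = -1;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        mine = 100 + rank;                 /* the value this rank contributes */

        if (rank == 0) {                   /* send first, then receive */
            MPI_Send(&mine,  1, MPI_INT, 1, 10, MPI_COMM_WORLD);
            MPI_Recv(&other, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {            /* receive first, then send */
            MPI_Recv(&other, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
            MPI_Send(&mine,  1, MPI_INT, 0, 20, MPI_COMM_WORLD);
        }

        printf("Rank %d got %d from the other rank\n", rank, other);
        MPI_Finalize();
        return 0;
    }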

Example 2: Solution B
Use a non-blocking send (MPI_Isend); a C sketch of this pattern follows the next slides:

    IF (rank == 0) THEN
      CALL MPI_Isend(buffer1, 1, MPI_INTEGER, 1, 10, &
                     MPI_COMM_WORLD, REQUEST, error)
      CALL MPI_Recv(buffer2, 1, MPI_INTEGER, 1, 20, &
                    MPI_COMM_WORLD, status, error)
    ELSE IF (rank == 1) THEN
      CALL MPI_Isend(buffer2, 1, MPI_INTEGER, 0, 20, &
                     MPI_COMM_WORLD, REQUEST, error)
      CALL MPI_Recv(buffer1, 1, MPI_INTEGER, 0, 10, &
                    MPI_COMM_WORLD, status, error)
    END IF
    CALL MPI_Wait(REQUEST, status, error)  ! Wait until the send is complete

Non-blocking Send: MPI_Isend
Differences from the blocking Send:
- It starts the transfer and returns control.
- Between the start and the end of the transfer the code can do other things.
- You need to check for completion!
- REQUEST: an additional identifier for the communication.
- MPI_Wait(REQUEST, status, error): the code waits for the completion of the transfer.

Exercise 3: MPI_Send/Recv
Copy send.f90 (or send.c) to your home directory:
  guillimin> cp /software/workshop/send.f90 .
Let's modify it:
- Send an array (or matrix).
- Ask ranks 0 and 1 to exchange information.
Hints in /software/workshop/all/: send_matrix.f90, exchange.f

One step beyond: Collective Communications
Collectives involve ALL processes in the communicator:
- MPI_Bcast: the same data is sent from the root process to all the others.
- MPI_Reduce: the root process collects data from the others and performs an operation (min, max, add, multiply, etc.).
- MPI_Scatter: distributes variables from the root process to each of the others.
- MPI_Gather: collects variables from all processes to the root one.
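Returning to Solution B: a minimal C sketch (not from the slides; two ranks assumed) of the non-blocking MPI_Isend / MPI_Recv / MPI_Wait exchange. The send buffer may only be reused after MPI_Wait has confirmed completion.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, sendbuf, recvbuf = -1;
        MPI_Request request;
        MPI_Status  status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        sendbuf = 100 + rank;

        if (rank == 0) {
            MPI_Isend(&sendbuf, 1, MPI_INT, 1, 10, MPI_COMM_WORLD, &request);
            /* ... other work could overlap with the send here ... */
            MPI_Recv(&recvbuf, 1, MPI_INT, 1, 20, MPI_COMM_WORLD, &status);
            MPI_Wait(&request, &status);   /* do not reuse sendbuf before this */
        } else if (rank == 1) {
            MPI_Isend(&sendbuf, 1, MPI_INT, 0, 20, MPI_COMM_WORLD, &request);
            MPI_Recv(&recvbuf, 1, MPI_INT, 0, 10, MPI_COMM_WORLD, &status);
            MPI_Wait(&request, &status);
        }

        printf("Rank %d received %d\n", rank, recvbuf);
        MPI_Finalize();
        return 0;
    }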

MPI_Bcast: Example

    PROGRAM broadcast
      INCLUDE 'mpif.h'
      INTEGER ierr, myid, nproc, root
      INTEGER status(MPI_STATUS_SIZE)
      REAL a(2)
      CALL MPI_Init(ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      root = 0
      IF (myid == 0) THEN
        a(1) = 2.0
        a(2) = 4.0
      END IF
      CALL MPI_Bcast(a, 2, MPI_REAL, root, &
                     MPI_COMM_WORLD, ierr)
      WRITE(*,*) myid, ': a(1)=', a(1), 'a(2)=', a(2)
      CALL MPI_Finalize(ierr)
    END PROGRAM broadcast

MPI_Reduce: Example
MPI_Reduce here computes a partial sum; other operations are product, min, max, etc.

    PROGRAM reduce
      INCLUDE 'mpif.h'
      INTEGER ierr, myid, nproc, root
      INTEGER status(MPI_STATUS_SIZE)
      REAL a(2), res(2)
      CALL MPI_Init(ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      root = 0
      a(1) = 2.0
      a(2) = 4.0
      CALL MPI_Reduce(a, res, 2, MPI_REAL, MPI_SUM, root, &
                      MPI_COMM_WORLD, ierr)
      IF (myid == root) THEN
        WRITE(*,*) myid, ': res(1)=', res(1), 'res(2)=', res(2)
      END IF
      CALL MPI_Finalize(ierr)
    END PROGRAM reduce

MPI_Scatter / MPI_Gather
- MPI_Scatter: one-to-all communication; different data is sent from the root process to all the others in the communicator.
- MPI_Gather: data is collected by the root process; it is the opposite of Scatter.
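A C counterpart of the broadcast and reduce examples above (a sketch, not from the slides): the root broadcasts two reals to everyone, then MPI_Reduce with MPI_SUM adds all copies back together on the root.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int   myid, nproc, root = 0;
        float a[2] = {0.0f, 0.0f}, res[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        MPI_Comm_rank(MPI_COMM_WORLD, &myid);

        if (myid == root) {          /* only the root initialises the data */
            a[0] = 2.0f;
            a[1] = 4.0f;
        }
        /* after the broadcast, every rank holds the same two values */
        MPI_Bcast(a, 2, MPI_FLOAT, root, MPI_COMM_WORLD);

        /* element-wise sum over all ranks, result only on the root */
        MPI_Reduce(a, res, 2, MPI_FLOAT, MPI_SUM, root, MPI_COMM_WORLD);
        if (myid == root)
            printf("sum over %d ranks: res = (%f, %f)\n", nproc, res[0], res[1]);

        MPI_Finalize();
        return 0;
    }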

MPI_Scatter / MPI_Gather: Example

Scatter:

    PROGRAM scatter
      INCLUDE 'mpif.h'
      INTEGER ierr, myid, nproc, nsnd, i, root
      INTEGER status(MPI_STATUS_SIZE)
      REAL a(16), b(2)
      CALL MPI_Init(ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      root = 0
      IF (myid == root) THEN
        DO i = 1, 16
          a(i) = REAL(i)
        END DO
      END IF
      nsnd = 2
      CALL MPI_Scatter(a, nsnd, MPI_REAL, b, &
                       nsnd, MPI_REAL, root, &
                       MPI_COMM_WORLD, ierr)
      WRITE(*,*) myid, ': b(1)=', b(1), 'b(2)=', b(2)
      CALL MPI_Finalize(ierr)
    END PROGRAM scatter

Gather:

    PROGRAM gather
      INCLUDE 'mpif.h'
      INTEGER ierr, myid, nproc, nsnd, i, root
      INTEGER status(MPI_STATUS_SIZE)
      REAL a(16), b(2)
      CALL MPI_Init(ierr)
      CALL MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
      CALL MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      root = 0
      b(1) = REAL(myid)
      b(2) = REAL(myid)
      nsnd = 2
      CALL MPI_Gather(b, nsnd, MPI_REAL, a, nsnd, MPI_REAL, &
                      root, MPI_COMM_WORLD, ierr)
      IF (myid == root) THEN
        DO i = 1, nsnd*nproc
          WRITE(*,*) myid, ': a(i)=', a(i)
        END DO
      END IF
      CALL MPI_Finalize(ierr)
    END PROGRAM gather

Exercise 4: MPI_Reduce
Copy pi_collect.f90 (or pi_collect.c) to your home directory:
  guillimin> cp /software/workshop/pi_collect.f90 .
Let's add timings:

    IF (rank == 0) t1 = MPI_Wtime()
    ...
    IF (rank == 0) THEN
      t2 = MPI_Wtime()
      WRITE(*,*) 'Time =', t2 - t1
    END IF

Exercise 5
Replace the collective calls in pi_collect by point-to-point communication subroutines (MPI_Send / MPI_Recv).

Exercise 6: take-home exercise
Phase field crystal model A (Elder & Provatas):

    ∂ψ/∂t = M (W²∇²ψ - a₂ψ - a₄ψ³)

where ψ(x, y) is discretized on a grid, e.g. 256x256. And/or: implement a dot product routine in MPI.
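For the dot-product option of Exercise 6, one possible sketch in C (assumed, not a workshop handout): the root scatters equal chunks of two vectors, every rank computes a partial dot product, and MPI_Reduce sums the partial results on the root; MPI_Wtime is used for the timing as suggested in Exercise 4. The vector length N is illustrative and assumed to be a multiple of the number of processes.

    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define N 1024                     /* global vector length (illustrative) */

    int main(int argc, char *argv[])
    {
        int    rank, nproc, i, nloc;
        double *x = NULL, *y = NULL, *xloc, *yloc;
        double partial = 0.0, dot = 0.0, t1 = 0.0, t2;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        nloc = N / nproc;              /* chunk size per rank */

        if (rank == 0) {               /* the root builds the full vectors */
            x = malloc(N * sizeof(double));
            y = malloc(N * sizeof(double));
            for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }
            t1 = MPI_Wtime();
        }
        xloc = malloc(nloc * sizeof(double));
        yloc = malloc(nloc * sizeof(double));

        MPI_Scatter(x, nloc, MPI_DOUBLE, xloc, nloc, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        MPI_Scatter(y, nloc, MPI_DOUBLE, yloc, nloc, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        for (i = 0; i < nloc; i++)     /* local partial dot product */
            partial += xloc[i] * yloc[i];

        MPI_Reduce(&partial, &dot, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            t2 = MPI_Wtime();
            printf("dot = %f (expected %f), time = %f s\n", dot, 2.0 * N, t2 - t1);
        }
        MPI_Finalize();
        return 0;
    }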

Exercise 6: take-home exercise (continued)
Discretized equation to solve the PDE using Euler's method:

    ψ(i,j,t_{n+1}) = ψ(i,j,t_n) + dt·M (W²∇²ψ - a₂ψ - a₄ψ³)

where ∇²ψ (the Laplace operator) is discretized as

    ∇²ψ ≈ [ (ψ(i+1,j) + ψ(i-1,j) + ψ(i,j+1) + ψ(i,j-1)) / 2
          + (ψ(i+1,j+1) + ψ(i-1,j+1) + ψ(i+1,j-1) + ψ(i-1,j-1)) / 4
          - 3 ψ(i,j) ] / dx²

with periodic boundary conditions.

Exercise 6: take-home exercise (code)
The code is in /software/workshop/pf_ModelA/f90/:
  guillimin> cp /software/workshop/pf_ModelA/f90/* .
  guillimin> make
Split up the grid among processors: column-wise, row-wise (implemented), or block-wise. All decompositions need neighbour-to-neighbour communication along the borders (a sketch of such a border exchange is given after the Further readings slide).
Questions: Given 16 processors and a 256x256 grid, how many elements need to be transferred along the borders for every time step using 16 rows, and using 4x4 blocks, respectively? Which technique should scale better?

MPI routines we know...
- Startup: MPI_Init, MPI_Finalize
- Information on the processes: MPI_Comm_rank, MPI_Comm_size
- Point-to-point communications: MPI_Send, MPI_Recv, MPI_Isend, MPI_Irecv, MPI_Wait
- Collective communications: MPI_Bcast, MPI_Reduce, MPI_Scatter, MPI_Gather

Further readings
- The standard itself, news, development:
- Detailed MPI tutorials: mpi/tutorial/
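As a closing illustration for the take-home exercise above, here is a minimal C sketch (assumed; not the workshop's pf_ModelA code) of a row-wise border exchange: each rank keeps two ghost rows and fills them from its periodic neighbours using the MPI_Isend / MPI_Recv / MPI_Wait pattern from Solution B. The grid width NX and local row count NROWS are illustrative only.

    #include <stdio.h>
    #include <mpi.h>

    #define NX    256          /* grid width (illustrative) */
    #define NROWS 16           /* rows owned by each rank (illustrative) */

    int main(int argc, char *argv[])
    {
        int rank, nproc, up, down, i, j;
        /* rows 1..NROWS hold real data, rows 0 and NROWS+1 are ghost rows */
        double psi[NROWS + 2][NX];
        MPI_Request req[2];
        MPI_Status  status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);

        up   = (rank + 1) % nproc;          /* periodic neighbours */
        down = (rank - 1 + nproc) % nproc;

        for (i = 0; i < NROWS + 2; i++)     /* dummy initialisation */
            for (j = 0; j < NX; j++)
                psi[i][j] = (double)rank;

        /* once per time step: post non-blocking sends of the two border rows */
        MPI_Isend(&psi[NROWS][0], NX, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &req[0]);
        MPI_Isend(&psi[1][0],     NX, MPI_DOUBLE, down, 2, MPI_COMM_WORLD, &req[1]);
        /* receive the matching ghost rows from the neighbours */
        MPI_Recv(&psi[0][0],         NX, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(&psi[NROWS + 1][0], NX, MPI_DOUBLE, up,   2, MPI_COMM_WORLD, &status);
        MPI_Wait(&req[0], &status);
        MPI_Wait(&req[1], &status);

        /* ... apply the discretized Laplacian using the ghost rows ... */

        if (rank == 0) printf("halo exchange done on %d ranks\n", nproc);
        MPI_Finalize();
        return 0;
    }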
