Parallel Short Course. Distributed memory machines
- Giles Manning
Parallel Short Course
Message Passing Interface (MPI) I: Introduction and Point-to-point operations
Spring 2007

Distributed memory machines
[Figure: a compute node consisting of a processor, memory, local disks, and two network cards; the nodes are connected through a message passing network and a separate administrative network.]
Communication between different machines
[Figure: a first process (client) on mypc.my-university.edu sends an http request over the Internet to a second process (server) on webserver.provider.com; each machine has its own processor, memory, local disks, and network cards. Host name and host address identify the machines; a protocol governs the exchange.]

Communication between different machines on the Internet:
- Addressing: hostname and/or IP address
- Communication: based on protocols, e.g. http or TCP/IP
- Process start-up: every process (= application) has to be started separately
The Message Passing universe
- Process start-up: we want to start n processes which shall work on the same problem; mechanisms to start n processes are provided by the MPI library
- Addressing: every process has a unique identifier (rank); the value of the rank is between 0 and n-1
- Communication: MPI defines interfaces/routines for sending data to a process and for receiving data from a process. It does not specify a protocol.

History of MPI
- Until the early 90s: all vendors of parallel hardware had their own message passing library
- Some public domain message passing libraries were available, all of them incompatible with each other
- High effort for end-users to move code from one architecture to another
- June 1994: version 1.0 of MPI presented by the MPI Forum
- June 1995: version 1.1 (errata of MPI 1.0)
- 1997: MPI 2.0, adding new functionality to MPI
Simple Example (I)
[Slide annotating a command line and its output: the MPI command to start processes, the name of the application to start, and the number of processes to be started; the output shows the number of processes which have been started and the rank of the 1st and 2nd process.]

Simple Example (II)
On the PC of the end-user:

  mypc> mpirun -np 2 ./t1

mpirun starts the application t1 two times (as specified with the -np argument) on two currently available processors of the parallel machine (e.g. a PC cluster), telling one process that its rank is 0 and the other that its rank is 1.
Simple Example (III)

  #include "mpi.h"
  #include <stdio.h>

  int main ( int argc, char **argv )
  {
      int rank, size;

      MPI_Init ( &argc, &argv );
      MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
      MPI_Comm_size ( MPI_COMM_WORLD, &size );
      printf ( "Mpi hi from node %d job size %d\n", rank, size );
      MPI_Finalize ();
      return (0);
  }

MPI basics
- mpirun starts the required number of processes
- every process has a unique identifier (rank) which is between 0 and n-1; no identifiers are duplicated, no identifiers are left out
- all processes which have been started by mpirun are organized in a process group (communicator) called MPI_COMM_WORLD
- MPI_COMM_WORLD is static: the number of processes cannot change, and the participating processes cannot change
MPI basics (II)
- The rank of a process is always related to a process group, e.g. a process is uniquely identified by a tuple (rank, process group)
- A process can be part of several groups, i.e. a process has a rank in each group
[Figure: a process belonging to both MPI_COMM_WORLD and a smaller new process group, with a different rank in each.]

Simple Example (IV)

  ---snip---
  MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
  MPI_Comm_size ( MPI_COMM_WORLD, &size );
  ---snip---

- MPI_Comm_rank returns the rank of the process within the process group
- MPI_Comm_size returns the size of (i.e. the number of processes in) the process group
- MPI_COMM_WORLD is the default process group containing all processes started by mpirun
Simple Example (V)

  ---snip---
  MPI_Init ( &argc, &argv );
  ---snip---
  MPI_Finalize ();
  ---snip---

- MPI_Init sets up the parallel environment: the processes set up network connections to each other, and the default process group (MPI_COMM_WORLD) is set up; it should be the first MPI function executed in the application
- MPI_Finalize closes the parallel environment; it should be the last MPI function called in the application and might stop all processes

Second example: scalar product of two vectors
- the two vectors are distributed over two processes
- each process holds half of each original vector:
  - process with rank 0: a(0 ... N/2-1), b(0 ... N/2-1)
  - process with rank 1: a(N/2 ... N-1), b(N/2 ... N-1)
Second example (II)
Logical/global view of the data compared to the local view of the data:
- process with rank 0 holds a(0 ... N/2-1), with
  a_local(0) = a(0), a_local(1) = a(1), a_local(2) = a(2), ..., a_local(N/2-1) = a(N/2-1)
- process with rank 1 holds a(N/2 ... N-1), with
  a_local(0) = a(N/2), a_local(1) = a(N/2+1), a_local(2) = a(N/2+2), ..., a_local(N/2-1) = a(N-1)

Second example (III)
Scalar product:

  s = sum_{i=0}^{N-1} a[i]*b[i]

Parallel algorithm:

  s = sum_{i=0}^{N/2-1} a[i]*b[i] + sum_{i=N/2}^{N-1} a[i]*b[i]
    = sum_{i=0}^{N/2-1} a_local[i]*b_local[i]   (on rank 0)
    + sum_{i=0}^{N/2-1} a_local[i]*b_local[i]   (on rank 1)

Combining the two partial results requires communication between the processes.
Second example (IV)

  #include "mpi.h"

  int main ( int argc, char **argv )
  {
      int i, rank, size;
      double a_local[N/2], b_local[N/2];
      double s_local, s;
      MPI_Status status;

      MPI_Init ( &argc, &argv );
      MPI_Comm_rank ( MPI_COMM_WORLD, &rank );
      MPI_Comm_size ( MPI_COMM_WORLD, &size );

      s_local = 0;
      for ( i=0; i<N/2; i++ ) {
          s_local = s_local + a_local[i] * b_local[i];
      }

Second example (V)

      if ( rank == 0 ) {
          /* Send the local result to rank 1 */
          MPI_Send ( &s_local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD );
      }
      if ( rank == 1 ) {
          MPI_Recv ( &s, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status );
          /* Calculate global result */
          s = s + s_local;
      }
Second example (VI)

      /* Rank 1 holds the global result and sends it now to rank 0 */
      if ( rank == 0 ) {
          MPI_Recv ( &s, 1, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &status );
      }
      if ( rank == 1 ) {
          MPI_Send ( &s, 1, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD );
      }

      /* Close the parallel environment */
      MPI_Finalize ();
      return (0);
  }

Sending Data

  ---snip---
  MPI_Send ( &s_local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD );
  ---snip---

The arguments of MPI_Send are:
- the data element which shall be sent
- the number of elements which shall be sent
- the data type of the elements which shall be sent
- the rank of the process in the process group to which the message shall be sent
- a user-defined integer (tag) for uniquely identifying a message
- the process group (here MPI_COMM_WORLD, containing all processes started by mpirun)
Receiving Data

  ---snip---
  MPI_Recv ( &s_local, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status );
  ---snip---

The arguments of MPI_Recv are:
- the data element where the data shall be received
- the number of elements which shall be received
- the data type of the elements which shall be received
- the rank of the process in the process group which sent the message
- a user-defined integer (tag) for uniquely identifying a message
- the process group
- status information about the message

Deadlock (I)
Question: how can two processes safely exchange data at the same time?

Possibility 1:

  Process 0              Process 1
  MPI_Send(buf, ...);    MPI_Send(buf, ...);
  MPI_Recv(buf, ...);    MPI_Recv(buf, ...);

This can deadlock, depending on the message length and the capability of the hardware/MPI library to buffer messages.
Deadlock (II)
Possibility 2: re-order the MPI functions on one process:

  Process 0               Process 1
  MPI_Recv(rbuf, ...);    MPI_Send(buf, ...);
  MPI_Send(buf, ...);     MPI_Recv(rbuf, ...);

Other possibilities:
- asynchronous communication (shown later)
- use buffered send (MPI_Bsend) (not shown here)
- use MPI_Sendrecv (not shown here)

Lab machine: marvin.cs.uh.edu
Login name: gabriel
Password: haha99/1

  mkdir ...
  cd ...
  cp ../helloworldf.f .
  mpicc -o helloworld helloworld.c
  mpif77 -o helloworldf helloworldf.f
  mpirun -np 2 ./helloworldf
Faulty examples (I)
Sender mismatch:
- if the rank does not exist (e.g. rank >= size of MPI_COMM_WORLD), the MPI library can recognize it and return an error
- if the rank does exist (0 <= rank < size of MPI_COMM_WORLD) but does not send a message, MPI_Recv waits forever => deadlock

  if ( rank == 0 ) {
      /* Send the local result to rank 1 */
      MPI_Send ( &s_local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD );
  }
  if ( rank == 1 ) {
      MPI_Recv ( &s, 1, MPI_DOUBLE, 5, 0, MPI_COMM_WORLD, &status );
  }

Faulty examples (II)
Tag mismatch:
- if the tag is outside of the allowed range (0 <= tag <= MPI_TAG_UB), the MPI library can recognize it and return an error
- if the tag in MPI_Recv differs from the tag specified in MPI_Send, MPI_Recv waits forever => deadlock

  if ( rank == 0 ) {
      /* Send the local result to rank 1 */
      MPI_Send ( &s_local, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD );
  }
  if ( rank == 1 ) {
      MPI_Recv ( &s, 1, MPI_DOUBLE, 0, 18, MPI_COMM_WORLD, &status );
  }
What you've learned so far
Six MPI functions are sufficient for programming a distributed memory machine:

  MPI_Init (int *argc, char ***argv);
  MPI_Finalize ();
  MPI_Comm_rank (MPI_Comm comm, int *rank);
  MPI_Comm_size (MPI_Comm comm, int *size);
  MPI_Send (void *buf, int count, MPI_Datatype dat, int dest, int tag, MPI_Comm comm);
  MPI_Recv (void *buf, int count, MPI_Datatype dat, int source, int tag, MPI_Comm comm, MPI_Status *status);

So, why not stop here?
- Performance:
  - we need functions which can fully exploit the capabilities of the hardware
  - we need functions to abstract typical communication patterns
- Usability:
  - we need functions to simplify often recurring tasks
  - we need functions to simplify the management of parallel applications
So, why not stop here?
- Performance: asynchronous point-to-point operations, one-sided operations, collective operations, derived data-types, parallel I/O, hints
- Usability: process grouping functions, environmental and process management, error handling, object attributes, language bindings

Point-to-point operations
- Data exchange between two processes; both processes are actively participating in the data exchange ("two-sided" communication)
- Large set of functions defined in MPI-1 (50+):

               Blocking     Non-blocking   Persistent
  Standard     MPI_Send     MPI_Isend      MPI_Send_init
  Buffered     MPI_Bsend    MPI_Ibsend     MPI_Bsend_init
  Ready        MPI_Rsend    MPI_Irsend     MPI_Rsend_init
  Synchronous  MPI_Ssend    MPI_Issend     MPI_Ssend_init
A message consists of
- the data which is to be sent from the sender to the receiver, described by:
  - the beginning of the buffer
  - a data-type
  - the number of elements of the data-type
- the message header (message envelope):
  - rank of the sender process
  - rank of the receiver process
  - the communicator
  - a tag

Rules for point-to-point operations
- Reliability: MPI guarantees that no message gets lost
- Non-overtaking rule: MPI guarantees that two messages posted from process A to process B arrive in the same order as they have been posted
- Message-based paradigm: MPI specifies that a single message cannot be received with more than one Recv operation (in contrast to sockets!):

  if ( rank == 0 ) {
      MPI_Send ( buf, 4, ... );
  }
  if ( rank == 1 ) {
      MPI_Recv ( buf, 3, ... );            /* not allowed: a message may not be */
      MPI_Recv ( &(buf[3]), 1, ... );      /* split across two Recv operations  */
  }
Message matching (I)
How does the receiver know whether the message which it just received is the message for which it was waiting?
- the sender of the arriving message has to match the sender of the expected message
- the tag of the arriving message has to match the tag of the expected message
- the communicator of the arriving message has to match the communicator of the expected message

Message matching (II)
What happens if the length of the arriving message does not match the length of the expected message?
- the length of the message is not used for matching
- if the received message is shorter than the expected message: no problem
- if the received message is longer than the expected message:
  - an error code (MPI_ERR_TRUNC) will be returned, or
  - your application will be aborted, or
  - your application will deadlock, or
  - your application writes a core-dump
Message matching (III)
Example 1: correct example. The received message is shorter than the receive buffer; the remaining elements of the buffer are left untouched:

  if ( rank == 0 ) {
      MPI_Send ( buf, 3, MPI_INT, 1, 1, MPI_COMM_WORLD );
  } else if ( rank == 1 ) {
      MPI_Recv ( buf, 5, MPI_INT, 0, 1, MPI_COMM_WORLD, &status );
  }

Message matching (IV)
Example 2: erroneous example. The arriving message is longer than the receive buffer, potentially writing over the end of the buffer:

  if ( rank == 0 ) {
      MPI_Send ( buf, 5, MPI_INT, 1, 1, MPI_COMM_WORLD );
  } else if ( rank == 1 ) {
      MPI_Recv ( buf, 3, MPI_INT, 0, 1, MPI_COMM_WORLD, &status );
  }
Example: implementation of a ring using Send/Recv
Rank 0 starts the ring:

  MPI_Comm_rank ( comm, &rank );
  MPI_Comm_size ( comm, &size );
  if ( rank == 0 ) {
      MPI_Send ( buf, 1, MPI_INT, rank+1, 1, comm );
      MPI_Recv ( buf, 1, MPI_INT, size-1, 1, comm, &status );
  } else if ( rank == size-1 ) {
      MPI_Recv ( buf, 1, MPI_INT, rank-1, 1, comm, &status );
      MPI_Send ( buf, 1, MPI_INT, 0, 1, comm );
  } else {
      MPI_Recv ( buf, 1, MPI_INT, rank-1, 1, comm, &status );
      MPI_Send ( buf, 1, MPI_INT, rank+1, 1, comm );
  }

Wildcards
Question: can I use wildcards for the arguments in Send/Recv?
Answer:
- for Send: no
- for Recv:
  - tag: yes, MPI_ANY_TAG
  - source: yes, MPI_ANY_SOURCE
  - communicator: no
Status of a message (I)
The MPI status contains
- directly accessible information:
  - who sent the message
  - what was the tag
  - what is the error-code of the message
- and indirectly accessible information (through function calls):
  - how long is the message
  - has the message been cancelled

Status of a message (II)
Usage in C:

  MPI_Status status;

  MPI_Recv ( buf, cnt, MPI_INT, ..., &status );

  /* directly access source, tag, and error */
  src = status.MPI_SOURCE;
  tag = status.MPI_TAG;
  err = status.MPI_ERROR;

  /* determine message length and whether it has been cancelled */
  MPI_Get_count ( &status, MPI_INT, &rcnt );
  MPI_Test_cancelled ( &status, &flag );
Status of a message (III)
Usage in Fortran:

  integer status(MPI_STATUS_SIZE)

  call MPI_Recv ( buf, cnt, MPI_INTEGER, ..., status, ierr )

  ! directly access source, tag, and error
  src = status(MPI_SOURCE)
  tag = status(MPI_TAG)
  err = status(MPI_ERROR)

  ! determine message length and whether it has been cancelled
  call MPI_Get_count ( status, MPI_INTEGER, rcnt, ierr )
  call MPI_Test_cancelled ( status, flag, ierr )

Status of a message (IV)
If you are not interested in the status, you can pass MPI_STATUS_IGNORE (or MPI_STATUSES_IGNORE for an array of statuses) to MPI_Recv and all other MPI functions which return a status.
Non-blocking operations (I)
- A regular MPI_Send returns when the data is safely stored away
- A regular MPI_Recv returns when the data is fully available in the receive buffer
- Non-blocking operations initiate the send and receive operations but do not wait for their completion; functions which check or wait for the completion of an initiated communication have to be called explicitly
- Since these functions return immediately after initiating the communication, their names carry an "I" (immediate) prefix (e.g. MPI_Isend or MPI_Irecv)

Non-blocking operations (II)

  MPI_Isend (void *buf, int cnt, MPI_Datatype dat, int dest, int tag, MPI_Comm comm, MPI_Request *req);
  MPI_Irecv (void *buf, int cnt, MPI_Datatype dat, int src, int tag, MPI_Comm comm, MPI_Request *req);
Non-blocking operations (III)
- After initiating a non-blocking communication, it is not allowed to touch (= modify) the communication buffer until completion
- You cannot make any assumptions about when the message will really be transferred
- All immediate functions take an additional argument, a request:
  - a request uniquely identifies an ongoing communication
  - it has to be used if you want to check/wait for the completion of a posted communication

Completion functions (I)

  MPI_Wait     (MPI_Request *req, MPI_Status *stat);
  MPI_Waitall  (int cnt, MPI_Request *reqs, MPI_Status *stats);
  MPI_Waitany  (int cnt, MPI_Request *reqs, int *index, MPI_Status *stat);
  MPI_Waitsome (int cnt, MPI_Request *reqs, int *outcount, int *indices, MPI_Status *stats);

Functions waiting for completion:
- MPI_Wait: wait for one communication to finish
- MPI_Waitall: wait for all communications of a list to finish
- MPI_Waitany: wait for one communication of a list to finish
- MPI_Waitsome: wait for at least one communication of a list to finish

The content of the status is not defined for send operations.
Completion functions (II)

  MPI_Test     (MPI_Request *req, int *flag, MPI_Status *stat);
  MPI_Testall  (int cnt, MPI_Request *reqs, int *flag, MPI_Status *stats);
  MPI_Testany  (int cnt, MPI_Request *reqs, int *index, int *flag, MPI_Status *stat);
  MPI_Testsome (int cnt, MPI_Request *reqs, int *outcount, int *indices, MPI_Status *stats);

Test functions verify whether a communication is complete:
- MPI_Test: check whether a communication has finished
- MPI_Testall: check whether all communications of a list have finished
- MPI_Testany: check whether one of a list of communications has finished
- MPI_Testsome: check how many of a list of communications have finished

Deadlock problem revisited
Question: how can two processes safely exchange data at the same time?
Possibility 3: usage of non-blocking operations:

  Process 0                       Process 1
  MPI_Irecv(rbuf, ..., &req);     MPI_Irecv(rbuf, ..., &req);
  MPI_Send (buf, ...);            MPI_Send (buf, ...);
  MPI_Wait (&req, &status);       MPI_Wait (&req, &status);

Notes:
- you have to use two separate buffers
- there are many different ways of formulating this scenario
- the code is identical for both processes
Some Links
- MPI Forum:
- My personal MPI home page:
- Open MPI:
- MPICH:
More informationExperiencing Cluster Computing Message Passing Interface
Experiencing Cluster Computing Message Passing Interface Class 6 Message Passing Paradigm The Underlying Principle A parallel program consists of p processes with different address spaces. Communication
More informationNon-Blocking Communications
Non-Blocking Communications Reusing this material This work is licensed under a Creative Commons Attribution- NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_us
More information15-440: Recitation 8
15-440: Recitation 8 School of Computer Science Carnegie Mellon University, Qatar Fall 2013 Date: Oct 31, 2013 I- Intended Learning Outcome (ILO): The ILO of this recitation is: Apply parallel programs
More informationParallel Programming, MPI Lecture 2
Parallel Programming, MPI Lecture 2 Ehsan Nedaaee Oskoee 1 1 Department of Physics IASBS IPM Grid and HPC workshop IV, 2011 Outline 1 Introduction and Review The Von Neumann Computer Kinds of Parallel
More informationParallel Programming
Parallel Programming Prof. Paolo Bientinesi pauldj@aices.rwth-aachen.de WS 17/18 Exercise MPI_Irecv MPI_Wait ==?? MPI_Recv Paolo Bientinesi MPI 2 Exercise MPI_Irecv MPI_Wait ==?? MPI_Recv ==?? MPI_Irecv
More informationCSE 160 Lecture 18. Message Passing
CSE 160 Lecture 18 Message Passing Question 4c % Serial Loop: for i = 1:n/3-1 x(2*i) = x(3*i); % Restructured for Parallelism (CORRECT) for i = 1:3:n/3-1 y(2*i) = y(3*i); for i = 2:3:n/3-1 y(2*i) = y(3*i);
More informationCS4961 Parallel Programming. Lecture 16: Introduction to Message Passing 11/3/11. Administrative. Mary Hall November 3, 2011.
CS4961 Parallel Programming Lecture 16: Introduction to Message Passing Administrative Next programming assignment due on Monday, Nov. 7 at midnight Need to define teams and have initial conversation with
More informationIntroduction to the Message Passing Interface (MPI)
Introduction to the Message Passing Interface (MPI) rabenseifner@hlrs.de Introduction to the Message Passing Interface (MPI) [03] University of Stuttgart High-Performance Computing-Center Stuttgart (HLRS)
More informationMPI Tutorial. Shao-Ching Huang. IDRE High Performance Computing Workshop
MPI Tutorial Shao-Ching Huang IDRE High Performance Computing Workshop 2013-02-13 Distributed Memory Each CPU has its own (local) memory This needs to be fast for parallel scalability (e.g. Infiniband,
More informationIntroduction to MPI. Branislav Jansík
Introduction to MPI Branislav Jansík Resources https://computing.llnl.gov/tutorials/mpi/ http://www.mpi-forum.org/ https://www.open-mpi.org/doc/ Serial What is parallel computing Parallel What is MPI?
More informationIntroduction to MPI. SuperComputing Applications and Innovation Department 1 / 143
Introduction to MPI Isabella Baccarelli - i.baccarelli@cineca.it Mariella Ippolito - m.ippolito@cineca.it Cristiano Padrin - c.padrin@cineca.it Vittorio Ruggiero - v.ruggiero@cineca.it SuperComputing Applications
More informationCS 6230: High-Performance Computing and Parallelization Introduction to MPI
CS 6230: High-Performance Computing and Parallelization Introduction to MPI Dr. Mike Kirby School of Computing and Scientific Computing and Imaging Institute University of Utah Salt Lake City, UT, USA
More informationRecap of Parallelism & MPI
Recap of Parallelism & MPI Chris Brady Heather Ratcliffe The Angry Penguin, used under creative commons licence from Swantje Hess and Jannis Pohlmann. Warwick RSE 13/12/2017 Parallel programming Break
More informationMessage Passing Interface
Message Passing Interface by Kuan Lu 03.07.2012 Scientific researcher at Georg-August-Universität Göttingen and Gesellschaft für wissenschaftliche Datenverarbeitung mbh Göttingen Am Faßberg, 37077 Göttingen,
More informationDiscussion: MPI Basic Point to Point Communication I. Table of Contents. Cornell Theory Center
1 of 14 11/1/2006 3:58 PM Cornell Theory Center Discussion: MPI Point to Point Communication I This is the in-depth discussion layer of a two-part module. For an explanation of the layers and how to navigate
More informationCSE 160 Lecture 15. Message Passing
CSE 160 Lecture 15 Message Passing Announcements 2013 Scott B. Baden / CSE 160 / Fall 2013 2 Message passing Today s lecture The Message Passing Interface - MPI A first MPI Application The Trapezoidal
More informationIntermediate MPI (Message-Passing Interface) 1/11
Intermediate MPI (Message-Passing Interface) 1/11 What happens when a process sends a message? Suppose process 0 wants to send a message to process 1. Three possibilities: Process 0 can stop and wait until
More informationME964 High Performance Computing for Engineering Applications
ME964 High Performance Computing for Engineering Applications Parallel Computing with MPI Building/Debugging MPI Executables MPI Send/Receive Collective Communications with MPI April 10, 2012 Dan Negrut,
More informationmith College Computer Science CSC352 Week #7 Spring 2017 Introduction to MPI Dominique Thiébaut
mith College CSC352 Week #7 Spring 2017 Introduction to MPI Dominique Thiébaut dthiebaut@smith.edu Introduction to MPI D. Thiebaut Inspiration Reference MPI by Blaise Barney, Lawrence Livermore National
More informationIntermediate MPI (Message-Passing Interface) 1/11
Intermediate MPI (Message-Passing Interface) 1/11 What happens when a process sends a message? Suppose process 0 wants to send a message to process 1. Three possibilities: Process 0 can stop and wait until
More informationMPI 3. CSCI 4850/5850 High-Performance Computing Spring 2018
MPI 3 CSCI 4850/5850 High-Performance Computing Spring 2018 Tae-Hyuk (Ted) Ahn Department of Computer Science Program of Bioinformatics and Computational Biology Saint Louis University Learning Objectives
More informationIntroduction to MPI. Ricardo Fonseca. https://sites.google.com/view/rafonseca2017/
Introduction to MPI Ricardo Fonseca https://sites.google.com/view/rafonseca2017/ Outline Distributed Memory Programming (MPI) Message Passing Model Initializing and terminating programs Point to point
More informationProgramming SoHPC Course June-July 2015 Vladimir Subotic MPI - Message Passing Interface
www.bsc.es Programming with Message-Passing Libraries SoHPC Course June-July 2015 Vladimir Subotic 1 Data Transfer Blocking: Function does not return, before message can be accessed again Process is blocked
More informationMPI MESSAGE PASSING INTERFACE
MPI MESSAGE PASSING INTERFACE David COLIGNON, ULiège CÉCI - Consortium des Équipements de Calcul Intensif http://www.ceci-hpc.be Outline Introduction From serial source code to parallel execution MPI functions
More informationDEADLOCK DETECTION IN MPI PROGRAMS
1 DEADLOCK DETECTION IN MPI PROGRAMS Glenn Luecke, Yan Zou, James Coyle, Jim Hoekstra, Marina Kraeva grl@iastate.edu, yanzou@iastate.edu, jjc@iastate.edu, hoekstra@iastate.edu, kraeva@iastate.edu High
More informationThe Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing
The Message Passing Interface (MPI) TMA4280 Introduction to Supercomputing NTNU, IMF January 16. 2017 1 Parallelism Decompose the execution into several tasks according to the work to be done: Function/Task
More informationPractical Introduction to Message-Passing Interface (MPI)
1 Practical Introduction to Message-Passing Interface (MPI) October 1st, 2015 By: Pier-Luc St-Onge Partners and Sponsors 2 Setup for the workshop 1. Get a user ID and password paper (provided in class):
More informationPCAP Assignment I. 1. A. Why is there a large performance gap between many-core GPUs and generalpurpose multicore CPUs. Discuss in detail.
PCAP Assignment I 1. A. Why is there a large performance gap between many-core GPUs and generalpurpose multicore CPUs. Discuss in detail. The multicore CPUs are designed to maximize the execution speed
More informationWeek 3: MPI. Day 02 :: Message passing, point-to-point and collective communications
Week 3: MPI Day 02 :: Message passing, point-to-point and collective communications Message passing What is MPI? A message-passing interface standard MPI-1.0: 1993 MPI-1.1: 1995 MPI-2.0: 1997 (backward-compatible
More informationTutorial 2: MPI. CS486 - Principles of Distributed Computing Papageorgiou Spyros
Tutorial 2: MPI CS486 - Principles of Distributed Computing Papageorgiou Spyros What is MPI? An Interface Specification MPI = Message Passing Interface Provides a standard -> various implementations Offers
More informationCS 426. Building and Running a Parallel Application
CS 426 Building and Running a Parallel Application 1 Task/Channel Model Design Efficient Parallel Programs (or Algorithms) Mainly for distributed memory systems (e.g. Clusters) Break Parallel Computations
More informationIntroduction to MPI. Ekpe Okorafor. School of Parallel Programming & Parallel Architecture for HPC ICTP October, 2014
Introduction to MPI Ekpe Okorafor School of Parallel Programming & Parallel Architecture for HPC ICTP October, 2014 Topics Introduction MPI Model and Basic Calls MPI Communication Summary 2 Topics Introduction
More informationThe Message Passing Interface (MPI): Parallelism on Multiple (Possibly Heterogeneous) CPUs
1 The Message Passing Interface (MPI): Parallelism on Multiple (Possibly Heterogeneous) CPUs http://mpi-forum.org https://www.open-mpi.org/ Mike Bailey mjb@cs.oregonstate.edu Oregon State University mpi.pptx
More informationCS 470 Spring Mike Lam, Professor. Distributed Programming & MPI
CS 470 Spring 2019 Mike Lam, Professor Distributed Programming & MPI MPI paradigm Single program, multiple data (SPMD) One program, multiple processes (ranks) Processes communicate via messages An MPI
More informationMPI Runtime Error Detection with MUST
MPI Runtime Error Detection with MUST At the 25th VI-HPS Tuning Workshop Joachim Protze IT Center RWTH Aachen University March 2017 How many issues can you spot in this tiny example? #include #include
More informationMPI Correctness Checking with MUST
Center for Information Services and High Performance Computing (ZIH) MPI Correctness Checking with MUST Parallel Programming Course, Dresden, 8.- 12. February 2016 Mathias Korepkat (mathias.korepkat@tu-dresden.de
More informationParallel Programming
Parallel Programming MPI Part 1 Prof. Paolo Bientinesi pauldj@aices.rwth-aachen.de WS17/18 Preliminaries Distributed-memory architecture Paolo Bientinesi MPI 2 Preliminaries Distributed-memory architecture
More information