NUMERICAL PARALLEL COMPUTING
1 Lecture 5, March 23, 2012: The Message Passing Interface
Peter Arbenz, Andreas Adelmann
Computer Science Dept, ETH Zürich / Paul Scherrer Institut, Villigen
2 MIMD: Multiple Instruction stream - Multiple Data stream

Distributed memory machines (multicomputers): all data are local to some processor; the programmer is responsible for data placement; communication is done by message passing.

[Figure: four nodes, each a CPU with its own local memory, connected by an interconnect]
3 Message passing

Communication on parallel computers with distributed memory (multicomputers) is most commonly done by message passing. Processes coordinate their activities by explicitly sending and receiving messages.

We assume that processes are statically allocated: the number of processes is set at the beginning of the program execution, and no further processes are created during execution. There is usually one process executing on one processor or core. Each process is assigned a unique integer rank in the range 0, 1, ..., p-1, where p is the number of processes.
4 The Message Passing Interface: MPI

Like OpenMP for shared memory programming, MPI is an application programmer interface to message passing. MPI extends programming languages (C, C++, Fortran) with a library of functions for point-to-point and collective communication, plus additional functions for managing the processes participating in the computation and for querying their status.

MPI has become a de facto standard for message passing on multicomputers. Standardization is by the MPI Forum. Implementations include OpenMPI (Brutus runs OpenMPI) and MPICH.
5 The Message Passing Interface: MPI (cont.)

Goals of the Message Passing Interface: to be practical, to be portable, to be efficient.

Supported hardware platforms: distributed memory (the original target systems), shared memory, hybrid.

All parallelism is explicit: the programmer is responsible for correctly identifying parallelism and implementing parallel algorithms using MPI constructs. The number of tasks dedicated to run a parallel program is static; new tasks cannot be dynamically spawned during run time.
6 The Message Passing Interface: MPI (cont.)

MPI-2:
- Dynamic processes: extensions that remove the static process model of MPI; provides routines to create new processes.
- One-sided communication: routines for one-directional communication, including shared memory operations (put/get) and remote accumulate operations.
- Extended collective operations: allows, e.g., non-blocking collective operations.
- Additional language bindings: C++ and F90 bindings.
- Parallel I/O.

Here we restrict ourselves to basic MPI-1.
7 MPI References

- P. S. Pacheco: Parallel Programming with MPI. Morgan Kaufmann, San Francisco, CA. Easy to read; much of the material is with Fortran.
- P. S. Pacheco: An Introduction to Parallel Programming. Morgan Kaufmann, San Francisco, CA. Easy to read; not limited to MPI; programs mostly in C.
- W. Gropp, E. Lusk, A. Skjellum: Using MPI: Portable Parallel Programming with the Message Passing Interface. MIT Press, 2nd ed.
- Online tutorial by Blaise Barney, Lawrence Livermore National Laboratory. (I took the images on slides 8 and 26 from that web page.)
8 General MPI program structure

[Figure: skeleton of an MPI program, from the LLNL tutorial]
9 hello.c

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char** argv) {
        int rank;      /* rank of process */
        int p;         /* number of processes */

        MPI_Init(&argc, &argv);               /* Start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* Find out process rank */
        MPI_Comm_size(MPI_COMM_WORLD, &p);    /* Find out number of processes */

        printf("Hello world from %d (out of %d procs)\n", rank, p);

        MPI_Finalize();                       /* Shut down MPI */
    }
10 hello.c (cont.)

    [iyves@brutus3 ~]$ module load quadrics_mpi/stable
    [iyves@brutus3 ~]$ mpicc -o hello hello.c
    [iyves@brutus3 ~]$ bsub -n 4 -o output prun ./hello
    Quadrics MPI job.
    Job <853076> is submitted to queue <qsnet.s>.
    [iyves@brutus3 ~]$ cat output
    ...
    Hello world from 0 (out of 4 procs)
    Hello world from 2 (out of 4 procs)
    Hello world from 1 (out of 4 procs)
    Hello world from 3 (out of 4 procs)
    ...
11 MPI communicator

A communicator indicates a collection of processes that can send messages to each other. MPI assumes that p processes, ranked 0, ..., p-1, have been statically allocated (on hopefully p processors) and that they can communicate with each other by means of MPI commands.

The initial set of processes is assigned the default communicator MPI_COMM_WORLD. MPI_Comm_rank and MPI_Comm_size are used by a process to find out its position in the MPI world.
12 Example of simple send and receive commands

A typical usage of sending and receiving is given by the following example, where process 0 sends a single float x to process 1. Process 0 executes

    MPI_Send(&x, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);

while process 1 executes

    MPI_Recv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);
13 SPMD programming model

Process 0 and process 1 execute different statements. However, the single-program, multiple-data (SPMD) programming model permits individual processes to execute different statements of the same program by means of conditional branches:

    if (my_rank == 0)
        MPI_Send(&x, 1, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (my_rank == 1)
        MPI_Recv(&x, 1, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, &status);

The SPMD programming model is the most common approach to programming MIMD systems.
14 MPI messages

Message = Data + Envelope

Data: the payload that has to be transferred from one place to another. In MPI this is a sequence (array) of items of equal type. The data is given by a memory address (e.g., a pointer in C), an array length, and an MPI datatype (cf. next slide). (We do not discuss how to send messages composed of data of varying types.)
15 MPI messages (cont.)

    MPI datatype         C datatype
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (no C equivalent)
    MPI_PACKED           (no C equivalent)
16 MPI messages (cont.)

Message = Data + Envelope

Envelope: contains (1) the address (of sender or recipient, respectively, within the communicator) and (2) a tag (or "message type") to distinguish messages from the same sender to the same recipient. The tags of a sent and a received message must match! Different tags avoid confusion if several messages are communicated between the same sender/receiver pair, e.g., in an iteration.
17 Simple MPI send and receive functions

    int MPI_Send(void*        buffer       /* in */,
                 int          count        /* in */,
                 MPI_Datatype datatype     /* in */,
                 int          destination  /* in */,
                 int          tag          /* in */,
                 MPI_Comm     communicator /* in */)

    int MPI_Recv(void*        buffer       /* out */,
                 int          count        /* in */,
                 MPI_Datatype datatype     /* in */,
                 int          source       /* in */,
                 int          tag          /* in */,
                 MPI_Comm     communicator /* in */,
                 MPI_Status*  status       /* out */)
18 Message matching

The message sent by process source by the call to MPI_Send can be received by process destination if

    recv_comm = send_comm
    recv_tag  = send_tag
    recv_type = send_type
    recv_buf_size >= send_buf_size
19 Simple MPI send and receive functions (cont.)

The status in MPI_Recv returns information on the data that was actually received. status is a C structure with (at least) three members:

    status -> MPI_SOURCE
    status -> MPI_TAG
    status -> MPI_ERROR

If, e.g., tag or source has been set to a wildcard, i.e., MPI_ANY_TAG or MPI_ANY_SOURCE, then status -> MPI_TAG and status -> MPI_SOURCE return the actual values of these parameters. The length of the message is obtained by

    int MPI_Get_count(MPI_Status*  status   /* in */,
                      MPI_Datatype datatype /* in */,
                      int*         count    /* out */)
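As a small illustration (a sketch, not from the slides; the buffer size and variable names are made up), a receive with wildcards combined with MPI_Get_count could look like this:

    int buf[100];
    int count;
    MPI_Status status;

    /* accept a message from any sender with any tag */
    MPI_Recv(buf, 100, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    /* how many MPI_INT items actually arrived? */
    MPI_Get_count(&status, MPI_INT, &count);
    printf("received %d ints from rank %d (tag %d)\n",
           count, status.MPI_SOURCE, status.MPI_TAG);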
20 Pacheco's greetings.c

    #include <stdio.h>
    #include <string.h>
    #include "mpi.h"

    int main(int argc, char** argv) {
        int my_rank;         /* rank of process */
        int p;               /* number of processes */
        int source;          /* rank of sender */
        int dest;            /* rank of receiver */
        int tag = 0;         /* tag for messages */
        char message[100];   /* storage for message */
        MPI_Status status;   /* return status for receive */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &p);
21 Pacheco's greetings.c (cont.)

        if (my_rank != 0) {
            /* Create message */
            sprintf(message, "Greetings from process %d!", my_rank);
            dest = 0;
            /* Use strlen+1 so that \0 gets transmitted */
            MPI_Send(message, strlen(message)+1, MPI_CHAR,
                     dest, tag, MPI_COMM_WORLD);
        } else { /* my_rank == 0 */
            for (source = 1; source < p; source++) {
                MPI_Recv(message, 100, MPI_CHAR, source, tag,
                         MPI_COMM_WORLD, &status);
                printf("%s\n", message);
            }
        }

        MPI_Finalize();   /* Shut down MPI */
    }
22 Point-to-point communication

Point-to-point communication routines involve message passing between two, and only two, different MPI tasks (a pair of send/receive operations).

In MPI there are different types of send and receive routines used for different purposes, for example:
- blocking send / blocking receive
- non-blocking send / non-blocking receive
- buffered send / buffered receive
- ...

Any type of send routine can be paired with any type of receive routine.
23 Semantics of MPI_Send

MPI_Send may buffer the message in MPI-internal storage; the call to MPI_Send then returns immediately. This type of communication is called asynchronous.

MPI_Send may instead block until MPI_Recv has started to receive the data; only then does MPI_Send return. This type of communication is called synchronous.
24 Semantics of MPI_Send (cont.)

The exact behavior of MPI_Send depends on the implementation. Typically, messages smaller than a default cutoff are buffered, while longer messages cause the send to block.

Programmer's view: a blocking send (MPI_Send) routine will only return after it is safe to modify the application buffer (your send data) for reuse. This holds for both synchronous and asynchronous implementations.
25 Semantics of MPI_Recv

MPI_Recv always blocks until a matching message has been received. When MPI_Recv returns, the message is available in the receive buffer.

MPI messages are non-overtaking: if a source process sends two messages to a destination process, then the first message must be available to the destination process before the second one. (This holds only if exactly two processes are involved.)
26 Communication modes: Buffered receive

[Figure: buffered receive, from the LLNL tutorial]
27 Deadlocks

Program fragment that generates a deadlock:

    MPI_Comm_rank(comm, &my_rank);
    if (my_rank == 0) {
        MPI_Recv(recvbuf, count, MPI_INT, 1, tag, comm, &status);
        MPI_Send(sendbuf, count, MPI_INT, 1, tag, comm);
    } else if (my_rank == 1) {
        MPI_Recv(recvbuf, count, MPI_INT, 0, tag, comm, &status);
        MPI_Send(sendbuf, count, MPI_INT, 0, tag, comm);
    }

Problem: both processes wait on each other. Possible solution: swap send/recv for one of the processes.
28 Deadlocks (cont.)

Whether this program fragment generates a deadlock is implementation dependent:

    MPI_Comm_rank(comm, &my_rank);
    if (my_rank == 0) {
        MPI_Send(sendbuf, count, MPI_INT, 1, tag, comm);
        MPI_Recv(recvbuf, count, MPI_INT, 1, tag, comm, &status);
    } else if (my_rank == 1) {
        MPI_Send(sendbuf, count, MPI_INT, 0, tag, comm);
        MPI_Recv(recvbuf, count, MPI_INT, 0, tag, comm, &status);
    }

The communication completes correctly if the messages in sendbuf are stored in a system buffer. See the MPI_Sendrecv command for data exchange.
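MPI_Sendrecv performs a send and a receive in a single call and lets the implementation schedule both safely, so the exchange above cannot deadlock. A minimal sketch (buffer and variable names as in the fragments above):

    int partner = (my_rank == 0) ? 1 : 0;
    /* send sendbuf to the partner and receive its data into recvbuf */
    MPI_Sendrecv(sendbuf, count, MPI_INT, partner, tag,
                 recvbuf, count, MPI_INT, partner, tag,
                 comm, &status);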
29 Non-blocking communication

Blocking communication often leads to poor usage of compute resources (in particular as today's hardware may have processors dedicated to communication). A message may be sent long after the receive (MPI_Recv) has been issued, leaving the receiving process idling.

Non-blocking communication comes to our rescue. Non-blocking send (MPI_Isend) and receive (MPI_Irecv) routines return almost immediately. They do not wait for any communication events to complete, such as message copying from user memory to system buffer space or the actual arrival of the message.
30 Non-blocking communication (cont.)

Non-blocking operations simply request that the MPI library perform the operation when it is able; the user cannot predict when that will happen. It is unsafe to modify the application buffer before the requested non-blocking operation has actually completed. There are functions for checking this.

Non-blocking communication is primarily used to overlap computation with communication and exploit possible performance gains (latency hiding).
31 Example of a non-blocking receive

Non-blocking routines have an additional output parameter, called a request. It allows checking whether a message actually has been received, i.e., has been copied into memory.

    int MPI_Irecv(...
                  MPI_Comm     communicator /* in */,
                  MPI_Request* request      /* out */)

    int MPI_Wait(MPI_Request* request /* in/out */,
                 MPI_Status*  status  /* out */)

The MPI_Wait call blocks. The parameter status holds the same information as MPI_Recv would provide. MPI_Test does not block.
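A sketch of latency hiding with a non-blocking receive (do_local_work() is a hypothetical placeholder for computation that does not touch recvbuf):

    MPI_Request request;
    MPI_Status  status;

    /* post the receive early */
    MPI_Irecv(recvbuf, count, MPI_FLOAT, source, tag,
              MPI_COMM_WORLD, &request);

    do_local_work();   /* overlap: compute while the message is in flight */

    /* block only when the data is actually needed */
    MPI_Wait(&request, &status);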
32 Collective communication

In collective communication, all processes participate in the communication. Let's assume that process 0 reads some input data that it needs to make available to all other processes in the group. We know how process 0 could proceed:

    for (dest = 1; dest < p; dest++) {
        MPI_Send(data, 10, MPI_INT, dest, tag, MPI_COMM_WORLD);
    }

In this approach p-1 messages are sent, all with the same sender. We know that there are more elegant (and in general more efficient) ways to do the above by a tree-structured algorithm (see slide 39).
33 Broadcast

A broadcast implements the tree-structured algorithm:

    int MPI_Bcast(void*        message      /* in/out */,
                  int          count        /* in */,
                  MPI_Datatype datatype     /* in */,
                  int          root         /* in */,
                  MPI_Comm     communicator /* in */)

The above example becomes:

    MPI_Bcast(data, 10, MPI_INT, 0, MPI_COMM_WORLD);

Here, the number 0 is the rank of the source process. By issuing MPI_Bcast the message data is sent from the source process to all other processes in the communicator. All processes execute the same statement, so data is input data in the source process and output data otherwise.

Notice that there is no tag! (The reason is history: broadcasts have been used for synchronization.)
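For example, the input-distribution scenario from the previous slide might be written as follows (a sketch; read_input() is a hypothetical routine, not from the slides):

    int data[10];
    if (my_rank == 0)
        read_input(data);   /* only the root has the data so far */
    /* every process, root included, executes the same call */
    MPI_Bcast(data, 10, MPI_INT, 0, MPI_COMM_WORLD);
    /* afterwards all processes hold data[0..9] */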
34 Reduction

Like OpenMP, MPI provides a reduction function (that uses a tree-structured algorithm):

    int MPI_Reduce(void*        operand  /* in */,
                   void*        result   /* out */,
                   int          count    /* in */,
                   MPI_Datatype datatype /* in */,
                   MPI_Op       operator /* in */,
                   int          root     /* in */,
                   MPI_Comm     comm     /* in */)

MPI_Reduce combines the operands stored in memory location operand and stores the result in *result in process root. Both operand and result refer to count memory locations with data type datatype. MPI_Reduce must be called by all processes in the communicator comm, and count, datatype, operator, and root must be the same in each invocation.
35 Reduction (cont.)

    Operation name   Meaning
    MPI_MAX          Maximum
    MPI_MIN          Minimum
    MPI_SUM          Sum
    MPI_PROD         Product
    MPI_LAND         Logical and
    MPI_BAND         Bitwise and
    MPI_LOR          Logical or
    MPI_BOR          Bitwise or
    MPI_LXOR         Logical exclusive or
    MPI_BXOR         Bitwise exclusive or
    MPI_MAXLOC       Maximum and location of maximum
    MPI_MINLOC       Minimum and location of minimum
36 Example: Dot product

Parallel inner (dot) product with block distribution of the data.

    float Serial_dot(
            float x[] /* in */,
            float y[] /* in */,
            int   n   /* in */) {
        int i;
        float sum = 0.0;
        for (i = 0; i < n; i++)
            sum = sum + x[i]*y[i];
        return sum;
    }
37 Example: Dot product (cont.)

    float Parallel_dot(
            float local_x[] /* in */,
            float local_y[] /* in */,
            int   n_bar     /* in */) {
        float local_dot;
        float dot = 0.0;
        float Serial_dot(float x[], float y[], int m);

        local_dot = Serial_dot(local_x, local_y, n_bar);
        MPI_Reduce(&local_dot, &dot, 1, MPI_FLOAT,
                   MPI_SUM, 0, MPI_COMM_WORLD);
        return dot;
    } /* Parallel_dot */
38 Allreduce

The function MPI_Allreduce is used in the same way as MPI_Reduce. However, in contrast to MPI_Reduce, all processes get the result. Thus, the parameter root is not needed.

    int MPI_Allreduce(void*        operand  /* in */,
                      void*        result   /* out */,
                      int          count    /* in */,
                      MPI_Datatype datatype /* in */,
                      MPI_Op       operator /* in */,
                      MPI_Comm     comm     /* in */)
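With MPI_Allreduce, the dot product from the previous slides can return the global result on every process; a sketch of the modified function (the name Parallel_dot_all is ours, not from the slides):

    float Parallel_dot_all(
            float local_x[] /* in */,
            float local_y[] /* in */,
            int   n_bar     /* in */) {
        float local_dot, dot;
        local_dot = Serial_dot(local_x, local_y, n_bar);
        /* no root argument: every process obtains the full sum */
        MPI_Allreduce(&local_dot, &dot, 1, MPI_FLOAT,
                      MPI_SUM, MPI_COMM_WORLD);
        return dot;
    }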
39 Example: Dot product (cont.)

Reduction algorithm for p = 8 (cf. hypercube in d = 3 dimensions).

The reduction takes $d = \log_2 p$ steps. In step $i$ ($i = 0, \ldots, d-1$), processor $(k+1)2^i$ (for $k = 0, 2, \ldots, p/2^i - 2$) sends a message containing its partial sum $\sigma_{k+1}^i$ to processor $k \cdot 2^i$. Processor $k \cdot 2^i$ then computes $\sigma_k^{i+1} = \sigma_k^i + \sigma_{k+1}^i$. The desired result is $\sigma_0^d = x^T y$.
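For illustration, the tree reduction could be hand-coded with point-to-point calls as follows (a sketch, assuming p is a power of two and every rank starts with its partial sum in local_sum; in practice MPI_Reduce does this for you):

    float sigma = local_sum;   /* this rank's partial sum */
    float incoming;
    MPI_Status status;
    int step;

    for (step = 1; step < p; step *= 2) {        /* step = 2^i */
        if (my_rank % (2*step) == 0) {           /* receiving partner */
            MPI_Recv(&incoming, 1, MPI_FLOAT, my_rank + step, 0,
                     MPI_COMM_WORLD, &status);
            sigma += incoming;
        } else if (my_rank % (2*step) == step) { /* sending partner */
            MPI_Send(&sigma, 1, MPI_FLOAT, my_rank - step, 0,
                     MPI_COMM_WORLD);
            break;                               /* this rank is done */
        }
    }
    /* after log2(p) steps, rank 0 holds the total in sigma */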
40 Example: Dot product (cont.)

[Figure: the hypercube view of the reduction algorithm]

Question: the result of the reduction is known to node 0. What if all the nodes need to know the result?
41 Communication cost

Communication often means overhead when executing parallel programs. The time for transferring a message between two processors is called the communication latency. Most of the time it has the form

    $t_{comm} = t_{startup} + l_{msg} \cdot t_{word}$

where $t_{startup}$ is the time for preparing a message by the sending process, $l_{msg}$ is the length of the message, and $t_{word}$ is the per-word transfer time (the inverse of the bandwidth of the network).

In static networks there is an additional term depending on the number of intermediate nodes (hops); this effect is in general negligible. There are two routing strategies in static networks: store-and-forward routing and cut-through routing (pipelining).
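For illustration only (the numbers are assumptions, not measurements from the lecture): with $t_{startup} = 10\,\mu s$ and $t_{word} = 10\,ns$, a message of $l_{msg} = 100$ words costs about $10\,\mu s + 1\,\mu s = 11\,\mu s$; the startup term dominates for short messages, which is why one long message is preferable to many short ones.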
42 Speedup analysis for the dot product

For simplicity we assume $p = 2^q$, i.e., $q = \log_2 p$, $q \in \mathbb{N}$.

    $T(p) = \left(2\tfrac{n}{p} - 1\right) t_{flop} + \log_2 p \left(t_{startup} + 1 \cdot t_{word} + 1 \cdot t_{flop}\right)$

    $S(p) = \frac{T(1)}{T(p)} = \frac{(2n-1)\,t_{flop}}{\left(2\tfrac{n}{p} - 1\right) t_{flop} + \log_2 p \left(t_{startup} + t_{word} + t_{flop}\right)}$

    $\qquad = \frac{p}{\frac{p\,(2n/p + q - 1)}{2n-1} + \frac{p\,q\,(t_{startup} + t_{word})}{(2n-1)\,t_{flop}}} \;\overset{n \gg p}{\approx}\; \frac{p}{1 + \frac{p \log_2 p}{2n} \cdot \frac{t_{startup} + t_{word}}{t_{flop}}}$
43 Speedup analysis for the dot product (cont.)

    $S(p) \approx \frac{p}{1 + \underbrace{p \log_2 p}_{\text{algorithm}} \cdot \underbrace{\frac{1}{2n}}_{\text{problem size}} \cdot \underbrace{\frac{t_{startup}}{t_{flop}}}_{\text{hardware}}}$

    $E(p) = \frac{S(p)}{p} \approx \frac{1}{1 + \frac{p \log_2 p}{2n} \cdot \frac{t_{startup} + t_{word}}{t_{flop}}}$

What happens if p is not an (integer) power of 2?

The isoefficiency function is $f(p) = C\,p \log_2 p$.
44 Matrix-vector multiplication

Let's consider what ingredients we need to do a matrix-vector multiplication, $y = Ax$. For simplicity we assume that A is square of order n. Then

    $y_j = \sum_{i=0}^{n-1} a_{ji} x_i, \qquad 0 \le j < n.$

In OpenMP we could parallelize this by

    #pragma omp parallel for schedule(static)
    for (j = 0; j < N; j++) {
        y[j] = 0.0;
        for (i = 0; i < N; i++)
            y[j] = y[j] + a[j][i] * x[i];
    }
45 Example: Matrix-vector multiplication (cont.)

As the outermost loop is parallelized and we access the matrix row-wise, we can visualize the matrix-vector product as follows.

[Figure: row-block (panel) distribution of A, y, and x across the processes]

This is not quite correct, as all processes access all of x.
46 Example: Matrix-vector multiplication (cont.)

How can we do this in a distributed memory environment? Let us assume that the matrix A and the vectors x and y are distributed in the block-wise (or panel) distribution as displayed on the previous slide. Let

    $x^k \in \mathbb{R}^m, \quad y^k \in \mathbb{R}^m, \quad A^k \in \mathbb{R}^{m \times n}, \qquad m = \frac{n}{p}, \quad 0 \le k < p,$

be the portions of x, y, and A, respectively, stored in process k (usually on processor k). Then

    $y^k = A^k x.$

Thus, each element of the vector y is the result of the inner product of a row of A with the vector x. In order to form the inner product of each row of A with x, we either have to gather all of x onto each process or we have to scatter each (block-)row of A across the processes. (In our previous OpenMP code the former was done.)
47 Example: Matrix-vector multiplication (cont.)

[Figure: gathering the pieces of the distributed vector x onto each process]
48 Example: Matrix-vector multiplication (cont.)

In MPI, gathering vector x on process 0 can be done by the call

    /* Space allocated in calling program */
    float local_x[];    /* local storage for x */
    float global_x[];   /* storage for all of x */

    /* Assumes that p divides n */
    MPI_Gather(local_x, n/p, MPI_FLOAT,
               global_x, n/p, MPI_FLOAT,
               0, MPI_COMM_WORLD);

The syntax is

    int MPI_Gather(void*        send_data  /* in */,
                   int          send_count /* in */,
                   MPI_Datatype send_type  /* in */,
                   void*        recv_data  /* out */,
                   int          recv_count /* in */,
                   MPI_Datatype recv_type  /* in */,
                   int          dest       /* in */,
                   MPI_Comm     comm       /* in */)
49 Example: Matrix-vector multiplication (cont.)

Collecting all the pieces of a distributed vector on a single processor (and arranging the pieces in the correct order) is called a gather operation in MPI. Doing this on all processes is an allgather. That is evidently what we need in the matrix-vector multiplication. MPI provides MPI_Allgather to that end.

    int MPI_Allgather(void*        send_data  /* in */,
                      int          send_count /* in */,
                      MPI_Datatype send_type  /* in */,
                      void*        recv_data  /* out */,
                      int          recv_count /* in */,
                      MPI_Datatype recv_type  /* in */,
                      MPI_Comm     comm       /* in */)
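Put together, the row-block matrix-vector product might look like the following sketch (layout and names are illustrative; assumes p divides n, m = n/p, and local_A holds m rows of length n in row-major order):

    int i, j;
    /* collect the full input vector on every process */
    MPI_Allgather(local_x, m, MPI_FLOAT,
                  global_x, m, MPI_FLOAT, MPI_COMM_WORLD);
    /* multiply the local row block with the full vector */
    for (j = 0; j < m; j++) {
        local_y[j] = 0.0;
        for (i = 0; i < n; i++)
            local_y[j] += local_A[j*n + i] * global_x[i];
    }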
50 Example: Matrix-vector multiplication (cont.)

Remark: the cost of a gather with a vector of length m on p processors is

    $T_{gather}(p)(m) = \sum_{i=0}^{\log_2 p - 1} \left(1 \cdot t_{startup} + 2^i\,m\,t_{word}\right) = q\,t_{startup} + (p-1)\,m\,t_{word} \approx q\,t_{startup} + p\,m\,t_{word}, \qquad \text{if } p = 2^q,$

as the length of the message doubles in each stage of the tree-structured algorithm.
51 Example: Matrix-vector multiplication (cont.)

If A is stored block column-wise, we have

[Figure: column-block distribution of A, x, and y across the processes]

Here, $x^k \in \mathbb{R}^m$, $y^k \in \mathbb{R}^m$, $A^k \in \mathbb{R}^{n \times m}$.
52 Example: Matrix-vector multiplication (cont.)

Formally, we have to do the following:
1. Compute (locally) $y^{(j)} = A^j x^j$, $0 \le j < p$.
2. Reduce $y = \sum_{j=0}^{p-1} y^{(j)}$ with root process 0.
3. Scatter y on the p processes.

The last two items can be combined by a call to MPI_Reduce_scatter (see next slide).

Remark: the cost of a reduce-scatter with vector pieces of length m on p processors is

    $T_{reduce\_scatter}(p)(m) = \log_2(p)\,t_{startup} + 2^{\log_2(p)}\,m\,t_{word} = q\,t_{startup} + p\,m\,t_{word}, \qquad \text{if } p = 2^q.$

Here, we have neglected the arithmetic operations in the reduction.
53 Example: Matrix-vector multiplication (cont.)

[Content lost in transcription: presumably the MPI_Reduce_scatter interface referenced on the previous slide]
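MPI_Reduce_scatter takes the arguments (sendbuf, recvbuf, recvcounts, datatype, op, comm). Steps 2 and 3 combined might then look like the following sketch (names are illustrative; y_partial holds this process's full-length partial result $y^{(j)}$, and my_y receives the n/p entries this process keeps):

    int k;
    int* recv_counts = (int*) malloc(p * sizeof(int));  /* needs <stdlib.h> */
    for (k = 0; k < p; k++)
        recv_counts[k] = n / p;   /* equal pieces; assumes p divides n */
    /* element-wise sum of all y_partial, scattered in pieces */
    MPI_Reduce_scatter(y_partial, my_y, recv_counts,
                       MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
    free(recv_counts);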
54 Example: Matrix-vector multiplication (cont.)

If A is stored in blocks of size $m \times m$, we combine the two algorithms. For convenience, we assume that $p = q^2$, $q \in \mathbb{N}$, and identify the process with rank k by

    $(j, l) = (\lfloor k/q \rfloor,\; k \bmod q).$
55 Example: Matrix-vector multiplication (cont.)

Life is easier if we split x and y into chunks of length n/q and store these in the diagonal processes (i, i). Then $x_i \in \mathbb{R}^m$, $y_j \in \mathbb{R}^m$, $A_{j,i} \in \mathbb{R}^{m \times m}$, and

    $y_j = \sum_{i=0}^{q-1} A_{j,i}\, x_i = \sum_{i=0}^{q-1} y_i^{(j)}, \qquad q = \sqrt{p},$

where $A_{j,i}$ denotes the block of A on process (j, i), i.e., the process with rank $jq + i$. The algorithm proceeds as follows:
1. Broadcast $x_c$ along column c.
2. Local computation: $y_c^{(r)} = A_{r,c}\, x_c$.
3. Reduce along row r: $y^{(r)} = \sum_{c=0}^{q-1} y_c^{(r)}$ on the diagonal process (r, r).
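The column broadcast and row reduction are convenient to express with sub-communicators created by MPI_Comm_split; a sketch (assumes $p = q^2$ and the (j, l) = (k/q, k mod q) numbering from above; x_chunk, of length m = n/q, is an illustrative name):

    MPI_Comm row_comm, col_comm;
    int j = my_rank / q;   /* row index of this process */
    int l = my_rank % q;   /* column index of this process */

    /* same color = same sub-communicator; key orders the ranks */
    MPI_Comm_split(MPI_COMM_WORLD, j, l, &row_comm);  /* row j */
    MPI_Comm_split(MPI_COMM_WORLD, l, j, &col_comm);  /* column l */

    /* step 1: broadcast x_l along column l; within col_comm the
       diagonal process (l,l) has rank l, so it is the root */
    MPI_Bcast(x_chunk, m, MPI_FLOAT, l, col_comm);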
56 Complexity of matrix-vector multiplication

A stored block-wise ($p = q^2$):

    $T_{MxV} = \frac{2n^2}{p}\,t_{flop} + T_{bcast}(q)\!\left(\frac{n}{q}\right) + T_{reduce}(q)\!\left(\frac{n}{q}\right) = \frac{2n^2}{p}\,t_{flop} + \log_2 p \cdot t_{startup} + 2n\,t_{word}.$

Remark: note that $\log_2 p = 2 \log_2 q$.
Remark: here we have assumed that p is a power of 2; otherwise the complexity rises by at most a factor of 2.
Remark: the complexity of all three matrix-vector multiplications is (approximately) equal.
57 Speedup and efficiency

The speedup of these algorithms is

    $S(p) = \frac{p}{1 + \frac{p \log_2 p}{2n^2} \cdot \frac{t_{startup}}{t_{flop}} + \frac{p}{2n} \cdot \frac{t_{word}}{t_{flop}}}.$

Recall that efficiency is defined as $E(p) = S(p)/p$. Thus, iso-efficiency holds if $n \propto p$. The algorithm is perfectly scalable.