Introduction to the Message Passing Interface (MPI)

CPS343 Parallel and High Performance Computing, Spring 2018

Outline

1 The Message Passing Interface
  - What is MPI?
  - MPI Examples
  - Compiling and running MPI programs
  - Error handling in MPI programs
  - MPI communication, datatypes, and tags

Acknowledgements

Some material used in creating these slides comes from:
- MPI Programming Model: Desert Islands Analogy by Henry Neeman, University of Oklahoma Supercomputing Center.
- An Introduction to MPI by William Gropp and Ewing Lusk, Argonne National Laboratory.


What is MPI?

MPI stands for Message Passing Interface and is a library specification for message passing, proposed as a standard by a broadly based committee of vendors, implementors, and users.

MPI consists of
1 a header file mpi.h,
2 a library of routines and functions, and
3 a runtime system.

- MPI is for parallel computers, clusters, and heterogeneous networks.
- MPI is full-featured.
- MPI is designed to provide access to advanced parallel hardware for end users, library writers, and tool developers.
- MPI can be used with C/C++, Fortran, and many other languages.

MPI is an API

MPI is actually just an Application Programming Interface (API). As such, MPI
- specifies what a call to each routine should look like and how each routine should behave,
- does not specify how each routine should be implemented, and is sometimes intentionally vague about certain aspects of a routine's behavior,
- has implementations that are often platform or vendor specific, and
- has multiple open-source and proprietary implementations.

Example MPI routines

The following routines are found in nearly every program that uses MPI:
- MPI_Init() starts the MPI runtime environment.
- MPI_Finalize() shuts down the MPI runtime environment.
- MPI_Comm_size() gets the number of processes, N_p.
- MPI_Comm_rank() gets the process ID of the current process, which is between 0 and N_p - 1, inclusive.

(These last two routines are typically called right after MPI_Init().)

More example MPI routines

Some of the simplest and most common communication routines are:
- MPI_Send() sends a message from the current process to another process (the destination).
- MPI_Recv() receives a message on the current process from another process (the source).
- MPI_Bcast() broadcasts a message from one process to all of the others.
- MPI_Reduce() performs a reduction (e.g. a global sum, maximum, etc.) of a variable in all processes, with the result ending up in a single process.
- MPI_Allreduce() performs a reduction of a variable in all processes, with the result ending up in all processes.


MPI Hello world: hello.c

#include <stdio.h>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    int rank;
    int number_of_processes;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    printf( "hello from process %d of %d\n", rank, number_of_processes );

    MPI_Finalize();

    return 0;
}

MPI Hello world output

Running the program produces the output

hello from process 3 of 8
hello from process 0 of 8
hello from process 1 of 8
hello from process 7 of 8
hello from process 2 of 8
hello from process 5 of 8
hello from process 6 of 8
hello from process 4 of 8

Note:
- All MPI processes (normally) run the same executable.
- Each MPI process knows which rank it is.
- Each MPI process knows how many processes are part of the same job.
- The processes run in a non-deterministic order.

Communicators

Recall the MPI initialization sequence:

MPI_Init( &argc, &argv );
MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );

MPI uses communicators to organize how processes communicate with each other. A single communicator, MPI_COMM_WORLD, is created by MPI_Init(), and all the processes running the program have access to it.

Note that process ranks are relative to a communicator. A program may have multiple communicators; if so, a process may have multiple ranks, one for each communicator it is associated with.
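To make the "multiple communicators, multiple ranks" idea concrete, here is a minimal sketch (not from the original slides) that uses MPI_Comm_split() to divide MPI_COMM_WORLD into two sub-communicators by rank parity; each process then has one rank in MPI_COMM_WORLD and another in its sub-communicator.

/* Sketch: splitting MPI_COMM_WORLD into sub-communicators (illustrative only). */
#include <stdio.h>
#include <mpi.h>

int main( int argc, char* argv[] )
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &world_rank );

    /* Processes with the same "color" (here: rank parity) end up in the same
       new communicator; ranks within it are ordered by the "key" argument. */
    MPI_Comm_split( MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm );
    MPI_Comm_rank( sub_comm, &sub_rank );

    printf( "world rank %d has rank %d in its sub-communicator\n",
            world_rank, sub_rank );

    MPI_Comm_free( &sub_comm );
    MPI_Finalize();
    return 0;
}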

MPI is (usually) SPMD

Usually MPI is run in SPMD (Single Program, Multiple Data) mode. (It is possible to run multiple programs, i.e. MPMD.) The program can use its rank to determine its role:

const int SERVER_RANK = 0;

if ( rank == SERVER_RANK )
{
    /* do server stuff */
}
else
{
    /* do compute node stuff */
}

As shown here, often the rank 0 process plays the role of server or process coordinator.

A second MPI program: greeting.c

The next several slides show the source code for an MPI program that works on a client-server model.

When the program starts, it initializes the MPI system and then determines whether it is the server process (rank 0) or a client process.

Each client process constructs a string message and sends it to the server. The server receives and displays messages from the clients one by one.

greeting.c: main

#include <stdio.h>
#include <mpi.h>

const int SERVER_RANK = 0;
const int MESSAGE_TAG = 0;

/* forward declarations for the functions shown on the next slides */
void do_server_work( int number_of_processes );
void do_client_work( int rank );

int main( int argc, char* argv[] )
{
    int rank, number_of_processes;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if ( rank == SERVER_RANK )
        do_server_work( number_of_processes );
    else
        do_client_work( rank );

    MPI_Finalize();

    return 0;
}

greeting.c: server

void do_server_work( int number_of_processes )
{
    const int max_message_length = 256;
    char message[ max_message_length ];
    int src;
    MPI_Status status;

    for ( src = 0; src < number_of_processes; src++ )
    {
        if ( src != SERVER_RANK )
        {
            MPI_Recv( message, max_message_length, MPI_CHAR, src,
                      MESSAGE_TAG, MPI_COMM_WORLD, &status );
            printf( "Received: %s\n", message );
        }
    }
}

greeting.c: client

void do_client_work( int rank )
{
    const int max_message_length = 256;
    char message[ max_message_length ];
    int message_length;

    message_length = sprintf( message, "Greetings from process %d", rank );
    message_length++; /* add one for null char */

    MPI_Send( message, message_length, MPI_CHAR, SERVER_RANK,
              MESSAGE_TAG, MPI_COMM_WORLD );
}


Compiling an MPI program

Compiling a program for MPI is almost just like compiling a regular C or C++ program.

Compiler front-ends have been defined that know where the mpi.h file and the proper libraries are found. The C compiler is mpicc and the C++ compiler is mpic++.

For example, to compile hello.c you would use a command like

mpicc -O2 -o hello hello.c

Running an MPI program

Here is a sample session compiling and running the program greeting.c:

$ mpicc -O2 -o greeting greeting.c
$ mpiexec -n 1 greeting
$ mpiexec -n 2 greeting
Greetings from process 1
$ mpiexec -n 4 greeting
Greetings from process 1
Greetings from process 2
Greetings from process 3

Note:
- The server process (rank 0) does not send a message, but does display the contents of messages received from the other processes.
- mpirun can be used rather than mpiexec.
- The arguments to mpiexec vary between MPI implementations.
- mpiexec (or mpirun) may not be available.

Deterministic operation?

You may have noticed that in the four-process case the greeting messages were printed out in order. Does this mean that the order the messages were sent is deterministic?

Look again at the loop that carries out the server's work:

for ( src = 0; src < number_of_processes; src++ )
{
    if ( src != SERVER_RANK )
    {
        MPI_Recv( message, max_message_length, MPI_CHAR, src,
                  MESSAGE_TAG, MPI_COMM_WORLD, &status );
        printf( "Received: %s\n", message );
    }
}

The server loops over values of src from 0 to number_of_processes - 1, skipping the server's own rank. The fourth argument to MPI_Recv() is the rank of the process the message is received from. Here the messages are received in increasing rank order, regardless of the sending order.

Non-deterministic receive order

By making one small change, we can allow the messages to be received in any order. The constant MPI_ANY_SOURCE can be used in the MPI_Recv() function to indicate that the next available message with the correct tag should be read, regardless of its source.

for ( src = 0; src < number_of_processes; src++ )
{
    if ( src != SERVER_RANK )
    {
        MPI_Recv( message, max_message_length, MPI_CHAR, MPI_ANY_SOURCE,
                  MESSAGE_TAG, MPI_COMM_WORLD, &status );
        printf( "Received: %s\n", message );
    }
}

Note: it is possible to use the data returned in status to determine the message's source.
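As a small illustration of that last note (an adaptation of the receive above, not part of the original program), the fields of the MPI_Status object can be inspected after the call to see where the message actually came from and how long it was:

MPI_Recv( message, max_message_length, MPI_CHAR, MPI_ANY_SOURCE,
          MESSAGE_TAG, MPI_COMM_WORLD, &status );

/* The status object records the actual source (and tag) of the message. */
int actual_source = status.MPI_SOURCE;

/* MPI_Get_count() reports how many elements of the given type arrived. */
int received_length;
MPI_Get_count( &status, MPI_CHAR, &received_length );

printf( "Received %d chars from process %d: %s\n",
        received_length, actual_source, message );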


MPI function return values

Nearly all MPI functions return an integer status code: MPI_SUCCESS if the function completed without error; otherwise an error code is returned.

Most examples you find on the web and in textbooks do not check the MPI function return status value... but this should be done in production code. It can certainly help to avoid errors during development.

Sample MPI error handler

Here is a sample MPI error handler. If called with a non-successful status value, it displays a string corresponding to the error and then exits; all errors are treated as fatal.

#include <stdio.h>   /* for fprintf() */
#include <stdlib.h>  /* for exit() and EXIT_FAILURE */
#include <mpi.h>

void mpi_check_status( int mpi_status )
{
    if ( mpi_status != MPI_SUCCESS )
    {
        int len;
        char err_string[ MPI_MAX_ERROR_STRING ];
        MPI_Error_string( mpi_status, err_string, &len );
        fprintf( stderr, "MPI Error: (%d) %s\n", mpi_status, err_string );
        exit( EXIT_FAILURE );
    }
}
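One way to use such a handler (a sketch, not shown on the original slide; argc, argv, rank, and number_of_processes are assumed to be declared as in hello.c) is to wrap each MPI call so its return code is checked immediately:

/* Illustrative usage: every MPI call's return value goes through the handler. */
mpi_check_status( MPI_Init( &argc, &argv ) );
mpi_check_status( MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes ) );
mpi_check_status( MPI_Comm_rank( MPI_COMM_WORLD, &rank ) );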


MPI_Send()

The calling sequence for MPI_Send() is

int MPI_Send(
    void*        buf,      /* pointer to send buffer      */
    int          count,    /* number of items to send     */
    MPI_Datatype datatype, /* datatype of buffer elements */
    int          dest,     /* rank of destination process */
    int          tag,      /* message type identifier     */
    MPI_Comm     comm )    /* MPI communicator to use     */

The MPI_Send() function initiates a blocking send. Here blocking does not indicate that the sender waits for the message to be received, but rather that the sender waits for the message to be accepted by the MPI system. It does mean that once this function returns, the send buffer may be changed without affecting the send operation.

MPI_Recv()

The calling sequence for MPI_Recv() is

int MPI_Recv(
    void*        buf,      /* pointer to receive buffer          */
    int          count,    /* maximum number of items to receive */
    MPI_Datatype datatype, /* datatype of buffer elements        */
    int          source,   /* rank of sending process            */
    int          tag,      /* message type identifier            */
    MPI_Comm     comm,     /* MPI communicator to use            */
    MPI_Status*  status )  /* MPI status object                  */

The MPI_Recv() function initiates a blocking receive. It will not return to its caller until a message with the specified tag is received from the specified source.

- MPI_ANY_SOURCE may be used to indicate the message should be accepted from any source.
- MPI_ANY_TAG may be used to indicate the message should be accepted regardless of its tag.

MPI Datatypes

Here is a list of the most commonly used MPI datatypes. There are others, and users can construct their own datatypes to handle special situations.

C/C++ datatype    MPI datatype
char              MPI_CHAR
int               MPI_INT
float             MPI_FLOAT
double            MPI_DOUBLE

The count argument in both MPI_Send() and MPI_Recv() specifies the number of elements of the indicated type in the buffer, not the number of bytes in the buffer.
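For example (a small sketch, not from the slides; dest and tag are assumed to be defined elsewhere), sending an array of 100 doubles uses a count of 100, not 100 * sizeof(double):

double values[100];
/* ... fill values with data ... */

/* count = 100 elements of type MPI_DOUBLE, not a byte count */
MPI_Send( values, 100, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD );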

MPI Tags

MPI uses tags to identify messages. Why is this necessary? Isn't just knowing the source or destination sufficient?

Often processes need to communicate different types of messages. For example, one message might contain a domain specification and another message might contain domain data.

Some things to keep in mind about tags:
- Message tags can be any integer.
- Use const int to name tags; it's poor programming style to use numbers for tags in send and receive function calls.
- Some simple programs (like greeting.c) can use a single tag.
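A hedged sketch of this convention (the tag names, values, buffers, and counts below are made up for illustration, not taken from the slides): naming tags with const int makes clear which kind of message each call carries.

/* Hypothetical named tags for two different kinds of messages. */
const int DOMAIN_SPEC_TAG = 1;
const int DOMAIN_DATA_TAG = 2;

/* The receiver can distinguish the message types by matching on the tag. */
MPI_Send( spec,   spec_count, MPI_INT,    dest, DOMAIN_SPEC_TAG, MPI_COMM_WORLD );
MPI_Send( values, data_count, MPI_DOUBLE, dest, DOMAIN_DATA_TAG, MPI_COMM_WORLD );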

Three more communication functions

Send and Receive carry out point-to-point communication: one process sending a message to one other process. MPI supports many other communication functions. Three others that we'll examine today are

1 MPI_Bcast(): a single process sends a message to all processes.
2 MPI_Reduce(): data values in all processes are reduced via some operation (e.g. sum, max, etc.) to a single value on a single process.
3 MPI_Allreduce(): data values in all processes are reduced via some operation to a single value available on all processes.

These are examples of collective communication.

MPI_Bcast()

The calling sequence for MPI_Bcast() is

int MPI_Bcast(
    void*        buf,      /* pointer to send buffer      */
    int          count,    /* number of items to send     */
    MPI_Datatype datatype, /* datatype of buffer elements */
    int          root,     /* rank of broadcast root      */
    MPI_Comm     comm )    /* MPI communicator to use     */

- All processes call MPI_Bcast().
- The data pointed to by buf in the process with rank root is sent to all other processes.
- Upon return, the data pointed to by buf in each process is the same.

MPI_Bcast() example

What will be displayed by the code segment below when run on 4 processors?

MPI_Comm_size( MPI_COMM_WORLD, &number_of_processes );
MPI_Comm_rank( MPI_COMM_WORLD, &rank );

data = 111 * rank;
printf( "Before : process %d has data %03d\n", rank, data );

root = 6 % number_of_processes;
MPI_Bcast( &data, 1, MPI_INT, root, MPI_COMM_WORLD );

printf( "After  : process %d has data %03d\n", rank, data );

MPI_Bcast() example output

Here is some example output when run on 4 processors:

Before : process 1 has data 111
Before : process 3 has data 333
After  : process 3 has data 222
After  : process 1 has data 222
Before : process 0 has data 000
After  : process 0 has data 222
Before : process 2 has data 222
After  : process 2 has data 222

Of course, the statements could appear in any order. We see that the data from the process with rank 2 (since root = 6 % 4 = 2) was broadcast to all processes.

MPI_Reduce()

The calling sequence for MPI_Reduce() is

int MPI_Reduce(
    void*        sendbuf,  /* pointer to send buffer      */
    void*        recvbuf,  /* pointer to receive buffer   */
    int          count,    /* number of items to send     */
    MPI_Datatype datatype, /* datatype of buffer elements */
    MPI_Op       op,       /* reduce operation            */
    int          root,     /* rank of root process        */
    MPI_Comm     comm )    /* MPI communicator to use     */

The contents of sendbuf on all processes are combined via the reduction operation specified by op (e.g. MPI_SUM, MPI_MAX, etc.), with the result being placed in recvbuf on the process with rank root. The contents of sendbuf on all processes are unchanged.

MPI_Allreduce()

The calling sequence for MPI_Allreduce() is

int MPI_Allreduce(
    void*        sendbuf,  /* pointer to send buffer      */
    void*        recvbuf,  /* pointer to receive buffer   */
    int          count,    /* number of items to send     */
    MPI_Datatype datatype, /* datatype of buffer elements */
    MPI_Op       op,       /* reduce operation            */
    MPI_Comm     comm )    /* MPI communicator to use     */

The contents of sendbuf on all processes are combined via the reduction operation specified by op (e.g. MPI_SUM, MPI_MAX, etc.), with the result being placed in recvbuf on all processes.

MPI_Reduce() and MPI_Allreduce() example

What will be displayed by the code segment below when run on 4 processors?

const int MASTER_RANK = 0;

sum = 0;
MPI_Reduce( &rank, &sum, 1, MPI_INT, MPI_SUM, MASTER_RANK, MPI_COMM_WORLD );
printf( "Reduce    : process %d has %3d\n", rank, sum );

sum = 0;
MPI_Allreduce( &rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );
printf( "Allreduce : process %d has %3d\n", rank, sum );

MPI_Reduce() and MPI_Allreduce() example output

Here is some example output when run on 4 processors:

Reduce    : process 1 has   0
Reduce    : process 2 has   0
Reduce    : process 3 has   0
Reduce    : process 0 has   6
Allreduce : process 1 has   6
Allreduce : process 2 has   6
Allreduce : process 0 has   6
Allreduce : process 3 has   6

As in the last example, the statements could appear in any order.