
Introduction to MPI
SHARCNET MPI Lecture Series: Part I of II

Paul Preney, OCT, M.Sc., B.Ed., B.Sc.
preney@sharcnet.ca
School of Computer Science, University of Windsor
Windsor, Ontario, Canada

Copyright 2015 Paul Preney. All Rights Reserved.
November 11, 2015

Presentation Overview

1. Overview
2. An MPI Hello World!
3. Sending and Receiving Messages
4. Demonstrations & Questions
5. References

Table of Contents

1. Overview
   What is MPI?
   Choices Other Than MPI?
   Some Terms
   The MPI Parallel Programming Model
   Important Parallel Programming Goals

What is MPI?

The Message-Passing Interface:
- is a de facto standard dating back to 1994, [1]
- is one of many ways to write code that executes in parallel,
- has language bindings for Fortran and C (NOTE: the C++ language bindings were removed in MPI v3.0 [2, 16.2, p. 596]), and
- enables compute nodes to efficiently pass messages to one another.

Choices Other Than MPI?

If you don't need to use MPI, you may also want to consider these alternatives (each with its own pros and cons):

OpenCL (CPUs and/or GPUs)
- Devices can be CPUs or GPUs; OpenCL C code is deployed to each device.
- Can achieve high performance, but is more difficult to program in.
- OpenCL C code is not the same as traditional C, C++, or Fortran code.

GPU-specific: OpenACC (open), CUDA (proprietary to NVIDIA)
- More difficult to program.
- Code is not the same as traditional C, C++, or Fortran code.

Choices Other Than MPI? (con't)

OpenMP (CPUs)
- Already have C, C++, or Fortran code? OpenMP makes it relatively easy to convert specific constructs (e.g., for loops) to use threads.
- Normally runs on CPUs.

pthreads, C11 threads, or C++11 threads (CPUs)
- Traditional C, C++, or Fortran code using threads.
- Code will be portable, but is only deployed on CPUs.

If you have questions or require assistance choosing an appropriate technology, please do ask! :-)

Some Terms

distributed: Resources that are spread out over a number of compute nodes (which may or may not be separate computers).
message: A POD or an array of PODs.
POD: Plain-old data, i.e., C-style data types, avoiding pointers. No data or function pointers, and no object-oriented types that use inheritance or contain virtual functions (roughly, in C++ terms, nothing can be virtual).
SIMD: Single-Instruction, Multiple-Data (streams).
UE: Unit of execution (e.g., an MPI process).

The MPI Parallel Programming Model

Memory in MPI is assumed to be distributed, not shared. This means you cannot access another UE's data unless that UE sends it to you as a message.

UEs in MPI are processes and should be assumed to be such. MPI v2 has the ability to run each MPI process as a thread, but the MPI software must be configured to run this way by the sysadmin. So you must assume all memory is not shared.

Sending messages to/from MPI processes is not free. If you need to send 10 MB messages to 10 nodes from one node, the sender will need to transmit 100 MB of data. No matter how fast the interconnects are, it takes time to transmit and receive these messages!

Important Parallel Programming Goals

Once you become comfortable with parallel programming, you will find yourself adopting these goals:

Create all data structures / objects so that they are PODs or are easily converted into a POD. Translation: in C++ terms, avoid using virtual inheritance or virtual functions.

The preferred container data structures for holding multiple PODs are a string, an array, std::string (C++), std::array (C++), or std::vector (C++). If you use almost anything else, you will need to convert it to a POD / array of PODs, or you will need to write advanced code to map your object so that it can be sent/received as a message.

Important Parallel Programming Goals (con't)

Think about what you want to do and how you will go about doing it before writing any code!

- It is much harder to write a correct parallel program than to write a correct sequential ("serial") program.
- It is much harder to debug an incorrect parallel program than to debug an incorrect sequential program.

Adopt the noble goal of trying to design a correct solution before writing any code, then write it. This will considerably reduce the difficulty of writing the code.
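
To make the POD goal concrete, here is a minimal sketch, in C, of shipping a plain struct between two processes as raw bytes using MPI_BYTE and the MPI_Send/MPI_Recv calls covered later in this lecture. The Particle struct is hypothetical, and sending raw bytes assumes both processes run on the same architecture (MPI_Type_create_struct is the portable alternative):

#include <stdio.h>
#include <mpi.h>

/* A POD: only C-style data members -- no pointers, nothing virtual. */
struct Particle
{
  double x, y, z;
  double mass;
  int id;
};

int main(int argc, char *argv[])
{
  int procid;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &procid);

  if (procid == 0)
  {
    /* Because Particle is a POD, its bytes can be sent directly
       (assuming a homogeneous cluster)... */
    struct Particle p = { 1.0, 2.0, 3.0, 5.97, 42 };
    MPI_Send(&p, sizeof p, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
  }
  else if (procid == 1)
  {
    struct Particle p;
    MPI_Status status;
    MPI_Recv(&p, sizeof p, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &status);
    printf("Particle %d has mass %lf.\n", p.id, p.mass);
  }

  MPI_Finalize();
}

Run with at least two processes (e.g., mpirun -n 2 ./prog.exe).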

Table of Contents

2. An MPI Hello World!
   Hello World: MPI Calls
   Hello World: Sample Output
   Hello World: The Programs
   Hello World: Compiling
   Hello World: Running
   Hello World: Using SHARCNET's sqsub

Hello World: MPI Calls

MPI Call        Purpose
MPI_INIT        Required to start up and use MPI functionality.
MPI_COMM_RANK   The ID of the current MPI process running.
MPI_COMM_SIZE   The total number of MPI processes running.
MPI_FINALIZE    Required to stop using and shut down MPI.

Hello World: Sample Output

Sample output from the Hello World! programs that will be presented:

$ mpirun -n 4 ./helloworld2-cxx.exe
Hello world! I am process 1 out of 4!
Hello world! I am process 0 out of 4!
Hello world! I am process 2 out of 4!
Hello world! I am process 3 out of 4!
$ mpirun -n 4 ./helloworld2-cxx.exe
Hello world! I am process 1 out of 4!
Hello world! I am process 2 out of 4!
Hello world! I am process 3 out of 4!
Hello world! I am process 0 out of 4!
$

NOTE: The output lines are not in the same order between runs!
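
Why does the order change between runs? Each process prints independently, and the scheduler decides who gets there first. As a hedged aside (MPI_Barrier is not otherwise covered in this lecture), one common trick is to have the processes take turns; note that even this only makes ordered output likely, not guaranteed, because mpirun still forwards each process's output asynchronously:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int i, procid, numprocs;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  /* Take turns: in iteration i only rank i prints; all ranks then
     wait at the barrier before the next rank takes its turn. */
  for (i = 0; i < numprocs; ++i)
  {
    if (i == procid)
    {
      printf("Hello world! I am process %d out of %d!\n", procid, numprocs);
      fflush(stdout);
    }
    MPI_Barrier(MPI_COMM_WORLD);
  }

  MPI_Finalize();
}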

Hello World: The Programs

An MPI Hello World! program in Fortran 90:

program helloworld
  use mpi
  integer ierr, numprocs, procid

  call MPI_INIT(ierr)

  call MPI_COMM_RANK(MPI_COMM_WORLD, procid, ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

  print *, "Hello world! I am process ", procid, " out of ", numprocs, "!"

  call MPI_FINALIZE(ierr)

  stop
end

Hello World: The Programs (con't)

An MPI Hello World! program in C:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int ierr, procid, numprocs;

  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  printf("Hello world! I am process %d out of %d!\n", procid, numprocs);

  ierr = MPI_Finalize();
}

Hello World: The Programs (con't)

An MPI Hello World! program in C++ using std::printf():

#include <cstdio>
#include <mpi.h>

int main(int argc, char *argv[])
{
  using namespace std;

  int ierr = MPI_Init(&argc, &argv);
  int procid, numprocs;
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  printf("Hello world! I am process %d out of %d!\n", procid, numprocs);

  ierr = MPI_Finalize();
}

Hello World: The Programs (con't)

An MPI Hello World! program in C++ using std::cout:

#include <iostream>
#include <mpi.h>

int main(int argc, char *argv[])
{
  using namespace std;

  int ierr = MPI_Init(&argc, &argv);
  int procid, numprocs;
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  cout << "Hello world! I am process " << procid << " out of " << numprocs << "!\n";

  ierr = MPI_Finalize();
}

Hello World: Compiling

To compile the code:

C:       mpicc -o prog.exe helloworld.c
C++:     mpic++ -o prog.exe helloworld.cxx
Fortran: mpifort -o prog.exe helloworld.f90

With final programs, remember to compile with optimizations:

C:       mpicc -o prog.exe -O2 helloworld.c
C++:     mpic++ -o prog.exe -O2 helloworld.cxx
Fortran: mpifort -o prog.exe -O2 helloworld.f90

Hello World: Running

To run an MPI program locally on your own computer or on a SHARCNET development node, run:

mpirun -n NCPUS ./prog.exe

where NCPUS is the number of available CPUs/cores, e.g.:

4 cores:  mpirun -n 4 ./prog.exe
16 cores: mpirun -n 16 ./prog.exe

Hello World: Using SHARCNET's sqsub

SHARCNET is made up of collections of computers (i.e., nodes) placed in groups called clusters, e.g., kraken.sharcnet.ca, orca.sharcnet.ca, and saw.sharcnet.ca.

When you want to run a job on a cluster, you will:
- Log in to that cluster's login node.
- Submit the job to the mpi queue.

Your job will normally not run right away, so you'll need to wait until it has finished running.

Hello World: Using SHARCNET's sqsub (con't)

The syntax required to submit prog.exe via sqsub is:

sqsub -q mpi -n NCPUS -o output.log -r TIME --mpp RAM ./prog.exe

where:
- NCPUS is the number of cores,
- output.log is where all screen (stdout, stderr) output is written,
- TIME is a number (the maximum time to run) followed by a time unit (m is minutes, h is hours, d is days), and
- RAM is a number (the maximum amount of RAM per MPI process) followed by a unit (M is megabytes, G is gigabytes).

Hello World: Using SHARCNET's sqsub (con't)

For example, to place Hello World! in a queue to run on SHARCNET one would use:

sqsub -q mpi -n 4 -o out.log -r 5m --mpp 200M ./helloworld.exe

Table of Contents

3. Sending and Receiving Messages
   Overview
   MPI_Send
   MPI_Recv
   MPI_Datatype
   Example 1: MPI_SEND & MPI_RECV
   Example 2: MPI_SEND & MPI_RECV
   Example 3: MPI_SEND & MPI_RECV

Overview

The simplest way to communicate point-to-point messages between two MPI processes is to use:
- MPI_Send() to send messages, and
- MPI_Recv() to receive messages.

Overview (con't)

One must specify:
- the datatype being sent/received,
- the receiver's process ID when sending,
- the sender's process ID (or MPI_ANY_SOURCE) when receiving, and
- the sender's tag ID (or MPI_ANY_TAG) when receiving.

In order to receive a message, MPI requires that the type, process ID, and tag all match. If they don't match, the receive call will wait forever, hanging your program.

MPI_Send

MPI_Send(buf, count, type, dest, tag, comm) is a blocking send operation whose arguments are defined as follows: [2, 3]

Argument  In/Out  Description
buf       IN      starting address of send buffer
count     IN      number of elements in send buffer
type      IN      MPI_Datatype of each send buffer element
dest      IN      node rank ID to send the buffer to
tag       IN      message tag
comm      IN      communicator

When called, MPI_Send transmits count elements in buf, all of type type, to node dest with the label tag. The buffer is assumed to have been sent after the call returns.

MPI_Recv

MPI_Recv(buf, count, type, src, tag, comm, status) is a blocking receive operation whose arguments are defined as follows: [2, 3]

Argument  In/Out  Description
buf       OUT     starting address of receive buffer
count     IN      number of elements in receive buffer
type      IN      MPI_Datatype of each buffer element
src       IN      node rank ID to receive the buffer from
tag       IN      message tag
comm      IN      communicator
status    OUT     status object

When called, MPI_Recv receives up to count elements into buf, all of type type, from node src with the label tag. Up to count buffer elements can be stored.
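
Since a receive may use MPI_ANY_SOURCE and MPI_ANY_TAG, and may receive fewer than count elements, the status object is how the receiver finds out what actually arrived. Here is a minimal sketch (assuming at least two processes; the values and the tag 7 are made up for illustration) that queries status.MPI_SOURCE, status.MPI_TAG, and MPI_Get_count():

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int procid;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &procid);

  if (procid == 1)
  {
    /* Rank 1 sends 3 doubles (fewer than the receiver's buffer) with tag 7. */
    double data[3] = { 1.0, 2.0, 3.0 };
    MPI_Send(data, 3, MPI_DOUBLE, 0, 7, MPI_COMM_WORLD);
  }
  else if (procid == 0)
  {
    double buf[100];
    MPI_Status status;
    int count;

    /* Accept a message from any sender with any tag... */
    MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);

    /* ...then ask the status object what actually arrived. */
    MPI_Get_count(&status, MPI_DOUBLE, &count);
    printf("Received %d double(s) from rank %d with tag %d.\n",
           count, status.MPI_SOURCE, status.MPI_TAG);
  }

  MPI_Finalize();
}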

MPI_Datatype

MPI uses instances of a special type called MPI_Datatype to represent the types of messages being sent or received. The MPI standard defines a set of predefined MPI_Datatypes that map to C's fundamental types. Some of these mappings are: [2, 3.2]

MPI_Datatype Name     C Type
MPI_C_BOOL            _Bool
MPI_CHAR              char (text)
MPI_UNSIGNED_CHAR     unsigned char (integer)
MPI_SIGNED_CHAR       signed char (integer)
MPI_INT               signed int
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
MPI_C_DOUBLE_COMPLEX  double _Complex

MPI_Datatype (con't)

Mappings to Fortran types are defined as follows: [2, 3.2]

MPI_Datatype Name     Fortran Type
MPI_INTEGER           INTEGER
MPI_REAL              REAL
MPI_DOUBLE_PRECISION  DOUBLE PRECISION
MPI_COMPLEX           COMPLEX
MPI_LOGICAL           LOGICAL
MPI_CHARACTER         CHARACTER(1)

Example 1: MPI_SEND & MPI_RECV

Let's look at a program that has Process 0 sending 3.14 to Process 1. Sample runs of this program are:

$ mpirun -np 2 ./sendrecv1-c.exe
ProcID 0 sent value 3.140000 to ProcID 1.
ProcID 1 received value 3.140000.
$ mpirun -np 3 ./sendrecv1-c.exe
------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

[snip]

Example 1: MPI_SEND & MPI_RECV (con't)

The sendrecv1.c program is:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int ierr, procid, numprocs;

  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  if (numprocs != 2)
  {
    printf("Error: Number of processes is not 2!\n");
    return MPI_Abort(MPI_COMM_WORLD, 1);
  }

Example 1: MPI_SEND & MPI_RECV (con't)

  if (procid == 0)
  {
    // procid 0 will send the number 3.14 to procid 1...
    double pi = 3.14;
    MPI_Send(&pi, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    printf("ProcID %d sent value %lf to ProcID 1.\n", procid, pi);
  }

Example 1: MPI_SEND & MPI_RECV (con't)

  else
  {
    // procid 1 will wait to receive a double from procid 0...
    double value;
    MPI_Status status;
    ierr = MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    if (ierr == MPI_SUCCESS)
      printf("ProcID %d received value %lf.\n", procid, value);
    else
      printf("ProcID %d did not successfully receive a value!\n", procid);
  }

  ierr = MPI_Finalize();
}

Example 2: MPI_SEND & MPI_RECV

Let's look at a program that has Process i sending the result of 3.14 + i to Process i + 1. A sample run of this program is:

$ mpirun -np 6 ./sendrecv2-c.exe
ProcID 0 sent value 3.140000 to ProcID 1.
ProcID 1 received value 3.140000.
ProcID 4 sent value 7.140000 to ProcID 5.
ProcID 2 sent value 5.140000 to ProcID 3.
ProcID 3 received value 5.140000.
ProcID 5 received value 7.140000.
$

Example 2: MPI_SEND & MPI_RECV (con't)

The sendrecv2.c program is:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int ierr, procid, numprocs;

  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  if (numprocs % 2 != 0)
  {
    printf("Error: Number of processes is not even!\n");
    return MPI_Abort(MPI_COMM_WORLD, 1);
  }

Example 2: MPI_SEND & MPI_RECV (con't)

  if (procid % 2 == 0)
  {
    // even procid i will send the number 3.14+i to procid i+1...
    double val = 3.14 + procid;
    MPI_Send(&val, 1, MPI_DOUBLE, procid+1, 0, MPI_COMM_WORLD);
    printf("ProcID %d sent value %lf to ProcID %d.\n", procid, val, procid+1);
  }

Example 2: MPI_SEND & MPI_RECV (con't)

  else
  {
    // odd procid i will wait to receive a value from any procid...
    double val;
    MPI_Status status;
    ierr = MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
    if (ierr == MPI_SUCCESS)
      printf("ProcID %d received value %lf.\n", procid, val);
    else
      printf("ProcID %d did not successfully receive a value!\n", procid);
  }

  ierr = MPI_Finalize();
}

Example 3: MPI_SEND & MPI_RECV

Let's look at a program that has each Process i sending the value -i to Process 0. Process 0 will sum up all of the values received. A sample run of this program is:

$ mpirun -np 4 sendrecv3-c.exe
ProcID 1 sent value -1.000000 to ProcID 0.
ProcID 3 sent value -3.000000 to ProcID 0.
ProcID 2 sent value -2.000000 to ProcID 0.
ProcID 0 sent value -0.000000 to ProcID 0.
ProcID 0 received value -0.000000.
ProcID 0 received value -1.000000.
ProcID 0 received value -3.000000.
ProcID 0 received value -2.000000.
The total is -6.000000.
$

Example 3: MPI_SEND & MPI_RECV (con't)

The sendrecv3.c program is:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int ierr, procid, numprocs;

  ierr = MPI_Init(&argc, &argv);
  ierr = MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  ierr = MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  // All procids send the value -procid to Process 0...
  double val = -1.0 * procid;
  MPI_Send(&val, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
  printf("ProcID %d sent value %lf to ProcID 0.\n", procid, val);

Example 3: MPI_SEND & MPI_RECV (con't)

  if (procid == 0)
  {
    // Process 0 must receive numprocs values...
    int i;
    double val, sum = 0;
    MPI_Status status;

    for (i = 0; i != numprocs; ++i)
    {
      ierr = MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD, &status);
      if (ierr == MPI_SUCCESS)
      {
        printf("ProcID %d received value %lf.\n", procid, val);
        sum += val;
      }
      else
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    printf("The total is %lf.\n", sum);
  }

  ierr = MPI_Finalize();
}
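
As an aside, the gather-and-sum pattern in sendrecv3.c is common enough that MPI provides a collective operation for it. Here is a minimal sketch of the same computation using MPI_Reduce (collectives are beyond the scope of this lecture, but worth knowing about):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
  int procid, numprocs;
  double val, sum = 0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &procid);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

  /* Every rank contributes -procid; MPI_Reduce sums all of the
     contributions and delivers the result to rank 0 in a single call. */
  val = -1.0 * procid;
  MPI_Reduce(&val, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  if (procid == 0)
    printf("The total is %lf.\n", sum);

  MPI_Finalize();
}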

Demonstrations & Questions

Demonstrations and questions.

References

[1] Message Passing Interface Forum. MPI: A Message-Passing Interface Standard. Version 1.0. 1994-05-05. URL: http://www.mpi-forum.org/docs/mpi-1.0/mpi-10.ps
[2] Message Passing Interface Forum. MPI: A Message-Passing Interface Standard. Version 3.0. 2012-09-21. URL: http://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf