
Messages

A message contains a number of elements of some particular datatype. MPI datatypes: basic types and derived types. Derived types can be built up from basic types. C types are different from Fortran types.

MPI Basic Datatypes - C

    MPI Datatype         C datatype
    MPI_CHAR             signed char
    MPI_SHORT            signed short int
    MPI_INT              signed int
    MPI_LONG             signed long int
    MPI_UNSIGNED_CHAR    unsigned char
    MPI_UNSIGNED_SHORT   unsigned short int
    MPI_UNSIGNED         unsigned int
    MPI_UNSIGNED_LONG    unsigned long int
    MPI_FLOAT            float
    MPI_DOUBLE           double
    MPI_LONG_DOUBLE      long double
    MPI_BYTE             (no C equivalent)
    MPI_PACKED           (no C equivalent)

MPI Basic Datatypes - Fortran

    MPI Datatype           Fortran datatype
    MPI_INTEGER            INTEGER
    MPI_REAL               REAL
    MPI_DOUBLE_PRECISION   DOUBLE PRECISION
    MPI_COMPLEX            COMPLEX
    MPI_LOGICAL            LOGICAL
    MPI_CHARACTER          CHARACTER(1)
    MPI_BYTE               (no Fortran equivalent)
    MPI_PACKED             (no Fortran equivalent)
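As a brief illustration (using the MPI_Ssend call introduced later in these notes; the array, destination rank and tag are arbitrary), note that the count argument counts elements of the given datatype, not bytes:

    double a[5];
    /* The message is described as 5 elements of MPI_DOUBLE starting at a;
       dest=3 and tag=0 are arbitrary choices for this sketch. */
    MPI_Ssend(a, 5, MPI_DOUBLE, 3, 0, MPI_COMM_WORLD);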

Point-to-Point Communication

(Diagram: processes with ranks 0-5 inside a communicator; a message passes from a source process to a destination process.)
Communication between two processes. The source process sends a message to the destination process. Communication takes place within a communicator. The destination process is identified by its rank in the communicator.

Point-to-point messaging in MPI
The sender calls a SEND routine, specifying the data that is to be sent; this is called the send buffer. The receiver calls a RECEIVE routine, specifying where the incoming data should be stored; this is called the receive buffer. Data goes into the receive buffer. Metadata describing the message is also transferred; this is received into separate storage, called the status.

Communication modes

    Sender mode        Notes
    Synchronous send   Only completes when the receive has completed.
    Buffered send      Always completes (unless an error occurs), irrespective of receiver.
    Standard send      Either synchronous or buffered.
    Ready send         Always completes (unless an error occurs), irrespective of whether the receive has completed.
    Receive            Completes when a message has arrived.

MPI Sender Modes

    Operation          MPI call
    Standard send      MPI_Send
    Synchronous send   MPI_Ssend
    Buffered send      MPI_Bsend
    Ready send         MPI_Rsend
    Receive            MPI_Recv
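A buffered send completes because the message is copied into user-supplied buffer space, so MPI_Bsend requires a buffer to have been attached with MPI_Buffer_attach (part of MPI, though not covered on these slides). A minimal sketch, assuming <mpi.h> and <stdlib.h> are included and MPI has been initialised:

    int msg = 42;                                        /* arbitrary payload */
    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;      /* room for one message plus MPI overhead */
    char *buffer = malloc(bufsize);

    MPI_Buffer_attach(buffer, bufsize);
    MPI_Bsend(&msg, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);   /* dest=3, tag=0; completes regardless of receiver */
    MPI_Buffer_detach(&buffer, &bufsize);                /* blocks until the buffered message has been delivered */
    free(buffer);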

Sending a message

C:
    int MPI_Ssend(void *buf, int count, MPI_Datatype datatype,
                  int dest, int tag, MPI_Comm comm)

Fortran:
    MPI_SSEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, DEST, TAG
    INTEGER COMM, IERROR

Send data from rank 1 to rank 3

    // Array of ten integers
    int x[10];
    ...
    if (rank == 1)
        MPI_Ssend(x, 10, MPI_INT, 3, 0, MPI_COMM_WORLD);    /* dest=3, tag=0 */

    // Integer scalar
    int x;
    ...
    if (rank == 1)
        MPI_Ssend(&x, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);    /* dest=3, tag=0 */

Send data from rank 1 to rank 3

    ! Array of ten integers
    integer, dimension(10) :: x
    ...
    if (rank .eq. 1) CALL MPI_SSEND(x, 10, MPI_INTEGER, 3, 0, MPI_COMM_WORLD, ierr)   ! dest=3, tag=0

    ! Integer scalar
    integer :: x
    ...
    if (rank .eq. 1) CALL MPI_SSEND(x, 1, MPI_INTEGER, 3, 0, MPI_COMM_WORLD, ierr)    ! dest=3, tag=0

Receiving a message

C:
    int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
                 int source, int tag, MPI_Comm comm, MPI_Status *status)

Fortran:
    MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
    <type> BUF(*)
    INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR

Receive data from rank 1 on rank 3

    int y[10];
    MPI_Status status;
    ...
    if (rank == 3)
        MPI_Recv(y, 10, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);    /* source=1, tag=0 */

    int y;
    ...
    if (rank == 3)
        MPI_Recv(&y, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);    /* source=1, tag=0 */

Receive data from rank 1 on rank 3

    integer, dimension(10) :: y
    integer, dimension(MPI_STATUS_SIZE) :: status
    ...
    if (rank .eq. 3) CALL MPI_RECV(y, 10, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, status, ierr)   ! source=1, tag=0

    integer :: y
    ...
    if (rank .eq. 3) CALL MPI_RECV(y, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD, status, ierr)    ! source=1, tag=0

Synchronous Blocking Message-Passing
Processes synchronise. The sender process specifies the synchronous mode. Blocking: both processes wait until the transaction has completed.
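Putting the previous slides together, a minimal complete C program is sketched below (run with at least four processes, since ranks 1 and 3 are used; the array contents are arbitrary): rank 1 synchronously sends ten integers to rank 3, and both block until the transfer has completed.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, x[10], y[10];
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 1) {
            for (int i = 0; i < 10; i++) x[i] = i;                     /* fill the send buffer */
            MPI_Ssend(x, 10, MPI_INT, 3, 0, MPI_COMM_WORLD);           /* dest=3, tag=0 */
        } else if (rank == 3) {
            MPI_Recv(y, 10, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);   /* source=1, tag=0 */
            printf("rank 3 received %d ... %d\n", y[0], y[9]);
        }

        MPI_Finalize();
        return 0;
    }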

For a communication to succeed: Sender must specify a valid destination rank. Receiver must specify a valid source rank. The communicator must be the same. Tags must match. Message types must match. Receiver's buffer must be large enough.

Wildcarding
The receiver can wildcard. To receive from any source, use MPI_ANY_SOURCE. To receive with any tag, use MPI_ANY_TAG. The actual source and tag are returned in the receiver's status parameter.

Communication Envelope
(Diagram: a message drawn as an envelope, with the sender's address on the back, the destination address on the front marked "For the attention of:", and the data items 1, 2 and 3 inside.)

Communication Envelope Information
Envelope information is returned from MPI_RECV as status. Information includes:
Source: status.MPI_SOURCE (C) or status(MPI_SOURCE) (Fortran)
Tag: status.MPI_TAG (C) or status(MPI_TAG) (Fortran)
Count: MPI_Get_count (C) or MPI_GET_COUNT (Fortran)

Received Message Count

C:
    int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

Fortran:
    MPI_GET_COUNT(STATUS, DATATYPE, COUNT, IERROR)
    INTEGER STATUS(MPI_STATUS_SIZE), DATATYPE, COUNT, IERROR
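As an illustration (assuming <mpi.h> and <stdio.h> are included and MPI has been initialised; the buffer size and names are arbitrary), here is a receive that wildcards both source and tag, then reads the envelope and the element count from the status:

    int z[10], count;
    MPI_Status status;

    /* Accept up to 10 integers from any source, with any tag. */
    MPI_Recv(z, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

    /* Envelope: who actually sent it, and with which tag. */
    printf("message from rank %d, tag %d\n", status.MPI_SOURCE, status.MPI_TAG);

    /* How many elements actually arrived (may be fewer than 10). */
    MPI_Get_count(&status, MPI_INT, &count);
    printf("received %d integers\n", count);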

Message Order Preservation
(Diagram: processes ranked 0-5 in a communicator exchanging messages.)
Messages do not overtake each other. This is true even for non-synchronous sends.

Message Matching (i)
Rank 0: Ssend(msg1, dest=1, tag=1), then Ssend(msg2, dest=1, tag=2)
Rank 1: Recv(buf1, src=0, tag=1), then Recv(buf2, src=0, tag=2)
Result: buf1 = msg1, buf2 = msg2. Sends and receives correctly matched.

Message Matching (ii)
Rank 0: Ssend(msg1, dest=1, tag=1), then Ssend(msg2, dest=1, tag=2)
Rank 1: Recv(buf2, src=0, tag=2), then Recv(buf1, src=0, tag=1)
Result: deadlock (due to synchronous send). Sends and receives incorrectly matched.

Message Matching (iii)
Rank 0: Bsend(msg1, dest=1, tag=1), then Bsend(msg2, dest=1, tag=1)
Rank 1: Recv(buf1, src=0, tag=1), then Recv(buf2, src=0, tag=1)
Result: buf1 = msg1, buf2 = msg2. Messages have the same tag but are matched in order.

Message Matching (iv)
Rank 0: Bsend(msg1, dest=1, tag=1), then Bsend(msg2, dest=1, tag=2)
Rank 1: Recv(buf2, src=0, tag=2), then Recv(buf1, src=0, tag=1)
Result: buf1 = msg1, buf2 = msg2. You do not have to receive messages in order!

Message Matching (v)
Rank 0: Bsend(msg1, dest=1, tag=1), then Bsend(msg2, dest=1, tag=2)
Rank 1: Recv(buf1, src=0, tag=MPI_ANY_TAG), then Recv(buf2, src=0, tag=MPI_ANY_TAG)
Result: buf1 = msg1, buf2 = msg2. Messages are guaranteed to match in send order; examine the status to find out the actual tag values.

Message Order Preservation
If a receive matches multiple messages in the inbox, the messages will be received in the order they were sent. This is only relevant for multiple messages from the same source.

Exercise: Calculation of Pi
See Exercise 2 on the exercise sheet. It illustrates how to divide work based on rank and how to send point-to-point messages in an SPMD code. Notes: the value of N in the expansion of pi is not the same as the number of processors; you should expect to write a program with, say, N = 100 running on 4 processors. Your code should be able to run on any number of processors, so do not hard-code the number of processors in your program! If you finish the pi example you may want to try Exercise 3 (ping-pong), but it is not essential.
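One common way to divide the work by rank is sketched below. This is illustrative only, not the exercise solution; it assumes rank, size and N have already been set, uses the standard midpoint approximation pi is approximately (1/N) times the sum over i of 4/(1 + x*x) with x = (i - 0.5)/N, and leaves the point-to-point collection of partial sums to you.

    double partial = 0.0;
    for (int i = rank + 1; i <= N; i += size) {      /* each rank takes every size-th term */
        double x = (i - 0.5) / (double) N;
        partial += 4.0 / (1.0 + x * x);
    }
    partial /= N;
    /* Each non-zero rank would then send its partial sum to rank 0,
       which receives and adds them to form the final estimate of pi. */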

Timers

C:
    double MPI_Wtime(void);

Fortran:
    DOUBLE PRECISION MPI_WTIME()

Time is measured in seconds. The time to perform a task is measured by consulting the timer before and after the task; subtract the two values to get the elapsed time. Modify your program to measure its execution time and print it out.
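A minimal sketch of timing a section of code with MPI_Wtime (the work being timed is a placeholder; assumes <stdio.h> is included):

    double t_start, t_end;

    t_start = MPI_Wtime();                  /* read the timer before the work */
    /* ... code being timed goes here ... */
    t_end = MPI_Wtime();                    /* read the timer after the work */

    printf("elapsed time = %f seconds\n", t_end - t_start);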