CSE 160 Lecture 18. Message Passing

Question 4c

% Serial Loop:
for i = 1:n/3-1
    x(2*i) = x(3*i);
end

% Restructured for Parallelism (CORRECT)
for i = 1:3:n/3-1
    y(2*i) = y(3*i);
end
for i = 2:3:n/3-1
    y(2*i) = y(3*i);
end
for i = 3:3:n/3-1
    y(2*i) = y(3*i);
end

% Serial Loop:
for i = 1:n/3-1
    x(2*i) = x(3*i);
end

% Restructured for Parallelism (INCORRECT)
for i = 1:n/3-1
    if (mod(i,3) == 0)
        b(2*i) = b(3*i);
    end
end
for i = 1:n/3-1
    if (mod(i,3) ~= 0)
        b(2*i) = b(3*i);
    end
end

% Restructured for Parallelism (CORRECT)
for i = 1:n/3-1
    if (mod((i*2),3))
        a(2*i) = a(3*i);
    end
end
for i = 1:n/3-1
    if (~mod((i*2),3))
        a(2*i) = a(3*i);
    end
end

% Restructured for Parallelism (INCORRECT)
for i = 1:n/3-1
    if (0 == mod((i*2),3))
        z(2*i) = z(3*i);
    end
end
for i = 1:n/3-1
    if (0 ~= mod((i*2),3))
        z(2*i) = z(3*i);
    end
end

Programming with Message Passing
Programs execute as a set of P processes (the user specifies P)
Each process is assumed to run on a different core
Each is usually initialized with the same code, but has private state: SPMD = Single Program, Multiple Data
Each communicates with other processes by sending and receiving messages
Each executes instructions at its own rate according to its rank (0:P-1) and the messages it sends and receives
Program execution is often called bulk synchronous or loosely synchronous
[Figure: processes P0-P3 on Node 0 and P4-P7 on Node 1 (Tan Nguyen)]

Message passing
There are two kinds of communication patterns:
Point-to-point communication: a single pair of communicating processes copies data between address spaces
Collective communication: all the processes participate, possibly exchanging information
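
To illustrate the collective case, here is a minimal sketch (not from the original slides; the broadcast value is arbitrary) in which MPI_Bcast delivers a value from one root process to every process in the communicator with a single call, instead of P-1 explicit point-to-point sends:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    if (rank == 0)
        value = 42;               /* only the root holds the data initially */

    /* Collective: every process in MPI_COMM_WORLD makes the same call;
       afterwards all of them hold the root's value. */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d sees value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}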

Point-to-Point message passing
Messages are like email; to send one, we specify:
A destination
A message body (can be empty)
To receive a message we need similar information, including a receptacle to hold the incoming data
This requires a sender and an explicit recipient that must be aware of one another
Message passing performs two events:
A memory-to-memory block copy
A synchronization signal on the receiving end: "data has arrived"
[Figure: message buffers on the sending and receiving sides]

Send and Recv
The primitives that implement point-to-point communication
When Send() returns, the message is "in transit"
The return doesn't tell us whether the message has been received
The data is somewhere in the system
It is safe to overwrite the send buffer
Receive() blocks until the message has been received
It is then safe to use the data in the buffer
[Figure: Process 0 calls Send(y,1); Process 1 calls Recv(x), and y is copied into x]
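
A minimal MPI rendering of the figure's Send(y,1)/Recv(x) exchange (a sketch; the variable names and values are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {   /* run with at least 2 processes */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double y = 3.14, x = 0.0;
    if (rank == 0) {
        /* Blocking send: when this returns, y may safely be overwritten,
           but the message may not have been received yet. */
        MPI_Send(&y, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Blocking receive: does not return until the data is in x. */
        MPI_Recv(&x, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received x = %g\n", x);
    }
    MPI_Finalize();
    return 0;
}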

Causality
If a process sends multiple messages to the same destination, then the messages will be received in the order sent
If different processes send messages to the same destination, the order of receipt isn't defined across sources
[Figure: messages A, B, C from one sender arrive in the order sent; messages X, Y from another sender may interleave with them in any order]
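
One way to see the second point (a sketch, not from the original slides): a receiver that accepts messages from any source must consult the MPI_Status to learn whose message arrived first, because arrival order across sources is not guaranteed:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Accept one message from each other rank, in whatever order
           the messages happen to arrive. */
        for (int i = 0; i < size - 1; i++) {
            int msg;
            MPI_Status status;
            MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("Received %d from rank %d\n", msg, status.MPI_SOURCE);
        }
    } else {
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}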

MPI
We'll program with a library called MPI: the Message Passing Interface
125 routines in MPI-1
7 minimal routines needed by every MPI program:
start, end, and query of the MPI execution state (4)
non-blocking point-to-point message passing (3)
Reference material: see http://www-cse.ucsd.edu/users/baden/doc/mpi.html
Callable from C, C++, Fortran, etc.
All major vendors support MPI, but implementations differ in quality

Functionality we'll cover today
Point-to-point communication
Message filtering
Communicators

A first MPI program: hello world

#include "mpi.h"
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello, world! I am process %d of %d.\n", rank, size);
    MPI_Finalize();
    return 0;
}
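
To try it, the program is typically built with the MPI implementation's compiler wrapper and launched through a process manager, for example mpicc hello.c -o hello followed by mpirun -np 4 ./hello (exact command names vary by installation); each of the 4 processes then prints its own line.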

MPI's minimal interface
Opening and closing MPI: MPI_Init and MPI_Finalize
Query functions:
MPI_Comm_size() = # of processes
MPI_Comm_rank() = this process's rank
Point-to-point communication, the simplest form of communication:
Send a message to another process: MPI_Isend(), MPI_Send()
Receive a message from another process: MPI_Irecv(), MPI_Recv()
Wait on an incoming message: MPI_Wait()
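
The non-blocking calls are only listed above; roughly (a sketch with an illustrative two-process exchange), each one returns immediately with an MPI_Request handle, and MPI_Wait blocks until that particular operation has completed:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {   /* assumes exactly 2 processes */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sendval = rank, recvval = -1;
    int partner = (rank == 0) ? 1 : 0;
    MPI_Request sreq, rreq;

    /* Post both operations; neither call waits for the transfer. */
    MPI_Irecv(&recvval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &rreq);
    MPI_Isend(&sendval, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &sreq);

    /* ... useful computation could overlap with communication here ... */

    /* Wait for completion before reusing either buffer. */
    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);

    printf("Process %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}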

Point-to-point messages
To send a message we need:
A destination
A type
A message body (can be empty)
A context (called a communicator in MPI)
To receive a message we need similar information, including a receptacle to hold the incoming data
We can filter messages, enabling us to organize message passing activity

Send and Recv

const int Tag = 99;
int msg[2] = { rank, rank * rank };
if (rank == 0) {
    MPI_Status status;
    /* arguments: message buffer, message length, datatype,
       source process ID, message tag, communicator, status */
    MPI_Recv(msg, 2, MPI_INT, 1, Tag, MPI_COMM_WORLD, &status);
} else {
    /* arguments: message buffer, message length, datatype,
       destination process ID, message tag, communicator */
    MPI_Send(msg, 2, MPI_INT, 0, Tag, MPI_COMM_WORLD);
}

Communicators
A communicator is a name space (or a context) describing a set of processes that may communicate
MPI defines a default communicator, MPI_COMM_WORLD, containing all processes
MPI provides the means of generating uniquely named subsets (later on)
A mechanism for screening messages
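
As a preview of generating those subsets (a sketch; splitting by rank parity is an arbitrary choice), MPI_Comm_split partitions an existing communicator into disjoint sub-communicators:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Processes with the same "color" (here: rank parity) end up
       together in a new communicator. */
    MPI_Comm half;
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);

    int half_rank, half_size;
    MPI_Comm_rank(half, &half_rank);
    MPI_Comm_size(half, &half_size);
    printf("World rank %d is rank %d of %d in its sub-communicator\n",
           world_rank, half_rank, half_size);

    MPI_Comm_free(&half);
    MPI_Finalize();
    return 0;
}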

MPI Tags
Tags enable processes to organize or screen messages
Each sent message is accompanied by a user-defined integer tag
The receiving process can use this information to organize or filter messages
Receiving with MPI_ANY_TAG inhibits the screening.
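
For example (a sketch; the tag names and values are made up), a receiver can use tags to route each incoming message into the right buffer regardless of arrival order:

#include <mpi.h>

int main(int argc, char **argv) {   /* run with at least 2 processes */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int TAG_DATA = 1, TAG_LOG = 2;   /* hypothetical tag values */

    if (rank == 0) {
        int payload, note;
        MPI_Request req[2];
        /* Each receive matches only its own tag, so the messages land in
           the right buffers no matter which one arrives first.
           Using MPI_ANY_TAG instead would disable this screening. */
        MPI_Irecv(&payload, 1, MPI_INT, 1, TAG_DATA, MPI_COMM_WORLD, &req[0]);
        MPI_Irecv(&note,    1, MPI_INT, 1, TAG_LOG,  MPI_COMM_WORLD, &req[1]);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);
    } else if (rank == 1) {
        int a = 7, b = 99;
        MPI_Send(&b, 1, MPI_INT, 0, TAG_LOG,  MPI_COMM_WORLD);
        MPI_Send(&a, 1, MPI_INT, 0, TAG_DATA, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}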

MPI Datatypes
MPI messages have a specified length
The unit depends on the type of the data
The length in bytes is sizeof(type) x # of elements
We don't specify the length as the # of bytes
MPI specifies a set of built-in types for each of the primitive types of the language
In C: MPI_INT, MPI_FLOAT, MPI_DOUBLE, MPI_CHAR, MPI_LONG, MPI_UNSIGNED, MPI_BYTE, ...
Also user-defined types, e.g. structs
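
To make the count-versus-bytes point concrete (a sketch; the buffer size is arbitrary), sending 100 doubles specifies a count of 100 elements of MPI_DOUBLE, never a byte count:

#include <mpi.h>

int main(int argc, char **argv) {   /* run with at least 2 processes */
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double buf[100];
    if (rank == 0) {
        for (int i = 0; i < 100; i++) buf[i] = 0.5 * i;
        /* count = 100 elements of MPI_DOUBLE; the byte length
           (100 * sizeof(double)) is implied, never written out */
        MPI_Send(buf, 100, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, 100, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}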

What we covered today
Message passing concepts
A practical interface
Next time:
Applications
Asynchronous communication
Collective communication