Introduction to MPI
HY555 Parallel Systems and Grids, Fall 2003


Outline
  MPI program layout
  Sending and receiving messages
  Collective communication
  Datatypes
  An example
  Compiling and running

Typical layout of an MPI program

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank;   /* rank of process */
    int p;        /* number of processes */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    ...

    MPI_Finalize();
}

Understanding the layout
  MPI_Init MUST be called before any other MPI function.
    It sets up the MPI library so it can be used.
  Once you are finished with the MPI library, MPI_Finalize must be called.
    It cleans up unfinished operations.
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    Fills myrank with the rank of the calling process.
    Each process has a unique rank, starting from 0 (0, 1, ...).
  MPI_Comm_size(MPI_COMM_WORLD, &p);
    Fills p with the number of available processes.

Sending messages

int MPI_Send(void *message, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)

Argument    Explanation
message     pointer to the data to send
count       number of values to be sent
datatype    type of the data
dest        rank of the destination process
tag         type (tag) of the message
comm        the communicator, e.g. MPI_COMM_WORLD

Receiving messages (1/2)

int MPI_Recv(void *message, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)

Argument    Explanation
message     where to store the received data
count       how many values to store
datatype    type of data we expect
source      rank of the source of the message
tag         type (tag) of the message
comm        the communicator, e.g. MPI_COMM_WORLD
status      information on the received data

Receiving messages (2/2)

int MPI_Recv(void *message, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)

  source can be MPI_ANY_SOURCE
    We receive messages from any source.
    status will contain the rank of the process that sent the message.
  tag can be MPI_ANY_TAG
    We receive messages with any tag.
    status will contain the tag of the message.
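As an illustration (not part of the original slides), here is a minimal sketch of MPI_ANY_SOURCE and MPI_ANY_TAG in use: rank 0 accepts the incoming integers in whatever order they arrive and reads the actual sender and tag from the status structure. The payload (ten times the rank) and the use of the sender's rank as the tag are arbitrary choices.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank, p, value, i;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (myrank != 0) {
        value = 10 * myrank;                      /* arbitrary payload */
        MPI_Send(&value, 1, MPI_INT, 0, myrank, MPI_COMM_WORLD);
    } else {
        for (i = 1; i < p; i++) {
            /* accept messages in whatever order they arrive */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("Got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    }

    MPI_Finalize();
    return 0;
}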

A simple example (1/2)

Hello world program (not again!!!)
We have p processes. The process with rank 0 will receive a message from each of the other p-1 processes and print it.

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int myrank;           /* process rank */
    int p;                /* number of processes */
    int source;           /* source of the message */
    int dest;             /* destination of the message */
    int tag = 50;         /* message tag */
    char message[100];    /* buffer */
    MPI_Status status;

    MPI_Init(&argc, &argv);   /* Don't forget this! */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

A simple example (2/2)

    if (myrank != 0) {
        /* I am a process other than the one with rank 0 */
        sprintf(message, "Greetings from process %d\n", myrank);
        /* send it to the process with rank 0 */
        dest = 0;
        /* strlen(message)+1: the +1 accounts for the terminating '\0' */
        MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag,
                 MPI_COMM_WORLD);
    } else {
        /* I am the process with rank 0 */
        for (source = 1; source < p; source++) {
            /* receive a message from each of the other p-1 processes */
            MPI_Recv(message, 100, MPI_CHAR, source, tag,
                     MPI_COMM_WORLD, &status);
            printf("%s", message);   /* print what we received */
        }
    }

    MPI_Finalize();   /* clean up */
    return 0;
}

Collective communication (1/3)

Broadcast: a single process sends the same data to every process.

int MPI_Bcast(void *message, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)

  The message, count and datatype arguments are the same as in MPI_Send and MPI_Recv.
  root specifies the rank of the process that broadcasts the message.
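A minimal sketch (not from the original slides) of a typical MPI_Bcast: every process in the communicator calls it, and after the call every process holds the value that the root had. The value 100 is arbitrary.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank;
    int n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0)
        n = 100;                  /* only the root knows the value so far */

    /* all processes call MPI_Bcast; afterwards every process has n == 100 */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d: n = %d\n", myrank, n);

    MPI_Finalize();
    return 0;
}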

Collective communication (2/3)

Reduce: global operations.

int MPI_Reduce(void *operand, void *result, int count,
               MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)

  Combines the operands stored in *operand using operation op.
  Stores the result in *result on process root.
  Must be called by ALL processes inside comm.
  count, datatype and op must be the same on every process.
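A minimal sketch (not from the original slides) that sums the ranks of all processes onto rank 0. MPI_SUM is one of MPI's predefined reduction operations; others include MPI_MAX, MPI_MIN and MPI_PROD.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank, p, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    /* every process contributes its own rank; only root (0) gets the total */
    MPI_Reduce(&myrank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myrank == 0)
        printf("Sum of ranks 0..%d is %d\n", p - 1, sum);

    MPI_Finalize();
    return 0;
}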

Collective communication (3/3)

Synchronizing processes: barriers
  Each process blocks on the barrier.
  It unblocks when all processes in comm have reached the barrier.

int MPI_Barrier(MPI_Comm comm)

Gathering data: MPI_Gather

int MPI_Gather(void *send_buf, int send_count, MPI_Datatype send_type,
               void *recv_buf, int recv_count, MPI_Datatype recv_type,
               int root, MPI_Comm comm)

  The process with rank root gathers the data and stores it in recv_buf.
  Placement inside recv_buf is based on the sender's rank.
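A minimal sketch (not from the original slides) of MPI_Gather: each process contributes one int and rank 0 receives them ordered by sender rank. The fixed-size receive buffer of 64 entries is an assumption made for brevity.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank, p, i;
    int mine;
    int all[64];                  /* assumes at most 64 processes */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    mine = myrank * myrank;       /* each process contributes one value */

    /* recv_count is the number of elements received from EACH process */
    MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (myrank == 0)
        for (i = 0; i < p; i++)
            printf("all[%d] = %d\n", i, all[i]);   /* all[i] == i*i */

    MPI_Finalize();
    return 0;
}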

Collective communication overview

MPI Datatypes

What types of data can we send and receive?

MPI datatype          C datatype
MPI_CHAR              signed char
MPI_SHORT             signed short int
MPI_INT               signed int
MPI_LONG              signed long int
MPI_UNSIGNED_CHAR     unsigned char
MPI_UNSIGNED_SHORT    unsigned short int
MPI_UNSIGNED          unsigned int
MPI_UNSIGNED_LONG     unsigned long int
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
MPI_BYTE              (no C equivalent)
MPI_PACKED            (no C equivalent)

What about structures? (1/2)

We must create a new datatype. Suppose we want to send a struct such as:

struct {
    char display[50];
    int maxiter;
    double xmin, ymin;
    double xmax, ymax;
    int width, height;
} cmdline;

MPI_Type_struct builds a new datatype:

int MPI_Type_struct(int count, int blocklens[], MPI_Aint indices[],
                    MPI_Datatype old_types[], MPI_Datatype *newtype)

What about structures? (2/2)

/* set up 4 blocks */
int i;
int blockcounts[4] = {50, 1, 4, 2};
MPI_Datatype types[4];
MPI_Aint displs[4];
MPI_Datatype cmdtype;

/* initialize types and displs with addresses of items */
MPI_Address(&cmdline.display, &displs[0]);
MPI_Address(&cmdline.maxiter, &displs[1]);
MPI_Address(&cmdline.xmin,    &displs[2]);
MPI_Address(&cmdline.width,   &displs[3]);
types[0] = MPI_CHAR;
types[1] = MPI_INT;
types[2] = MPI_DOUBLE;
types[3] = MPI_INT;

/* make the displacements relative to the start of the struct */
for (i = 3; i >= 0; i--)
    displs[i] -= displs[0];

MPI_Type_struct(4, blockcounts, displs, types, &cmdtype);
MPI_Type_commit(&cmdtype);
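Once committed, cmdtype can be used wherever a built-in datatype is expected. For example (not shown on the slides), assuming the fragment above has run on every process, rank 0 could broadcast the whole struct in a single call; the buffer argument is &cmdline because the displacements were made relative to the start of the struct:

MPI_Bcast(&cmdline, 1, cmdtype, 0, MPI_COMM_WORLD);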

Timing your MPI applications

Standard approach: gettimeofday in the master process.
Instead of gettimeofday you can use MPI_Wtime:

double MPI_Wtime()

  Simpler than gettimeofday, same semantics.
  Returns the number of seconds since an arbitrary moment in the past.
  Do not measure initialization functions!
    e.g. there is no need to measure MPI_Init.
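A minimal sketch (not from the original slides) of the usual timing pattern: take MPI_Wtime readings around the region of interest only, leaving MPI_Init and MPI_Finalize outside the measured interval.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int myrank;
    double start, elapsed;

    MPI_Init(&argc, &argv);               /* not measured */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    start = MPI_Wtime();
    /* ... the work you actually want to time goes here ... */
    elapsed = MPI_Wtime() - start;

    if (myrank == 0)                      /* report from the master only */
        printf("Elapsed time: %f seconds\n", elapsed);

    MPI_Finalize();
    return 0;
}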

Compile and run

mpicc is used for compilation:

mpicc -O2 -o program mpi_helloworld.c

Programs are run with mpirun:

mpirun -np 4 -machinefile machines.solaris program arg1

  -np: number of processes
  machines.solaris: configuration file that lists the machines on which MPI programs can be run.
    It is a simple list of machine names, e.g.
      chaos.csd.uoc.gr
      geras.csd.uoc.gr

Add /home/lessons2/hy555/mpi/solaris-x86/bin/ to your path.
For man pages, add /home/lessons2/hy555/mpi/solaris-x86/man to your manpath.