Parallel Programming, MPI Lecture 2
1 Parallel Programming, MPI Lecture 2. Ehsan Nedaaee Oskoee, Department of Physics, IASBS. IPM Grid and HPC Workshop IV, 2011.
2 Outline
1. Introduction and Review: the Von Neumann Computer; Kinds of Parallel Machine (distributed-memory parallel machines, shared-memory parallel machines)
2. Methodical Design: Partitioning (Domain Decomposition, Functional Decomposition)
3. An Introduction to MPI
4. Point-to-Point Communication: Blocking PTP Communication
4 The Von Neumann Computer Figure: The Von Neumann Computer
6 Different types of parallel platforms: Shared Memory. Figure: a typical representation of a shared-memory parallel machine.
7 Different types of parallel platforms: Distributed Memory. Figure: a typical representation of a distributed-memory parallel machine.
9 Methodical Design: Partitioning, Domain Decomposition. Figure: Domain Decomposition (from Designing and Building Parallel Programs, on-line book, by Ian Foster).
10 Methodical Design: Partitioning, Functional Decomposition. Figure: Functional Decomposition (from Designing and Building Parallel Programs, on-line book, by Ian Foster).
11-12 Methodical Design: Communication. With the fine-grained domain decomposition, number of tasks = 8 x 8 = 64 and number of communications = 4 x 64 = 256. After agglomeration, number of tasks = 1 x 4 = 4 and number of communications = 4 x 4 = 16.
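A quick check of the arithmetic (an added note, not on the original slide): with one task per cell of an 8 x 8 grid and a four-point stencil, every task exchanges data with its 4 neighbours, giving 4 x 8^2 = 256 messages per step. For the agglomerated 1 x 4 strip decomposition the slide counts 4 communications for each of the 4 tasks, 4 x 4 = 16; fewer, longer messages replace many short ones, which is usually cheaper because each message pays a fixed latency cost.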
13 Methodical Design: Mapping (from Designing and Building Parallel Programs, on-line book, by Ian Foster).
14-15 An Introduction to MPI. Applications: Scalable Parallel Computers (SPCs) with distributed memory; Networks of Workstations (NOWs). Some goals of MPI: design an application programming interface; allow efficient communication; allow implementations that can be used in a heterogeneous environment; allow convenient C and Fortran 77 bindings for the interface; provide a reliable communication interface; define an interface not too different from current practice, such as PVM, NX, etc.
16-17 An Introduction to MPI. What is included in MPI: point-to-point communication; collective operations; process groups; communication domains; process topologies; environmental management and inquiry; a profiling interface; bindings for Fortran 77 and C (also for C++ and Fortran 90 in MPI-2); I/O functions (in MPI-2). Versions of MPI: Version 1.0 (June 1994); Version 1.1 (June 1995); Version 2.
18-19 An Introduction to MPI. Procedure specification: a call uses but does not update an argument marked IN; a call may update an argument marked OUT; a call both uses and updates an argument marked INOUT. Types of MPI calls: local, non-local, blocking, non-blocking; related concepts: opaque objects, language binding. A concrete example of the argument classes follows below.
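To make the argument classes concrete, here is a minimal C sketch (an added example, not on the original slides): in MPI_Comm_rank the communicator is used but not updated (IN), while the rank variable is written by the call (OUT).

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;                                /* OUT: written by the call   */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* MPI_COMM_WORLD is IN       */
    printf("My rank is %d\n", rank);
    MPI_Finalize();
    return 0;
}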
21 Point-to-Point Communication. The Simplest Example.

main.for
      Program main
      implicit none
      include 'mpif.h'
      integer ierr, rc
      call MPI_INIT(ierr)
      print*, 'Hi There'
      call MPI_FINALIZE(rc)
      End

main.cpp
#include <iostream>
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    std::cout << "Hi There";
    MPI_Finalize();
    return 0;
}
22 Point-to-Point Communication. Compiling and Running a Program.

> more hostfile
Naft 2
Oil 1

> more hostfile
HPCLAB
ws01
ws02
ws03

lamboot -v hostfile
mpicc code_name.c -o code_exe_name
mpiCC code_name.cpp -o code_exe_name
mpif77 code_name.for -o code_exe_name
mpif90 code_name.f90 -o code_exe_name
mpirun -v -np 9 code_exe_name
mpirun N code_exe_name    (LAM: run one copy on every node in the hostfile)
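The lamboot step is specific to LAM/MPI, which has since been retired. As an added note (not on the original slide), with a present-day MPICH or Open MPI installation no boot step is needed and the same workflow is typically just compile and launch:

mpicc code_name.c -o code_exe_name
mpiexec -n 9 ./code_exe_name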
23 Point-to-Point Communication. A More Complex Program.

#include <iostream>
#include <mpi.h>
int main(int argc, char **argv) {
    int npes, myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    std::cout << "Hi There, I am node " << myrank
              << " and the total number of workers which you are using now is: "
              << npes << std::endl;
    MPI_Finalize();
    return 0;
}
24 Point-to-Point Communication. A More Complex Program.

      Program size_rank
      implicit none
      include 'mpif.h'
      integer ierr, npes, myrank
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, npes, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      print*, 'Hi There, I am node ', myrank, ' and the total',
     *        ' number of workers which you are using now is: ', npes
      call MPI_FINALIZE(ierr)
      End
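For illustration (an added note, not on the original slides), launching the C++ version on 4 processes would print four lines of the form below; the line order varies from run to run, because the processes write to the terminal independently:

Hi There, I am node 2 and the total number of workers which you are using now is: 4
Hi There, I am node 0 and the total number of workers which you are using now is: 4
Hi There, I am node 3 and the total number of workers which you are using now is: 4
Hi There, I am node 1 and the total number of workers which you are using now is: 4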
25-26 Point-to-Point Communication. Blocking Send Operation.

MPI_SEND(buf, count, datatype, dest, tag, comm)
  IN  buf       initial address of send buffer
  IN  count     number of entries to send
  IN  datatype  datatype of each entry
  IN  dest      rank of destination
  IN  tag       message tag
  IN  comm      communicator

C version:
int MPI_Send(void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

Fortran version:
MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERROR
27-28 Point-to-Point Communication. Blocking Receive Operation.

MPI_RECV(buf, count, datatype, source, tag, comm, status)
  OUT buf       initial address of receive buffer
  IN  count     number of entries to receive
  IN  datatype  datatype of each entry
  IN  source    rank of source
  IN  tag       message tag
  IN  comm      communicator
  OUT status    return status

C version:
int MPI_Recv(void* buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status);

Fortran version:
MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERROR)
  <type> BUF(*)
  INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
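A minimal sketch (an added example, not from the original slides) of a matching send/receive pair in C: rank 0 sends one integer to rank 1, and the count, datatype, tag, and communicator agree on both sides.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* dest = 1, tag = 0   */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);                /* source = 0, tag = 0 */
    }
    MPI_Finalize();
    return 0;
}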
29 Point-to-Point Communication. Data Types (Fortran).

MPI data type            Fortran data type
MPI_INTEGER              INTEGER
MPI_REAL                 REAL
MPI_DOUBLE_PRECISION     DOUBLE PRECISION
MPI_COMPLEX              COMPLEX
MPI_LOGICAL              LOGICAL
MPI_CHARACTER            CHARACTER(1)
MPI_BYTE                 (no corresponding Fortran type)
MPI_PACKED               (no corresponding Fortran type)
30 Point-to-Point Communication. Data Types (C).

MPI data type        C data type
MPI_CHAR             signed char
MPI_SHORT            signed short int
MPI_INT              signed int
MPI_LONG             signed long int
MPI_UNSIGNED_CHAR    unsigned char
MPI_UNSIGNED_SHORT   unsigned short int
MPI_UNSIGNED         unsigned int
MPI_UNSIGNED_LONG    unsigned long int
MPI_FLOAT            float
MPI_DOUBLE           double
MPI_LONG_DOUBLE      long double
MPI_BYTE             (no corresponding C type)
MPI_PACKED           (no corresponding C type)
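One practical consequence of these tables (an added sketch, not from the original slides): the datatype argument must agree with the declared type of the buffer. Here 10 doubles are exchanged with MPI_DOUBLE; passing MPI_INT for a double buffer would be erroneous.

#include <mpi.h>

int main(int argc, char **argv)
{
    double a[10] = {0};          /* buffer of doubles, so use MPI_DOUBLE */
    int rank;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        MPI_Send(a, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(a, 10, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    MPI_Finalize();
    return 0;
}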
31 Point-to-Point Communication. A Useful Command.

MPI_GET_PROCESSOR_NAME(name, resultlen)
  OUT name       a unique specifier for the current physical node
  OUT resultlen  length (in printable characters) of the result returned in name

C version:
int MPI_Get_processor_name(char* name, int* resultlen)

Fortran version:
MPI_GET_PROCESSOR_NAME(NAME, RESULTLEN, IERROR)
  CHARACTER*(*) NAME
  INTEGER RESULTLEN, IERROR
32 Point-to-Point Communication. Blocking Send & Receive (Bsend_recd_1.cpp).

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int npes, myrank, namelen, i;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    char greeting[MPI_MAX_PROCESSOR_NAME + 80];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Get_processor_name(processor_name, &namelen);
    sprintf(greeting, "Hello, World, From process %d of %d on %s\n",
            myrank, npes, processor_name);
    if (myrank == 0) {
        printf("%s", greeting);
        /* rank 0 collects and prints the other ranks' greetings in order */
        for (i = 1; i < npes; i++) {
            MPI_Recv(greeting, sizeof(greeting), MPI_CHAR, i, 1,
                     MPI_COMM_WORLD, &status);
            printf("%s", greeting);
        }
    } else {
        MPI_Send(greeting, strlen(greeting) + 1, MPI_CHAR, 0, 1,
                 MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
33 Point-to-Point Communication. Blocking Send & Receive (Bsend_Recd_1.for).

      Program greet
      implicit none
      include 'mpif.h'
      INTEGER ierr, npes, myrank, namelen, i
      INTEGER stat(MPI_STATUS_SIZE)
      CHARACTER*(MPI_MAX_PROCESSOR_NAME) processor_name
      CHARACTER*(MPI_MAX_PROCESSOR_NAME + 80) greeting
      CHARACTER(1) numb(0:9)
      numb(0)='0' ; numb(1)='1' ; numb(2)='2' ; numb(3)='3'
      numb(4)='4' ; numb(5)='5' ; numb(6)='6' ; numb(7)='7'
      numb(8)='8' ; numb(9)='9'
      call MPI_INIT(ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, npes, ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      call MPI_GET_PROCESSOR_NAME(processor_name, namelen, ierr)
      greeting = 'Hello World, From process '//numb(myrank)//
     *           ' of '//numb(npes)//' on '//processor_name
      IF (myrank .EQ. 0) THEN
         print*, greeting
         DO i = 1, npes - 1
            call MPI_RECV(greeting, len(greeting), MPI_CHARACTER,
     *                    i, 1, MPI_COMM_WORLD, stat, ierr)
            print*, greeting
         END DO
      ELSE
         call MPI_SEND(greeting, len(greeting), MPI_CHARACTER,
     *                 0, 1, MPI_COMM_WORLD, ierr)
      ENDIF
      call MPI_FINALIZE(ierr)
      End
34-36 Point-to-Point Communication. Safety. (Figures.)
37 Point-to-Point Communication. Safety. Wildcards: a receive may match any sender with src = * (MPI_ANY_SOURCE) and any tag with tag = * (MPI_ANY_TAG).
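A hedged sketch (an added example, not on the original slides) of receiving with both wildcards and then recovering the actual source and tag from the status object:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, npes, value, i;
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    if (rank != 0) {
        value = rank;
        MPI_Send(&value, 1, MPI_INT, 0, rank, MPI_COMM_WORLD);
    } else {
        /* accept the messages in whatever order they arrive */
        for (i = 1; i < npes; i++) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            /* status.MPI_SOURCE and status.MPI_TAG hold the actual values */
            printf("got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    }
    MPI_Finalize();
    return 0;
}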
38 Point-to-Point Communication. Blocking Send & Receive, Example 2: Domain Decomposition.
39 Point-to-Point Communication. Blocking Send & Receive (Bsend_recd_2.cpp).

for (j = 0; j <= np+1; j++)
    for (i = 0; i <= np+1; i++)
        a[i][j] = myrank*100 + 10*j + i;

left  = myrank - 1;
right = myrank + 1;
if (myrank == 1)        left  = npes - 1;
if (myrank == (npes-1)) right = 1;

if (myrank != 0) {
    if (myrank % 2 == 0) {
        MPI_Send(&a[1][0],    np+2, MPI_INT, left,  1, MPI_COMM_WORLD);
        MPI_Send(&a[np][0],   np+2, MPI_INT, right, 1, MPI_COMM_WORLD);
        MPI_Recv(&a[0][0],    np+2, MPI_INT, left,  1, MPI_COMM_WORLD, &stat);
        MPI_Recv(&a[np+1][0], np+2, MPI_INT, right, 1, MPI_COMM_WORLD, &stat);
    } else {
        MPI_Recv(&a[np+1][0], np+2, MPI_INT, right, 1, MPI_COMM_WORLD, &stat);
        MPI_Recv(&a[0][0],    np+2, MPI_INT, left,  1, MPI_COMM_WORLD, &stat);
        MPI_Send(&a[np][0],   np+2, MPI_INT, right, 1, MPI_COMM_WORLD);
        MPI_Send(&a[1][0],    np+2, MPI_INT, left,  1, MPI_COMM_WORLD);
    }
}
40 Point-to-Point Communication. Blocking Send & Receive (Bsend_Recd_2.for).

      IF (myrank .NE. 0) THEN
         IF (MOD(myrank,2) .EQ. 0) THEN
            call MPI_SEND(a(0,1),    np+2, MPI_INTEGER, left,  1,
     *                    MPI_COMM_WORLD, ierr)
            call MPI_SEND(a(0,np),   np+2, MPI_INTEGER, right, 1,
     *                    MPI_COMM_WORLD, ierr)
            call MPI_RECV(a(0,0),    np+2, MPI_INTEGER, left,  1,
     *                    MPI_COMM_WORLD, stat, ierr)
            call MPI_RECV(a(0,np+1), np+2, MPI_INTEGER, right, 1,
     *                    MPI_COMM_WORLD, stat, ierr)
         ELSE
            call MPI_RECV(a(0,np+1), np+2, MPI_INTEGER, right, 1,
     *                    MPI_COMM_WORLD, stat, ierr)
            call MPI_RECV(a(0,0),    np+2, MPI_INTEGER, left,  1,
     *                    MPI_COMM_WORLD, stat, ierr)
            call MPI_SEND(a(0,np),   np+2, MPI_INTEGER, right, 1,
     *                    MPI_COMM_WORLD, ierr)
            call MPI_SEND(a(0,1),    np+2, MPI_INTEGER, left,  1,
     *                    MPI_COMM_WORLD, ierr)
         ENDIF
      ENDIF
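The even/odd ordering in both listings exists only so that the blocking sends and receives pair up without deadlock. As an added alternative (not on the original slides), MPI_Sendrecv performs each paired exchange in a single call and is deadlock-free regardless of rank parity. A C sketch, reusing a, np, left, right, and stat from the example above:

/* send my last real row right while receiving the left ghost row */
MPI_Sendrecv(&a[np][0],   np+2, MPI_INT, right, 1,
             &a[0][0],    np+2, MPI_INT, left,  1,
             MPI_COMM_WORLD, &stat);
/* send my first real row left while receiving the right ghost row */
MPI_Sendrecv(&a[1][0],    np+2, MPI_INT, left,  1,
             &a[np+1][0], np+2, MPI_INT, right, 1,
             MPI_COMM_WORLD, &stat);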
41 The End. That's All, Folks!