Praktikum: Verteiltes Rechnen und Parallelprogrammierung Introduction to MPI
1 Praktikum: Verteiltes Rechnen und Parallelprogrammierung Introduction to MPI
2 Agenda
1) MPI for Java: installation OK?
2) Exercise sheet 2: basic idea clear?
3) Project preferences?
4) Next week: exercise sheet 3, project selection, concepts
3 Recap: Parallel programming in Java
Java has two built-in forms of support for parallel programming:
- Multithreading: multiple threads of control (sub-processes), useful for pseudo-parallelism within a single machine and for real parallelism on a shared-memory machine
- Remote Method Invocation (RMI): allows invoking a method on an object located on another machine; useful for distributed-memory machines
Many additional parallel programming libraries exist: MPJ Express (based on MPI), Ibis / Ibis Portability Layer, Apache Flink, Apache Spark
4 Recap: Multi-Threading
A thread has its own program counter and its own local variables. All threads on the same Java Virtual Machine share global variables, so threads can communicate through shared variables. Threads can run concurrently (e.g. on multi-core systems) or are time-sliced.
5 Recap
public class MyThread extends Thread {
    public void hi()  { System.out.println("hi"); }
    public void run() { System.out.println("hello"); }
}

MyThread t1 = new MyThread(); // allocates a thread
MyThread t2 = new MyThread(); // allocates another thread
t1.start(); // starts the first thread and invokes t1.run()
t2.start(); // starts the second thread and invokes t2.run()
t1.hi();    // ordinary method call, runs in the calling thread
6 Recap
Example problem: Thread-1 does X = X + 1; Thread-2 does X = X + 2;
The combined result should be +3, but is sometimes only +1 or +2, because the two read-modify-write sequences can interleave and one update gets lost.
Consequence: we need to prevent concurrent access to the same data by mutual exclusion (synchronization).
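A minimal sketch of how mutual exclusion repairs the lost update above; the Counter class and its names are illustrative, not from the slides:

public class Counter {
    private int x = 0;

    // synchronized makes the read-modify-write of x atomic, so
    // concurrent add() calls cannot interleave and lose an update
    public synchronized void add(int n) { x = x + n; }
    public synchronized int get() { return x; }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread t1 = new Thread(() -> c.add(1));
        Thread t2 = new Thread(() -> c.add(2));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always prints 3
    }
}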
7 Next step: instead of running threads on the same machine using shared global memory, we now run programs (processes) on different machines.
8 MPI Application Example
9 Colliding Galaxies
10 Colliding Galaxies with 60,000 Particles
11 Cluster Formation
12 Structure Formation with two million Particles
13 MPI Recap
14 Message-Passing Programming Paradigm
Each processor in a message-passing program runs a sub-program written in a conventional sequential language; all variables are private, and sub-programs communicate via special subroutine calls.
(Diagram: processors P, each with its own memory M, connected by an interconnection network)
15 Messages
Messages are packets of data moving between sub-programs. The message-passing system has to be told the following information: the sending processor and source location, the data type and data length, and the receiving processor(s), destination location, and destination size.
16 Messages
Access: each sub-program needs to be connected to a message-passing system. Addressing: messages need addresses to be sent to. Reception: it is important that the receiving process is capable of dealing with the messages it is sent. A message-passing system is similar to a post office, phone line, fax, e-mail, etc.
17 Point-to-Point Communication Simplest form of message passing One process sends a message to another Several variations on how sending a message can interact with execution of the sub-program
18 Point-to-Point variations
- Synchronous sends provide information about the completion of the message (e.g. fax machines)
- Asynchronous sends only know when the message has left (e.g. post cards)
- Blocking operations only return from the call when the operation has completed
- Non-blocking operations return straight away; you can test/wait later for completion
19 Collective Communications
Collective communication routines are higher-level routines involving several processes at a time; they can be built out of point-to-point communications.
- Barriers synchronise processes
- Broadcast: one-to-many communication
- Reduction operations combine data from several processes, usually to produce a single result
20 Introduction
Hint: we are using MPJ Express because it can also be run in a multi-core environment.
21 Two Important Concepts
Two fundamental concepts of parallel programming are:
- Domain decomposition
- Functional decomposition
22 Domain Decomposition
23 Functional Decomposition
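To make domain decomposition concrete in the Java/MPJ setting of these slides, here is a sketch in which each process computes the sub-range of a global array it owns; the problem size n and the variable names are illustrative, not from the slides.

import mpi.*;

public class DomainDecomposition {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int n = 1000;                         // global problem size
        int base = n / size, rest = n % size;
        // the first 'rest' ranks take one extra element so all n elements are covered
        int lo = rank * base + Math.min(rank, rest);
        int hi = lo + base + (rank < rest ? 1 : 0);
        System.out.println("rank " + rank + " owns elements [" + lo + ", " + hi + ")");
        MPI.Finalize();
    }
}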
24 Message Passing Interface (MPI)
MPI is a standard (an interface or an API): it defines a set of methods that application developers use to write their applications, and MPI libraries implement these methods. MPI itself is not a library; it is a specification document that implementations follow. MPI-1.2 is the most popular specification version. Reasons for its popularity: software and hardware vendors were involved, academia contributed significantly, MPICH served as an early reference implementation, and MPI compilers are simply wrappers around widely used C and Fortran compilers. History: the first draft specification was produced in 1993; MPI-2.0, introduced in 1999, adds many new features. Bindings are available for C, C++, and Fortran. MPI is a success story: it is the most widely adopted programming paradigm on IBM Blue Gene systems, and there are at least two production-quality MPI libraries, MPICH2 and OpenMPI. There is even a Java library: MPJ Express.
25 Message Passing Model
The message-passing model allows processors to communicate by passing messages; processors do not share memory. Data transfer between processors requires cooperative operations performed by each processor: one processor sends the message while the other receives it.
26 Distributed Memory Cluster
(Diagram: Allegro, the cluster head node, connected to compute nodes node1 through node7)
27 Minimal MPI Java Program
import mpi.*;

class Hello {
    static public void main(String[] args) throws Exception {
        MPI.Init(args);
        int myrank = MPI.COMM_WORLD.Rank();
        if (myrank == 0) {
            char[] message = "Hello, there".toCharArray();
            MPI.COMM_WORLD.Send(message, 0, message.length, MPI.CHAR, 1, 99);
        } else {
            char[] message = new char[20];
            MPI.COMM_WORLD.Recv(message, 0, 20, MPI.CHAR, 0, 99);
            System.out.println("received: " + new String(message) + " :");
        }
        MPI.Finalize();
    }
}
28 Steps involved in executing the Hello World! program
1. Let's log on to the cluster head node
2. Write the Hello World program
3. Compile the program
4. Write the machines file
5. Start MPJ Express daemons
6. Execute the parallel program
7. Stop MPJ Express daemons
29 Step 1: Log on to the head node
30 Step 2: Write the Hello World program
31 Step 3: Compile the code
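The compile command itself is not reproduced in the transcription; as a sketch, the MPJ Express JAR must be on the classpath, where the $MPJ_HOME location is an assumption about the installation:

javac -cp .:$MPJ_HOME/lib/mpj.jar HelloWorld.java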
32 Step 4: Write the machines file
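The machines file is a plain-text file listing one host name per line; the node names below follow the cluster diagram earlier and are illustrative:

node1
node2
node3
node4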
33 Step 5: Start MPJ Express daemons
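A sketch of this step, assuming the MPJ Express bin directory is on the PATH: the mpjboot script reads the machines file and starts a daemon on each listed host.

mpjboot machines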
34 Step 6: Execute the parallel program
mpjrun.sh -np 6 -headnodeip <head node IP> -dport <daemon port> HelloWorld
Hi from process <3> of total <6>
Hi from process <1> of total <6>
Hi from process <2> of total <6>
Hi from process <4> of total <6>
Hi from process <5> of total <6>
Hi from process <0> of total <6>
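The HelloWorld source itself is not shown in the transcription; a minimal sketch that would produce output like the above, built only from the MPI calls introduced on the following slides:

import mpi.*;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();   // this process's id
        int size = MPI.COMM_WORLD.Size();   // total number of processes
        System.out.println("Hi from process <" + rank + "> of total <" + size + ">");
        MPI.Finalize();
    }
}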
35 Step 7: Stop the MPJ Express daemons
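Correspondingly, a sketch of stopping the daemons on all hosts listed in the machines file:

mpjhalt machines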
36 COMM_WORLD Communicator
import java.util.*;
import mpi.*;
...
// Initialize MPI
MPI.Init(args); // start up MPI
// Get total number of processes and rank
size = MPI.COMM_WORLD.Size();
rank = MPI.COMM_WORLD.Rank();
...
37 What is size?
import java.util.*;
import mpi.*;
...
// Get total number of processes
size = MPI.COMM_WORLD.Size();
The total number of processes in a communicator: here, the size of MPI.COMM_WORLD is 6.
38 What is rank?
import java.util.*;
import mpi.*;
...
// Get the rank of this process
rank = MPI.COMM_WORLD.Rank();
The unique identifier (id) of a process in a communicator: each of the six processes in MPI.COMM_WORLD has a distinct rank or id.
39 Single Program Multiple Data (SPMD) Model
import java.util.*;
import mpi.*;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        MPI.Init(args); // start up MPI
        int size = MPI.COMM_WORLD.Size();
        int rank = MPI.COMM_WORLD.Rank();
        if (rank == 0) {
            System.out.println("I am Process 0");
        } else if (rank == 1) {
            System.out.println("I am Process 1");
        }
        MPI.Finalize();
    }
}
40 Single Program Multiple Data (SPMD) Model
import java.util.*;
import mpi.*;

public class HelloWorld {
    public static void main(String[] args) throws Exception {
        MPI.Init(args); // start up MPI
        int size = MPI.COMM_WORLD.Size();
        int rank = MPI.COMM_WORLD.Rank();
        if (rank % 2 == 0) {
            System.out.println("I am an even process");
        } else {
            System.out.println("I am an odd process");
        }
        MPI.Finalize();
    }
}
41 Point to Point Communication
The most fundamental facility provided by MPI. Basically, it exchanges messages between two processes: one process (source) sends a message, and the other process (destination) receives it.
42 Point to Point Communication
It is possible to send a message for each basic datatype: floats (MPI.FLOAT), integers (MPI.INT), doubles (MPI.DOUBLE), and Java objects (MPI.OBJECT). Each message carries a tag, an identifier.
43 Point to Point Communication
(Diagram: within COMM_WORLD, a message consisting of integer data and a tag travels from one process to another)
44 Blocking Send() and Recv() Methods
public void Send(Object buf, int offset, int count, Datatype datatype, int dest, int tag) throws MPIException
public Status Recv(Object buf, int offset, int count, Datatype datatype, int src, int tag) throws MPIException
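A minimal sketch using the blocking methods above: rank 0 sends one integer to rank 1; the buffer and tag values are illustrative.

import mpi.*;

public class SendRecvExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] buf = new int[1];
        int tag = 10;
        if (rank == 0) {
            buf[0] = 42;                                      // payload
            MPI.COMM_WORLD.Send(buf, 0, 1, MPI.INT, 1, tag);  // dest = 1
        } else if (rank == 1) {
            MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, tag);  // src = 0
            System.out.println("received " + buf[0]);
        }
        MPI.Finalize();
    }
}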
45 mpjrun.sh -np 5 ToyExample
MPJ Express (0.38) is started in the multicore configuration
46 Blocking and Non-blocking Point-to-Point Comm
There are blocking and non-blocking versions of the send and receive methods.
- Blocking versions: a process calls Send() or Recv(); these methods return when the message has been physically sent or received.
- Non-blocking versions: a process calls Isend() or Irecv(); these methods return immediately, and the user can check the status of the message by calling Test() or Wait().
Non-blocking versions allow overlapping of computation and communication: asynchronous communication.
47 Non-blocking Point-to-Point Comm
Mode          Blocking           Non-blocking
Standard      Send() / Recv()    Isend() / Irecv()
Synchronous   Ssend()            Issend()
Ready         Rsend()            Irsend()
Buffered      Bsend()            Ibsend()
Non-blocking methods return a Request object:
Wait()  // waits until the communication completes
Test()  // tests if the communication has finished
48 Blocking vs. non-blocking
49 Non-blocking Point-to-Point Comm
public Request Isend(Object buf, int offset, int count, Datatype datatype, int dest, int tag) throws MPIException
public Request Irecv(Object buf, int offset, int count, Datatype datatype, int src, int tag) throws MPIException
public Status Wait() throws MPIException
public Status Test() throws MPIException
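A sketch of overlapping computation and communication with the non-blocking methods above; the "useful work" placeholders stand for application code.

import mpi.*;

public class NonBlockingExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] data = new int[1];
        if (rank == 0) {
            data[0] = 7;
            Request req = MPI.COMM_WORLD.Isend(data, 0, 1, MPI.INT, 1, 99);
            // ... useful work while the message is in flight ...
            req.Wait();  // after this, the send buffer may be reused
        } else if (rank == 1) {
            Request req = MPI.COMM_WORLD.Irecv(data, 0, 1, MPI.INT, 0, 99);
            // ... useful work that does not touch data yet ...
            req.Wait();  // block until the message has arrived
            System.out.println("got " + data[0]);
        }
        MPI.Finalize();
    }
}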
50 Performance Evaluation of Point to Point Communication
Normally ping-pong benchmarks are used to calculate:
- Latency: how long does it take to send N bytes from sender to receiver?
- Throughput: how much bandwidth is achieved?
Latency is a useful measure for studying the performance of small messages; throughput is a useful measure for studying the performance of large messages.
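A sketch of the ping-pong measurement described above; the message size and repetition count are illustrative choices.

import mpi.*;

public class PingPong {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int reps = 1000;
        byte[] buf = new byte[1024];  // N = 1024 bytes per message
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            if (rank == 0) {
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 1, 0);
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 1, 0);
            } else if (rank == 1) {
                MPI.COMM_WORLD.Recv(buf, 0, buf.length, MPI.BYTE, 0, 0);
                MPI.COMM_WORLD.Send(buf, 0, buf.length, MPI.BYTE, 0, 0);
            }
        }
        if (rank == 0) {
            // each iteration is one round trip, i.e. two one-way messages
            double oneWayUs = (System.nanoTime() - start) / 1e3 / (2.0 * reps);
            System.out.println("one-way latency: " + oneWayUs + " us");
        }
        MPI.Finalize();
    }
}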
51 Latency on MultiCore
52 Throughput on MultiCore
53 Latency on GigE
54 Throughput on GigE
55 Collective communications
Provided as a convenience for application developers: they save significant development time, efficient algorithms may be used, and they are stable (tested). They are built on top of point-to-point communications. These operations include Broadcast, Barrier, Reduce, Allreduce, Alltoall, Scatter, Scan, and Allgather, plus versions that allow displacements between the data.
56 Broadcast, scatter, gather, reduction
57 Broadcast, scatter, gather, allgather, alltoall
58 Scatter Toy-Example
59 Scatter Example: distribute file
60 Broadcast, scatter, gather, allgather, alltoall
public void Bcast(Object buf, int offset, int count, Datatype type, int root) throws MPIException
public void Scatter(Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, Object recvbuf, int recvoffset, int recvcount, Datatype recvtype, int root) throws MPIException
public void Gather(Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, Object recvbuf, int recvoffset, int recvcount, Datatype recvtype, int root) throws MPIException
public void Allgather(Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, Object recvbuf, int recvoffset, int recvcount, Datatype recvtype) throws MPIException
public void Alltoall(Object sendbuf, int sendoffset, int sendcount, Datatype sendtype, Object recvbuf, int recvoffset, int recvcount, Datatype recvtype) throws MPIException
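A sketch of Scatter with the signature above: the root splits an array evenly, one chunk per process; the chunk size is illustrative and assumes the array length is a multiple of the number of processes.

import mpi.*;

public class ScatterExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();
        int chunk = 2;  // elements each process receives
        int[] sendbuf = new int[chunk * size];
        if (rank == 0) {
            for (int i = 0; i < sendbuf.length; i++) sendbuf[i] = i;  // root fills the data
        }
        int[] recvbuf = new int[chunk];
        // every process, including the root, receives its own chunk of the root's array
        MPI.COMM_WORLD.Scatter(sendbuf, 0, chunk, MPI.INT, recvbuf, 0, chunk, MPI.INT, 0);
        System.out.println("rank " + rank + " got " + recvbuf[0] + " and " + recvbuf[1]);
        MPI.Finalize();
    }
}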
61 Reduce collective operations
(Diagram: reduce combines one data item from each process onto the root process; allreduce delivers the combined result to all processes)
Predefined reduction operations: MPI.PROD, MPI.SUM, MPI.MIN, MPI.MAX, MPI.LAND, MPI.BAND, MPI.LOR, MPI.BOR, MPI.LXOR, MPI.BXOR, MPI.MINLOC, MPI.MAXLOC
63 Reduce collective operations
public void Reduce(Object sendbuf, int sendoffset, Object recvbuf, int recvoffset, int count, Datatype datatype, Op op, int root) throws MPIException
public void Allreduce(Object sendbuf, int sendoffset, Object recvbuf, int recvoffset, int count, Datatype datatype, Op op) throws MPIException
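A sketch of Reduce with the signature above: summing one integer (each process's rank) onto root 0; the buffer names are illustrative.

import mpi.*;

public class ReduceExample {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.Rank();
        int[] sendbuf = { rank };
        int[] recvbuf = new int[1];
        // combine one int from every process with MPI.SUM; the result lands on rank 0
        MPI.COMM_WORLD.Reduce(sendbuf, 0, recvbuf, 0, 1, MPI.INT, MPI.SUM, 0);
        if (rank == 0) {
            System.out.println("sum of ranks = " + recvbuf[0]);  // 0 + 1 + ... + (size-1)
        }
        MPI.Finalize();
    }
}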
64 Collective Communication Performance
65 MPJ Design
Layered architecture (top to bottom):
- MPJ API
- MPJ collective communications (high level)
- MPJ point-to-point communications (base level)
- mpjdev (MPJ device level)
- xdev devices: niodev (Java NIO), smpdev (threads API), gmdev (JNI), plus native MPI via JNI
- Java Virtual Machine (JVM)
- Hardware (NIC, memory, etc.)
66 Summary
MPJ Express is a Java messaging system that can be used to write parallel applications. MPJ/Ibis and mpiJava are other similar software. MPJ Express provides point-to-point communication methods like Send() and Recv(), in blocking and non-blocking versions. Collective communication is also supported.