Improving the interoperability between MPI and OmpSs-2
1 Improving the interoperability between MPI and OmpSs-2

Vicenç Beltran Querol
19/04/2018
INTERTWinE Exascale Application Workshop, Edinburgh

2 Outline

- Why hybrid MPI+OmpSs-2 programming?
- Gauss-Seidel method: Pure MPI
- Gauss-Seidel method: MPI + OmpSs (fork-join)
- Gauss-Seidel method: MPI + OmpSs (tasks + sentinel)
- OmpSs-2 pause/resume API
- Task-Aware MPI (TAMPI) library
- Gauss-Seidel method: MPI + OmpSs (tasks + TAMPI)
- Evaluation
- Conclusions

3 Why hybrid MPI+OmpSs-2 programming?

Try to leverage the best of both programming models:

- Message Passing Interface (MPI)
  - Designed to exploit distributed-memory systems
  - Efficient and scalable message passing interface
- OmpSs-2 tasking model
  - Designed to exploit shared-memory systems
  - Write sequential code, but execute it in parallel
  - Fine-grained synchronizations
  - Automatic load balancing

...but also to exploit some potential synergies:

- Fine-grained synchronization across nodes
- Overlap of computation and communication phases
- Leverage intra-node application parallelism to hide network latency and maximize network throughput

However, interoperability issues between MPI and OmpSs-2 prevent application developers from achieving most of these goals.

4 Gauss-Seidel method: Sequential

- In-place iterative algorithm
- Ex: 3 x 3 tile domain
- Each tile depends on the top and left tiles from the current iteration (i) and on the right and bottom tiles from the previous iteration (i-1)

[Figure: dependency pattern of the tasks that compute each block on the i-th iteration, shown across ranks 0-3]

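Read as code, the sweep order itself encodes these dependencies. A minimal sequential sketch (not on the slide), assuming the block_t layout and the solveblock() routine defined on the later slides:

    /* Sequential tiled sweep: visiting blocks in (bx, by) order means the
     * top and left neighbors already hold iteration-i values, while the
     * right and bottom neighbors still hold iteration i-1 values. */
    void sweep(block_t *matrix, int nbx, int nby)
    {
        for (int bx = 1; bx < nbx - 1; ++bx)
            for (int by = 1; by < nby - 1; ++by)
                solveblock(matrix, nbx, nby, bx, by);  /* in-place update */
    }
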
5 Gauss-Seidel method: Pure MPI

- Ex: 12 x 3 blocks domain, decomposed across 4 MPI ranks
- After each iteration, neighboring MPI ranks have to exchange halos

[Figure: data dependencies and MPI communications between ranks 0-3]

6 Gauss-Seidel method: Pure MPI

    void solve(block_t *matrix, int rowblocks, int colblocks, int timesteps)
    {
        int rank, rank_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &rank_size);

        for (int t = 0; t < timesteps; ++t)
            solvegaussseidel(matrix, rowblocks, colblocks, rank, rank_size);

        MPI_Barrier(MPI_COMM_WORLD);
    }

7 Gauss-Seidel method: Pure MPI

    void solvegaussseidel(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        if (rank != 0) {
            sendfirstcomputerow(matrix, nbx, nby, rank, rank_size);
            receiveupperborder(matrix, nbx, nby, rank, rank_size);
        }
        if (rank != rank_size - 1)
            receivelowerborder(matrix, nbx, nby, rank, rank_size);

        for (int bx = 1; bx < nbx-1; ++bx)
            for (int by = 1; by < nby-1; ++by)
                solveblock(matrix, nbx, nby, bx, by);

        if (rank != rank_size - 1)
            sendlastcomputerow(matrix, nbx, nby, rank, rank_size);
    }

    void sendlastcomputerow(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by)
            MPI_Send(&matrix[(nbx-2)*nby + by][BSX-1], BSY, MPI_DOUBLE,
                     rank + 1, by, MPI_COMM_WORLD);
    }

    void receiveupperborder(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by)
            MPI_Recv(&matrix[by][BSX-1], BSY, MPI_DOUBLE,
                     rank - 1, by, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

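The slide omits the symmetric pair of helpers. A plausible sketch, mirroring sendlastcomputerow/receiveupperborder above (these bodies are an assumption, not shown in the source):

    /* Send the first computed row (block row 1, local row 0) up to rank-1,
     * and receive the lower halo into block row nbx-1, local row 0 --
     * consistent with bottomblock[0] being read in solveblock(). */
    void sendfirstcomputerow(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by)
            MPI_Send(&matrix[nby + by][0], BSY, MPI_DOUBLE,
                     rank - 1, by, MPI_COMM_WORLD);
    }

    void receivelowerborder(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by)
            MPI_Recv(&matrix[(nbx-1)*nby + by][0], BSY, MPI_DOUBLE,
                     rank + 1, by, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
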
8 Gauss-Seidel method: Pure MPI

    void solveblock(block_t *matrix, int nbx, int nby, int bx, int by)
    {
        block_t &targetblock       = matrix[bx*nby + by];
        const block_t &centerblock = matrix[bx*nby + by];
        const block_t &topblock    = matrix[(bx-1)*nby + by];
        const block_t &leftblock   = matrix[bx*nby + (by-1)];
        const block_t &rightblock  = matrix[bx*nby + (by+1)];
        const block_t &bottomblock = matrix[(bx+1)*nby + by];

        for (int x = 0; x < BSX; ++x) {
            const row_t &toprow    = (x > 0)     ? centerblock[x-1] : topblock[BSX-1];
            const row_t &bottomrow = (x < BSX-1) ? centerblock[x+1] : bottomblock[0];
            for (int y = 0; y < BSY; ++y) {
                double left  = (y > 0)     ? centerblock[x][y-1] : leftblock[x][BSY-1];
                double right = (y < BSY-1) ? centerblock[x][y+1] : rightblock[x][0];
                targetblock[x][y] = 0.25 * (toprow[y] + bottomrow[y] + left + right);
            }
        }
    }

9 Gauss-Seidel method: Fork-Join

- OmpSs-2 is used to execute the computational phase of the program in parallel
- MPI is used only in the sequential phase of the program, for communications
- No overlapping of communication and computation phases
- 1 rank per node!

    void solvegaussseidel(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        if (rank != 0) {
            sendfirstcomputerow(matrix, nbx, nby, rank, rank_size);
            receiveupperborder(matrix, nbx, nby, rank, rank_size);
        }
        if (rank != rank_size - 1)
            receivelowerborder(matrix, nbx, nby, rank, rank_size);

        for (int bx = 1; bx < nbx-1; ++bx) {
            for (int by = 1; by < nby-1; ++by) {
                #pragma oss task \
                    in(([nbx][nby]matrix)[bx-1][by]) \
                    in(([nbx][nby]matrix)[bx][by-1]) \
                    in(([nbx][nby]matrix)[bx][by+1]) \
                    in(([nbx][nby]matrix)[bx+1][by]) \
                    inout(([nbx][nby]matrix)[bx][by])
                solveblock(matrix, nbx, nby, bx, by);
            }
        }
        #pragma oss taskwait

        if (rank != rank_size - 1)
            sendlastcomputerow(matrix, nbx, nby, rank, rank_size);
    }

10 Gauss-Seidel method: Tasks + sentinel

- Tasks used for both computations and communications
- Tags used to match send and receive operations, but communication tasks have to be serialized to avoid deadlocks
- Partial overlapping of communication and computation phases
- 1 rank per node!

    void solvegaussseidel(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        if (rank != 0) {
            sendfirstcomputerow(matrix, nbx, nby, rank, rank_size);
            receiveupperborder(matrix, nbx, nby, rank, rank_size);
        }
        if (rank != rank_size - 1)
            receivelowerborder(matrix, nbx, nby, rank, rank_size);

        for (int bx = 1; bx < nbx-1; ++bx) {
            for (int by = 1; by < nby-1; ++by) {
                #pragma oss task \
                    in(([nbx][nby]matrix)[bx-1][by]) \
                    in(([nbx][nby]matrix)[bx][by-1]) \
                    in(([nbx][nby]matrix)[bx][by+1]) \
                    in(([nbx][nby]matrix)[bx+1][by]) \
                    inout(([nbx][nby]matrix)[bx][by])
                solveblock(matrix, nbx, nby, bx, by);
            }
        }

        if (rank != rank_size - 1)
            sendlastcomputerow(matrix, nbx, nby, rank, rank_size);
    }

11 Gauss-Seidel method: Tasks + sentinel

- Tasks used for both computations and communications
- Tags used to match send and receive operations, but communication tasks have to be serialized to avoid deadlocks
- Partial overlapping of communication and computation phases

    void sendlastcomputerow(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by) {
            #pragma oss task in(([nbx][nby]matrix)[nbx-2][by]) inout(*serial)
            MPI_Send(&matrix[(nbx-2)*nby + by][BSX-1], BSY, MPI_DOUBLE,
                     rank + 1, by, MPI_COMM_WORLD);
        }
    }

    void receiveupperborder(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by) {
            #pragma oss task out(([nbx][nby]matrix)[0][by]) inout(*serial)
            MPI_Recv(&matrix[by][BSX-1], BSY, MPI_DOUBLE,
                     rank - 1, by, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

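Note that the slides never show where `serial` is declared. A plausible sketch (an assumption) is a file-scope dummy variable: because every communication task names the same location in its inout() clause, the runtime chains those tasks one after another, in creation order.

    static int serial_storage;            /* contents never read or written */
    static int *serial = &serial_storage; /* inout(*serial): one comm task at a time */
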
12 Gauss-Seidel method: Tasks + sentinel

Why do all communication tasks need to be serialized? Because tasks can be scheduled out of order, so MPI operations can also be executed out of order. All CPUs can stall executing MPI receive operations that depend on the eventual completion of MPI send operations of this rank, but those sends will never be executed!

[Figure: task life-cycle diagram (READY, RUNNING, BLOCKED, FINISHED); every CPU runs a task blocked in MPI_Recv() while the matching MPI_Send() tasks sit in the ready queue]

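A minimal illustration of the hazard (not from the slides; the buffers rbuf/sbuf and the neighbor rank `peer` are hypothetical). If both ranks run this code and every worker thread happens to pick a receive task first, all CPUs block inside MPI_Recv() and the send tasks that would satisfy the peer's receives never get a CPU, so both ranks deadlock:

    void exchange(double *sbuf, double *rbuf, int ntasks, int peer)
    {
        for (int i = 0; i < ntasks; ++i) {
            #pragma oss task   /* no dependency orders this task */
            MPI_Recv(&rbuf[i], 1, MPI_DOUBLE, peer, i,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            #pragma oss task   /* may never run if all CPUs are stuck above */
            MPI_Send(&sbuf[i], 1, MPI_DOUBLE, peer, i, MPI_COMM_WORLD);
        }
    }
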
13 OmpSs-2: Pause/resume API

Low-level API to programmatically pause and resume the execution of a task:

    void *nanos_get_current_blocking_context();    // Get task id
    void  nanos_block_current_task(void *context); // Block task
    void  nanos_unblock_task(void *context);       // Unblock task

[Figure: task life-cycle diagram extended with a PAUSED state (READY, RUNNING, PAUSED, BLOCKED, FINISHED); paused and blocked tasks return to the ready queue on resume/unblock]

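A hedged usage sketch of this API (not on the slide): a task parks itself until some external agent calls nanos_unblock_task() with its context; meanwhile the worker thread is free to run other ready tasks. The helper event_register_waiter() is hypothetical, standing in for whatever mechanism later completes the event.

    extern void event_register_waiter(void *ctx);  /* hypothetical helper */

    #pragma oss task
    {
        void *ctx = nanos_get_current_blocking_context();
        event_register_waiter(ctx);    /* hand the context to the event's completer */
        nanos_block_current_task(ctx); /* worker thread moves on to other tasks */
        /* execution resumes here after nanos_unblock_task(ctx) */
    }
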
14 Task-Aware MPI (TAMPI) library

- Leverages the low-level pause/resume API to improve the interoperability between MPI and OmpSs-2 tasks
- Exposes this new feature as a new threading support level in MPI: MPI_TASK_MULTIPLE
- When TAMPI is initialized with the MPI_TASK_MULTIPLE threading level, all blocking operations are intercepted and converted to their nonblocking counterparts

Ex: MPI_Recv() executed inside a task:

    int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
                 int tag, MPI_Comm comm, MPI_Status *status)
    {
        int err, completed = 0;
        if (Interop::isEnabled()) {
            MPI_Request request;
            err = MPI_Irecv(buf, count, datatype, source, tag, comm, &request);
            MPI_Test(&request, &completed, status);
            if (!completed) {
                Ticket ticket(&request, status);
                ticket._waiter = get_current_blocking_context();
                _pendingtickets.add(ticket);
                block_current_task(ticket._waiter);
            }
            return err;
        }
        return PMPI_Recv(buf, count, datatype, source, tag, comm, status);
    }

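Since MPI_TASK_MULTIPLE is presented as a threading level, an application would request it the same way it requests MPI_THREAD_MULTIPLE. A minimal sketch (assuming the program is linked against TAMPI, which defines the MPI_TASK_MULTIPLE constant; it is not part of the MPI standard):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_TASK_MULTIPLE, &provided);
        if (provided != MPI_TASK_MULTIPLE) {
            fprintf(stderr, "TAMPI task-aware support not available\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        /* ... taskified solver ... */
        MPI_Finalize();
        return 0;
    }
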
15 Task-Aware MPI (TAMPI) library

- The TAMPI library registers a polling service with the OmpSs-2 runtime to check the completion of in-flight MPI operations
- Once an MPI operation completes, the task waiting for it is put back into the ready queue
- The polling service is executed periodically by the runtime worker threads

    void Interop::poll()
    {
        for (Ticket &ticket : _pendingtickets) {
            int completed = 0;
            MPI_Test(ticket._request, &completed, ticket._status);
            if (completed) {
                _pendingtickets.remove(ticket);
                unblock_task(ticket._waiter);
            }
        }
    }

[Figure: paused tasks waiting on MPI requests; the TAMPI polling service, running on a CPU, tests the requests and moves completed tasks back to the ready queue]

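A hedged sketch of how the service might be hooked into the runtime; the registration call's name and signature are an assumption (not shown in the slides), modeled on a Nanos6-style API where returning 0 keeps the service registered:

    // Wrapper with C linkage so the runtime can invoke it periodically.
    extern "C" int tampi_poll_service(void *args)
    {
        Interop::poll();   // test all pending tickets, unblock completed waiters
        return 0;          // keep the service registered
    }

    // At TAMPI initialization (assumed registration call):
    nanos_register_polling_service("TAMPI polling", tampi_poll_service, nullptr);
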
16 Gauss-Seidel method: Tasks + TAMPI

- Tasks used for both computations and communications
- Tags used to match send and receive operations; communication tasks no longer need to be serialized
- Full overlapping of communication and computation phases

    void sendlastcomputerow(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by) {
            #pragma oss task in(([nbx][nby]matrix)[nbx-2][by])
            MPI_Send(&matrix[(nbx-2)*nby + by][BSX-1], BSY, MPI_DOUBLE,
                     rank + 1, by, MPI_COMM_WORLD);
        }
    }

    void receiveupperborder(block_t *matrix, int nbx, int nby, int rank, int rank_size)
    {
        for (int by = 1; by < nby-1; ++by) {
            #pragma oss task out(([nbx][nby]matrix)[0][by])
            MPI_Recv(&matrix[by][BSX-1], BSY, MPI_DOUBLE,
                     rank - 1, by, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    }

[Figure: execution timeline for ranks 0-3; legend: data dependency, MPI communication, MPI serialization]

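A hedged sketch of the resulting driver (not on the slide, an assumption based on the bullets above): with TAMPI the pure-MPI driver's barrier can go away, since task dependencies plus TAMPI order everything and a single taskwait drains the work. Note that if several timesteps are in flight at once, the message tags would need to encode the timestep (e.g. t*nby + by rather than just by) so receives match the right iteration's sends.

    void solve(block_t *matrix, int rowblocks, int colblocks, int timesteps)
    {
        int rank, rank_size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &rank_size);

        for (int t = 0; t < timesteps; ++t)
            solvegaussseidel(matrix, rowblocks, colblocks, rank, rank_size);

        #pragma oss taskwait
    }
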
17 Tiled Gauss-Seidel: Results

- BS = 1024 & 1000 iterations
- Pure MPI: 48 ranks/node
- Hybrids: 1 rank/node & 48 cores/rank
- Baseline: Pure MPI
- Up to 8x speedup w.r.t. pure MPI

18 Tiled Gauss-Seidel: Traces

- 4 nodes, 100 iterations, 32K x 32K matrix
- Same time duration for all traces

[Figure: execution traces for Pure MPI, MPI + Fork-Join, MPI + Tasks (sentinel), and MPI + Tasks + Interop]

19 Conclusions

TAMPI library benefits:

- Provides inter-node fine-grained synchronization across tasks
- Automatic overlap of computation and communication phases
- Load balancing
- Exposes more parallelism (removes barriers and artificial dependencies)
- Does not increase application complexity

[Figure: comparison of the Fork-join, Task+sentinel, and Task+TAMPI approaches]
More information