Message Passing Interface (MPI-1)


1 Parallel Programming: Message Passing Interface (MPI-1)
Geneviève Moguilny
IPGP - Institut de Physique du Globe de Paris
December 2005

2 Architecture and programming models
According to Flynn (1972), sequential and parallel architectures are classified by their control (instruction) and data streams (S: Single, M: Multiple, I: Instruction, D: Data):

                  Control (Instructions)
   Data              S          M
     S             SISD       MISD
     M             SIMD       MIMD

Among MIMD machines: Networks of Workstations (NOWs).
Parallel programming = writing one (or several) program(s) that run simultaneously to reach a common goal:
- each processor executes an instance of the program,
- data (message) exchange = call to a routine of a message-passing library such as PVM (Parallel Virtual Machine) or MPI (Message Passing Interface).
Programming models on NOWs with distributed memory (P = Program):
- MPMD for heterogeneous networks: PVM or MPI-2,
- SPMD for homogeneous networks: PVM (obsolete) or MPI-1/2.

3 SPMD application
A program written in a traditional language (Fortran, C, ...). Each process receives the same instance of the program but, through conditional instructions, runs the same or a different part of the code, on the same or on different data. The variables of the program are private and stay in the memory of the processor allocated to the process. Data is exchanged between processes through calls to the routines of a message-passing library such as MPI.
[Diagram: n CPUs, each with its private memory (Mem 1 ... Mem n), run the same program and exchange messages over the network. Ordinary network (messages, NFS, ...): bandwidth 90 Mb/s, latency 90 µs; high-throughput, low-latency network (Myrinet, SCI, Infiniband, ...): bandwidth 2 Gb/s, latency 5 µs.]

4 The MPI message passing library
History:
- 1992: need to create portable applications with good performance → creation of a working group (mainly from the US and Europe), following the approach taken for HPF (High Performance Fortran).
- 1994: version 1.0 of MPI.
- 1997: definition of the MPI-2 standard (dynamic task management, parallel I/O, ...), available today.
Main open source implementations: LAM/MPI (Indiana University), MPICH (Argonne National Laboratory); and, built on these, libraries for specific networks (e.g. MPI-GM: MPICH over Myrinet).

5 Concept of message passing
MPI application = set of independent processes that run their own code and communicate by calling MPI routines.
Communication = message exchange.
Message = identifier of the sender process + data type (simple or derived) + length + identifier of the receiver process + data.
[Diagram: point to point communication: CPU 1 (sender) sends a message from its memory (Mem 1) to CPU 2 (receiver, Mem 2).]

6 Main categories of the MPI API
➊ Environment management: initialization of / exit from the environment (MPI_INIT, MPI_FINALIZE), process identification (MPI_COMM_SIZE, MPI_COMM_RANK).
➋ Communications (message send and receive): point to point, collective.
➌ Groups and communicators management. Communicator = set of processes in which MPI operations are done: processes + communication context (managed by MPI). A communicator can be created from a process group built first by the programmer, or from another communicator (MPI_COMM_WORLD = all the processes); see the sketch below.
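
As an illustration of ➌ (a minimal sketch, not part of the original slides), the example below splits MPI_COMM_WORLD into two new communicators according to the parity of the rank; the program and variable names are invented for the illustration.

program split_example
  implicit none
  include 'mpif.h'
  integer :: world_rank, color, new_comm, new_rank, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, world_rank, ierr)

  ! Processes that pass the same "color" end up in the same new communicator.
  color = mod(world_rank, 2)
  call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, world_rank, new_comm, ierr)

  ! Ranks are renumbered from 0 inside each new communicator.
  call MPI_COMM_RANK(new_comm, new_rank, ierr)
  print *, 'World rank', world_rank, 'is rank', new_rank, 'in communicator', color

  call MPI_COMM_FREE(new_comm, ierr)
  call MPI_FINALIZE(ierr)
end program split_example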

7 Very small example (1/2)
Source: HelloMPI.f90

program HelloMPI

  implicit none
  include 'mpif.h'
  integer :: nb_procs, rang, ierr

  call MPI_INIT(ierr)

  call MPI_COMM_SIZE(MPI_COMM_WORLD, nb_procs, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rang, ierr)
  print *, 'I am the process number', rang, 'among', nb_procs

  call MPI_FINALIZE(ierr)

end program HelloMPI

8 Very small example (2/2)
Compilation and link (source HelloMPI.f90 shown on the previous slide):
  mpif90 HelloMPI.f90 -o HelloMPI
The mpif90 wrapper essentially runs:
  ifort -c -I<includes_path> HelloMPI.f90
  ifort -L<libs_path> HelloMPI.o -static-libcxa -o HelloMPI -lmpichf90 -lmpich
Execution:
  mpirun -np 4 -machinefile mf HelloMPI
Runs HelloMPI (with ssh or rsh) on 4 processors (np = number of processes) of the hosts listed in the file named mf.
Result:
  I am the process number 0 among 4
  I am the process number 2 among 4
  I am the process number 3 among 4
  I am the process number 1 among 4

9 Point to point communications
Communications between 2 processes identified by their rank in the communicator.
Transfer modes:
- Standard: MPI does (or does not) a temporary copy, depending on the implementation.
- Synchronous: the send finishes when the receive has completed.
- Buffered: temporary copy in a buffer created by the programmer; the send finishes when the copy has completed.
- Ready: the send is done only when the reception has already started (client/server).
Sends can be blocking (coupled send and receive) or not; receives can be blocking or immediate.
Two kinds of exchange:
➊ distinct sender and receiver (MPI_SEND, MPI_SSEND, MPI_IBSEND, ...): CPU 1 calls MPI_[mode]SEND, CPU 2 calls MPI_[mode]RECV;
➋ each process sends and receives at the same time (MPI_SENDRECV, MPI_SENDRECV_REPLACE): both CPUs call MPI_SENDRECV.
A sketch of a non-blocking exchange follows below.
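
A minimal sketch (not from the original slides) of a non-blocking exchange between ranks 0 and 1, assuming the program is run on at least two processes; the tag value and variable names are arbitrary.

program nonblocking_sketch
  implicit none
  include 'mpif.h'
  integer, dimension(MPI_STATUS_SIZE) :: status
  integer :: rank, request, value, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) then
     value = 42
     ! MPI_ISEND returns immediately; the buffer must not be modified before MPI_WAIT.
     call MPI_ISEND(value, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, request, ierr)
     ! ... computation can overlap the communication here ...
     call MPI_WAIT(request, status, ierr)
  else if (rank == 1) then
     call MPI_IRECV(value, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, request, ierr)
     call MPI_WAIT(request, status, ierr)
     print *, 'Process 1 received', value
  end if

  call MPI_FINALIZE(ierr)
end program nonblocking_sketch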

10 Simple example of point to point communication

!! point_a_point.f90 : Example of use of MPI_SEND and MPI_RECV
!! Author : Denis GIROU (CNRS/IDRIS - France) <Denis.Girou@idris.fr> (1996)
program point_to_point
  implicit none
  include 'mpif.h'
  integer, dimension(MPI_STATUS_SIZE) :: statut
  integer, parameter :: etiquette=100
  integer :: rank, value, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) then
     value = 1234
     call MPI_SEND(value, 1, MPI_INTEGER, 2, etiquette, MPI_COMM_WORLD, ierr)
  elseif (rank == 2) then
     call MPI_RECV(value, 1, MPI_INTEGER, 0, etiquette, MPI_COMM_WORLD, statut, ierr)
     print *, 'Me, process', rank, ', I received', value, 'from the process 0.'
  end if

  call MPI_FINALIZE(ierr)
end program point_to_point

[Diagram: among CPU 0 to CPU 3, process 0 sends the message 1234 to process 2.]
  mpif90 point_to_point.f90 -o point_to_point
  mpirun -np 4 -machinefile mf point_to_point
  Me, process 2, I received 1234 from the process 0.

11 Collective communications
Set of point to point communications inside a group, done by one single operation.
- Synchronizations (BARRIER).
- Data movements:
  - general broadcasts (BCAST) or selective ones (SCATTER):
    MPI_SCATTER(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
  - collects on one process (GATHER) or on all processes (ALLGATHER),
  - cross exchanges (ALLTOALL).
- Reductions (REDUCE with SUM, PROD, MAX, MIN, MAXLOC, MINLOC, IAND, IOR, IXOR):
  [Diagram: val = 1, 2, 3, 4 on CPUs 0 to 3; after the reduction, sum = 10 on the root.]
  MPI_REDUCE(val, sum, count, datatype, MPI_SUM, root, comm)
A sketch combining a broadcast and a reduction follows below.
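
A small sketch (not in the original slides) combining MPI_BCAST and MPI_REDUCE; with 4 processes, each contributing rank+1, the sum collected on the root is 10, as in the figure above. The program and variable names are invented.

program reduce_sketch
  implicit none
  include 'mpif.h'
  integer :: rank, nb_procs, n, val, total, ierr

  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nb_procs, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Process 0 chooses a value and broadcasts it to all the processes.
  if (rank == 0) n = 100
  call MPI_BCAST(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)

  ! Every process contributes rank+1; the sum is collected on process 0.
  val = rank + 1
  call MPI_REDUCE(val, total, 1, MPI_INTEGER, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'broadcast value =', n, ', sum of', nb_procs, 'contributions =', total

  call MPI_FINALIZE(ierr)
end program reduce_sketch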

12 Communicators creation: topologies
For applications that use a domain decomposition, it is convenient to define a virtual grid of the processes to facilitate their manipulation:
- number of sub-domains = number of processes,
- work on the sub-domains, then finalization with computations at the frontiers,
- creation of new communicators based on these topologies.
These topologies can be:
- cartesian: periodic (or not) grid, identification of the processes by their coordinates (MPI_CART_CREATE);
- graph type, for more complex topologies (MPI_GRAPH_CREATE).
[Diagram: real CPUs mapped onto a 3x3 cartesian grid with coordinates (0,0) to (2,2); in a graph topology, for instance Neighbors(3) = 1, 2, 4.]
A sketch of a cartesian topology follows below.
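
A sketch (not part of the original slides) that creates a periodic 2 x 2 cartesian topology and retrieves each process's coordinates; it assumes the job runs on at least 4 processes (extra processes get MPI_COMM_NULL and skip the topology code). The grid dimensions and names are chosen for the illustration.

program cart_sketch
  implicit none
  include 'mpif.h'
  integer :: rank, comm_cart, ierr
  integer, dimension(2) :: dims, coords
  logical, dimension(2) :: periods
  logical :: reorder

  call MPI_INIT(ierr)

  dims    = (/ 2, 2 /)            ! 2 x 2 virtual grid of processes
  periods = (/ .true., .true. /)  ! periodic in both directions
  reorder = .true.                ! let MPI renumber the ranks if useful

  call MPI_CART_CREATE(MPI_COMM_WORLD, 2, dims, periods, reorder, comm_cart, ierr)

  if (comm_cart /= MPI_COMM_NULL) then
     call MPI_COMM_RANK(comm_cart, rank, ierr)
     call MPI_CART_COORDS(comm_cart, rank, 2, coords, ierr)
     print *, 'Process', rank, 'has coordinates', coords
     call MPI_COMM_FREE(comm_cart, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program cart_sketch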

13 MPI application development
Why MPI? It allows one to change the order of magnitude of the problem to solve, thanks to a virtually unlimited increase of the available memory, with standard hardware and software.
Optimization:
- choice of the communication mode: avoid unnecessary copies,
- contention of the messages,
- load balancing, taking care of bottlenecks (I/O),
- overlap of computation and communication,
- use of persistent requests (see the sketch below).
Development assistance:
- graphic debuggers with MPI support: TotalView, DDT,
- visualization of communications: Trace Collector/Analyzer (was Vampir/VampirTrace), Blade.
Well parallelized MPI application: execution time inversely proportional to the number of processors used.
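
A sketch of a persistent request (not part of the original slides): the send and receive are set up once, then restarted at every iteration instead of being re-created, which saves the request set-up cost inside the loop. Ranks, tag, buffer size and the loop body are invented for the illustration.

program persistent_sketch
  implicit none
  include 'mpif.h'
  integer, dimension(MPI_STATUS_SIZE) :: status
  integer :: rank, request, step, ierr
  real, dimension(1000) :: buffer

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Build the communication once, outside the loop.
  if (rank == 0) then
     call MPI_SEND_INIT(buffer, 1000, MPI_REAL, 1, 7, MPI_COMM_WORLD, request, ierr)
  else if (rank == 1) then
     call MPI_RECV_INIT(buffer, 1000, MPI_REAL, 0, 7, MPI_COMM_WORLD, request, ierr)
  end if

  if (rank <= 1) then
     do step = 1, 100
        if (rank == 0) buffer = real(step)   ! new data to send at this iteration
        call MPI_START(request, ierr)        ! restart the same request
        call MPI_WAIT(request, status, ierr) ! complete it before reusing the buffer
     end do
     call MPI_REQUEST_FREE(request, ierr)
  end if

  call MPI_FINALIZE(ierr)
end program persistent_sketch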

14 MPI documentation
- Parallel programming: intro/main.html
- Parallel Programming with MPI (including the User's Guide)
- Final draft of the MPI standard as of 3/31/94: ftp://ftp.irisa.fr/pub/netlib/mpi/drafts/draft-final.ps
- MPI Subroutine Reference
- Open source implementations of MPI: LAM, MPICH
- French version of these slides, with French-language links: moguilny/grille/mpi

15 MPI application running on EGEE: SPECFEM3D
Resolution of regional scale seismic wave propagation problems, to model wave propagation at high frequencies and in complex geological structures, using the spectral-element method (SEM).
Application first written by D. Komatitsch (Université de Pau), used in more than 75 laboratories in the world, especially for the study of earthquakes, seismic tomography, ultrasonic propagation in crystals, topography, sedimentary basins and valleys, interface waves, and crystal anisotropy.
More information: dkomati1
EGEE positioning: NA4, VO ESR, domain: Solid Earth Physics.

16 SPECFEM3D: technical characteristics
Written in Fortran 90 / MPI. Very scalable application; ran on 1944 CPUs at the Earth Simulator (Japan).
On EGEE, ran on 64 CPUs at Nikhef (NL), and on 4 or 16 CPUs at SCAI (DE), LAL (FR), CPPM (FR), CGG (FR), SARA (NL), IISAS-Bratislava (SK), HG-01-GRNET (GR), TU-Kosice (SK) and AEGIS01-PHY-SCL (YU).
Constraints:
- Memory optimization necessary: recompilation and update of the input files on the SE at each change of the input parameters.
- Lots of temporary outputs (writes on the /tmp of each node), plus outputs on a shared directory to retrieve (/home, NFS mounted).
- Need to launch successively two mpirun that have to use the same nodes, allocated in the same order.

17 SPECFEM3D on EGEE: Fem4.jdl

# Fem4.jdl
NodeNumber = 4;
VirtualOrganisation = "esr";
Executable = "Fem";
Arguments = "xmeshfem3d4 xspecfem3d4 4";
StdOutput = "Fem.out";
StdError = "Fem.err";
JobType = "mpich";
# LRMS_type = "pbs";
Type = "Job";
InputSandbox = { "Fem", "xmeshfem3d4", "xspecfem3d4" };
OutputSandbox = { "output_mesher.txt", "output_solver.txt", "xmeshfem3d4.out",
                  "xspecfem3d4.out", "Fem.out", "Fem.err", "Par_file" };
Requirements = (other.GlueCEInfoTotalCPUs >= 4)
               && RegExp("-pbs-", other.GlueCEUniqueID);

18 SPECFEM3D on EGEE: Fem script (1/2)

#!/bin/sh
#
EXE1=$1
EXE2=$2
CPU_NEEDED=$3
TMP_DIR=/tmp/DATABASES_MPI_DIMITRI4
institut=`echo $HOSTNAME | cut -f2-5 -d.`
MPIHOME=/usr/bin
if [ -f "$PWD/.BrokerInfo" ] ; then
  TEST_LSF=`edg-brokerinfo getCE | cut -d/ -f2 | grep lsf`
else
  TEST_LSF=`ps -ef | grep sbatchd | grep -v grep`
fi
if [ "x$TEST_LSF" = "x" ] ; then
  HOST_NODEFILE=$PBS_NODEFILE
else
  echo "LSF Hosts: $LSB_HOSTS"
  HOST_NODEFILE=`pwd`/lsf_nodefile.$$
fi
# Creation of the temporary directories
for host in `cat ${HOST_NODEFILE}`
do
  ssh $host rm -rf $TMP_DIR
  ssh $host mkdir $TMP_DIR
  ssh $host chmod 775 $TMP_DIR
done
# Input data retrieval from the SE
export LFC_HOST="mu35.matrix.sara.nl"
export LCG_CATALOG_TYPE="lfc"
LocalFn="DATA.tgz"
SeFn="lfn:/grid/esr/MoguilnyMPI/DATA${CPU_NEEDED}.tgz"
cmd="edg-rm --vo esr copyFile $SeFn file://`pwd`/$LocalFn"
echo ">>> $cmd"
$cmd

if [ ! -f $LocalFn ] ; then
  echo "edg-rm failed, trying lcg-cp..."
  export LCG_GFAL_INFOSYS="mu3.matrix.sara.nl:2170"
  cmd="lcg-cp --vo esr lfn:DATA${CPU_NEEDED}.tgz file://`pwd`/$LocalFn"
  echo ">>> $cmd"
  $cmd
fi

19 SPECFEM3D on EGEE: Fem script (2/2)

if [ ! -f $LocalFn ] ; then
  echo "$LocalFn not found."
  exit
fi
#
echo "*************************************"
tar xfz $LocalFn
rm $LocalFn
rm -rf OUTPUT_FILES
mkdir OUTPUT_FILES
cp DATA/Par_file .

chmod 755 $EXE1
chmod 755 $EXE2
ls -l
# Launch of the two mpirun
time $MPIHOME/mpirun -np $CPU_NEEDED -machinefile $HOST_NODEFILE \
     `pwd`/$EXE1 > $EXE1.out
cp OUTPUT_FILES/output_mesher.txt .
time $MPIHOME/mpirun -np $CPU_NEEDED -machinefile $HOST_NODEFILE \
     `pwd`/$EXE2 > $EXE2.out
echo "Size of OUTPUT_FILES :"
du -sk OUTPUT_FILES
for host in `cat ${HOST_NODEFILE}`
do
  ssh $host echo "Size of $TMP_DIR on $host :"
  ssh $host du -sk $TMP_DIR
  ssh $host rm -rf $TMP_DIR
done
# Output data storage on the SE
cp OUTPUT_FILES/output_solver.txt .
tar cfz OUTPUT_FILES${CPU_NEEDED}.tgz OUTPUT_FILES
lcg-del --vo esr -s grid11.lal.in2p3.fr `lcg-lg --vo esr \
    lfn:/grid/esr/MoguilnyMPI/OUTPUT_FILES${CPU_NEEDED}.tgz`
lcg-cr --vo esr -d grid11.lal.in2p3.fr \
    file://`pwd`/OUTPUT_FILES${CPU_NEEDED}.tgz \
    -l /grid/esr/MoguilnyMPI/OUTPUT_FILES${CPU_NEEDED}.tgz
exit

20 SPECFEM3D on EGEE: run
[Diagram: the job is submitted from the UI, goes through the RB and reaches the CE of the site; on each WN (2 CPUs), the two executables exec1 and exec2 are launched by the successive mpirun and write temporary files to the local /tmp; MPI messages are exchanged between the WNs, and input/output goes through the SE and the /home directory NFS-mounted from the site's disk server.]

21 Perspectives
Some successful tests done with gLite 1.1.
Short term wishes:
- node vs CPU handling (pbs),
- incompatibility between lcgpbs / lcglsf and MPI → another LRMS like SGE or OAR?
- specific MPI queues with CPUs of equivalent power.
In the future...
- CPU reservation,
- inter-site MPI (MPICH-G2, MPICH-V),
- high-performance networks (Myrinet, Infiniband, ...),
- MPI on RCs that have hundreds of nodes.
But globally, MPI / Fortran 90 on EGEE works!!
