PARA++ : C++ Bindings for Message Passing Libraries


O. Coulaud, E. Dillon
{Olivier.Coulaud, Eric.Dillon}@loria.fr
INRIA-Lorraine, BP 101, Villers-lès-Nancy, France

Abstract

The aim of Para++ is to provide a user-level C++ interface to message passing libraries, by encapsulating the notions of processes and inter-process communications into specific C++ objects and streams. This abstraction level allows Para++ to be implemented on top of any message passing library. Para++'s main idea is to add new C++ I/O streams that carry inter-task communications. These streams support all generic scalar datatypes (int, float, double, ...) plus some mathematical datatypes (Vector, Matrix, ...). Para++ has been implemented on top of PVM [1] and MPI [2].

1 Introduction

In this paper, we present a new abstraction level for message passing parallelism. The model of communication between tasks remains unchanged, but it is simplified by a new C++ interface. This idea was first proposed in [3], but our approach differs on several points. All message passing libraries are based on the same sequence of primitive calls:

- an init function, to call before anything else;
- some point-to-point or collective communication functions, to let processes exchange data;
- an end function, to call before stopping the process.

Para++ does not change this model at all; it only provides a new way of exchanging data, using C++ streams instead of communication functions. Indeed, in Para++, inter-task communications are just another kind of I/O: we provide two new streams called pin and pout (for "parallel in" and "parallel out"), which work just like cin and cout, by overloading the standard insertion and extraction operators << and >> for C++ input/output (see [4]).

Consequently, this approach simplifies message passing and task management by manipulating C++ objects only. The paper is organised as follows: section 2 presents the C++ objects proposed to manage co-operating tasks. In section 3 we define the syntax and the semantics of both the pin and pout streams. Some collective primitives provided in Para++ are presented in section 4.

2 Task model

Para++ supports both SPMD and MPMD parallelism; the two cases are treated in the following subsections. The SPMD model was first implemented in [5].

2.1 SPMD model within Para++

The main entity in message passing parallelism is the task. In all libraries, tasks are represented as UNIX processes. Within Para++, however, a UNIX process is not mapped one-to-one onto a task object. In the SPMD model, each task must instantiate a ParaProcess object. For example, to compute the value of π by integration with five processes, the Para++ code could be:

    #include "Para++"

    main(int argc, char **argv)
    {
        ParaProcess p("pi", argc, argv);
        double pi = 0.0;
        p.init(5);                     // end of SPMD initialisation
        // ... computation on one interval ...
        value_reduction(ParaSum, pi);  // merge of all local results
        p.end();
    }

listing 1

By running this program, the first instantiation of p sets things up so that 4 other tasks can be spawned. The spawn operation itself is performed by the init(5) method: it spawns 4 other tasks within the master task and has no effect within the others. Before leaving the init(5) method, all five tasks synchronise. Afterwards, the five co-operating tasks share the same information in their p object and execute the same code (whose executable's name is "pi"). This initialisation phase ensures the integrity of all shared information (number of tasks, identifiers of all tasks, ...) in p. In short, a ParaProcess object represents a set of tasks running the same code. The elided computation step of listing 1 is sketched below.
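The following sketch shows what the elided computation step of listing 1 might look like, using a midpoint-rule estimate of π as the integral of 4/(1+x²). It is an illustration only: the paper does not show this code, and the p.rank() and p.size() accessors are hypothetical, since only the tasks() accessor is documented.

    // Hypothetical body for the "computation on one interval" step.
    // p.rank() and p.size() are assumed accessors, not shown in the paper.
    int n = 100000;                       // number of sub-intervals
    double h = 1.0 / n;
    for (int i = p.rank(); i < n; i += p.size()) {
        double x = h * (i + 0.5);         // midpoint of sub-interval i
        pi += 4.0 * h / (1.0 + x * x);    // local partial sum
    }
    // value_reduction(ParaSum, pi) then merges the partial sums.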

Moreover, a ParaProcess object holds all information about the co-operating tasks; in particular, a ParaProcess instantiation contains a Vector of all the task identifiers. That means, to access task number 4, the programmer may write:

    int task4 = p.tasks(4);

Afterwards, this identifier may be used to send data to, or receive data from, task number 4.

2.2 MPMD model within Para++

In order to support the MPMD programming model, a new object, ParaSlave, has been added to Para++. The concept of MPMD programming is to allow one or more master tasks to start slave tasks that do not share the same executable code. That means, whereas master tasks instantiate a ParaProcess object, slave tasks instantiate a ParaSlave object. In fact, we group all master tasks under a shared ParaProcess instantiation, and all identical slaves under a shared ParaSlave instantiation (see Figure 1).

[Figure 1: The MPMD model. A ParaProcess group of master tasks T1...Tn, plus several ParaSlave groups of slave tasks S1...Sn.]

For example, imagine an application designed to compute the temperature distribution over an area. This application might be composed of three sets of tasks:

- a master task: the interface with the user;
- some computing tasks: to perform all computation operations;
- some drawing tasks: to print the results given by the computing tasks.

In this example, the master task instantiates a ParaProcess object and calls the startslave() method twice: once to spawn the computing group of slaves, and once to spawn the printing group of slaves, as sketched below.
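A minimal sketch of this master/slave set-up follows. The paper names the startslave() method but does not give its signature, so the arguments shown here (executable name and group size), as well as the ParaSlave constructor arguments, are assumptions made for illustration only.

    // Master side: spawn the two groups of slaves
    // (hypothetical startslave() arguments).
    ParaProcess p("master", argc, argv);
    p.init(1);                    // a single master task
    p.startslave("compute", 4);   // spawn the computing group of slaves
    p.startslave("draw", 2);      // spawn the printing group of slaves
    // ... interact with the user, exchange data with the slaves ...
    p.end();

    // Slave side, e.g. in the "compute" executable
    // (hypothetical ParaSlave constructor arguments).
    ParaSlave s("compute", argc, argv);
    // ... compute, then send results to the drawing tasks ...
    s.end();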

Each slave instantiates a ParaSlave object within its group, leading to two shared ParaSlave objects (one for the computing group and one for the printing group). In effect, we split the world of co-operating processes into three sets: the first with only one task (holding the unique ParaProcess object), the second with several tasks (sharing one ParaSlave object) and the third with several tasks (sharing another ParaSlave object). In our model, a slave cannot generate a group of slaves; only a ParaProcess object can do so.

3 Inter-task streams

This section presents a stream-based interface for inter-task communication.

3.1 Stream-based data exchanges

In the C++ language, all I/O operations are performed by specific streams through two overloaded operators, << and >>. For example, the standard input/output is accessed through cin and cout. To print the string "hello world" on the standard output, the programmer may write:

    cout << "hello world" << flush;

where cout represents the target stream, << the flow direction, "hello world" the information flow itself, and flush the flush operation ensuring the string is printed immediately. Since all C++ I/O operations are performed via streams, the idea is to hide the send() and recv() operations behind new specific streams representing a communication channel between tasks. Para++ provides two such streams:

- pout: to asynchronously send values to another task;
- pin: to receive values from another task, as a blocking operation.

For example, if a task must send the value of n to task t, the programmer may write:

    pout(t) << n << flush;

The syntax is the same as with cout; the only difference is the argument passed to pout() to specify the target task.

3.2 Matching messages

Several mechanisms ensure that a >> operation correctly matches a sent message. First, type checking ensures that if an integer is put into a pout stream, only an integer can extract it from a pin stream. Secondly, it is possible to tag all messages with an integer; to get a message from the pin stream, the matching tag must then be used. The syntax remains very simple:

    pout(t1, 99) << n << flush;

sends n's value to task t1 with tag 99. To match it, we use:

    pin(t2, 99) >> n;

A complete exchange combining these pieces is sketched below.
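For instance, a tagged exchange between two co-operating tasks might read as follows. This is a sketch only; it assumes the task identifiers are obtained from the shared ParaProcess object p, as in section 2.

    // On task 0: send n to task 1 with tag 99.
    int t1 = p.tasks(1);
    pout(t1, 99) << n << flush;

    // On task 1: only an int-typed message tagged 99 matches this receive.
    int t0 = p.tasks(0);
    pin(t0, 99) >> n;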

3.3 Non-blocking receive

Finally, Para++ provides a way to perform a non-blocking receive. The arrived() method of the pin object can be called to probe the incoming queue before performing the real receive operation with the >> operator. For example, the programmer may write:

    while (!pin.arrived(t1, tag)) {
        // perform other actions while waiting
    }
    pin(t1, tag) >> n;

In this example, the program performs other actions while waiting for a message from task t1 tagged with tag. Once the message has arrived, it is read.

4 Collective Operations

Para++ provides several collective operations; they are presented in the following sections. All of them are only available in the SPMD model, either within a ParaProcess or within a ParaSlave group of tasks.

4.1 Broadcast and multi-cast operations

Broadcast and multi-cast operations are supported by Para++ through particular parameters of the pout stream. To broadcast the value of n to several tasks, the programmer may write:

    pout() << n << flush;

This broadcasts the value of n to all tasks within the restricted set of processes defined by the ParaProcess or ParaSlave object. More precisely, if the current task is one of the master tasks (sharing a ParaProcess object), the value is broadcast to all the master tasks. On the other hand, if the current task is among the slave tasks (sharing a particular ParaSlave object), the value is only broadcast to the corresponding set of slave tasks. Moreover, Para++ also supports multi-cast operations: the parameter given to the pout object must then be a vector containing the identifiers of all the target tasks. For example,

    Vector<int> V(5);
    pout(V) << n << flush;

sends the value of n to the five tasks whose identifiers are stored in V. A fuller multi-cast sketch follows.
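Combining multi-cast with the non-blocking probe, a task could scatter a value to selected peers and poll for a reply while staying busy. This sketch rests on an assumption: the paper does not show how a Vector is filled, so the V(i) element-access syntax used here is hypothetical.

    // Multi-cast n to tasks 1..5 (the V(i) element syntax is an assumption).
    Vector<int> V(5);
    for (int i = 0; i < 5; i++)
        V(i) = p.tasks(i + 1);    // identifiers of the target tasks
    pout(V) << n << flush;

    // Poll for a reply from the first target without blocking.
    int tag = 42, result;
    while (!pin.arrived(V(0), tag)) {
        // do useful work here
    }
    pin(V(0), tag) >> result;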

4.2 Synchronisation and Reduction Operations

Para++ provides several ways to synchronise tasks. Firstly, the sync function synchronises all the tasks belonging to the same ParaProcess or ParaSlave group (see Figure 1). Secondly, several methods of the ParaProcess and ParaSlave classes allow a ParaProcess object to synchronise with one or all of its slaves. Finally, Para++ provides a way to apply a reduction function: the user may call the value_reduction() method. Here again, the scope of the reduction is limited to the set of tasks to which the current task belongs. For example, in listing 1, after the value_reduction(ParaSum, pi) call, each task holds the global value of π. In fact, two operations are performed: first the value is reduced, then the result is broadcast to all the tasks in the ParaProcess or ParaSlave context. The predefined operators are ParaSum, ParaProd, ParaMax and ParaMin.

5 Conclusion

We have presented new C++ bindings built on top of existing message passing libraries. An implementation has been done on top of both PVM and MPI, which allows a single program to be used with either message passing library. Finally, Para++ is not a new C++ extension, but another way to use, unify and simplify message passing libraries. More details on Para++ and the library itself are available at coulaud/parapp.html or by anonymous ftp at ftp.loria.fr/pub/loria/numath/para++.

References

[1] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, V. Sunderam. PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. MIT Press, 1994.
[2] W. Gropp, E. Lusk, A. Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. MIT Press, 1994.
[3] R. Pozo. "A Streams-Based Interface in C++ for Programming Heterogeneous Systems". CNRS-NSF Workshop on Environments and Tools for Parallel Scientific Computing, September 7-8, 1992, Saint Hilaire du Touvet, France.
[4] B. Stroustrup. The C++ Programming Language. Addison-Wesley.
[5] O. Coulaud and E. Dillon. Para++: C++ Bindings for Message Passing Libraries: User Guide. INRIA Technical Report 174, June 1995.
