
Message Passing. Frédéric Haziza <daz@it.uu.se>, Department of Computer Systems, Uppsala University. Summer 2009.

MultiProcessor world - Taxonomy
- SIMD
- MIMD
  - Message Passing: fine-grained or coarse-grained
  - Shared Memory: UMA, NUMA, COMA

Scenario: several cars want to drive from point A to point B.
Sequential programming: they compete for space on the same road and end up either following each other or competing for positions (and having accidents!).
Parallel programming: they drive in parallel lanes, arriving at about the same time without getting in each other's way.
Distributed programming: they travel different routes, using separate roads.

Distributed Programming
There is no shared memory: processes have to exchange messages with each other, so it is important to define the communication interface. Should reads and writes over the network behave like reads/writes on shared memory? A better approach is to define special network operations that include synchronization (in the same way that semaphores were special operations on shared variables).

Distributed Programming
Channels are typically the only objects processes share. Each variable is local to a single process, so there is no concurrent access to variables.

Communication in the channel:
- one-way or two-way information flow
- asynchronous or synchronous communication (non-blocking/blocking)
- direct or indirect communication (mailboxes/ports)
- automatic or explicit buffering
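To make the blocking and buffering options concrete, here is a minimal sketch in Go (an illustrative choice, not a course tool; Go has channels built in). An unbuffered channel gives synchronous, blocking communication; a channel with explicit capacity gives asynchronous sends as long as the buffer has room. All names are made up for the example.

    package main

    import "fmt"

    func main() {
        // Synchronous (blocking) communication: an unbuffered channel.
        // The send below blocks until main executes the matching receive.
        sync := make(chan string)
        go func() { sync <- "hello (rendezvous)" }()
        fmt.Println(<-sync)

        // Asynchronous communication: explicit buffering with capacity 2.
        // These sends complete immediately because the buffer has room.
        async := make(chan int, 2)
        async <- 1
        async <- 2
        fmt.Println(<-async, <-async)
    }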

4 communication patterns:
- remote procedure call (RPC)
- rendez-vous
- asynchronous message passing
- synchronous message passing

Relation between the concurrency mechanisms (diagram).

Channels (diagram): processes communicate by performing send and receive operations on shared channels; several processes may send on the same channel.

send is blocking or non-blocking; receive is blocking. (Diagram: senders and a receiver sharing a channel.)

receive has blocking semantics... so a receiving process that has nothing else to do does not have to busy-wait, polling the channel until a message arrives. Assumption: access to the contents of each channel is atomic, and message delivery is reliable and error-free.
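As a small illustration of these semantics in Go (names invented for the sketch): the plain receive suspends the goroutine until a message arrives, with no polling loop, while a select with a default clause shows what an explicit non-blocking receive would look like instead.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ch := make(chan int)

        go func() {
            time.Sleep(100 * time.Millisecond) // simulate a slow sender
            ch <- 42
        }()

        // Blocking receive: the runtime suspends this goroutine until a
        // message is available; there is no busy-waiting on the channel.
        fmt.Println("got", <-ch)

        // Non-blocking receive, for contrast: check once and move on.
        select {
        case v := <-ch:
            fmt.Println("got", v)
        default:
            fmt.Println("no message yet, doing something else")
        }
    }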

Naming convention:
- mailbox: any process may send to and receive from the channel
- port: exactly one receiver, possibly many senders
- link: exactly one receiver and exactly one sender
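A quick Go sketch of a port under these definitions (an assumed example, not from the slides): several sender goroutines share one channel, and exactly one goroutine receives from it.

    package main

    import "fmt"

    func main() {
        port := make(chan string) // exactly one receiver, many senders

        for i := 0; i < 3; i++ {
            i := i // capture the loop variable for the goroutine
            go func() { port <- fmt.Sprintf("message from sender %d", i) }()
        }

        // The single receiver drains the port.
        for n := 0; n < 3; n++ {
            fmt.Println(<-port)
        }
    }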

Clients/Server (diagram): clients 1..n each send a request to the server and receive a reply.

Clients/Server with one operation op

    channel request(int clientID, types of input values);
    channel reply[n](types of results);

    process Client[i = 0 to n-1] {
        send request(i, value arguments);      # "call" op
        receive reply[i](result arguments);    # wait for the reply
    }

    process Server {
        int clientID;
        # declarations of other permanent variables; initialization code
        while (true) {
            receive request(clientID, input values);
            # code from the body of operation op
            send reply[clientID](results);
        }
    }
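A rough Go rendering of this pattern, under the assumption that the operation op squares a number (everything here is invented for the sketch): one shared request channel, plus one reply channel per client, indexed by client id.

    package main

    import "fmt"

    const n = 3 // number of clients (illustrative)

    // A request carries the client's id and the operation's argument.
    type request struct {
        clientID int
        arg      int
    }

    func main() {
        requests := make(chan request)
        var replies [n]chan int
        for i := range replies {
            replies[i] = make(chan int)
        }

        // Server: receive a request, run the operation body, send the reply.
        go func() {
            for {
                r := <-requests
                replies[r.clientID] <- r.arg * r.arg // body of operation op
            }
        }()

        done := make(chan bool)
        for i := 0; i < n; i++ {
            go func(i int) {
                requests <- request{clientID: i, arg: i + 1}      // "call" op
                fmt.Printf("client %d got %d\n", i, <-replies[i]) // wait for reply
                done <- true
            }(i)
        }
        for i := 0; i < n; i++ {
            <-done
        }
    }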

Clients/Server with multiple operations

    type op_kind, arg_type, result_type;
    channel request(int clientID, op_kind, arg_type);
    channel reply[n](result_type);

    process Client[i = 0 to n-1] {
        arg_type myargs;
        result_type myresults;
        # place value arguments in myargs
        send request(i, op_j, myargs);    # "call" op_j
        receive reply[i](myresults);      # wait for the reply
    }

    process Server {
        int clientID;
        op_kind kind;
        arg_type args;
        result_type results;
        # declarations of other permanent variables; initialization code
        while (true) {
            receive request(clientID, kind, args);
            if (kind == op_1) { # body of op_1 }
            ...
            else if (kind == op_n) { # body of op_n }
            send reply[clientID](results);
        }
    }
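In Go, the multi-operation server might tag each request with an operation kind and dispatch on it in the server loop. The two operations (add, negate) are made up for the sketch; passing the reply channel inside the request is an idiomatic substitute for the reply[clientID] array, since Go channels are first-class values.

    package main

    import "fmt"

    type opKind int

    const (
        opAdd opKind = iota // invented operations for the sketch
        opNegate
    )

    type request struct {
        reply chan int // the client's reply channel travels with the request
        kind  opKind
        args  [2]int
    }

    // Server loop: receive a request, dispatch on its kind, send the result.
    func server(requests chan request) {
        for r := range requests {
            switch r.kind { // if (kind == op_1) ... else if (kind == op_n)
            case opAdd:
                r.reply <- r.args[0] + r.args[1]
            case opNegate:
                r.reply <- -r.args[0]
            }
        }
    }

    func main() {
        requests := make(chan request)
        go server(requests)

        reply := make(chan int)
        requests <- request{reply, opAdd, [2]int{2, 3}}
        fmt.Println(<-reply) // 5
        requests <- request{reply, opNegate, [2]int{7, 0}}
        fmt.Println(<-reply) // -7
    }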

Interacting Peers (diagram): workers 1..n-1 send data to a coordinator and receive results back. Each worker has a local value. Task: find the smallest and largest values among the workers.

Interacting peers - Workers/Coordinator

    channel values(int);
    channel results[n](int smallest, int largest);

    process P[i = 1 to n-1] {
        int val;                  # assume val has been initialized
        int smallest, largest;
        send values(val);
        receive results[i](smallest, largest);
    }

    process Coordinator {         # plays the role of worker 0
        int val;                  # assume val has been initialized
        int new, smallest = val, largest = val;   # initial state
        # gather values and save the smallest and largest
        for [i = 1 to n-1] {
            receive values(new);
            if (new < smallest) smallest = new;
            if (new > largest) largest = new;
        }
        # send the result to the other processes
        for [i = 1 to n-1] {
            send results[i](smallest, largest);
        }
    }
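A Go sketch of the same workers/coordinator exchange, with made-up local values: the workers all send into one values channel (a port), and the coordinator replies on one result channel per worker.

    package main

    import "fmt"

    func main() {
        vals := []int{17, 4, 23, 8} // the workers' local values (illustrative)
        n := len(vals)

        values := make(chan int)          // many senders, one receiver
        results := make([]chan [2]int, n) // one reply channel per worker
        for i := range results {
            results[i] = make(chan [2]int)
        }
        done := make(chan bool)

        // Workers 1..n-1: send the local value, then wait for the result.
        for i := 1; i < n; i++ {
            go func(i int) {
                values <- vals[i]
                r := <-results[i]
                fmt.Printf("worker %d: smallest=%d largest=%d\n", i, r[0], r[1])
                done <- true
            }(i)
        }

        // Coordinator (worker 0): gather values, keep min/max, broadcast.
        smallest, largest := vals[0], vals[0]
        for i := 1; i < n; i++ {
            v := <-values
            if v < smallest {
                smallest = v
            }
            if v > largest {
                largest = v
            }
        }
        for i := 1; i < n; i++ {
            results[i] <- [2]int{smallest, largest}
        }
        for i := 1; i < n; i++ {
            <-done
        }
        fmt.Printf("coordinator: smallest=%d largest=%d\n", smallest, largest)
    }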

Interacting peers - Symmetric solution

    channel values[n](int);    # one channel per process

    process P[i = 0 to n-1] {
        int val;                                  # assume val has been initialized
        int new, smallest = val, largest = val;   # initial state
        # send my value to the other processes
        for [j = 0 to n-1, j != i] {
            send values[j](val);
        }
        # gather values and save the smallest and largest
        for [j = 0 to n-1, j != i] {
            receive values[i](new);
            if (new < smallest) smallest = new;
            if (new > largest) largest = new;
        }
    }

(Diagram: P0,...,P5 fully connected; every pair of processes exchanges values, n(n-1)/2 pairs in all.)
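The symmetric variant translates naturally to Go (values again invented): every goroutine owns a mailbox, sends its value to all the others, and computes the minimum and maximum locally. The mailboxes are buffered so that all sends can complete before anyone starts receiving.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        vals := []int{17, 4, 23, 8, 15} // local values (illustrative)
        n := len(vals)

        // One mailbox per process, buffered with room for n-1 values.
        values := make([]chan int, n)
        for i := range values {
            values[i] = make(chan int, n-1)
        }

        var wg sync.WaitGroup
        for i := 0; i < n; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                // Send my value to the other processes.
                for j := 0; j < n; j++ {
                    if j != i {
                        values[j] <- vals[i]
                    }
                }
                // Gather values and save the smallest and largest.
                smallest, largest := vals[i], vals[i]
                for k := 0; k < n-1; k++ {
                    v := <-values[i]
                    if v < smallest {
                        smallest = v
                    }
                    if v > largest {
                        largest = v
                    }
                }
                fmt.Printf("P%d: smallest=%d largest=%d\n", i, smallest, largest)
            }(i)
        }
        wg.Wait()
    }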

Interacting peers - Circular pipeline

    channel values[n](int smallest, int largest);

    process P[i = 1 to n-1] {
        int val;                    # assume val has been initialized
        int smallest, largest;      # initial state
        # receive the smallest and largest so far, then update them
        # by comparing their values to val
        receive values[i](smallest, largest);
        if (val < smallest) smallest = val;
        if (val > largest) largest = val;
        # send the result to the next process,
        # then wait to get the global result
        send values[(i+1) mod n](smallest, largest);
        receive values[i](smallest, largest);
        if (i < n-1)                # pass the global result on
            send values[i+1](smallest, largest);
    }

    process P[0] {                  # initiates the exchanges
        int val;                    # assume val has been initialized
        int smallest = val, largest = val;   # initial state
        # send val to the next process, P1
        send values[1](smallest, largest);
        # get the global smallest and largest from P[n-1]
        # and pass them on to P1
        receive values[0](smallest, largest);
        send values[1](smallest, largest);
    }

(Diagram: P0 -> P1 -> ... -> P[n-1] -> P0, a ring.)
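A Go version of the ring, again with invented values: a token carries the running minimum and maximum around the ring once, returns the global result to P0, and then circulates a second time so everyone learns it.

    package main

    import "fmt"

    type pair struct{ smallest, largest int }

    func main() {
        vals := []int{17, 4, 23, 8, 15} // local values (illustrative)
        n := len(vals)

        // values[i] is the incoming channel of process i; a buffer of 1
        // lets the last send of each lap complete without a rendezvous.
        values := make([]chan pair, n)
        for i := range values {
            values[i] = make(chan pair, 1)
        }

        done := make(chan bool)
        for i := 1; i < n; i++ {
            go func(i int) {
                // First lap: fold my value into the running min/max.
                p := <-values[i]
                if vals[i] < p.smallest {
                    p.smallest = vals[i]
                }
                if vals[i] > p.largest {
                    p.largest = vals[i]
                }
                values[(i+1)%n] <- p
                // Second lap: receive the global result and pass it on.
                p = <-values[i]
                if i < n-1 {
                    values[i+1] <- p
                }
                fmt.Printf("P%d: smallest=%d largest=%d\n", i, p.smallest, p.largest)
                done <- true
            }(i)
        }

        // P0 initiates the exchanges.
        values[1] <- pair{vals[0], vals[0]}
        global := <-values[0] // global min/max after one lap
        values[1] <- global   // start the second lap
        fmt.Printf("P0: smallest=%d largest=%d\n", global.smallest, global.largest)
        for i := 1; i < n; i++ {
            <-done
        }
    }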

Conclusion
Tools: MPI, Java RMI, CORBA, SOAP, RPC (used in microkernels), Erlang.
Message-passing systems have been called "shared-nothing" systems, because the message-passing abstraction hides the underlying state changes that may be used in the implementation of sending messages.
Programming languages based on the message-passing model typically define messaging as the (usually asynchronous) sending (usually by copy) of a data item to a communication endpoint (actor, process, thread, socket, etc.).