Linda and TupleSpaces. Prabhaker Mateti


1 Linda and TupleSpaces Prabhaker Mateti

2 Linda Overview
- An example of asynchronous message passing:
  - send never blocks (i.e., implicit infinite-capacity buffering)
  - the order of sends is ignored
- An associative, abstract, distributed shared-memory system on heterogeneous networks
Mateti CEG7370 Linda 2

3 Tuple Space
- A tuple is an ordered list of (possibly dissimilar) items
  - (x, y): coordinates in a 2-d plane, both numbers
  - (true, 'a', "hello", (x, y)): a quadruple of dissimilar items
  - Instead of ( ), some papers use < >
- Tuple Space (TS) is a collection of tuples
  - Consider it a bag, not a set: the count of occurrences matters
  - T # TS stands for the number of occurrences of T in TS
- Tuples are accessed associatively
- Tuples are equally accessible to all processes
Mateti CEG7370 Linda 3
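A minimal C-Linda sketch of the bag semantics (assuming a C-Linda compiler; the tuple name "point" and its contents are illustrative): outputting the same tuple twice leaves two copies in TS, and each in() withdraws exactly one.

    /* Bag, not set: duplicates are kept. */
    out("point", 1, 2);
    out("point", 1, 2);       /* ("point", 1, 2) # TS is now 2 */

    int x, y;
    in("point", ?x, ?y);      /* withdraws one copy; the count drops to 1   */
    in("point", ?x, ?y);      /* withdraws the second copy; the count is 0  */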

4 Linda's Primitives
Four primitives are added to a host programming language:
- out(T): output T into TS
  - the number of T's in TS increases by 1
  - atomic; no processes are created
- eval(T): creates a process (or processes) that evaluates T
  - the resulting (residual) tuple is output to TS
- in(T): input T from TS
  - the number of T's in TS decreases by 1
  - no processes are created (more on in() below)
- rd(T), an abbreviation of read(T): read T from TS
  - the number of T's in TS does not change
  - no processes are created
Mateti CEG7370 Linda 4
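A short C-Linda sketch that exercises all four primitives (assuming a C-Linda compiler; the tuple names "count" and "square" and the helper sq() are illustrative, not from the slides):

    int sq(int n) { return n * n; }

    real_main()
    {
        int c, s;
        out("count", 10);             /* place a passive tuple in TS                          */
        eval("square", 7, sq(7));     /* a new process computes sq(7); ("square",7,49) lands in TS */
        rd("count", ?c);              /* read without removing; c == 10, tuple stays          */
        in("square", 7, ?s);          /* withdraw the result; s == 49                         */
    }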

5 Example: in(T) and inp(T)
Suppose multiple processes are attempting in(T). Let T # TS stand for the number of occurrences of T in TS.
- If T # TS >= 1: the tuple T is input
  - T # TS decreases by 1
  - atomic operation
- If T # TS = 1: only one of the attempting processes succeeds
  - Which one? Unspecified; nondeterministic
- If T # TS = 0: all processes wait for some process to out(T)
  - may block forever
inp(T), a predicated in(T):
- if T # TS = 0, inp(T) fails, but the process is not blocked
- if T # TS >= 1, inp(T) succeeds; the effect is identical to in(T), and the process is not blocked
rdp(T) is the analogous predicated rd(T).
Mateti CEG7370 Linda 5
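A hedged C-Linda sketch contrasting the blocking and predicated forms (the tuple name "task" is illustrative):

    int t;

    /* Blocking form: suspends until some process does out("task", ...). */
    in("task", ?t);

    /* Predicated form: returns immediately, reporting success or failure. */
    if (inp("task", ?t)) {
        /* got a task; t holds its value */
    } else {
        /* no matching tuple right now; do something else and retry later */
    }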

6 Example: in("hi", ?x, false)
- x is declared to be an int
- The tuple pattern matches any tuple T provided:
  - the length of T is 3
  - T.1 = "hi"
  - T.2 is any int
  - T.3 = false
- x is then assigned that int
- Suppose TS = { ("hi", 2, false), ("hi", 2, false), ("hi", 35, false), ("hi", 7, false), ... }
  - in("hi", ?x, false) inputs one of the above
  - which one? unspecified
- Tuple patterns may have multiple ? symbols
Mateti CEG7370 Linda 6
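A small sketch of a pattern with more than one formal (the tuple name "pair" and the values are illustrative):

    int x, y;
    out("pair", 3, 4);
    out("pair", 5, 12);

    /* Two formals: matches any ("pair", int, int) tuple.
     * Exactly one tuple is withdrawn; the choice is nondeterministic. */
    in("pair", ?x, ?y);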

7 in(N, P2, ..., Pj)
- N is an actual argument of type Name
- P2 ... Pj are actual/formal parameters
- The values found in the matched tuple are assigned to the formals; the process then continues
- The withdrawal of the matched tuple is atomic
- If multiple tuples match, the choice is nondeterministic
- If no matching tuple exists, in(...) suspends until one becomes available, then does the above
Mateti CEG7370 Linda 7

8 Example: eval("i", i, sqrt(i))
- Creates new process(es) to evaluate each field of eval("i", i, sqrt(i)); the result is output to TS
- The tuple ("i", i, sqrt(i)) is known as an active tuple
- Suppose i = 4
  - sqrt(i) is computed by the new process
  - the resulting tuple is ("i", 4, 2.0), known as a passive tuple
  - it can also be ("i", 4, -2.0)
  - ("i", 4, 2.0) is output to TS
  - the process(es) terminate(s)
- Bindings are inherited from the eval-executing process only for names cited explicitly
Mateti CEG7370 Linda 8
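A hedged sketch of the active-to-passive tuple pattern (assuming a C-Linda compiler; the tuple name "sq" and the worker function square() are illustrative): each eval() drops an active tuple into TS, and the parent later collects the passive results with in().

    #include <stdio.h>

    int square(int n) { return n * n; }

    real_main()
    {
        int i, v;

        /* Each eval() creates a process that evaluates square(i). */
        for (i = 1; i <= 10; i++)
            eval("sq", i, square(i));

        /* Collect the passive tuples ("sq", i, i*i) in index order. */
        for (i = 1; i <= 10; i++) {
            in("sq", i, ?v);
            printf("%d squared is %d\n", i, v);
        }
    }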

9 Example: eval("q", f(x,y)) Suppose eval("q", f(x,y)) is being executed by process P0 P0 creates two new processes, say, P1 and P2. P1 evaluates Q P2 evaluates f(x,y) P0 moves on P0 does not wait for P1 to terminate P0 does not wait for P2 to terminate P0 may later on do an in( Q,?result) P2 evaluates f(x,y) in a context where f, x and y have the same values they had in P0 No bindings are inherited for any variables that happen to be free (i.e., global) in f, unless explicitly in the eval Mateti CEG7370 Linda 9

10 Linda Algorithm Design Example
- Given a finite bag B of numbers, as well as the size nb of the bag B, find the second largest number in B
- Use p processes
- Assume TS is preloaded with B:
  - ("bi", b_i) for i: 1..nb
  - ("size", nb)
- Each process inputs nb/p numbers of B
  - Is nb % p = 0?
- Each process outputs the largest and the second largest it found
- A selected process considers these 2*p numbers and does as above
- Result Parallel paradigm
Mateti CEG7370 Linda 10

11 Linda Algorithm: Second Largest

int firstandsecond(int nx)
{
    int bi, fst, snd;
    in("bi", ?bi);
    fst = snd = bi;
    for (int i = 1; i < nx; i++) {
        in("bi", ?bi);
        if (bi > fst) { snd = fst; fst = bi; }
        else if (bi > snd) { snd = bi; }
    }
    out("first", fst);
    out("second", snd);
    return 0;
}

main(int argc, char *argv[])
{
    int nb, p;
    /* open a file, read the numbers,
     *   out("bi", bi) for each number
     *   out("nb", nb)
     *   p = ...
     */
    int i, nx = nb / p;      /* Is nb % p = 0? */
    for (i = 0; i < p; i++)
        eval(firstandsecond(nx));
    /* in("first", ?fst) and
     * in("second", ?snd) tuples
     * finish the computation */
}
Mateti CEG7370 Linda 11

12 Arrays and Matrices
- An array element: (ArrayName, index fields, value)
  - ("V", 14, 123.5)
  - ("A", 12, 18, 5, 123.5)
  - That A is 3-d you know from your design; it does not follow from the tuple
- Tuple elements can be tuples
  - ("A", (12, 18, 5), 123.5)
Mateti CEG7370 Linda 12
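A hedged sketch of treating TS as a shared 2-d array (the array name "A" and the sizes are illustrative):

    #define ROWS 4
    #define COLS 4

    real_main()
    {
        int i, j;
        double v;

        /* Store a ROWS x COLS matrix "A" as one tuple per element. */
        for (i = 0; i < ROWS; i++)
            for (j = 0; j < COLS; j++)
                out("A", i, j, 0.0);

        /* Any process can then access an element associatively. */
        rd("A", 2, 3, ?v);          /* read A[2][3] without removing it     */
        in("A", 2, 3, ?v);          /* withdraw it ...                      */
        out("A", 2, 3, v + 1.0);    /* ... and write back an updated value  */
    }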

13 Linked Data Structures in Linda
A Binary Tree
- Number the nodes: 1..
- Number the root with 1
- Use the number 0 for nil
- ("node", nodenumber, nodedata, leftchildnumber, rightchildnumber)
A Directed Graph
- Represent it as a collection of directed edges
- Number the nodes: 1..
- ("edge", fromnodenumber, tonodenumber)
Mateti CEG7370 Linda 13
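A hedged C-Linda sketch that walks the tree representation above (a recursive sum over the node tuples; the field layout follows the slide, the function treesum() is illustrative):

    /* Sum the data fields of a binary tree stored as
     * ("node", nodenumber, nodedata, leftchild, rightchild) tuples,
     * with 0 standing for nil. */
    int treesum(int nodenumber)
    {
        int data, left, right;
        if (nodenumber == 0)
            return 0;
        rd("node", nodenumber, ?data, ?left, ?right);   /* read, don't withdraw */
        return data + treesum(left) + treesum(right);
    }

    /* e.g., treesum(1) sums the whole tree, since the root is node 1 */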

14 More on Data Structures in Linda
Binary Tree (again)
- A Lisp-like cons cell: ("C", cons, ["A", "B"]), ("B", cons, [])
- An atom: ("A", atom, value)
Undirected Graphs
- Similar to directed graphs
- How to ignore the direction in ("edge", fromnodenumber, tonodenumber)?
  - Add ("edge", tonodenumber, fromnodenumber)
  - Or, use a set representation
Mateti CEG7370 Linda 14

15 Coordinated Programming
- Programming = Computation + Coordination
- The term coordination refers to the process of building programs by gluing together processes
  - Unix glue operation: the pipe
- Coordination is managing dependencies between activities
- Barrier Synchronization: each process within some group must wait until all processes in the group have reached a "barrier"; then all can proceed
  - Set up the barrier: out("barrier", n);
  - Each process does the following (see the sketch below):
    in("barrier", ?val); out("barrier", val-1); rd("barrier", 0);
Mateti CEG7370 Linda 15
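A hedged sketch packaging the slide's barrier protocol as reusable functions (the tuple name "barrier" follows the slide; the function names are illustrative):

    barrier_init(int n)      /* called once, with n = number of processes in the group */
    {
        out("barrier", n);
    }

    barrier()                /* each of the n processes calls this when it reaches the barrier */
    {
        int val;
        in("barrier", ?val);        /* withdraw the counter ...            */
        out("barrier", val - 1);    /* ... and put it back, decremented    */
        rd("barrier", 0);           /* block until the counter reaches 0   */
    }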

16 RPC Clients and Servers

servicearequest()
{
    int ix, cid;
    typerq req;
    typers response;
    for (;;) {
        in("request", ?cid, ?ix, ?req);
        /* ... compute response from req ... */
        out("response", cid, ix, response);
    }
}

A client process:

    int clientid = ...;      /* unique per client */
    int rqix = 0;
    typerq req;
    typers response;
    out("request", clientid, ++rqix, req);
    in("response", clientid, rqix, ?response);

17 Dining Philosophers, Readers/Writers

phil(int i)
{
    while (1) {
        think();
        in("room ticket");
        in("chopstick", i);
        in("chopstick", (i+1) % Num);
        eat();
        out("chopstick", i);
        out("chopstick", (i+1) % Num);
        out("room ticket");
    }
}

initialize()
{
    int i;
    for (i = 0; i < Num; i++) {
        out("chopstick", i);
        eval(phil(i));
        if (i < (Num-1))
            out("room ticket");
    }
}

A reader's protocol: startread(); read; stopread();

startread()
{
    rd("rw-head", incr("rw-tail"));
    rd("writers", 0);
    incr("active-readers");
    incr("rw-head");
}

int incr(char *countername)
{
    int value;
    in(countername, ?value);
    out(countername, value + 1);
    return value;
}

/* complete the rest of the implementation of
 * the readers-writers */
Mateti CEG7370 Linda 17
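One possible completion of the readers/writers exercise, as a hedged sketch in the style of the slide's startread(). The tuple names follow the slide; the counter initialization and the writer-side functions are assumptions, not the author's solution.

    /* Assumes the counters are preloaded once:
     *   out("rw-tail", 0); out("rw-head", 0);
     *   out("writers", 0); out("active-readers", 0);
     */

    stopread()
    {
        int value;
        in("active-readers", ?value);
        out("active-readers", value - 1);
    }

    startwrite()
    {
        rd("rw-head", incr("rw-tail"));   /* take a ticket and wait for the head of the queue */
        rd("active-readers", 0);          /* wait for readers already admitted to finish      */
        incr("writers");
        /* note: rw-head is NOT advanced here, so everyone queued behind this writer waits */
    }

    stopwrite()
    {
        int value;
        in("writers", ?value);
        out("writers", value - 1);
        incr("rw-head");                  /* now let the next queued reader/writer proceed    */
    }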

18 Semaphores in Linda
- Create a semaphore named xyz whose initial value is 3
- Solution: load the tuple space with ("xyz"), ("xyz"), ("xyz")
- Properties: is it a semaphore satisfying the weak semaphore assumption?

P(nm) { in(nm); }
V(nm) { out(nm); }
Mateti CEG7370 Linda 18
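A hedged usage sketch of the semaphore above (the guarded resource is illustrative):

    /* Initialize: a counting semaphore "xyz" with value 3
     * is just three copies of the tuple ("xyz"). */
    out("xyz"); out("xyz"); out("xyz");

    /* At most three processes at a time get past P("xyz"). */
    P("xyz");
    /* ... use one of the three shared resources ... */
    V("xyz");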

19 Programming Paradigms
Result Parallel
- Focus on the structure of the input space
- Divide this into many pieces of the same structure
- Solve each piece the same way
- Combine the sub-results into a final result
- Divide-and-Conquer, Hadoop
Agenda of Activities
- A list of things to do and their order
- Example: Build a House
  - Build walls: frame the walls, plumbing, electrical wiring, drywall, doors, windows
  - Build a driveway
  - Paint the house
Ensemble of Specialists
- Example: Build a House
  - Carpenters, masons, electricians, plumbers, painters
- Master-slave architecture
These paradigms are applicable not only to Linda but to other languages and systems.
Mateti CEG7370 Linda 19

20 Result Parallel: Generate Primes

/* From the Linda book, Chapter 5 */
int isprime(int me)
{
    int p, limit, ok;
    limit = sqrt((double) me) + 1;
    for (p = 2; p < limit; ++p) {
        rd("primes", p, ?ok);
        if (ok && (me % p == 0))
            return 0;
    }
    return 1;
}

real_main()
{
    int count = 0, i, ok;
    for (i = 2; i <= LIMIT; ++i)
        eval("primes", i, isprime(i));
    for (i = 2; i <= LIMIT; ++i) {
        rd("primes", i, ?ok);
        if (ok) {
            ++count;
            printf("prime: %d\n", i);
        }
    }
}
Mateti CEG7370 Linda 20

21 Paradigm: Agenda Parallelism

/* From the Linda book */
real_main(int argc, char *argv[])
{
    int eot, first_num, i, length, new_primes[GRAIN], np2;
    int num, num_primes, num_workers, primes[MAX], p2[MAX];

    num_workers = atoi(argv[1]);
    for (i = 0; i < num_workers; ++i)
        eval("worker", worker());
    num_primes = init_primes(primes, p2);
    first_num = primes[num_primes-1] + 2;
    out("next task", first_num);
    eot = 0;        /* Becomes 1 at end of table */
    for (num = first_num; num < LIMIT; num += GRAIN) {
        in("result", num, ?new_primes:length);
        for (i = 0; i < length; ++i, ++num_primes) {
            primes[num_primes] = new_primes[i];
            if (!eot) {
                np2 = new_primes[i] * new_primes[i];
                if (np2 > LIMIT) {
                    eot = 1;
                    np2 = -1;
                }
                out("primes", num_primes, new_primes[i], np2);
            }
        }
    }
    /* "?int" matches any int and throws away the value */
    for (i = 0; i < num_workers; ++i)
        in("worker", ?int);
    printf("count: %d\n", num_primes);
}

worker()
{
    int count, eot, i, limit, num, num_primes, ok, start;
    int my_primes[GRAIN], primes[MAX], p2[MAX];

    num_primes = init_primes(primes, p2);
    eot = 0;
    while (1) {
        in("next task", ?num);
        if (num == -1) {
            out("next task", -1);
            return;
        }
        limit = num + GRAIN;
        out("next task", (limit > LIMIT) ? -1 : limit);
        if (limit > LIMIT)
            limit = LIMIT;
        start = num;
        for (count = 0; num < limit; num += 2) {
            while (!eot && num > p2[num_primes-1]) {
                rd("primes", num_primes, ?primes[num_primes], ?p2[num_primes]);
                if (p2[num_primes] < 0)
                    eot = 1;
                else
                    ++num_primes;
            }
            for (i = 1, ok = 1; i < num_primes; ++i) {
                if (!(num % primes[i])) {
                    ok = 0;
                    break;
                }
                if (num < p2[i])
                    break;
            }
            if (ok) {
                my_primes[count] = num;
                ++count;
            }
        }
        /* Send the control process any primes found. */
        out("result", start, my_primes:count);
    }
}
Mateti CEG7370 Linda 21

22 Paradigm: Specialist Parallelism

/* From the Linda book */
source()
{
    int i, out_index = 0;
    for (i = 5; i < LIMIT; i += 2)
        out("seg", 3, out_index++, i);
    out("seg", 3, out_index, 0);
}

pipe_seg(int prime, int next, int in_index)
{
    int num, out_index = 0;
    while (1) {
        in("seg", prime, in_index++, ?num);
        if (!num) break;
        if (num % prime)
            out("seg", next, out_index++, num);
    }
    out("seg", next, out_index, num);
}

sink()
{
    int in_index = 0, num, pipe_seg(), prime = 3, prime_count = 2;
    while (1) {
        in("seg", prime, in_index++, ?num);
        if (!num) break;
        if (num % prime) {
            ++prime_count;
            if (num*num < LIMIT) {
                eval("pipe seg", pipe_seg(prime, num, in_index));
                prime = num;
                in_index = 0;
            }
        }
    }
    printf("count: %d.\n", prime_count);
}

real_main()
{
    eval("source", source());
    eval("sink", sink());
}
Mateti CEG7370 Linda 22

23 Linda Summary
- out(), in(), rd(), inp(), rdp() are heavier than host-language computations
  - eval() is the heaviest of the Linda primitives
- Nondeterminism in pattern matching
- Time uncoupling
  - Communication between time-disjoint processes
  - Can even send messages to self
- Distributed sharing
  - Variables shared between disjoint processes
- Many implementations permit multiple tuple spaces
- No security (no encapsulation)
- Linda is not fault-tolerant
  - Processes are assumed to be fail-safe
- Beginners do this in a loop: { in(?t); if (notok(t)) out(t); }
  - No guarantee you won't get the same T back
- The following can sequentialize the processes using this code block: { in(?count); out(count+1); }
- Where most distributed languages are partially distributed in space and non-distributed in time, Linda is fully distributed in space and distributed in time as well

24 JavaSpaces and TSpaces
JavaSpaces is Linda adapted to Java
- net.jini.space.JavaSpace
- write( ): into a space
- take( ): from a space
- read( ): from a space, without removing
- notify( ): notifies a specified object when entries that match the given template are written into a space
- java.sun.com/developer/technicalArticles/tools/JavaSpaces/
TSpaces is an IBM adaptation of Linda
- TSpaces is network middleware for the new age of ubiquitous computing
- TSpaces = Tuple + Database + Java
- write( ): into a space
- take( ): from a space
- read( ): from a space
- Scan and ConsumingScan
- rendezvous operator, Rhonda
- TSpaces Whiteboard
Mateti CEG7370 Linda 24

25 NetWorkSpaces
- NetWorkSpaces: an open-source software package that makes it easy to use clusters from within scripting languages like Matlab, Python, and R
- Nicholas Carriero and David Gelernter, How to Write Parallel Programs, MIT Press, 1992
- Tutorial on Parallel Programming with Linda
Mateti CEG7370 Linda 25

26 CEG 730 Preferences
- Assume TS is preloaded with input data in a form that is helpful
- At the end of the algorithm, TS should have only the results; the preloaded input data is removed
- Any C program can be embedded into C-Linda; that is not acceptable at all
- Use p processes
  - In general, you choose p so that elapsed time is minimized, assuming the p processes do time-overlapped parallel computation
- Is nb % p == 0?
  - Pad the input data space with dummy values that preserve the solutions
  - Or let some worker processes do more
- Avoid using inp() and/or rdp()
  - they confuse our thinking; we can get better designs without them
  - A badly used inp() can produce a livelock where a plain in() would have caused a block
  - Typically, we can avoid the use of inp(). Not always.
- Problem: compute the number of elements in a bag B. Assume B is preloaded into TS. The solution needs inp(); see the sketch below.
Mateti CEG7370 Linda 26
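A hedged sketch of the counting problem (assuming the bag is preloaded as ("b", value) tuples and that no other process modifies them while we count; the tuple names are assumptions):

    #include <stdio.h>

    real_main()
    {
        int v, count = 0;

        /* Withdraw bag elements until inp() fails; a blocking in()
         * would hang once the bag is empty. */
        while (inp("b", ?v)) {
            ++count;
            out("counted b", v);     /* keep the data so it is not lost */
        }
        printf("bag size: %d\n", count);
        out("size", count);
    }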

27 References
- Sudhir Ahuja, Nicholas Carriero, and David Gelernter, "Linda and Friends," IEEE Computer (magazine), Vol. 19, No. 8, August 1986.
- ... has an entire book.
- JavaSpaces, en.wikipedia.org/wiki/Tuple_space
- Andrews, Section 10.x on Linda.
- Yet another prime number generator: Jeremy R. Johnson, ~jjohnson/ /winter/cs676.htm
Mateti CEG7370 Linda 27
