Declarative Concurrency (CTM 4)


Declarative Concurrency (CTM 4) Carlos Varela Rensselaer Polytechnic Institute Adapted with permission from: Seif Haridi KTH Peter Van Roy UCL October 30, 2015 C. Varela; Adapted with permission from S. Haridi and P. Van Roy 1

Review of concurrent programming There are four basic approaches: Sequential programming (no concurrency) Declarative concurrency (streams in a functional language, Oz) Message passing with active objects (Erlang, SALSA) Atomic actions on shared state (Java) The atomic action approach is the most difficult, yet it is the one you will probably be most exposed to! But, if you have the choice, which approach to use? Use the simplest approach that does the job: sequential if that is ok, else declarative concurrency if there is no observable nondeterminism, else message passing if you can get away with it. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 2

Concurrency How to do several things at once. Concurrency means running several activities, each at its own pace. A thread is an executing sequential program. A program can have multiple threads by using the thread instruction. {Browse 99*99} can respond immediately while Pascal is still computing:

thread P in
   P = {Pascal 21}
   {Browse P}
end
{Browse 99*99}

S. Haridi and P. Van Roy 3

State How to make a function learn from its past? We would like to add memory to a function to remember past results Adding memory as well as concurrency is an essential aspect of modeling the real world Consider {FastPascal N}: we would like it to remember the previous rows it calculated in order to avoid recalculating them We need a concept (memory cell) to store, change and retrieve a value The simplest concept is a (memory) cell which is a container of a value One can create a cell, assign a value to a cell, and access the current value of the cell Cells are not variables declare C = {NewCell 0} {Assign C {Access C}+1} {Browse {Access C}} S. Haridi and P. Van Roy 4

Nondeterminism What happens if a program has both concurrency and state together? This is very tricky The same program can give different results from one execution to the next This variability is called nondeterminism Internal nondeterminism is not a problem if it is not observable from outside S. Haridi and P. Van Roy 5

Nondeterminism (2)

declare C = {NewCell 0}
thread {Assign C 1} end
thread {Assign C 2} end

One possible execution: at t0, C = {NewCell 0} (cell C contains 0); at t1, {Assign C 1} (cell C contains 1); at t2, {Assign C 2} (cell C contains 2, the final value). S. Haridi and P. Van Roy 6

Nondeterminism (3)

declare C = {NewCell 0}
thread {Assign C 1} end
thread {Assign C 2} end

Another possible execution: at t0, C = {NewCell 0} (cell C contains 0); at t1, {Assign C 2} (cell C contains 2); at t2, {Assign C 1} (cell C contains 1, the final value). S. Haridi and P. Van Roy 7

Nondeterminism (4)

declare C = {NewCell 0}
thread I in
   I = {Access C}
   {Assign C I+1}
end
thread J in
   J = {Access C}
   {Assign C J+1}
end

What are the possible results? Both threads increment the cell C by 1, so the expected final result of C is 2. Is that all? S. Haridi and P. Van Roy 8

Nondeterminism (5) Another possible final result is the cell C containing the value 1.

declare C = {NewCell 0}
thread I in
   I = {Access C}
   {Assign C I+1}
end
thread J in
   J = {Access C}
   {Assign C J+1}
end

One such execution: at t0, C = {NewCell 0}; at t1, I = {Access C} (I equals 0); at t2, J = {Access C} (J equals 0); at t3, {Assign C J+1} (C contains 1); at t4, {Assign C I+1} (C contains 1). S. Haridi and P. Van Roy 9

Lessons learned Combining concurrency and state is tricky. Complex programs have many possible interleavings, and programming is a question of mastering the interleavings. Famous bugs in the history of computer technology are due to designers overlooking an interleaving (e.g., the Therac-25 radiation therapy machine giving doses 1000s of times too high, resulting in death or injury). 1. If possible, try to avoid concurrency and state together. 2. Encapsulate state and communicate between threads using dataflow. 3. Try to master interleavings by using atomic operations. S. Haridi and P. Van Roy 10

Atomicity How can we master the interleavings? One idea is to reduce the number of interleavings by programming with coarse-grained atomic operations. An operation is atomic if it is performed as a whole or not at all: no intermediate (partial) results can be observed by any other concurrent activity. In simple cases we can use a lock to ensure atomicity of a sequence of operations. For this we need a new entity (a lock). S. Haridi and P. Van Roy 11

Atomicity (2)

declare L = {NewLock}

% Thread 1
lock L then <sequence of ops 1> end

% Thread 2
lock L then <sequence of ops 2> end

S. Haridi and P. Van Roy 12

The program

declare
C = {NewCell 0}
L = {NewLock}
thread
   lock L then I in
      I = {Access C}
      {Assign C I+1}
   end
end
thread
   lock L then J in
      J = {Access C}
      {Assign C J+1}
   end
end

The final result of C is always 2. S. Haridi and P. Van Roy 13

Locks and Deadlock: Dining Philosophers (figure: four philosophers Ph0, Ph1, Ph2, Ph3 around a table sharing four chopsticks ch0, ch1, ch2, ch3; each philosopher needs the two adjacent chopsticks) C. Varela 14
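A minimal Oz sketch (not on the slide) of the deadlock issue, using one lock per chopstick: if every philosopher always acquires the lower-numbered chopstick lock first, the circular wait that causes deadlock cannot arise. The names Philosopher and C0..C3 are illustrative only.

declare C0 = {NewLock} C1 = {NewLock} C2 = {NewLock} C3 = {NewLock}
proc {Philosopher First Second}   % First must be the lower-numbered chopstick lock
   lock First then
      lock Second then {Browse eating} end
   end
end
thread {Philosopher C0 C1} end   % Ph0
thread {Philosopher C1 C2} end   % Ph1
thread {Philosopher C2 C3} end   % Ph2
thread {Philosopher C0 C3} end   % Ph3 takes ch0 before ch3, breaking the circular wait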

Review of concurrent programming There are four basic approaches: Sequential programming (no concurrency) Declarative concurrency (streams in a functional language, Oz) Message passing with active objects (Erlang, SALSA) Atomic actions on shared state (Java) The atomic action approach is the most difficult, yet it is the one you will probably be most exposed to! But, if you have the choice, which approach to use? Use the simplest approach that does the job: sequential if that is ok, else declarative concurrency if there is no observable nondeterminism, else message passing if you can get away with it. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 15

Declarative Concurrency This lecture is about declarative concurrency: programs with no observable nondeterminism, whose result is a function. Independent procedures execute at their own pace and may communicate through shared dataflow variables. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 16
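For example, here is a small sketch (not on the slide) of two threads communicating through the shared dataflow variable X; whatever order the scheduler chooses, the program always displays 121, so the concurrency is not observable.

declare X Y
thread X = 11 end
thread Y = X * X end   % suspends until X is bound
{Browse Y}             % always displays 121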

Single-assignment Variables Variables are short-cuts for values; they cannot be assigned more than once. declare V = 9999*9999 {Browse V*V} A variable identifier is what you type; a store variable is part of the memory system. The declare statement creates a store variable and assigns its memory address to the identifier V in the environment. S. Haridi and P. Van Roy 17

Dataflow What happens when multiple threads try to communicate? A simple way is to make communicating threads synchronize on the availability of data (data-driven execution). If an operation tries to use a variable that is not yet bound, it will wait. The variable is called a dataflow variable. (figure: a dataflow network connecting variables X, Y, Z, U through two multiplications and an addition) S. Haridi and P. Van Roy 18

Dataflow (II) Two important properties of dataflow: calculations work correctly independent of how they are partitioned between threads (concurrent activities), and calculations are patient, i.e. they do not signal errors but wait for data availability. The dataflow property of variables makes sense when programs are composed of multiple threads.

declare X
thread {Delay 5000} X=99 end
{Browse 'Start'} {Browse X*X}

declare X
thread {Browse 'Start'} {Browse X*X} end
{Delay 5000} X=99

S. Haridi and P. Van Roy 19

The concurrent model Multiple semantic stacks (threads). (figure: Semantic Stack 1 through Semantic Stack N all operating over one single-assignment store containing bindings such as w = a, z = person(age: y), y = 42, and the unbound variables x and u) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 20

Concurrent declarative model The following defines the syntax of a statement; <s> denotes a statement:

<s> ::= skip                                              empty statement
      | <x> = <y>                                         variable-variable binding
      | <x> = <v>                                         variable-value binding
      | <s>1 <s>2                                         sequential composition
      | local <x> in <s>1 end                             declaration
      | proc {<x> <y>1 ... <y>n} <s>1 end                 procedure introduction
      | if <x> then <s>1 else <s>2 end                    conditional
      | {<x> <y>1 ... <y>n}                               procedure application
      | case <x> of <pattern> then <s>1 else <s>2 end     pattern matching
      | thread <s>1 end                                   thread creation

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 21

The concurrent model (figure: the semantic stack ST of Thread i with the statement (thread <s>1 end, E) on top, above the shared single-assignment store) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 22

The concurrent model (figure: after executing the thread statement, a new semantic stack is created with (<s>1, E) on top; both threads share the single-assignment store) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 23

Basic concepts The model allows multiple statements to execute at the same time. Imagine that these threads really execute in parallel, each with its own processor, but sharing the same memory. Reading and writing different variables can be done simultaneously by different threads, as can reading the same variable. Writing the same variable is done sequentially. The above view is in fact equivalent to an interleaving execution: a totally ordered sequence of computation steps, where threads take turns doing one or more steps in sequence. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 24

Nondeterminism An execution is nondeterministic if there is a computation step in which there is a choice of what to do next. Nondeterminism appears naturally when there is concurrent access to shared state. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 25

Example of nondeterminism (figure: a store containing the unbound variable x and the binding y = 5; Thread 1 executes x = 1 while Thread 2 executes x = 3) The thread that binds x first will continue; the other thread will raise an exception. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 26

Nondeterminism An execution is nondeterministic if there is a computation step in which there is a choice of what to do next. Nondeterminism appears naturally when there is concurrent access to shared state. In the concurrent declarative model, when there is only one binder for each dataflow variable, or multiple compatible bindings (e.g., to partial values), the nondeterminism is not observable on the store (i.e., the store evolves to the same final result). This means that for correctness we can ignore the concurrency. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 27
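A small illustrative sketch (not on the slide) of compatible bindings to partial values: two threads each bind part of the record X, and in either execution order the store ends up with the same result, f(1 2).

declare X
thread X = f(1 _) end   % binds the first field of X
thread X = f(_ 2) end   % binds the second field of X
{Browse X}              % in either order, X ends up as f(1 2)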

Scheduling The choice of which thread to execute next, and for how long, is done by a part of the system called the scheduler. A thread is runnable if its next statement to execute is not blocked on a dataflow variable; otherwise the thread is suspended. A scheduler is fair if it does not starve a runnable thread, i.e., all runnable threads eventually execute. Fair scheduling makes it easy to reason about programs and program composition. Otherwise some program that is correct in isolation may never get processing time when composed with other programs. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 28

Example of runnable threads

proc {Loop P N}
   if N > 0 then {P} {Loop P N-1}
   else skip end
end
thread {Loop proc {$} {Show 1} end 1000} end
thread {Loop proc {$} {Show 2} end 1000} end

This program will interleave the execution of two threads, one printing 1 and the other printing 2. We assume a fair scheduler. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 29

Dataflow computation Threads suspend on data unavailability in dataflow variables. The {Delay X} primitive makes the thread suspend for X milliseconds; after that, the thread is runnable.

declare X
{Browse X}
local Y in
   thread {Delay 1000} Y = 10*10 end
   X = Y + 100*100
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 30

Illustrating dataflow computation

declare X0 X1 X2 X3
{Browse [X0 X1 X2 X3]}
thread Y0 Y1 Y2 Y3 in
   {Browse [Y0 Y1 Y2 Y3]}
   Y0 = X0 + 1
   Y1 = X1 + Y0
   Y2 = X2 + Y1
   Y3 = X3 + Y2
   {Browse completed}
end

Enter incrementally the values of X0 to X3. When X0 is bound, the thread will compute Y0=X0+1, and will suspend again until X1 is bound. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 31

Concurrent Map

fun {Map Xs F}
   case Xs of nil then nil
   [] X|Xr then thread {F X} end|{Map Xr F}
   end
end

This will fork a thread for each individual element in the input list. Each thread will run only when both the element X and the function F are known. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 32

Concurrent Map Function

fun {Map Xs F}
   case Xs of nil then nil
   [] X|Xr then thread {F X} end|{Map Xr F}
   end
end

What this looks like in the kernel language:

proc {Map Xs F Rs}
   case Xs of nil then Rs = nil
   [] X|Xr then R Rr in
      Rs = R|Rr
      thread {F X R} end
      {Map Xr F Rr}
   end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 33

How does it work? If we enter the following statements:

declare F X Y Z
{Browse thread {Map X F} end}

a thread executing Map is created. It will suspend immediately in the case statement because X is unbound. If we thereafter enter the following statements:

X = 1|2|Y
fun {F X} X*X end

the main thread will traverse the list, creating two threads for the first two elements of the list. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 34

How does it work? The main thread will traverse the list, creating two threads for the first two elements of the list: thread {F 1} end and thread {F 2} end. After entering:

Y = 3|Z
Z = nil

the program will complete the computation of the main thread and of the newly created thread thread {F 3} end, resulting in the final list [1 4 9]. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 35

Simple concurrency with dataflow Declarative programs can be easily made concurrent: just use the thread statement where concurrency is needed.

fun {Fib X}
   if X=<2 then 1
   else thread {Fib X-1} end + {Fib X-2}
   end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 36

Understanding why

fun {Fib X}
   if X=<2 then 1
   else F1 F2 in
      F1 = thread {Fib X-1} end
      F2 = {Fib X-2}
      F1 + F2   % dataflow dependency: the addition waits for F1
   end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 37

Execution of {Fib 6} (figure: the call tree of {Fib 6}, with F6 forking a thread for F5 while computing F4, F5 forking F4 while computing F3, and so on down to F2 and F1; the legend marks "fork a thread", "synchronize on result", and "running thread") C. Varela; Adapted with permission from S. Haridi and P. Van Roy 38

Streams A stream is a sequence of messages; it is a First-In First-Out (FIFO) channel. The producer augments the stream with new messages, and the consumer reads the messages, one by one. (figure: a producer sending the stream x1, x2, x3, x4, x5, ... to a consumer) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 39
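A minimal sketch (not on the slide) of a stream in Oz, using the list-based representation described on the next slides: one thread builds the stream while another displays each element as it arrives.

declare Xs
thread Xs = 1|2|3|nil end                  % producer builds the (finite) stream
thread for X in Xs do {Browse X} end end   % consumer reads it element by element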

Stream Communication I The dataflow property of Oz easily enables writing threads that communicate through streams in a producer-consumer pattern. A stream is a list that is created incrementally by one thread (the producer) and subsequently consumed by one or more threads (the consumers). The consumers all see the same elements of the stream. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 40

Stream Communication II The producer produces the elements incrementally, the transducer(s) transform the elements of the stream, and the consumer accumulates the results. (figure: a pipeline of threads, producer -> transducer -> ... -> transducer -> consumer) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 41

Stream communication patterns The producer, transducers, and the consumer can, in general, be described by certain program patterns We show various patterns C. Varela; Adapted with permission from S. Haridi and P. Van Roy 42

Producer

fun {Producer State}
   if {More State} then
      X = {Produce State} in
      X|{Producer {Transform State}}
   else nil end
end

The definitions of More, Produce, and Transform are problem dependent. State could be multiple arguments. The above definition is not a complete program! C. Varela; Adapted with permission from S. Haridi and P. Van Roy 43

Example Producer

fun {Generate N Limit}
   if N=<Limit then N|{Generate N+1 Limit}
   else nil end
end

fun {Producer State}
   if {More State} then
      X = {Produce State} in
      X|{Producer {Transform State}}
   else nil end
end

The State is the two arguments N and Limit. The predicate More is the condition N=<Limit. The Produce function is the identity function on N. The Transform function maps (N, Limit) to (N+1, Limit). C. Varela; Adapted with permission from S. Haridi and P. Van Roy 44

Consumer Pattern

fun {Consumer State InStream}
   case InStream of nil then {Final State}
   [] X|RestInStream then
      NextState = {Consume X State} in
      {Consumer NextState RestInStream}
   end
end

Final and Consume are problem dependent. The consumer suspends until InStream is either a cons or nil. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 45

Example Consumer

fun {Sum A Xs}
   case Xs of nil then A
   [] X|Xr then {Sum A+X Xr}
   end
end

fun {Consumer State InStream}
   case InStream of nil then {Final State}
   [] X|RestInStream then
      NextState = {Consume X State} in
      {Consumer NextState RestInStream}
   end
end

The State is A. Final is just the identity function on State. Consume takes X and State and returns X + State. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 46
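A small wiring sketch (not on the slide) that connects the Generate producer and the Sum consumer defined above; S is eventually bound to 55.

local Xs S in
   thread Xs = {Generate 1 10} end   % producer builds 1|2|...|10|nil
   thread S = {Sum 0 Xs} end         % consumer folds the stream as it grows
   {Browse S}                        % eventually shows 55
end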

Transducer Pattern 1

fun {Transducer State InStream}
   case InStream of nil then nil
   [] X|RestInStream then
      NextState#TX = {Transform X State} in
      TX|{Transducer NextState RestInStream}
   end
end

A transducer keeps its state in State, receives messages on InStream, and sends messages on OutStream. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 47

Transducer Pattern 2

fun {Transducer State InStream}
   case InStream of nil then nil
   [] X|RestInStream then
      if {Test X#State} then
         NextState#TX = {Transform X State} in
         TX|{Transducer NextState RestInStream}
      else {Transducer State RestInStream} end
   end
end

A transducer keeps its state in State, receives messages on InStream, and sends messages on OutStream. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 48

Example Transducer (figure: Generate emits the stream 1 2 3 4 5 6 ...; the Filter with IsOdd passes on 1 3 5 ...)

fun {Filter Xs F}
   case Xs of nil then nil
   [] X|Xr then
      if {F X} then X|{Filter Xr F}
      else {Filter Xr F} end
   end
end

Filter is a transducer that takes an input stream and incrementally produces an output stream of the elements that satisfy the predicate F.

local Xs Ys in
   thread Xs = {Generate 1 100} end
   thread Ys = {Filter Xs IsOdd} end
   thread {Browse Ys} end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 49

Larger example: The sieve of Eratosthenes Produces prime numbers. It takes a stream 2...N, peels off the first element, filters its multiples out of the rest of the stream, and delivers the rest to the next sieve. (figure: Sieve takes the stream Xs = X|Xr and outputs X|Zs, where Filter removes the multiples of X from Xr to give Ys, and a recursive Sieve turns Ys into Zs) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 50

Sieve

fun {Sieve Xs}
   case Xs of nil then nil
   [] X|Xr then Ys in
      thread Ys = {Filter Xr fun {$ Y} Y mod X \= 0 end} end
      X|{Sieve Ys}
   end
end

The program forks a filter thread on each sieve call. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 51

Example call

local Xs Ys in
   thread Xs = {Generate 2 100000} end
   thread Ys = {Sieve Xs} end
   thread for Y in Ys do {Show Y} end end
end

(figure: the resulting pipeline of threads Filter 2, Filter 3, Filter 5, ..., Sieve; the output continues 7 11 ...) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 52

Limitation of eager stream processing The producer might be much faster than the consumer. This will produce a large intermediate stream that requires potentially unbounded memory storage. (figure: producer sending the stream x1 ... x5 toward a slower consumer) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 53

Solutions There are three alternatives: 1. Play with the speed of the different threads, i.e. play with the scheduler to make the producer slower. 2. Create a bounded buffer, say of size N, so that the producer waits automatically when the buffer is full. 3. Use a demand-driven approach, where the consumer activates the producer when it needs a new element (lazy evaluation). The last two approaches introduce the notion of flow control between concurrent activities (very common). C. Varela; Adapted with permission from S. Haridi and P. Van Roy 54

Coroutines I Languages that do not support concurrent threads might instead support a notion called coroutining. A coroutine is a non-preemptive thread (a sequence of instructions); there is no scheduler. Switching between coroutines is the programmer's responsibility. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 55

Coroutines II, Comparison Procedures: one sequence of instructions; the program transfers control explicitly with a call {P ...}, and when the procedure terminates it returns to the caller. (figure: procedure Q calls {P ...}; P runs and returns to Q; compared with coroutines, where Q spawns P and control moves back and forth via resume P / resume Q) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 56

Coroutines II, Comparison Coroutines: new sequences of instructions; the program explicitly does all the scheduling, by spawn, suspend, and resume. (figure: the same diagram, with coroutine Q spawning P and control passed explicitly via resume P / resume Q) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 57

Time In concurrent computation one would like to handle time. proc {Time.delay T}: the running thread suspends for T milliseconds. proc {Time.alarm T U}: immediately creates its own thread, and binds U to unit after T milliseconds. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 58
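A short usage sketch (not on the slide), based on the Time.alarm description above: the call returns immediately, and the main thread only blocks when it waits for U.

declare U
{Time.alarm 1000 U}   % returns immediately; a new thread binds U to unit after 1000 ms
{Browse waiting}
{Wait U}              % the main thread suspends here until the alarm fires
{Browse 'alarm fired'}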

Example

local
   proc {Ping N}
      for I in 1..N do {Delay 500} {Browse ping} end
      {Browse 'ping terminate'}
   end
   proc {Pong N}
      for I in 1..N do {Delay 600} {Browse pong} end
      {Browse 'pong terminate'}
   end
in
   {Browse 'game started'}
   thread {Ping 1000} end
   thread {Pong 1000} end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 59

Concurrent control abstraction We have seen how threads are forked by thread ... end. A natural question to ask is: how can we join threads? (figure: several forked threads joining back into one) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 60

Termination detection This is a special case of detecting termination of multiple threads, and making another thread wait on that event. The general scheme is quite easy because of dataflow variables:

thread S1 X1 = unit end
thread S2 X2 = X1 end
...
thread Sn Xn = Xn-1 end
{Wait Xn}   % continue main thread

When all threads terminate, the variables X1 ... Xn will be merged together, labeling a single box that contains the value unit. {Wait Xn} suspends the main thread until Xn is bound. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 61

Concurrent Composition

conc S1 [] S2 [] ... [] Sn end

{Conc [proc {$} S1 end
       proc {$} S2 end
       ...
       proc {$} Sn end]}

Conc takes a single argument that is a list of nullary procedures. When it is executed, the procedures are forked concurrently. The next statement is executed only when all procedures in the list terminate. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 62

Conc

local
   proc {Conc1 Ps I O}
      case Ps of P|Pr then M in
         thread {P} M = I end
         {Conc1 Pr M O}
      [] nil then O = I
      end
   end
in
   proc {Conc Ps}
      X in
      {Conc1 Ps unit X}
      {Wait X}
   end
end

This abstraction takes a list of zero-argument procedures and terminates after all these threads have terminated. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 63

Example

local
   proc {Ping N}
      for I in 1..N do {Delay 500} {Browse ping} end
      {Browse 'ping terminate'}
   end
   proc {Pong N}
      for I in 1..N do {Delay 600} {Browse pong} end
      {Browse 'pong terminate'}
   end
in
   {Browse 'game started'}
   {Conc [proc {$} {Ping 1000} end
          proc {$} {Pong 1000} end]}
   {Browse 'game terminated'}
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 64

Futures A future is a read-only capability of a single-assignment variable. For example, to create a future of the variable X we perform the operation !! to create a future Y: Y = !!X. A thread trying to use the value of a future, e.g. using Y, will suspend until the variable of the future, e.g. X, gets bound. One way to execute a procedure lazily, i.e. in a demand-driven manner, is to use the operation {ByNeed +P ?F}. ByNeed takes a zero-argument function P, and returns a future F. When a thread tries to access the value of F, the function {P} is called, and its result is bound to F. This allows us to perform demand-driven computations in a straightforward manner. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 65

Example

declare Y
{ByNeed fun {$} 1 end Y}
{Browse Y}

We will observe that Y becomes a future, i.e., we will see Y<Future> in the Browser. If we try to access the value of Y, it will get bound to 1. One way to access Y is by performing the operation {Wait Y}, which triggers the producing function. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 66
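Building on the ByNeed description above, here is a hedged sketch (not on the slides) of a demand-driven stream: each tail is a future, so an element is produced only when a consumer actually touches it. LazyGenerate is an illustrative name, not part of the lecture code.

declare
fun {LazyGenerate N}
   {ByNeed fun {$} N|{LazyGenerate N+1} end}   % the rest of the stream is again a future
end
Xs = {LazyGenerate 1}
{Browse Xs.1}   % touching the stream forces only this first element (shows 1)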

Thread Priority and Real Time Try to run the program using the following statement: {Sum 0 thread {Generate 0 100000000} end} Switch on the panel and observe the memory behavior of the program. You will quickly notice that this program does not behave well. The reason has to do with the asynchronous message passing. If the producer sends messages, i.e., creates new elements in the stream, at a faster rate than the consumer can consume them, increasingly more buffering will be needed until the system starts to break down. One possible solution is to control experimentally the rate of thread execution so that the consumers get a larger time slice than the producers do. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 67

Priorities There are three priority levels: high, medium, and low (the default). A priority level determines how often a runnable thread is allocated a time slice. In Oz, a high-priority thread cannot starve a low-priority one. Priority determines only how large a piece of the processor cake a thread can get. Each thread has a unique name. To get the name of the current thread, the procedure Thread.this/1 is called. Having a reference to a thread, by using its name, enables operations on threads such as terminating a thread or raising an exception in a thread. Thread operations are defined in the standard module Thread. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 68

Thread priority and thread control

fun {Thread.state T}               %% returns thread state
proc {Thread.injectException T E}  %% exception E injected into thread
fun {Thread.this}                  %% returns 1st-class reference to the current thread
proc {Thread.setPriority T P}      %% P is high, medium, or low
proc {Thread.setThisPriority P}    %% as above, on the current thread
fun {Property.get priorities}      %% get priority ratios
proc {Property.put priorities p(high:H medium:M)}  %% set priority ratios

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 69

Thread Priorities Oz has three priority levels. The system procedure {Property.put priorities p(high:X medium:Y)} sets the processor-time ratio to X:1 between high-priority threads and medium-priority threads. It also sets the processor-time ratio to Y:1 between medium-priority threads and low-priority threads. X and Y are integers. Example: {Property.put priorities p(high:10 medium:10)} Now let us make our producer-consumer program work. We give the producer low priority and the consumer high priority. We also set both priority ratios to 10:1. C. Varela; Adapted with permission from S. Haridi and P. Van Roy 70

The program with priorities

local L in
   {Property.put priorities p(high:10 medium:10)}
   thread
      {Thread.setThisPriority low}
      L = {Generate 0 100000000}
   end
   thread
      {Thread.setThisPriority high}
      {Sum 0 L}
   end
end

C. Varela; Adapted with permission from S. Haridi and P. Van Roy 71

Exercises 67. SALSA asynchronous message passing enables tagging messages with properties: priority, delay, and waitfor. Erlang uses a selective receive mechanism that can be used to implement priorities and delays. Compare these mechanisms with Oz thread priorities, time delays and alarms, and futures. 68. How do SALSA tokens relate to Oz dataflow variables and futures? 69. What is the difference between multiple-thread termination detection in Oz, process groups in Erlang, and join blocks in SALSA? 70. CTM Exercise 4.11.3 (page 339) - Compare the sequential and concurrent execution performance of equivalent SALSA and Erlang programs. 71. CTM Exercise 4.11.5 (page 339) C. Varela; Adapted with permission from S. Haridi and P. Van Roy 72