Declarative Concurrency (CTM 4)


Declarative Concurrency (CTM 4) Carlos Varela Rensselaer Polytechnic Institute Adapted with permission from: Seif Haridi KTH Peter Van Roy UCL November 21, 2014 C. Varela; Adapted with permission from S. Haridi and P. Van Roy 1

Review of concurrent programming

There are four basic approaches:
- Sequential programming (no concurrency)
- Declarative concurrency (streams in a functional language, Oz)
- Message passing with active objects (Erlang, SALSA)
- Atomic actions on shared state (Java)

The atomic action approach is the most difficult, yet it is the one you will probably be most exposed to! But if you have the choice, which approach should you use? Use the simplest approach that does the job: sequential if that is OK, else declarative concurrency if there is no observable nondeterminism, else message passing if you can get away with it.

Concurrency

Some programs are best written as a set of activities that run independently (concurrent programs). Concurrency is essential for interaction with the external environment; examples include GUIs (graphical user interfaces), operating systems, and web services. It also covers programs that are written independently but interact only when needed (client-server and peer-to-peer applications). This lecture is about declarative concurrency: programs with no observable nondeterminism, whose result is a function. Independent procedures execute at their own pace and may communicate through shared dataflow variables.

Overview

- Programming with threads: the model is augmented with threads
- Programming techniques: stream communication, order-determining concurrency, coroutines, concurrent composition
- Lazy execution: demand-driven computations, lazy streams, and list comprehensions
- Soft real-time programming

The sequential model

Statements are executed sequentially from a single semantic stack, operating on a single-assignment store. [Figure: one semantic stack over a store containing w = a, z = person(age: y), x, y = 42, u]

The concurrent model

Multiple semantic stacks (threads) operate over a shared single-assignment store. [Figure: semantic stacks 1..N over a store containing w = a, z = person(age: y), x, y = 42, u]

Concurrent declarative model

The following defines the syntax of a statement; <s> denotes a statement:

<s> ::= skip                                            empty statement
     |  <x> = <y>                                       variable-variable binding
     |  <x> = <v>                                       variable-value binding
     |  <s1> <s2>                                       sequential composition
     |  local <x> in <s1> end                           declaration
     |  proc {<x> <y1> ... <yn>} <s1> end               procedure introduction
     |  if <x> then <s1> else <s2> end                  conditional
     |  {<x> <y1> ... <yn>}                             procedure application
     |  case <x> of <pattern> then <s1> else <s2> end   pattern matching
     |  thread <s1> end                                 thread creation

The concurrent model

[Figure: before the step, the top of thread i's semantic stack ST holds the semantic statement (thread <s1> end, E), over the single-assignment store]

The concurrent model

[Figure: after the step, (<s1>, E) has become the top of a new semantic stack, which runs as its own thread over the same single-assignment store]

Basic concepts

The model allows multiple statements to execute at the same time. Imagine that these threads really execute in parallel; each has its own processor, but they share the same memory. Reading and writing different variables can be done simultaneously by different threads, as can reading the same variable. Writing the same variable is done sequentially. This view is in fact equivalent to an interleaving execution: a totally ordered sequence of computation steps, where threads take turns doing one or more steps in sequence.

Causal order

In a sequential program all execution states are totally ordered. In a concurrent program all execution states of a given thread are totally ordered. The execution state of the concurrent program as a whole is partially ordered.

Total order

In a sequential program all execution states are totally ordered. [Figure: a single chain of computation steps representing a sequential execution]

Causal order in the declarative model

In a concurrent program all execution states of a given thread are totally ordered; the execution state of the concurrent program is partially ordered. [Figure: threads T1, T2, T3 shown as chains of computation steps, with fork arrows where one thread creates another]

Causal order in the declarative model

[Figure: as before, plus causal links between threads: an arrow from the step that binds a dataflow variable (x or y) to the step in another thread that synchronizes on it]

Nondeterminism

An execution is nondeterministic if there is a computation step in which there is a choice of what to do next. Nondeterminism appears naturally when there is concurrent access to shared state.

Example of nondeterminism

[Figure: a store containing x and y = 5; thread 1 executes x = 1 and thread 2 executes x = 3, each on its own timeline] The thread that binds x first will continue; the other thread will raise an exception.
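As a rough analogue (not part of the lecture, and not the Oz API), the race above can be sketched in Python with a home-made single-assignment variable: reads block until the variable is bound, the first bind wins, and a conflicting second bind raises an error.

```python
import threading

class DataflowVar:
    # A sketch of an Oz-style single-assignment (dataflow) variable.
    def __init__(self):
        self._lock = threading.Lock()
        self._bound = threading.Event()
        self._value = None

    def bind(self, value):
        with self._lock:
            if self._bound.is_set():
                if self._value != value:
                    raise ValueError("already bound to a different value")
                return  # binding to the same value succeeds (like unification)
            self._value = value
            self._bound.set()

    def read(self):
        self._bound.wait()  # suspend until the variable is bound
        return self._value

x = DataflowVar()
results = []

def binder(v):
    try:
        x.bind(v)
        results.append(("bound", v))
    except ValueError:
        results.append(("failed", v))

t1 = threading.Thread(target=binder, args=(1,))
t2 = threading.Thread(target=binder, args=(3,))
t1.start(); t2.start(); t1.join(); t2.join()
# Exactly one thread wins the race to bind x; the other raises.
print(sorted(r[0] for r in results))  # ['bound', 'failed']
```

Which thread wins is nondeterministic, but the store always ends up with x bound to exactly one value.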

Nondeterminism

An execution is nondeterministic if there is a computation step in which there is a choice of what to do next. Nondeterminism appears naturally when there is concurrent access to shared state. In the concurrent declarative model, when there is only one binder for each dataflow variable, the nondeterminism is not observable in the store (i.e., the store develops to the same final result). This means that for correctness we can ignore the concurrency.

Scheduling

The choice of which thread to execute next, and for how long, is made by a part of the system called the scheduler. A thread is runnable if its next statement to execute is not blocked on a dataflow variable; otherwise the thread is suspended. A scheduler is fair if it does not starve a runnable thread, i.e., all runnable threads eventually execute. Fair scheduling makes it easy to reason about programs and program composition. Otherwise a program that is correct in isolation may never get processing time when composed with other programs.

The semantics

In the sequential model we had (ST, σ), where ST is a stack of semantic statements and σ is the single-assignment store. In the concurrent model we have (MST, σ), where MST is a (multi)set of stacks of semantic statements and σ is the single-assignment store.

The initial execution state

( { [ (<s>, ∅) ] }, ∅ )

The multiset contains a single stack, which holds the initial semantic statement <s> with an empty environment; the store is empty.

Execution (the scheduler)

At each step, one runnable semantic stack ST is selected from MST (the multiset of stacks), such that MST = {ST} ⊎ MST'. Assuming the current store is σ, one computation step is done that transforms ST to ST' and σ to σ'. The total computation state is transformed from (MST, σ) to ({ST'} ⊎ MST', σ'). Which stack is selected, and how many steps are taken, is the task of the scheduler; a good scheduler should be fair, i.e., each runnable thread will eventually be selected. The computation stops when there are no runnable stacks.

Example of runnable threads

proc {Loop P N}
   if N > 0 then {P} {Loop P N-1}
   else skip
   end
end
thread {Loop proc {$} {Show 1} end 1000} end
thread {Loop proc {$} {Show 2} end 1000} end

This program interleaves the execution of two threads, one printing 1 and the other printing 2. We assume a fair scheduler.

Dataflow computation

Threads suspend on data unavailability in dataflow variables. The {Delay X} primitive suspends the running thread for X milliseconds; after that, the thread is runnable again.

declare X
{Browse X}
local Y in
   thread {Delay 1000} Y = 10*10 end
   X = Y + 100*100
end
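A minimal Python sketch of the same idea (illustrative only; the delay value is shortened): the main thread's computation of X suspends until a helper thread, after a delay, makes Y available.

```python
import threading
import time

result = {}
y_ready = threading.Event()

def worker():
    time.sleep(0.1)       # stands in for {Delay 1000}
    result["Y"] = 10 * 10
    y_ready.set()         # binding Y wakes up any waiting reader

threading.Thread(target=worker).start()
y_ready.wait()            # dataflow synchronization: suspend until Y is bound
X = result["Y"] + 100 * 100
print(X)  # 10100
```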

Illustrating dataflow computation

declare X0 X1 X2 X3
{Browse [X0 X1 X2 X3]}
thread Y0 Y1 Y2 Y3 in
   {Browse [Y0 Y1 Y2 Y3]}
   Y0 = X0 + 1
   Y1 = X1 + Y0
   Y2 = X2 + Y1
   Y3 = X3 + Y2
   {Browse completed}
end

Enter the values of X0 to X3 incrementally. When X0 is bound, the thread computes Y0=X0+1 and then suspends again until X1 is bound.

Concurrent Map

fun {Map Xs F}
   case Xs of nil then nil
   [] X|Xr then thread {F X} end|{Map Xr F}
   end
end

This forks a thread for each individual element in the input list. Each thread will run only when both the element X and the function F are known.

Concurrent Map Function

fun {Map Xs F}
   case Xs of nil then nil
   [] X|Xr then thread {F X} end|{Map Xr F}
   end
end

What this looks like in the kernel language:

proc {Map Xs F Rs}
   case Xs of nil then Rs = nil
   [] X|Xr then R Rr in
      Rs = R|Rr
      thread {F X R} end
      {Map Xr F Rr}
   end
end
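As an illustrative analogue in Python (one thread per element, as in the Oz version), a preallocated result list plays the role of the dataflow variables R:

```python
import threading

def concurrent_map(xs, f):
    # Spawn one thread per element; slot i is the "dataflow variable"
    # that thread i binds with its result.
    results = [None] * len(xs)

    def run(i, x):
        results[i] = f(x)

    threads = [threading.Thread(target=run, args=(i, x))
               for i, x in enumerate(xs)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait until every result slot is bound
    return results

print(concurrent_map([1, 2, 3], lambda v: v * v))  # [1, 4, 9]
```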

How does it work?

If we enter the following statements:

declare F X Y Z
{Browse thread {Map X F} end}

a thread executing Map is created. It suspends immediately in the case statement because X is unbound. If we thereafter enter the following statements:

X = 1|2|Y
fun {F X} X*X end

the main thread traverses the list, creating two threads for the first two elements of the list.

How does it work?

The main thread traverses the list, creating two threads for the first two elements of the list: thread {F 1} end and thread {F 2} end. After entering:

Y = 3|Z
Z = nil

the program completes the computation of the main thread and of the newly created thread thread {F 3} end, resulting in the final list [1 4 9].

Simple concurrency with dataflow

Declarative programs can easily be made concurrent: just use the thread statement where concurrency is needed.

fun {Fib X}
   if X=<2 then 1
   else thread {Fib X-1} end + {Fib X-2}
   end
end

Understanding why

fun {Fib X}
   if X=<2 then 1
   else F1 F2 in
      F1 = thread {Fib X-1} end
      F2 = {Fib X-2}
      F1 + F2
   end
end

[Figure: the addition F1 + F2 has a dataflow dependency on the thread computing F1]
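A hedged Python sketch of the same structure (names are illustrative): {Fib X-1} runs in its own thread, the current thread computes {Fib X-2}, and the addition synchronizes on the spawned thread's result.

```python
import threading

def fib(x):
    # Base case, as in the Oz version.
    if x <= 2:
        return 1
    box = {}
    # F1 = thread {Fib X-1} end : compute fib(x-1) in a new thread.
    t = threading.Thread(target=lambda: box.setdefault("f1", fib(x - 1)))
    t.start()
    f2 = fib(x - 2)   # F2 is computed by the current thread.
    t.join()          # dataflow dependency: wait until F1 is bound.
    return box["f1"] + f2

print(fib(6))  # 8
```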

Execution of {Fib 6}

[Figure: the call tree of {Fib 6}: the running thread computes F6 from F5 and F4, forking a thread for each recursive {Fib X-1} call and synchronizing on each result]

Threads and Garbage Collection

Streams

A stream is a sequence of messages; it is a First-In First-Out (FIFO) channel. The producer augments the stream with new messages, and the consumer reads the messages one by one. [Figure: a producer sending x5 x4 x3 x2 x1 along a stream to a consumer]

Stream Communication I

The dataflow property of Oz makes it easy to write threads that communicate through streams in a producer-consumer pattern. A stream is a list that is created incrementally by one thread (the producer) and subsequently consumed by one or more threads (the consumers). The consumers consume the same elements of the stream.

Stream Communication II

- Producer: produces the elements incrementally
- Transducer(s): transform(s) the elements of the stream
- Consumer: accumulates the results

[Figure: thread 1 (producer) → thread 2 (transducer) → ... → thread N (consumer)]

Stream communication patterns

The producer, transducers, and consumer can in general be described by certain program patterns. We show various patterns.

Producer

fun {Producer State}
   if {More State} then
      X = {Produce State} in
      X|{Producer {Transform State}}
   else nil
   end
end

The definitions of More, Produce, and Transform are problem-dependent. State could be multiple arguments. The above definition is not a complete program!

Example Producer

fun {Generate N Limit}
   if N=<Limit then N|{Generate N+1 Limit}
   else nil
   end
end

Compared with the Producer pattern:
- The State is the two arguments N and Limit
- The predicate More is the condition N=<Limit
- The function Produce is the identity function on N
- The Transform function maps (N, Limit) to (N+1, Limit)

Consumer Pattern

fun {Consumer State InStream}
   case InStream of nil then {Final State}
   [] X|RestInStream then
      NextState = {Consume X State} in
      {Consumer NextState RestInStream}
   end
end

Final and Consume are problem-dependent. The consumer suspends until InStream is either a cons or nil.

Example Consumer

fun {Sum A Xs}
   case Xs of nil then A
   [] X|Xr then {Sum A+X Xr}
   end
end

Compared with the Consumer pattern:
- The State is A
- Final is just the identity function on State
- Consume takes X and State and returns X + State

Transducer Pattern 1

fun {Transducer State InStream}
   case InStream of nil then nil
   [] X|RestInStream then
      NextState#TX = {Transform X State}
      TX|{Transducer NextState RestInStream}
   end
end

A transducer keeps its state in State, receives messages on InStream, and sends messages on its output stream.

Transducer Pattern 2

fun {Transducer State InStream}
   case InStream of nil then nil
   [] X|RestInStream then
      if {Test X#State} then
         NextState#TX = {Transform X State}
         TX|{Transducer NextState RestInStream}
      else
         {Transducer State RestInStream}
      end
   end
end

A transducer keeps its state in State, receives messages on InStream, and sends messages on its output stream. This variant passes on only the elements that satisfy Test.

Example Transducer

[Figure: Generate produces 6 5 4 3 2 1; Filter with IsOdd passes 5 3 1]

fun {Filter Xs F}
   case Xs of nil then nil
   [] X|Xr then
      if {F X} then X|{Filter Xr F}
      else {Filter Xr F}
      end
   end
end

Filter is a transducer that takes an input stream and incrementally produces an output stream whose elements satisfy the predicate F.

local Xs Ys in
   thread Xs = {Generate 1 100} end
   thread Ys = {Filter Xs IsOdd} end
   thread {Browse Ys} end
end
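Python generators give a rough (single-threaded, demand-driven) analogue of this producer/transducer pipeline; the function names are illustrative, not part of the lecture:

```python
def generate(n, limit):
    # Producer: the stream n..limit, built incrementally.
    while n <= limit:
        yield n
        n += 1

def filter_stream(xs, pred):
    # Transducer: pass through only the elements satisfying pred.
    for x in xs:
        if pred(x):
            yield x

# Consumer: collect the odd numbers from the stream 1..10.
ys = filter_stream(generate(1, 10), lambda v: v % 2 == 1)
print(list(ys))  # [1, 3, 5, 7, 9]
```

Note the difference: generators are pulled by the consumer (lazy), whereas the Oz threads above run eagerly as data becomes available.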

Larger example: The sieve of Eratosthenes

Produces prime numbers. It takes a stream 2...N, peels off the first element X from the rest of the stream, and delivers the rest (filtered) to the next sieve. [Figure: Sieve takes Xs = X|Xr, filters Xr into Ys, feeds Ys to a recursive Sieve producing Zs, and outputs X|Zs]

Sieve

fun {Sieve Xs}
   case Xs of nil then nil
   [] X|Xr then Ys in
      thread Ys = {Filter Xr fun {$ Y} Y mod X \= 0 end} end
      X|{Sieve Ys}
   end
end

The program forks a filter thread on each sieve call.

Example call

local Xs Ys in
   thread Xs = {Generate 2 100000} end
   thread Ys = {Sieve Xs} end
   thread for Y in Ys do {Show Y} end end
end

[Figure: a pipeline of filter threads, Filter 2 → Filter 3 → Filter 5 → ... → Sieve, emitting 7 11 ...]
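The same pipeline structure can be sketched with Python generators (a lazy, single-threaded analogue of the thread-per-filter Oz version; names are illustrative):

```python
def ints_from(n, limit):
    # The stream n..limit.
    while n <= limit:
        yield n
        n += 1

def sieve(xs):
    # Peel off the first element x, emit it, then recursively sieve
    # the rest of the stream with multiples of x filtered out.
    it = iter(xs)
    try:
        x = next(it)
    except StopIteration:
        return
    yield x
    yield from sieve(v for v in it if v % x != 0)

print(list(sieve(ints_from(2, 30))))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Each recursive call adds one filter stage to the pipeline, just as each Oz sieve call forks one filter thread.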

Limitation of eager stream processing

The producer might be much faster than the consumer. This produces a large intermediate stream that requires potentially unbounded memory. [Figure: producer sending x5 x4 x3 x2 x1 to a slower consumer]

Solutions

There are three alternatives:
1. Play with the speed of the different threads, i.e., adjust the scheduler to make the producer slower
2. Create a bounded buffer, say of size N, so that the producer waits automatically when the buffer is full
3. Use a demand-driven approach, where the consumer activates the producer when it needs a new element (lazy evaluation)

The last two approaches introduce the notion of flow control between concurrent activities (very common).
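The bounded-buffer alternative can be sketched in Python with a blocking queue (an assumption-laden analogue, not the Oz mechanism): `put()` blocks while the buffer is full, so a fast producer is automatically throttled to the consumer's pace.

```python
import threading
import queue

buf = queue.Queue(maxsize=5)  # bounded buffer of size 5
DONE = object()               # sentinel marking end of stream
consumed = []

def producer():
    for i in range(100):
        buf.put(i)            # blocks while the buffer holds 5 items
    buf.put(DONE)

def consumer():
    while True:
        item = buf.get()      # blocks while the buffer is empty
        if item is DONE:
            break
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(len(consumed))  # 100
```

At no point do more than five unconsumed elements exist, so memory use stays bounded regardless of the producer's speed.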

Coroutines I

Languages that do not support concurrent threads might instead support a notion called coroutining. A coroutine is a nonpreemptive thread (sequence of instructions); there is no scheduler. Switching between threads is the programmer's responsibility.

Coroutines II, Comparison

Procedures: one sequence of instructions; the program transfers control explicitly with a call {P ...}, and when the callee terminates it returns to the caller. [Figure: procedure Q calls procedure P, which returns to Q]

Coroutines II, Comparison

Coroutines: new sequences of instructions; the program explicitly does all the scheduling, by spawn, suspend, and resume. [Figure: coroutine Q spawns coroutine P; control passes back and forth via resume P and resume Q]

Time

In concurrent computation one would like to handle time:

- proc {Time.delay T}: the running thread suspends for T milliseconds
- proc {Time.alarm T U}: immediately creates its own thread, and binds U to unit after T milliseconds

Example

local
   proc {Ping N}
      for I in 1..N do {Delay 500} {Browse ping} end
      {Browse 'ping terminate'}
   end
   proc {Pong N}
      for I in 1..N do {Delay 600} {Browse pong} end
      {Browse 'pong terminate'}
   end
in
   {Browse 'game started'}
   thread {Ping 1000} end
   thread {Pong 1000} end
end

Concurrent control abstraction

We have seen how threads are forked by thread ... end. A natural question to ask is: how can we join threads? [Figure: several threads forked from one point, later joining back into a single continuation]

Termination detection

This is a special case of detecting termination of multiple threads and making another thread wait on that event. The general scheme is quite easy because of dataflow variables:

thread S1 X1 = unit end
thread S2 X2 = X1 end
...
thread Sn Xn = Xn-1 end
{Wait Xn} % Continue main thread

When all threads terminate, the variables X1 ... Xn are merged together, labeling a single box that contains the value unit. {Wait Xn} suspends the main thread until Xn is bound.
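In Python the same effect is obtained directly by joining each worker thread (a sketch; Python gives us join as a primitive, whereas the Oz scheme builds it from chained dataflow variables):

```python
import threading

log = []

def task(name):
    # Stands in for the statements S1 .. Sn.
    log.append(name)

workers = [threading.Thread(target=task, args=(i,)) for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()  # plays the role of {Wait Xn}: continue only after all terminate

print(len(log))  # 5
```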

Concurrent Composition

conc S1 [] S2 [] ... [] Sn end

{Conc [proc {$} S1 end
       proc {$} S2 end
       ...
       proc {$} Sn end]}

Conc takes a single argument that is a list of nullary procedures. When it is executed, the procedures are forked concurrently. The next statement is executed only when all procedures in the list terminate.

Conc

local
   proc {Conc1 Ps I O}
      case Ps of P|Pr then M in
         thread {P} M = I end
         {Conc1 Pr M O}
      [] nil then O = I
      end
   end
in
   proc {Conc Ps}
      X in
      {Conc1 Ps unit X}
      {Wait X}
   end
end

This abstraction takes a list of zero-argument procedures and terminates after all the forked threads have terminated.

Example

local
   proc {Ping N}
      for I in 1..N do {Delay 500} {Browse ping} end
      {Browse 'ping terminate'}
   end
   proc {Pong N}
      for I in 1..N do {Delay 600} {Browse pong} end
      {Browse 'pong terminate'}
   end
in
   {Browse 'game started'}
   {Conc [proc {$} {Ping 1000} end
          proc {$} {Pong 1000} end]}
   {Browse 'game terminated'}
end

Futures

A future is a read-only capability of a single-assignment variable. For example, to create a future Y of the variable X we perform the operation Y = !!X. A thread trying to use the value of a future, e.g. using Y, will suspend until the variable of the future, e.g. X, gets bound.

One way to execute a procedure lazily, i.e., in a demand-driven manner, is to use the operation {ByNeed +P ?F}. ByNeed takes a zero-argument function P and returns a future F. When a thread tries to access the value of F, the function {P} is called and its result is bound to F. This allows us to perform demand-driven computations in a straightforward manner.
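A minimal Python sketch of the by-need idea (a hypothetical class, not the Oz ByNeed API): the zero-argument function runs at most once, on first access, and its result is cached.

```python
import threading

class ByNeed:
    # A by-need future: fn is evaluated lazily, at most once.
    def __init__(self, fn):
        self._fn = fn
        self._lock = threading.Lock()
        self._done = False
        self._value = None

    def value(self):
        with self._lock:
            if not self._done:        # first access triggers the computation
                self._value = self._fn()
                self._done = True
        return self._value

calls = []
y = ByNeed(lambda: calls.append("run") or 1)
print(len(calls))  # 0 -- nothing computed yet
print(y.value())   # 1 -- first access runs the function
print(y.value())   # 1 -- cached; the function ran only once
print(len(calls))  # 1
```

Calling `value()` here corresponds to touching the future, e.g. with {Wait Y} in Oz.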

Example

declare Y
{ByNeed fun {$} 1 end Y}
{Browse Y}

We will observe that Y becomes a future, i.e., we will see Y<Future> in the Browser. If we try to access the value of Y, it will get bound to 1. One way to access Y is to perform the operation {Wait Y}, which triggers the producing function.

Thread Priority and Real Time

Try to run the program using the following statement:

{Sum 0 thread {Generate 0 100000000} end}

Switch on the panel and observe the memory behavior of the program. You will quickly notice that this program does not behave well. The reason has to do with the asynchronous message passing: if the producer sends messages, i.e., creates new elements in the stream, at a faster rate than the consumer can consume them, increasingly more buffering is needed until the system starts to break down. One possible solution is to control the rate of thread execution experimentally so that the consumers get a larger time slice than the producers do.

Priorities

There are three priority levels: high, medium, and low (the default). A priority level determines how often a runnable thread is allocated a time slice. In Oz, a high-priority thread cannot starve a low-priority one; priority determines only how large a piece of the processor cake a thread can get. Each thread has a unique name. To get the name of the current thread, the procedure Thread.this/1 is called. Having a reference to a thread, by using its name, enables operations on threads such as terminating a thread or raising an exception in a thread. Thread operations are defined in the standard module Thread.

Thread priority and thread control

fun {Thread.state T}                %% returns thread state
proc {Thread.injectException T E}   %% exception E injected into thread
fun {Thread.this}                   %% returns first-class reference to current thread
proc {Thread.setPriority T P}       %% P is high, medium, or low
proc {Thread.setThisPriority P}     %% as above, on the current thread
fun {Property.get priorities}       %% get priority ratios
proc {Property.put priorities p(high:H medium:M)}  %% set priority ratios

Thread Priorities

Oz has three priority levels. The system procedure

{Property.put priorities p(high:X medium:Y)}

sets the processor-time ratio between high-priority and medium-priority threads to X:1, and the processor-time ratio between medium-priority and low-priority threads to Y:1. X and Y are integers. Example:

{Property.put priorities p(high:10 medium:10)}

Now let us make our producer-consumer program work. We give the producer low priority and the consumer high priority, and set the priority ratios to 10:1 and 10:1.

The program with priorities

local L in
   {Property.put priorities p(high:10 medium:10)}
   thread
      {Thread.setThisPriority low}
      L = {Generate 0 100000000}
   end
   thread
      {Thread.setThisPriority high}
      {Sum 0 L}
   end
end

Exercises

87. SALSA asynchronous message passing enables tagging messages with properties: priority, delay, and waitfor. Compare these mechanisms with Oz thread priorities, time delays and alarms, and futures.
88. How do SALSA tokens relate to Oz dataflow variables and futures?
89. What is the difference between multiple-thread termination detection in Oz and join blocks in SALSA?
90. CTM Exercise 4.11.3 (page 339). Compare the sequential and concurrent execution performance of equivalent SALSA programs.
91. CTM Exercise 4.11.5 (page 339).