News and remarks
CS68 Principles of Programming Languages
Håkan Jonsson
Dept of Computer Science and Electrical Engineering, Luleå University of Technology, Sweden, and Computer Science, Dartmouth College
http://www.cs.dartmouth.edu/~hj

Apart from the Oz Browser there is also the Oz Panel, which shows runtime information. Open the Oz Panel from emacs: either via the menu option, or C-. C-. s

Extra help on HW2 is posted in Blackboard: a version of ToHTML to test your Parse. The label "doc" and the parentheses surrounding the list in 2a and 2b should not be there(!) The list should be the sole input to Parse. doc([ ]) should be [ ]

October 15, 2007 CS68 Lecture 9 - Håkan Jonsson Page 2

Contents today
Declarative concurrency. Threads.

Concurrency
By concurrency is meant that several activities execute simultaneously: network and disk I/O, multiple processes, Unix pipelines, the Search for Extra-Terrestrial Intelligence (SETI), etc. Concurrency in programming enables us to organize our programs into independent parts. Crucial questions: How are actions synchronized? How are data communicated?

In Oz, concurrency is achieved by means of threads. The basic syntax is:

   thread <S> end

Informal semantics: evaluate <S> in a separate thread; execution of the main program (and other threads) continues while <S> is being evaluated. Examples: Delay and Cmap.
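As a minimal sketch of the informal semantics above (assuming a Mozart system where Browse and Delay are available): the main program keeps running while the thread sleeps.

```oz
% The thread body <S> here is a Delay followed by a Browse.
thread
   {Delay 1000}       % suspend this thread for about 1000 ms
   {Browse after}     % appears roughly one second later
end
{Browse before}       % the main program continues immediately
```

The Browser shows `before` at once and `after` about a second later, illustrating that the spawned thread and the main program run concurrently.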
Concurrency
Each thread is executed as a sequential program, with the same semantics as we have already seen. In fact, all programs we have seen so far have been executed as a (single) thread. The meaning of the simultaneous execution of several threads is explained in terms of an execution state with multiple semantic stacks (one per thread) that all share the same single-assignment store. Multiple reads can happen truly simultaneously; multiple writes happen in some order. Dependencies among threads induce constraints upon the order of execution, but otherwise the order in which statements execute across threads is undetermined. Example: CFib, Fibonacci with threads.

Execution of threads: thread semantics
Formally speaking, the abstract machine model is extended. Execution states will have a multi-set MST of semantic stacks, one stack per thread. (A multi-set is a set in which an element can occur more than once; multi-sets are also called bags.) Machine state: (MST, σ), where MST = {ST1, ST2, …, STn} is a multi-set and σ a single-assignment store.
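The CFib example mentioned above is not reproduced in these notes; the following is a sketch of a concurrent Fibonacci in that spirit. Each recursive call runs in its own thread; X and Y are dataflow variables, so the addition automatically suspends until both threads have bound them.

```oz
fun {CFib N}
   if N =< 2 then 1
   else X Y in
      thread X = {CFib N-1} end   % compute each subproblem in its own thread
      thread Y = {CFib N-2} end
      X + Y                       % suspends until X and Y are bound
   end
end
{Browse {CFib 10}}   % 55
```

Because the threads only read and write dataflow variables in the single-assignment store, the result is the same as the sequential version; only the order of execution differs.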
Thread semantics
Execution semantics: Let (MST, σ) be the execution state. Then:
If all stacks of MST are empty, halt. The program has now terminated.
Otherwise, wait until at least one element of MST is runnable. This wait may, but need not, be indefinite.
Choose a runnable stack ST from MST. Let MST' = MST \ {ST}.
Pop the top element (<S>, E) of ST and evaluate it according to the rules. This gives a new stack ST' and a new store σ'.
Let (MST' ∪ {ST'}, σ') be the new machine state. Redo.
Notice that all active stacks share a single-assignment store. The semantics describe an implementation that will work on a 1-processor computer, but permit execution by a true multiprocessor.

Scheduling
Ideally, threads should appear to run concurrently, even if we are simulating concurrency on a single-processor system. The way a runnable thread is chosen affects this. If we choose "unfairly", some threads may appear not to run at all (or to run much more slowly than needed/expected). Starvation: a thread is runnable, but never gets chosen. Fair: every runnable thread will eventually be chosen. Typical scheduling approaches: time slicing using timers (possibly hardware interrupts) and priority levels.
Deadlock, all threads suspended: thread 1 is suspended on X; its next line of code will bind Y. Thread 2 is suspended on Y; its next line of code will bind X.

Order of execution

Easy separation of tasks
Example 1: In a client for an MMORPG (Massively Multiplayer Online Role-Playing Game) we could put the graphics computation, the network processing, the internal modeling (computing what actually happens, making predictions, and coping with uncertain information), and the interaction with the player in separate threads. Problem: balancing the share of processor power given to each thread so that the game becomes realistic.
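The deadlock described above can be sketched directly in Oz (Wait is the standard suspension primitive, discussed later in these notes): each thread suspends on a variable that only the other thread would bind, so neither ever becomes runnable again.

```oz
local X Y in
   thread {Wait X} Y = 1 end   % suspended on X; its next step would bind Y
   thread {Wait Y} X = 1 end   % suspended on Y; its next step would bind X
end
% Both threads stay suspended forever: a deadlock.
```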
Without threads, a program like this could be written as a single loop in which all tasks are carried out in sequence. This doesn't work in practice, since some operations might block and others take unacceptably long to complete (causing the rest to wait). In the concurrent declarative model, breaking up a sequential program into threads gives a program with equivalent semantics but, possibly, with another order of execution.
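The claim above can be sketched with a small dataflow computation: the sequential and the threaded version bind Z to the same value; only the possible orders of execution differ.

```oz
% Sequential version.
local X Y Z in
   X = 11  Y = 22
   Z = X + Y
   {Browse Z}             % 33
end

% Threaded version: same declarative semantics.
local X Y Z in
   thread Z = X + Y end   % suspends until both X and Y are bound
   thread X = 11 end
   thread Y = 22 end
   {Browse Z}             % 33, regardless of the schedule
end
```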
Coroutines: Many applications can be modeled as communicating sequential processes. A coroutine is a component, much like a procedure, that runs forever once started and that every now and then transfers control to other coroutines and goes to sleep until control is returned. A coroutine decides on its own when to run or not. Communication between coroutines is crucial. Typical applications are so-called producer-consumer and client-server applications. Example: Table 4.2. This example makes use of the Thread module. Drawback: the threads need to know about each other, and the activation and suspension of threads must be programmed explicitly. Next lecture we will learn about other, better ways to build client-server and producer-consumer applications.

Components
We know how to start a thread; but how do we wait for a thread to finish? Solution: create an unbound variable Done. In thread 1: bind Done at exit. In thread 2: call {Wait Done} to wait until Done is bound. Wait is easy to implement:

   proc {Wait V}
      case V of x then skip
      else skip end
   end
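The Done idiom above can be sketched as follows (the worker's body is a placeholder; any statement would do):

```oz
local Done in
   thread                % thread 1: the worker
      {Delay 500}        % placeholder for the actual work
      Done = unit        % bind Done at exit
   end
   {Wait Done}           % thread 2 (here the main thread) suspends
   {Browse finished}     % runs only after the worker has terminated
end
```

The case statement in Wait suspends until V is determined; whether the pattern then matches or the else branch runs is irrelevant, since both branches do nothing. Suspension on an unbound variable is the whole point.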
This idea generalizes to multiple threads: either call Wait many times, or introduce multiple variables that cascade:

   local T1 T2 ... Tn in
      thread T1 = done end
      thread T2 = T1 end
      ...
      thread Tn = Tn-1 end
      {Wait Tn}
   end

Notice how this "cascades": each thread waits for the previous ones to bind their variable (and terminate). More generally, Barrier synchronizes n threads, each executing a procedure, so that they all finish before Barrier returns:

   {Barrier [proc {$} <stmt>1 end
             proc {$} <stmt>2 end
             ...
             proc {$} <stmt>n end]}

   proc {Barrier Ps}
      fun {BarrierLoop Ps L}
         case Ps of P|Pr then M in
            thread {P} M=L end    % creates a new thread each recursive step
            {BarrierLoop Pr M}    % M is a new unbound variable each recursive step
         [] nil then L
         end
      end
      S = {BarrierLoop Ps unit}
   in
      {Wait S}
   end

Nondeterminism
If more than one thread can bind a given variable, we have observable nondeterminism: the meaning of such a program depends on which binding is carried out first. This is not decided by the programming language but depends on the policy of the thread scheduler. Example.

Next lecture (Wednesday)
Homework 2 due. Read Ch. 4.3
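Observable nondeterminism can be sketched with two threads racing to bind the same variable. Exactly one binding succeeds; the losing unification raises a failure exception, since the store is single-assignment. Which thread wins depends on the scheduler, not on the language semantics.

```oz
local X in
   thread X = 1 end   % one of these bindings succeeds;
   thread X = 2 end   % the other fails with a unification failure
   {Browse X}         % shows 1 or 2, depending on the schedule
end
```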