Concurrent ML John Reppy jhr@cs.uchicago.edu University of Chicago January 21, 2016

Outline

- Concurrent programming models
- Concurrent ML
- Multithreading via continuations (if there is time)

Concurrent programming models: different language-design axes

- Parallel vs. concurrent vs. distributed.
- Implicitly parallel vs. implicitly threaded vs. explicitly threaded.
- Deterministic vs. non-deterministic.
- Shared state vs. shared-nothing.

In this lecture, I will mostly focus on explicitly-threaded, non-deterministic, shared-nothing, concurrent languages.

Parallelism vs. concurrency

Parallel and concurrent programming address two different problems.

- Parallelism is about speed: exploiting parallel processors to solve problems more quickly.
- Concurrency is about nondeterminism: managing the unpredictable external world.

Why concurrency?

- Many applications are reactive systems that must cope with non-determinism (e.g., users and the network).
- Concurrency provides a clean abstraction of such interactions by hiding the underlying interleaving of execution.
- The thread abstraction is useful for large-grain, heterogeneous parallelism.

Synchronization and communication

For concurrent languages, the choice of synchronization and communication mechanisms is critical.

- Should these be independent or coupled?
- What guarantees should be provided?

Concurrency is hard(?)

Concurrent programming has a reputation for being hard.

- The problem is that shared-memory concurrency using locks and condition variables is the dominant model in concurrent languages.
- Shared-memory programming requires a defensive approach: protect against data races.
- Synchronization and communication are decoupled.
- Shared state often leads to poor modularity.

Software transactional memory

- Software transactional memory (STM) has been offered as a solution.
- The ideal semantics is appealing: simple and intuitive.
- The reality is less so: issues of nesting, exceptions, I/O, and weak vs. strong atomicity make things much more complicated.
- Also, STM does not support conditional synchronization well.

Message passing

In 1978, Tony Hoare proposed a concurrent programming model based on independent processes that communicate via messages (CSP).

- Well-defined interfaces between independent, sequential components.
- Natural encapsulation of state.
- Extends more easily to distributed implementations.
- Inspired many language designs, including CML, Go (and its predecessors), occam, occam-π, etc.

Message-passing design space

- Synchronous vs. asynchronous vs. RPC-style communication.
- Per-thread message addressing vs. channels.
- Synchronization constructs: asymmetric choice, symmetric choice, join-patterns.

Channels

For the rest of the talk, we assume channel-based communication with synchronous message passing. In SML, we can define the following interface to this model:

    type 'a chan
    val channel : unit -> 'a chan
    val recv : 'a chan -> 'a
    val send : ('a chan * 'a) -> unit

We might also include a way to monitor multiple channels, such as the following asymmetric choice operator:

    val selectrecv : ('a chan * ('a -> 'b)) list -> 'b
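
To make the interface concrete, here is a minimal sketch (not from the slides) of a producer/consumer pair; the spawn operation for creating a thread is an assumption, as is running this inside a CML-style implementation:

    (* Hypothetical sketch, assuming the interface above plus
     * spawn : (unit -> unit) -> unit for creating threads. *)
    fun producer (ch : int chan) = let
          fun loop i = (send (ch, i); loop (i+1))
          in loop 0 end

    fun consumer (ch : int chan) = let
          fun loop () = (print (Int.toString (recv ch) ^ "\n"); loop ())
          in loop () end

    fun main () = let
          val ch : int chan = channel ()
          in
            spawn (fn () => producer ch);
            consumer ch
          end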

Interprocess communication

In practice, it is often the case that

- interactions between processes involve multiple messages, and
- processes need to interact with multiple partners (nondeterministic choice).

These two properties of IPC cause a conflict.

Interprocess communication (continued)

For example, consider a possible interaction between a client and two servers.

[Message-sequence diagram: the client sends a request to both Server1 and Server2; one server's reply is accepted (reply/ack) while the other receives a negative acknowledgement (nack).]

Interprocess communication (continued)

Without abstraction, the code is a mess:

    let
      val replch1 = channel() and nack1 = cvar()
      val replch2 = channel() and nack2 = cvar()
    in
      send (reqch1, (req1, replch1, nack1));
      send (reqch2, (req2, replch2, nack2));
      selectrecv [
          (replch1, fn repl1 => (set nack2; act1 repl1)),
          (replch2, fn repl2 => (set nack1; act2 repl2))
        ]
    end

But traditional abstraction mechanisms do not support choice!

Concurrent ML

- Provides a uniform framework for synchronization: events.
- Event combinators for constructing abstract protocols.
- A collection of event constructors:
  - I-variables
  - M-variables
  - Mailboxes
  - Channels

  Plus I/O, timeouts, thread join, ...

Events

- We use event values to package up protocols as first-class abstractions.
- An event is an abstraction of a synchronous operation, such as receiving a message or a timeout.

      type 'a event

- Base-event constructors create event values for communication primitives:

      val recvevt : 'a chan -> 'a event
      val sendevt : ('a chan * 'a) -> unit event

Events (continued)

Event operations:

- Event wrappers for post-synchronization actions:

      val wrap : ('a event * ('a -> 'b)) -> 'b event

- Event generators for pre-synchronization actions and cancellation:

      val guard : (unit -> 'a event) -> 'a event
      val withnack : (unit event -> 'a event) -> 'a event

- Choice for managing multiple communications:

      val choose : 'a event list -> 'a event

- Synchronization on an event value:

      val sync : 'a event -> 'a
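
As a small illustration (not from the slides), the asymmetric choice operator selectrecv from the earlier channel interface can be defined with these combinators:

    (* A possible definition of selectrecv, assuming the interfaces above. *)
    fun selectrecv (arms : ('a chan * ('a -> 'b)) list) : 'b =
          sync (choose (List.map (fn (ch, f) => wrap (recvevt ch, f)) arms))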

Swap channels

A swap channel is an abstraction that allows two threads to swap values.

    type 'a swap_chan
    val swapchannel : unit -> 'a swap_chan
    val swapevt : ('a swap_chan * 'a) -> 'a event

Swap channels (continued)

The basic implementation of swap channels is straightforward:

    datatype 'a swap_chan = SC of ('a * 'a chan) chan

    fun swapchannel () = SC(channel ())

    fun swap (SC ch, vout) = let
          val inch = channel ()
          in
            select [
                wrap (recvevt ch, fn (vin, outch) => (send (outch, vout); vin)),
                wrap (sendevt (ch, (vout, inch)), fn () => recv inch)
              ]
          end

Note that the swap function offers to both send and receive on the channel so as to avoid deadlock.

Making swap channels first class

We can also make the swap operation first class,

    val swapevt : ('a swap_chan * 'a) -> 'a event

by using the guard combinator to allocate the reply channel:

    fun swapevt (SC ch, vout) = guard (fn () => let
          val inch = channel ()
          in
            choose [
                wrap (recvevt ch, fn (vin, outch) => (send (outch, vout); vin)),
                wrap (sendevt (ch, (vout, inch)), fn () => recv inch)
              ]
          end)
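
As a hypothetical usage sketch (spawn is again an assumption), two threads can exchange values through a swap channel like this:

    val sc : int swap_chan = swapchannel ()
    val _ = spawn (fn () => let
              val got = sync (swapevt (sc, 1))  (* offers 1, gets peer's value *)
              in print ("A got " ^ Int.toString got ^ "\n") end)
    val got = sync (swapevt (sc, 2))            (* gets 1, gives 2 *)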

Two-server interaction using events

Server abstraction:

    type server
    val rpcevt : (server * req) -> repl event

The client code is no longer a mess:

    select [
        wrap (rpcevt (server1, req1), fn repl1 => act1 repl1),
        wrap (rpcevt (server2, req2), fn repl2 => act2 repl2)
      ]

Note that select is shorthand for sync o choose.
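
One plausible client-side implementation of rpcevt (a sketch, not the talk's actual code) uses withnack so that the server learns when the client commits to some other event; the server representation and field names here are assumptions:

    (* Hypothetical sketch: a server is represented by its request
     * channel; requests carry a reply channel and a negative-
     * acknowledgement event that the server can sync on. *)
    datatype server = SERVER of (req * repl chan * unit event) chan

    fun rpcevt (SERVER reqch, req) = withnack (fn nack => let
          val replch = channel ()
          in
            (* spawn the send so the event-generator function does not block *)
            spawn (fn () => send (reqch, (req, replch, nack)));
            recvevt replch
          end)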

Other abstractions

Events have been used to implement a wide range of abstractions in CML, including:

- Futures
- Promises (asynchronous RPC)
- Actors
- Join patterns
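
For instance, a future can be sketched in a few lines on top of events (a hypothetical example, assuming spawn; exceptions are ignored for brevity):

    (* Hypothetical sketch of futures via events: a server thread
     * computes f() once and then repeatedly offers the result, so the
     * returned event can be synchronized on any number of times. *)
    fun future (f : unit -> 'a) : 'a event = let
          val ch = channel ()
          fun serve v = (send (ch, v); serve v)
          in
            spawn (fn () => serve (f ()));
            recvevt ch
          end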

Example: distributed tuple spaces

The Linda family of languages uses tuple spaces to organize distributed computation. A tuple space is a shared associative memory with three operations:

- output adds a tuple.
- input removes a tuple from the tuple space. The tuple is selected by matching against a template.
- read reads a tuple from the tuple space without removing it.

    val output : (ts * tuple) -> unit
    val input : (ts * template) -> value list event
    val read : (ts * template) -> value list event

Distributed tuple spaces (continued)

There are two ways to implement a distributed tuple space:

- Read-all, write-one
- Read-one, write-all

We choose read-all, write-one. In this organization, a write operation goes to a single processor, while an input or read operation must query all processors.

Distributed tuple spaces (continued)

The input protocol is complicated:

1. The reader broadcasts the query to all tuple-space servers.
2. Each server checks for a match; if it finds one, it places a hold on the tuple and sends it to the reader. Otherwise, it remembers the request to check against subsequent write operations.
3. The reader waits for a matching tuple. When it receives a match, it sends an acknowledgement to the source and cancellation messages to the others.
4. When a tuple server receives an acknowledgement, it removes the tuple; when it receives a cancellation, it removes any hold or queued request.

Distributed tuple spaces (continued)

Here is the message traffic for a successful input operation:

[Message-sequence diagram: the client selects on a recv from its local tuple-server proxy; the request travels via the remote tuple-server proxy to the remote tuple server; the reply flows back to the client, and an accept message is sent on to the remote server.]

Distributed tuple spaces (continued)

We use negative acknowledgements to cancel requests when the client chooses some other event.

[Message-sequence diagram: the client selects some other event, so a cancel message propagates from the local tuple-server proxy through the remote tuple-server proxy to the remote tuple server.]

Note that we must confirm that a client accepts a tuple before sending out the acknowledgement.

Implementing concurrency in functional languages

- Functional languages can provide a platform for efficient implementations of concurrency features.
- This is especially true for languages that support continuations.

Continuations

Continuations are a semantic concept that captures the meaning of the rest of the program.

In a functional language, we can apply the continuation-passing-style (CPS) transformation to make continuations explicit. For example, consider the expression (x+y)*z. We can rewrite it as

    (fn k => k (x+y)) (fn v => v*z)

In this rewritten code, the variable k is bound to the continuation of the expression x+y.
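
As a slightly larger illustration (not from the slides), here is factorial written in continuation-passing style:

    (* Illustrative example: factorial in CPS. The extra argument k is
     * the continuation that consumes the result. *)
    fun factk (0, k) = k 1
      | factk (n, k) = factk (n - 1, fn v => k (n * v))

    val x = factk (10, fn v => v)  (* 3628800 *)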

First-class continuations

Some languages make it possible to reify the implicit continuations. For example, SML/NJ provides the following interface to its first-class continuations:

    type 'a cont
    val callcc : ('a cont -> 'a) -> 'a
    val throw : 'a cont -> 'a -> 'b

First-class continuations can be used to implement many kinds of control flow, including loops, backtracking, exceptions, and various concurrency mechanisms.
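
For example (a small illustration, not from the slides), callcc gives a convenient early exit from a traversal:

    (* Early exit using SML/NJ's first-class continuations: stop
     * multiplying as soon as a zero is seen. *)
    open SMLofNJ.Cont

    fun product (xs : int list) : int =
          callcc (fn k =>
            List.foldl
              (fn (0, _) => throw k 0   (* escape with the final answer *)
                | (x, acc) => x * acc)
              1 xs)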

Coroutines

Implementing a simple coroutine package using continuations is straightforward.

    val fork : (unit -> unit) -> unit
    val exit : unit -> 'a
    val yield : unit -> unit

Coroutines (continued)

    (* We assume the SML/NJ Library's Queue structure and SML/NJ's
     * first-class continuations. *)
    structure Q = Queue
    open SMLofNJ.Cont

    val rdyq : unit cont Q.queue = Q.mkQueue ()

    (* run the next ready thread *)
    fun dispatch () = throw (Q.dequeue rdyq) ()

    fun yield () = callcc (fn k => (Q.enqueue (rdyq, k); dispatch ()))

    fun exit () = dispatch ()

    fun fork f = callcc (fn parentk => (
          Q.enqueue (rdyq, parentk);
          ((f ()) handle _ => ());
          exit ()))

To support preemption and/or parallelism requires additional runtime-system support.
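
A small usage sketch (not from the slides): two cooperatively scheduled threads interleave their output by calling yield:

    (* Hypothetical use of the package above. fork runs the child
     * immediately (enqueuing the parent), and scheduling is
     * cooperative FIFO, so the prints interleave. *)
    fun main () = (
          fork (fn () => (print "A1\n"; yield (); print "A2\n"));
          fork (fn () => (print "B1\n"; yield (); print "B2\n"));
          print "main done\n")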