COMP3151/9151 Foundations of Concurrency Lecture 8

1 COMP3151/9151 Foundations of Concurrency Lecture 8 Liam O'Connor CSE, UNSW (and data61) 8 Sept 2017

2 Shared Data Consider the Readers and Writers problem from Lecture 6: Problem We have a large data structure (i.e. a structure that cannot be updated in one atomic step) that is shared between some number of writers who are updating the data structure and some number of readers who are attempting to retrieve a coherent copy of the data structure. Desiderata: We want atomicity, in that each update happens in one go, and updates-in-progress or partial updates are not observable. We want consistency, in that any reader that starts after an update finishes will see that update. We want to minimise waiting.

3 A Crappy Solution Treat both reads and updates as critical sections: use any old critical-section solution (mutexes, semaphores, etc.) to sequentialise all reads and writes to the data structure. Observation: Updates are atomic and reads are consistent, but reads can't happen concurrently, which leads to unnecessary contention.
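
As a concrete illustration, here is a minimal sketch of this solution in Haskell, using the MVar primitives introduced later in this lecture. DB and the lock discipline are hypothetical placeholders, and a production version would use exception-safe bracketing.

import Control.Concurrent.MVar
import Data.IORef

type DB = [Int]  -- hypothetical stand-in for the large shared structure

-- Acquire the lock, run the whole read or update as a critical
-- section, then release the lock.
withLock :: MVar () -> IO a -> IO a
withLock lock body = do
  takeMVar lock            -- blocks while another thread holds the lock
  r <- body
  putMVar lock ()
  pure r

readDB :: MVar () -> IORef DB -> IO DB
readDB lock db = withLock lock (readIORef db)

updateDB :: MVar () -> IORef DB -> (DB -> DB) -> IO ()
updateDB lock db f = withLock lock (readIORef db >>= writeIORef db . f)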

4 A Better Solution Use a monitor with two condition variables. See Algorithm 6.12 of the slides or Ben-Ari's Algorithm 7.4. Observation: We have atomicity and consistency, and now multiple reads can execute concurrently. Still, we don't allow updates to execute concurrently with reads, to prevent partial updates from being observed by a reader.

5 Complication: Reading and Writing Now suppose we don't want readers to wait (much) while an update is performed. Instead, we'd rather they get an older version of the data structure. Trick: Rather than update the data structure in place, a writer creates their own local copy of the data structure, and then merely updates the (shared) pointer to the data structure to point to their copy. [Liam: Draw on the board] Atomicity: The only shared write is now just to one pointer. Consistency: Reads that start before the pointer update get the older version, but reads that start after get the latest.
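
A sketch of this trick, assuming the shared pointer is a Data.IORef (DB and update are hypothetical placeholders):

import Data.IORef

type DB = [Int]  -- hypothetical

-- The writer builds a private, updated copy; the only shared write is
-- the single atomic update of the pointer at the end.
writer :: IORef DB -> (DB -> DB) -> IO ()
writer ptr update = do
  old <- readIORef ptr        -- follow the shared pointer
  let new = update old        -- local copy, not yet visible to readers
  atomicWriteIORef ptr new    -- publish in one atomic pointer write

-- Readers never block: they see either the old or the new version,
-- never a partial update.
reader :: IORef DB -> IO DB
reader = readIORef

Note that concurrent writers would still need mutual exclusion among themselves (as the MVar-based solution later in this lecture provides); this sketch only shows why readers need not wait.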

6 Persistent Data Structures Copying is O(n) in the worst case, but we can do better for many tree-like types of data structure. Example (Binary Search Tree) [Figure: a binary search tree (root 64, with nodes 37, 102, 20, 40, 3, 22, 42) reached via a shared pointer; the duplicated nodes 64, 37 and 40 show the copied path when the tree is updated, with all other subtrees shared between versions. Slides 6-9 repeat the diagram as successive animation frames.]


10 Purely Functional Data Structures Persistent data structures that exclusively make use of copying over mutation are called purely functional data structures. They are so called because operations on them are best expressed in the form of mathematical functions that, given an input structure, return a new output structure:
insert v Leaf = Branch v Leaf Leaf
insert v (Branch x l r) = if v ≤ x then Branch x (insert v l) r else Branch x l (insert v r)
Purely functional programming languages like Haskell are designed to facilitate programming in this way.
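
Made self-contained, the slide's example looks like this in real Haskell (the Tree type is an assumption matching the slide's Leaf and Branch constructors):

data Tree a = Leaf | Branch a (Tree a) (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert v Leaf = Branch v Leaf Leaf
insert v (Branch x l r)
  | v <= x    = Branch x (insert v l) r  -- r is shared with the old tree
  | otherwise = Branch x l (insert v r)  -- l is shared with the old tree

Only the nodes on the path from the root to the insertion point are copied; every untouched subtree is shared between the old and new versions, which is what makes the structure persistent.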

11 Haskell Haskell is a programming language where all programs are written in the form of immutable definitions. There is no intrinsic notion of time or mutable state. Syntax is mostly derived from simple mathematical notation. Example (Factorial) [Liam: Show more examples, use GHCi a bit]
factorial :: Int → Int
factorial 0 = 1
factorial n = n * factorial (n - 1)
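
In the spirit of that speaker note, here is one more definition in the same equational style (an illustrative addition, not from the slides), together with how both might be tried in GHCi:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- In GHCi:
--   ghci> factorial 5
--   120
--   ghci> fib 10
--   55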

12 Computing with Functions Unfortunately, pure functions don't capture everything we want from computations. Processes have to interact with the user, global state, other processes, and the operating system. We model real processes in Haskell using the IO type. IO τ = a (possibly effectful) process that, when executed, produces a result of type τ. Note that the semantics of evaluation and execution are different things.
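
A small illustration of that distinction (my example, not the slide's): evaluating an expression of type IO τ merely constructs a description of a process; effects happen only when the runtime executes it.

main :: IO ()
main = do
  let greet = putStrLn "hello"  -- evaluated: just a value of type IO ();
                                -- nothing is printed by this let
  greet                         -- executed: prints once
  greet                         -- executed again: prints a second time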

13 Building up IO
pure :: ∀a. a → IO a
(>>=) :: ∀a b. IO a → (a → IO b) → IO b
getChar :: IO Char
putChar :: Char → IO ()
Example (Echo)
echo :: IO ()
echo = getChar >>= (λx. putChar x >>= λy. echo)
(here λx. y is notation for an anonymous function that returns y given argument x)

14 Monadic IO: Monad Laws Define a sequential composition operator (;IO) as follows:
(;IO) :: ∀a b c. (a → IO b) → (b → IO c) → (a → IO c)
(f ;IO g) x = f x >>= g
We have the following laws to aid reasoning:
Identity: f ;IO pure = f = pure ;IO f
Associativity: f ;IO (g ;IO h) = (f ;IO g) ;IO h
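
In standard Haskell this operator exists for every monad as Kleisli composition, (>=>) from Control.Monad, whose laws are exactly the identity and associativity laws above. A small sketch specialised to IO (echoShout is an invented example):

import Control.Monad ((>=>))
import Data.Char (toUpper)

-- (f >=> g) x = f x >>= g, just like (f ;IO g) on the slide.
echoShout :: () -> IO ()
echoShout = const getChar >=> putChar . toUpper

Running echoShout () reads one character and prints its uppercase version.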

15 Do Notation Haskell defines some convenience notation to make writing processes easier:
do x ← P; Q  =  P >>= λx. do Q
do P; Q      =  P >>= λ_. do Q
do P         =  P
Example (Echo, revisited)
echo :: IO ()
echo = do x ← getChar
          putChar x
          echo

16 Adding Concurrency We can have multiple threads easily enough:
forkIO :: IO () → IO ()
Example (Dueling Printers)
let loop c = do putChar c; loop c
in do forkIO (loop 'a'); loop 'z'
But what sort of synchronisation primitives are available?
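
A compilable version of the dueling printers, assuming GHC (note that GHC's actual forkIO returns a ThreadId, which the slide's simplified signature omits):

import Control.Concurrent (forkIO)
import System.IO (BufferMode (NoBuffering), hSetBuffering, stdout)

main :: IO ()
main = do
  hSetBuffering stdout NoBuffering  -- make the interleaving visible
  let loop c = do putChar c; loop c
  _ <- forkIO (loop 'a')            -- one thread prints 'a's...
  loop 'z'                          -- ...while the main thread prints 'z's

The output is some non-deterministic interleaving of 'a's and 'z's, determined by the scheduler.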

17 MVars The MVar is the simplest synchronisation primitive in Haskell. It can be thought of as a shared box which holds at most one value. Threads must take the value out of a full box to read it, and must put a value into an empty box to update it.
MVar Functions:
newMVar :: ∀a. a → IO (MVar a)    -- create a new MVar
takeMVar :: ∀a. MVar a → IO a     -- read/remove the value
putMVar :: ∀a. MVar a → a → IO () -- update/insert a value
Taking from an empty MVar or putting into a full one results in blocking. An MVar can be thought of as a channel containing at most one value.
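
A minimal demonstration of an MVar as a one-place channel between two threads (newEmptyMVar, which creates an empty box, is a real primitive not listed on the slide):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box (42 :: Int))  -- child thread fills the box
  v <- takeMVar box                      -- main thread blocks until full
  print v                                -- prints 42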

18 Readers and Writers We can treat MVars as shared variables with some definitions:
writeMVar m v = do takeMVar m; putMVar m v
readMVar m = do v ← takeMVar m; putMVar m v; pure v

problem :: DB → IO ()
problem initial = do
  db ← newMVar initial
  wl ← newMVar ()
  let reader = readMVar db >>= …
  let writer = do
        takeMVar wl
        d ← readMVar db
        let d' = update d
        evaluate d'
        writeMVar db d'
        putMVar wl ()
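
A fleshed-out, compilable version of this sketch; DB, update, and the reader's use of print are hypothetical placeholders, and readMVar is taken from Control.Concurrent.MVar rather than redefined:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Exception (evaluate)

type DB = [Int]      -- hypothetical
update :: DB -> DB
update = map (+ 1)   -- hypothetical

writeMVar :: MVar a -> a -> IO ()
writeMVar m v = do _ <- takeMVar m; putMVar m v

problem :: DB -> IO ()
problem initial = do
  db <- newMVar initial
  wl <- newMVar ()                -- write lock: serialises writers only
  let reader = readMVar db >>= print
  let writer = do
        takeMVar wl               -- exclude other writers
        d <- readMVar db          -- readers may still run concurrently
        let d' = update d
        _ <- evaluate d'          -- force (to WHNF) before publishing
        writeMVar db d'
        putMVar wl ()
  _ <- forkIO writer
  reader

main :: IO ()
main = problem [1, 2, 3]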

19 Fairness Each MVar has an attached FIFO queue, so GHC Haskell can ensure the following fairness property: No thread can be blocked indefinitely on an MVar unless another thread holds that MVar indefinitely. In LTL, this is something like (where Lₘ is the queue for MVar m):
□((p ∈ Lₘ ∧ ◇ putMVar(m)) ⇒ ◇ (p ∉ Lₘ))
It seems to me that a stronger guarantee can be made, but this is what is stated in the literature.

20 Evaluation Semantics The semantics of Haskell's evaluation are interesting but not particularly relevant for us. We will assume that it happens quietly without a fuss:
β-equivalence: (λx. M[x]) N ≡β M[N]
α-equivalence: λx. M[x] ≡α λy. M[y]
η-equivalence: λx. M x ≡η M
Let our ambient congruence relation ≡ be αβη-equivalence enriched with the following extra equations, justified by the monad laws:
pure N >>= M ≡ M N
(X >>= Y) >>= Z ≡ X >>= (λx. Y x >>= Z)
X ≡ X >>= pure

21 Processes This means that a Haskell expression of type IO τ will boil down to either pure x, where x is a value of type τ; or a >>= M, where a is some primitive IO action (forkIO p, readMVar v, etc.) and M is some function producing another IO τ. This is the head normal form for IO expressions. Definition: Define a language of processes P, which contains all (head-normal) expressions of type IO (). We want to define the semantics of the execution of these processes. Let's use operational semantics: a relation (↦) ⊆ P × P.

22 Semantics for forkIO To model forkIO, we need to model the parallel execution of multiple processes in our process language. We shall add a parallel composition operator ∥ to the language of processes:
P, Q ::= a >>= M | pure () | P ∥ Q
And the following ambient congruence equations:
P ∥ Q ≡ Q ∥ P
P ∥ (Q ∥ R) ≡ (P ∥ Q) ∥ R

23 Semantics for forkIO If we have multiple processes active, pick one of them non-deterministically to move: if P ↦ P′ then P ∥ Q ↦ P′ ∥ Q. The forkIO operation introduces a new process:
(forkIO P >>= M) ↦ P ∥ (pure () >>= M)

24 Semantics for MVars MVars are modelled as a special type of process, identified by a unique name. Values of MVar type merely contain the name of the process, so that putMVar and friends know where to look.
P, Q ::= a >>= M | pure () | P ∥ Q | ⟨⟩ₙ | ⟨v⟩ₙ
⟨⟩ₙ ∥ (putMVar n v >>= M) ↦ ⟨v⟩ₙ ∥ (pure () >>= M)
⟨v⟩ₙ ∥ (takeMVar n >>= M) ↦ ⟨⟩ₙ ∥ (pure v >>= M)
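
These two rules can be animated directly in Haskell. Below is a toy deep embedding of the process language (my own encoding, not from the lecture); step implements the two MVar rules for the left-oriented case only, omitting the search modulo the ∥ congruence equations that a full treatment would need:

type Name = Int

data Proc
  = Done                     -- pure ()
  | Put Name Int Proc        -- putMVar n v >>= M
  | Take Name (Int -> Proc)  -- takeMVar n >>= M
  | Par Proc Proc            -- P ∥ Q
  | Empty Name               -- ⟨⟩ₙ : an empty MVar named n
  | Full Name Int            -- ⟨v⟩ₙ: a full MVar named n holding v

-- One reduction step, or Nothing if the process is stuck.
step :: Proc -> Maybe Proc
step (Par (Empty n) (Put m v k)) | n == m = Just (Par (Full n v) k)
step (Par (Full n v) (Take m k)) | n == m = Just (Par (Empty n) (k v))
step (Par p q) = case step p of            -- otherwise step one side
  Just p' -> Just (Par p' q)
  Nothing -> fmap (Par p) (step q)
step _ = Nothing

For example, Par (Full 0 7) (Take 0 (\v -> Put 0 (v + 1) Done)) steps to Par (Empty 0) (Put 0 8 Done), and then to Par (Full 0 8) Done.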

25 Semantics for newMVar We might think that newMVar should have semantics like this:
(newMVar v >>= M) ↦ ⟨v⟩ₙ ∥ (pure n >>= M)   (n fresh)
But this approach has a number of problems: The name n is now globally scoped, without an explicit binder to introduce it. It doesn't accurately model the lifetime of the MVar, which should be garbage-collected once all threads that can access it finish. It makes MVars global objects, so our semantics aren't very abstract. We would like local communication to be local in our model.

26 Restriction Operator We introduce a restriction operator ν to our language of processes:
P, Q ::= a >>= M | pure () | P ∥ Q | ⟨⟩ₙ | ⟨v⟩ₙ | (ν n) P
Writing (ν n) P says that the MVar name n is only available in process P. Mentioning n outside P is not well-formed. We need the following additional congruence equations:
(ν n)(ν m) P ≡ (ν m)(ν n) P
(ν n)(P ∥ Q) ≡ P ∥ (ν n) Q   (if n ∉ P)

27 Better Semantics for newMVar The rule for newMVar is much the same as before, but now we explicitly restrict the MVar to M:
(newMVar v >>= M) ↦ (ν n)(⟨v⟩ₙ ∥ (pure n >>= M))   (n fresh)
We can always execute under a restriction: if P ↦ P′ then (ν n) P ↦ (ν n) P′.
Question: What happens when you put an MVar inside another MVar?

28 Garbage Collection If an MVar is no longer used, we just replace it with the do-nothing process:
(ν n) ⟨⟩ₙ ↦ pure ()
(ν n) ⟨v⟩ₙ ↦ pure ()
Extra processes that have outlived their usefulness disappear:
pure () ∥ P ↦ P

29 Process Algebra Our language P is called a process algebra, and our method of ascribing semantics to P is called structural operational semantics. Neither is a focus of this course, but both intersect heavily with other courses at UNSW. Process algebras and calculi of various kinds are covered in great detail in COMP6752 with Rob van Glabbeek, who is an expert in this field. Operational semantics is a major part of the course COMP[39]161, which I develop and teach along with Gabi Keller, who is the lecturer in charge. If there's time, we can talk about CCS.

30 Bibliography
Simon Marlow. Parallel and Concurrent Programming in Haskell. O'Reilly, 2013. http://chimera.labs.oreilly.com/books/1230000000929
Simon Peyton Jones, Andrew Gordon and Sigbjorn Finne. Concurrent Haskell. POPL '96, Association for Computing Machinery. http://microsoft.com/en-us/research/wp-content/uploads/1996/01/concurrent-haskell.pdf
Simon Marlow (editor). Haskell 2010 Language Report. https://www.haskell.org/onlinereport/haskell2010/