Dining Philosophers with π-calculus


Matthew Johnson
January 28, 2015

Overview

The primary goal of this project was to explore the π-calculus by implementing a deadlock-free solution to the dining philosophers problem. Some π-calculus resources that I found useful, and that can provide more information if you are interested, are the CSinParallel module, the Wikipedia page, and the π-calculus FAQ. For the dining philosophers themselves, I used Allen Downey's Little Book of Semaphores in addition to the class materials.

The π-calculus

Two of the most common models of computation are Turing machines, which focus on sequential execution of instructions and modification of memory addresses and underlie imperative languages like C and Python, and the λ-calculus, which creates and composes functions to perform calculations and is the theoretical basis of functional languages like Lisp, Erlang, and Haskell. These models, while useful for many applications, share a major shortcoming: they are sequential, and do not directly address much of the theory behind parallel computing. The π-calculus (and other process calculi) fill this gap by focusing on passing messages between concurrent processes to perform computations.

Although the word "calculus" often refers to the mathematics of differentiation and integration, it can more generally refer to any strategy for performing calculations through symbolic manipulation. Before trying to apply the π-calculus to any problem, it is important first to understand the basic symbols and manipulations that it allows.

Symbols and Components

One of the key ideas of the π-calculus is the name. A name functions both as a channel that can be used to send messages and as a variable that can be sent through a channel. In general, names are written in lower case. In all of the following, P and Q represent generic processes. (Output prefixing is conventionally typeset with an overbar, c̄⟨x⟩.P; here the angle brackets alone mark an output.)

    Symbol     Name               General Meaning/Interpretation
    0          nil process        a process that has stopped or does nothing
    P | Q      concurrency        the processes P and Q executing in parallel
    c(x).P     input prefixing    wait for a message on channel c, store the message
                                  as name x, and then continue with process P
    c⟨x⟩.P     output prefixing   send the name x across channel c, then continue
                                  with process P
    (νc)P      name allocation    creates a new channel name c in the process P
    !P         replication        create copies of P, as in a server forking to
                                  process a user's request or a new thread being
                                  spawned to execute a function

Axioms

Given these symbols, we also need rules to manipulate them. There are two categories of axioms: structural congruence and reduction semantics.

Structural Congruence

Alpha-conversion:
    P ≡ Q if renaming bound names (i.e., names created through name allocation or input prefixing) can transform P into Q.
    (P and Q are the same process if they differ only in the names they use.)

Parallel Composition:
    P | Q ≡ Q | P        (P | Q) | R ≡ P | (Q | R)        P | 0 ≡ P
    (Concurrent processes can be written in any order with any grouping; the concurrency operator is commutative and associative, and has 0 as its identity.)

Restriction:
    (νx)(νy)P ≡ (νy)(νx)P        (νx)0 ≡ 0
    (The order in which names are created doesn't matter, and the nil process doesn't use any names.)

Relation between Restriction and Parallelism:
    (νx)(P | Q) ≡ (νx)P | Q    if x is not a free name of Q
    (The "scope extension" axiom: if a process doesn't use a name, it doesn't need to be in the scope of the name, but it can be.)

Replication:
    !P ≡ P | !P
    (A replicating process can create any number of copies.)

Reduction Semantics

Communication:
    x⟨z⟩.P | x(y).Q → P | Q[z/y]
    (Sending the name z replaces all instances of y in Q with z.)

Reduction:
    if P → Q then P | R → Q | R
    (If you can reduce P to Q, then you can also do this in parallel with R.)

    if P → Q then (νx)P → (νx)Q
    (If you can reduce P to Q, then you can also do this with the extra name x.)

    if P ≡ P′ and P′ → Q′ where Q′ ≡ Q, then P → Q
    (Congruent processes reduce to congruent processes.)

Examples

To help illustrate these rules, we will consider a sample client-server application from the π-calculus FAQ linked above. In this example, we use an extension of the π-calculus that allows sending and receiving multiple names at a time and also includes arithmetic.

The operation that the server will perform is an increment: it will receive a message containing a channel and a number, and it will send a message on the channel it received containing the number it received, plus 1. One instance of this operation looks something like this:

    incr(c, x).c⟨x + 1⟩.0

In order to change this into a server, we need to make it possible for this operation to be carried out on demand. We can accomplish this by simply adding the replication operator, "!":

    !incr(c, x).c⟨x + 1⟩.0

Now, whenever some process sends a message on the channel incr, this process can spawn a new copy of the increment service, which will handle the message.

In order to use this service, we send a message containing a channel and a number on the channel incr, and then listen for a response on the channel that we sent. Typically, we create a new name to send, to ensure that the channel is exclusive to this client process and to the process spawned by the server to handle the request. Given this, a client process might look like this:

    (νa) incr⟨a, 17⟩.a(y).P

To see how the axioms help us understand the execution, we will informally trace the execution of a client-server transaction (a more explicit walk-through appears near the end of the CSinParallel page). We start with the client and server running in parallel:

    (νa) incr⟨a, 17⟩.a(y).P | !incr(c, x).c⟨x + 1⟩.0

The replication axiom allows the server to spawn a new copy of the increment service:

    (νa) incr⟨a, 17⟩.a(y).P | incr(c, x).c⟨x + 1⟩.0 | !incr(c, x).c⟨x + 1⟩.0

Since we have an input and an output on the same channel, we can apply the communication reduction:

    (νa)( a(y).P | c⟨x + 1⟩.0[a, 17/c, x] ) | !incr(c, x).c⟨x + 1⟩.0

We can further simplify this by substituting the sent values wherever they appear:

    (νa)( a(y).P | a⟨17 + 1⟩.0 ) | !incr(c, x).c⟨x + 1⟩.0

Once more, we can use the communication reduction:

    (νa)( P[18/y] | 0 ) | !incr(c, x).c⟨x + 1⟩.0

If a doesn't appear anywhere in the process P, we can simply drop the creation of the name a by one of the restriction congruence axioms. Additionally, the parallel composition axioms let us drop the process 0. This gives the last step of this computation:

    P[18/y] | !incr(c, x).c⟨x + 1⟩.0

which is what we were expecting: the client proceeds with an incremented number, and the server is still available to handle more requests.

Although the basic π-calculus is Turing complete (informally, anything that can be computed using a language like C can also be computed using the π-calculus), as this example shows, you can work much more easily if you explicitly include extensions such as arithmetic and the ability to send/receive multiple names at once. Other common extensions include general conveniences such as control structures, more data types, or new operators; specific applications, such as cryptography, add their own primitives for easier modelling. For the implementation of the dining philosophers, however, we will only need the basic π-calculus described above.

Dining Philosophers Strategy

The solution that we will implement comes from The Little Book of Semaphores by Allen B. Downey, and is a variation of the waiter solution we saw in class. Both the waiter solution and this solution prevent deadlock by avoiding the situation in which each of the 5 philosophers is holding one chopstick. Whereas the waiter solution enforces this by keeping track of which chopsticks are in use and whether giving out the last chopstick would cause a deadlock, Downey's solution enforces it by ensuring that at most 4 of the 5 philosophers are holding a chopstick at a time. This way, even if all 5 chopsticks are in use, we are guaranteed that at least one of the philosophers has two chopsticks, and deadlock is avoided.

We can picture this solution with not a waiter but a bouncer: while the philosophers are thinking, they are outside the dining room. Before they can start picking up chopsticks, they must first enter the dining room, but the bouncer ensures that at most 4 of the philosophers are in the dining room at a time. This solution is especially nice for the π-calculus since it requires less bookkeeping, which is convenient because the π-calculus doesn't readily offer the structures needed by the waiter solution.

Semaphores

As you might guess, a solution from The Little Book of Semaphores requires semaphores. A semaphore is a useful tool for synchronisation: the basic idea is that the semaphore has an internal counter and two operations, often called wait and signal, for interacting with this counter. In the context of this problem, the counter represents how many units of some resource are available; wait claims a resource if one is available and otherwise blocks until one is; and signal gives a claimed resource to a blocked process, if there is one, and simply returns it otherwise.

We can use semaphores both to control how many philosophers are allowed in the dining room and to ensure that at most one philosopher is holding a given chopstick at a time. For the dining room semaphore, we initialize the counter to 4 (to indicate that up to 4 philosophers can be in the room at a time); waiting on the semaphore corresponds to trying to enter the room, and signalling the semaphore corresponds to leaving the room. For a chopstick semaphore, we initialize the counter to 1 (since only one philosopher can hold a chopstick at a time); wait corresponds to trying to pick up the chopstick, and signal corresponds to putting the chopstick back on the table.
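As a concrete analogy (my own sketch, not part of the original development), the wait/signal behaviour described above can be demonstrated with Python's `threading.Semaphore`; the names `dining_room` and `philosopher` are illustrative:

```python
import threading
import time

# Counting semaphore initialized to 4: the bouncer admits at most
# 4 of the 5 philosophers into the dining room at once.
dining_room = threading.Semaphore(4)

lock = threading.Lock()   # protects the two counters below
inside = 0                # philosophers currently in the room
max_inside = 0            # high-water mark of simultaneous occupancy

def philosopher():
    global inside, max_inside
    dining_room.acquire()            # wait: claim a slot, or block until one frees up
    with lock:
        inside += 1
        max_inside = max(max_inside, inside)
    time.sleep(0.05)                 # linger, so that a fifth thread has to block
    with lock:
        inside -= 1
    dining_room.release()            # signal: hand the slot back

threads = [threading.Thread(target=philosopher) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(max_inside)  # never exceeds 4
```

All five threads eventually pass through, but the occupancy high-water mark stays at or below the semaphore's initial count.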

Fortunately, it is fairly simple to implement wait and signal in the notation of the π-calculus:

    !wait(c).sem(x).c⟨x⟩.0
    !signal(c).sem⟨c⟩.0

and we initialize this semaphore to n by having n copies of the process sem⟨0⟩.0 (the value 0 sent is arbitrary). In order to use this semaphore, a process will do something like

    (νa) wait⟨a⟩.a(x).CRIT.signal⟨a⟩.0

where CRIT is the critical section.

In this implementation, the number of unresolved sem⟨...⟩.0 outputs can be interpreted as the number of resources available. If there are one or more unresolved sem⟨...⟩.0 outputs, the sem(x) input in the wait process can immediately receive a message, and it will let the process that is trying to wait on the semaphore continue. If there are no unresolved sem⟨...⟩.0 outputs, the sem(x) input will block until one appears. Conversely, signal creates a sem⟨c⟩.0 output, which will either immediately communicate with a pending sem(x), unblocking it, or remain unresolved until some process tries to wait.

Loops

Before we can implement the dining philosophers, we need one more structure to help represent the philosophers. Each of the 5 philosophers is in an infinite loop, alternating between thinking and eating. Although the π-calculus provides the replication operator, this on its own isn't enough: if the body of one of the philosopher processes is represented by the process PHIL, the process !PHIL would create infinitely many clones of this philosopher, which would be problematic.

In order to ensure that exactly one copy of each philosopher is active at a time, we can combine replication with communication so that as one copy of a philosopher ends, it starts another copy. This idea can be implemented in this way:

    !loop(x).PHIL.loop⟨x⟩.0

On its own, however, this doesn't do anything: since the first operation of each copy is a receive, every copy is blocked until there is some send. To resolve this, we can kick-start this philosopher with the process loop⟨x⟩.0. (It is worth noting that, although we don't use x for anything in the dining philosophers, this pattern could be used to pass information between iterations.)

Putting Things Together

The only thing remaining, then, is the body of the philosophers themselves. In each think-eat cycle, the philosopher must think (which I will represent with THINK), wait to enter the dining room, obtain both chopsticks, eat (EAT), return the chopsticks, and leave the room. We can represent one iteration of this cycle as the process

    (νphil)
        wait_room⟨phil⟩.phil(x).wait_Lchop⟨phil⟩.phil(x).wait_Rchop⟨phil⟩.phil(x).
        EAT.
        signal_Rchop⟨phil⟩.signal_Lchop⟨phil⟩.signal_room⟨phil⟩.
        0

where wait_room, signal_room, wait_Lchop, signal_Lchop, wait_Rchop, and signal_Rchop are the wait and signal channels for the semaphores representing the space in the room and the left and right chopsticks.

Adding the loop construct from above to the philosopher process, we obtain this process as a complete philosopher:

    (νphil) (
        !loop_phil(x).
            wait_room⟨phil⟩.phil(x).wait_Lchop⟨phil⟩.phil(x).wait_Rchop⟨phil⟩.phil(x).
            EAT.
            signal_Rchop⟨phil⟩.signal_Lchop⟨phil⟩.signal_room⟨phil⟩.
            loop_phil⟨x⟩.0
        | loop_phil⟨x⟩.0 )

Thus, we can represent the entire five-philosopher problem with a process containing the parallel execution of five of these philosopher processes (with the names changed so that they are distinguishable and use the correct chopsticks), one semaphore initialized to 1 for each of the chopsticks, and one semaphore initialized to 4 for the room.

Complete Listing

This is the complete π-calculus version of my implementation of the dining philosophers solution. All of the following processes run in parallel:

    !wait_room(c).sem_r(x).c⟨x⟩.0
    !signal_room(c).sem_r⟨c⟩.0
    sem_r⟨0⟩.0 | sem_r⟨0⟩.0 | sem_r⟨0⟩.0 | sem_r⟨0⟩.0

    !wait_chop0(c).sem_c0(x).c⟨x⟩.0
    !signal_chop0(c).sem_c0⟨c⟩.0
    sem_c0⟨0⟩.0

    !wait_chop1(c).sem_c1(x).c⟨x⟩.0
    !signal_chop1(c).sem_c1⟨c⟩.0
    sem_c1⟨0⟩.0

    !wait_chop2(c).sem_c2(x).c⟨x⟩.0
    !signal_chop2(c).sem_c2⟨c⟩.0
    sem_c2⟨0⟩.0

    !wait_chop3(c).sem_c3(x).c⟨x⟩.0
    !signal_chop3(c).sem_c3⟨c⟩.0
    sem_c3⟨0⟩.0

    !wait_chop4(c).sem_c4(x).c⟨x⟩.0
    !signal_chop4(c).sem_c4⟨c⟩.0
    sem_c4⟨0⟩.0

    (νphil0) (
        !loop_phil0(x).
            wait_room⟨phil0⟩.phil0(x).wait_chop0⟨phil0⟩.phil0(x).wait_chop1⟨phil0⟩.phil0(x).
            EAT.
            signal_chop1⟨phil0⟩.signal_chop0⟨phil0⟩.signal_room⟨phil0⟩.
            loop_phil0⟨x⟩.0
        | loop_phil0⟨x⟩.0 )

    (νphil1) (
        !loop_phil1(x).
            wait_room⟨phil1⟩.phil1(x).wait_chop1⟨phil1⟩.phil1(x).wait_chop2⟨phil1⟩.phil1(x).
            EAT.
            signal_chop2⟨phil1⟩.signal_chop1⟨phil1⟩.signal_room⟨phil1⟩.
            loop_phil1⟨x⟩.0
        | loop_phil1⟨x⟩.0 )

    (νphil2) (
        !loop_phil2(x).
            wait_room⟨phil2⟩.phil2(x).wait_chop2⟨phil2⟩.phil2(x).wait_chop3⟨phil2⟩.phil2(x).
            EAT.
            signal_chop3⟨phil2⟩.signal_chop2⟨phil2⟩.signal_room⟨phil2⟩.
            loop_phil2⟨x⟩.0
        | loop_phil2⟨x⟩.0 )

    (νphil3) (
        !loop_phil3(x).
            wait_room⟨phil3⟩.phil3(x).wait_chop3⟨phil3⟩.phil3(x).wait_chop4⟨phil3⟩.phil3(x).
            EAT.
            signal_chop4⟨phil3⟩.signal_chop3⟨phil3⟩.signal_room⟨phil3⟩.
            loop_phil3⟨x⟩.0
        | loop_phil3⟨x⟩.0 )

    (νphil4) (
        !loop_phil4(x).
            wait_room⟨phil4⟩.phil4(x).wait_chop4⟨phil4⟩.phil4(x).wait_chop0⟨phil4⟩.phil4(x).
            EAT.
            signal_chop0⟨phil4⟩.signal_chop4⟨phil4⟩.signal_room⟨phil4⟩.
            loop_phil4⟨x⟩.0
        | loop_phil4⟨x⟩.0 )
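To check the structure of the listing against a conventional implementation, here is a rough Python transliteration (my own sketch, not part of the π-calculus development): ordinary threading semaphores stand in for the channel-encoded ones, and the infinite think-eat loop is cut down to a fixed number of meals so that the program terminates. Without the room semaphore (or with it initialized to 5), this pickup order could deadlock.

```python
import threading

N = 5          # philosophers (and chopsticks)
MEALS = 10     # finite stand-in for the infinite think-eat loop

chopsticks = [threading.Semaphore(1) for _ in range(N)]  # one binary semaphore per chopstick
room = threading.Semaphore(N - 1)                        # the room semaphore, initialized to 4

meals_eaten = [0] * N   # each entry is only touched by its own thread

def philosopher(i):
    left, right = i, (i + 1) % N    # phil_i uses chop_i, then chop_(i+1 mod 5), as in the listing
    for _ in range(MEALS):
        room.acquire()               # wait_room: enter the dining room
        chopsticks[left].acquire()   # wait_chop: pick up the first chopstick
        chopsticks[right].acquire()  # wait_chop: pick up the second chopstick
        meals_eaten[i] += 1          # EAT
        chopsticks[right].release()  # signal_chop: put back the second chopstick
        chopsticks[left].release()   # signal_chop: put back the first chopstick
        room.release()               # signal_room: leave the room

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # every join returns: with at most 4 diners, no deadlock occurs
print(meals_eaten)  # [10, 10, 10, 10, 10]
```

With the bouncer in place, every philosopher finishes all of its meals; the program's termination is itself the evidence of deadlock freedom.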