CONCURRENT/DISTRIBUTED PROGRAMMING ILLUSTRATED USING THE DINING PHILOSOPHERS PROBLEM *

S. Krishnaprasad
Mathematical, Computing, and Information Sciences
Jacksonville State University
Jacksonville, AL 36265
Email: skp@jsucc.jsu.edu

* Copyright 2003 by the Consortium for Computing in Small Colleges. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the CCSC copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Consortium for Computing in Small Colleges. To copy otherwise, or to republish, requires a fee and/or specific permission.

ABSTRACT

Many paradigms and techniques of distributed computing and concurrent processing have matured and become popular. A quick and effective way to assimilate the basics of these principles is to illustrate them with a single, simple example, avoiding language- and tool-specific details and syntax. In this paper we consider the famous dining philosophers problem and present several solutions to it based on various concurrent/distributed computing concepts.

1. INTRODUCTION

Advances in hardware, networking, and software have led to widespread use of concurrent and distributed processing techniques. Since many of these technologies have matured, it is important for software professionals and computer science students to have a basic understanding of them. One way to learn about a variety of techniques is to see them all in a single context; we may consider this a "one stop shopping" approach to presenting the concepts. In concurrent and distributed computing, the following are popular and mature techniques for synchronization and mutual exclusion: semaphores, monitors, message passing, remote procedure call, and rendezvous. An excellent introduction and discussion of

CCSC: South Central Conference

these topics can be found in the text by Andrews [1]. The intricacies of these methods can be quickly assimilated when they are all presented with a single application example. In this paper we illustrate these techniques as applied to the dining philosophers problem.

The dining philosophers problem (DPP) [4] is one of the popular examples used in operating systems courses to illustrate the basic concepts of mutual exclusion, synchronization, and deadlock. The problem was first introduced by Dijkstra [3]: five philosophers are seated around a dinner table, and in front of each philosopher is a plate of spaghetti. A philosopher needs two forks (left and right) to eat, but there are only five forks on the table, placed between the five plates. Each philosopher does only two things: think or eat. No forks are needed for thinking. A philosopher has to acquire both forks before eating; after finishing, the philosopher places the forks back on the table and resumes thinking. Note that at most two philosophers may eat concurrently. Chandy and Misra [2] have given a more formal treatment of this problem.

Section 2 illustrates a solution to the DPP using semaphores, and section 3 shows a solution based on monitors. A solution based on the message passing paradigm is presented in section 4. Section 5 shows how to use the remote procedure call technique to implement the DPP. Finally, section 6 presents a solution based on the rendezvous method. Conclusions follow in section 7. We highly recommend the book by Andrews [1] for its detailed and lucid exposition of the concepts and techniques related to multithreaded, parallel, and distributed computing.

2. USING SEMAPHORES

A semaphore is a kind of shared variable manipulated by two special operations known as P and V. Semaphore variables assume non-negative integer values. Suppose x is a semaphore variable.
Then the operation V(x) atomically increments the value of x by one. The P(x) operation, also executed atomically, has the following effect: the process executing P(x) waits until the value of x is positive, then decrements x by one and continues. The semaphore is an important low-level abstraction for implementing mutual exclusion and synchronization among concurrent processes.

In the DPP each fork is a shared resource, and we may represent the forks by an array of five semaphores. Acquiring a fork uses a P operation and releasing a fork uses a V operation on the appropriate semaphore. Philosophers 0 to 3 grab the left fork first and then the right fork; philosopher 4 grabs the right fork first and then the left fork. This asymmetry avoids deadlock. The necessary mutual exclusion and synchronization actions based on semaphores are shown in the following solution:

// DPP solution based on semaphores
Semaphore fork[5] = {1, 1, 1, 1, 1};    // an array of five semaphore variables, all initialized to 1

process philosopher[k = 0 to 3]         // concurrent processes of the first four philosophers

JCSC 18, 4 (April 2003)

    // now hungry, ready to eat
    P(fork[k]);        // grab the left fork
    P(fork[k+1]);      // grab the right fork
    Eat the spaghetti;
    V(fork[k]);        // release the left fork
    V(fork[k+1]);      // release the right fork

process philosopher[4]                  // concurrent process of the last philosopher
    // now hungry, ready to eat
    P(fork[0]);        // grab the right fork
    P(fork[4]);        // grab the left fork
    Eat the spaghetti;
    V(fork[0]);        // release the right fork
    V(fork[4]);        // release the left fork

3. USING MONITORS

Monitors are class-like abstractions used to encapsulate shared data and a set of operations (procedures) that manipulate those data. Procedures inside a monitor are executed mutually exclusively; thus, when concurrent processes access a monitor's procedures, mutual exclusion is implicitly guaranteed. Condition synchronization, however, needs to be programmed explicitly inside the monitor using condition variables. A condition variable is used to delay a process until some condition on the monitor's state holds, at which time the waiting process is awakened. Typically a queue of waiting processes is associated with each condition variable, and primitives like wait() and signal() are used to code the synchronization. A monitor-based solution for the DPP follows:

// DPP solution based on monitors
Monitor Dining_Controller
    Cond ForkReady[5];      // condition variables for synchronization
    Boolean Fork[5] = {true, true, true, true, true};   // availability status of each fork

    Procedure get_forks(pid)        // pid is the philosopher id: 0 to 4
        int left = pid;
        int right = (pid + 1) mod 5;
        // grant the left fork
        if (NOT Fork[left])
            wait(ForkReady[left]);      // enqueue in the condition variable queue
        Fork[left] = false;
        // grant the right fork
        if (NOT Fork[right])
            wait(ForkReady[right]);     // enqueue in the condition variable queue
        Fork[right] = false;

    Procedure release_forks(pid)
        int left = pid;
        int right = (pid + 1) mod 5;
        // release the left fork
        if (Empty(ForkReady[left]))     // no one is waiting for this fork
            Fork[left] = true;
        else
            signal(ForkReady[left]);    // awaken a process waiting on this fork
        // release the right fork
        if (Empty(ForkReady[right]))    // no one is waiting for this fork
            Fork[right] = true;
        else
            signal(ForkReady[right]);   // awaken a process waiting on this fork
// end of the Dining_Controller monitor

Process Philosopher[k = 0 to 4]         // the five philosopher clients
    Dining_Controller.get_forks(k);     // client requests forks via the monitor
    Eat spaghetti;
    Dining_Controller.release_forks(k); // client releases its forks via the monitor

4. USING MESSAGE PASSING

In loosely coupled multiprocessing systems and distributed systems, processes communicate using explicit message passing via a communication network. A channel abstraction is employed to provide the communication path between processes; a channel represents a queue of messages that have been sent but not yet received. A process sends a message to a channel using a send primitive, send channel_id(parameter list), and receives a message from a channel using a receive primitive, receive channel_id(parameter list). A message typically includes the client id, the kind of operation being requested, and any parameters or results. A solution for the DPP using message passing is given next.
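As a concrete aside, the semaphore scheme of section 2 can be exercised directly. The sketch below uses Python's threading.Semaphore, whose acquire and release operations play the roles of P and V; the single eating round, the eaten list, and the helper names are illustrative additions, not part of the pseudo-code above.

```python
import threading

N = 5
forks = [threading.Semaphore(1) for _ in range(N)]  # one binary semaphore per fork
eaten = []                                          # record of who managed to eat
record_lock = threading.Lock()                      # protects the shared record

def philosopher(k):
    # Philosophers 0..3 take the left fork first; philosopher 4 takes
    # the right fork first, which breaks the circular wait (no deadlock).
    if k < N - 1:
        first, second = k, (k + 1) % N      # left fork, then right fork
    else:
        first, second = (k + 1) % N, k      # right fork, then left fork
    forks[first].acquire()                  # P(fork[first])
    forks[second].acquire()                 # P(fork[second])
    with record_lock:
        eaten.append(k)                     # "eat the spaghetti"
    forks[second].release()                 # V(fork[second])
    forks[first].release()                  # V(fork[first])

threads = [threading.Thread(target=philosopher, args=(k,)) for k in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(eaten))                        # every philosopher eats exactly once
```

Note that with this asymmetry every thread happens to acquire its lower-numbered fork first, so fork acquisition follows a single global order and the run always terminates.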
// DPP solution based on message passing
Type op_kind = enum (ACQ, REL);     // two kinds of operations: acquire and release forks
Channel request(int cid, op_kind kind);     // request channel
Channel reply[5]();                 // an array of five reply channels

Process Dining_Server
    Boolean fork[5] = {true, true, true, true, true};   // indicates fork availability
    Queue WQ;                       // queue for clients waiting on forks
    int left, right, cid;
    op_kind kind;
    DO
        Receive request(cid, kind); // server receives a client request via message passing
        left = cid;
        right = (cid + 1) mod 5;

        If (kind == ACQ) {                  // requested operation is acquire forks
            If (fork[left] && fork[right]) {
                fork[left] = false;         // mark both forks as taken
                fork[right] = false;
                Send reply[cid]();          // server informs the client via message passing
            }
            Else
                Enqueue(WQ, cid);           // queue the client until the forks are available
        }
        Else {                              // requested operation is release forks
            fork[left] = true;
            fork[right] = true;
            Send reply[cid]();              // server acknowledges the client via message passing
            Assign_Forks_To_A_Waiting_Client();     // assumed procedure in the server
        }
    WHILE (true);
// end of Dining_Server

Process Philosopher[k = 0 to 4]     // the five clients
    Send request(k, ACQ);           // client requests forks from the server via message passing
    Receive reply[k]();             // client receives acknowledgement from the server
    Eat spaghetti;
    Send request(k, REL);           // client asks the server to release its forks
    Receive reply[k]();             // client receives acknowledgement from the server

5. USING REMOTE PROCEDURE CALL

In a truly distributed computing environment, processes residing in different address spaces may communicate. The remote procedure call (RPC) abstraction allows for inter-process communication via remote procedure calls. In this technique, distributed program modules consist of exportable operations (callable from other modules), local operations, and other active processes. For each remote procedure call a new server process is created in the called module. Parameters are passed between the calling and called modules using implicit message passing. The calling process delays until the remote call is serviced and acknowledged by a reply message, at which point the server process terminates. The sending and receiving of parameters using message passing are implicit; that is, there is no need to program the message passing explicitly. Processes inside the module may execute concurrently and share variables, which requires explicit programming of the mutual exclusion and synchronization aspects.
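The client-server exchange of section 4 maps naturally onto thread-safe queues. Below is a minimal runnable sketch assuming Python's queue.Queue as the channel abstraction; the string operation kinds, the message-count termination test, and the grant-scanning loop (standing in for the assumed Assign_Forks_To_A_Waiting_Client procedure) are our own illustrative choices.

```python
import threading
import queue

N = 5
request = queue.Queue()                      # shared request channel
reply = [queue.Queue() for _ in range(N)]    # one reply channel per client
ate = []

def dining_server():
    fork = [True] * N
    waiting = []                             # clients queued for forks (WQ)
    handled = 0
    while handled < 2 * N:                   # each client sends one ACQ and one REL
        cid, kind = request.get()            # Receive request(cid, kind)
        handled += 1
        left, right = cid, (cid + 1) % N
        if kind == "ACQ":
            if fork[left] and fork[right]:
                fork[left] = fork[right] = False
                reply[cid].put(None)         # Send reply[cid]()
            else:
                waiting.append(cid)          # Enqueue(WQ, cid)
        else:                                # kind == "REL"
            fork[left] = fork[right] = True
            reply[cid].put(None)
            # grant forks to any waiting client whose forks are now free
            for w in list(waiting):
                wl, wr = w, (w + 1) % N
                if fork[wl] and fork[wr]:
                    fork[wl] = fork[wr] = False
                    waiting.remove(w)
                    reply[w].put(None)

def philosopher(k):
    request.put((k, "ACQ")); reply[k].get()  # acquire forks, block until granted
    ate.append(k)                            # eat spaghetti
    request.put((k, "REL")); reply[k].get()  # release forks, wait for the ack

server = threading.Thread(target=dining_server)
clients = [threading.Thread(target=philosopher, args=(k,)) for k in range(N)]
server.start()
for t in clients: t.start()
for t in clients: t.join()
server.join()
print(sorted(ate))
```

Since the server process alone touches the fork array, no further locking is needed; clients simply block on their private reply channels until a grant or acknowledgement arrives.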
Thus, the module design can get quite complex, unlike the case of monitor design. What follows is an RPC-based solution to the DPP.

// DPP solution based on RPC
Module Dining_Table
    Operation get_forks(int);   // exportable operation
    Operation rel_forks(int);   // exportable operation

    Boolean eating[5] = {false, false, false, false, false};
    Semaphore mutex = 1;        // for mutually exclusive access to the eating array
    Queue WQ;                   // queue of waiting clients
    Semaphore MQ = 1;           // for mutually exclusive access to the WQ queue

    Procedure get_forks(int k)
        P(mutex);
        If (NOT eating[(k+1) mod 5] && NOT eating[(k+4) mod 5]) {   // neither neighbor is eating
            eating[k] = true;
            V(mutex);
        }
        else {
            P(MQ);
            Enqueue(WQ, k);     // the queued client is granted its forks later by the assumed procedure below
            V(MQ);
            V(mutex);
        }

    Procedure rel_forks(int k)
        P(mutex);
        eating[k] = false;
        V(mutex);
        Assign_Forks_To_A_Waiting_Client();     // assumed procedure inside this module
// end of the Dining_Table module

Process Philosopher[k = 0 to 4]     // the five clients
    CALL Dining_Table.get_forks(k); // client does an RPC to acquire forks
    Eat spaghetti;
    CALL Dining_Table.rel_forks(k); // client does an RPC to release forks

6. USING RENDEZVOUS

Rendezvous provides implicit communication and synchronization, greatly simplifying the design of distributed modules. As with an RPC module, we define exportable operations inside the rendezvous module, and a client process can call an exportable operation of a remote module by a remote call. Unlike the case of RPC, here we do not create a new server process; instead, an existing server process of the remote module waits for, accepts, and executes instances of remote calls. Remote operations are executed one at a time. Synchronization is achieved by specifying condition expressions as part of the call-accepting step. We say that the server process of the called module performs a rendezvous with the calling process by executing an accept statement for a specific operation. A solution based on rendezvous follows:

// DPP solution based on rendezvous
Module Dining_Table

    Operation get_forks(int);   // exportable operations
    Operation rel_forks(int);

    Process Table_Waiter        // active server process
        Boolean eating[5] = {false, false, false, false, false};
        DO
            Select
                Accept get_forks(k) && NOT eating[(k+1) mod 5] && NOT eating[(k+4) mod 5]   // rendezvous
                    -> eating[k] = true;
            OR
                Accept rel_forks(k)     // rendezvous point
                    -> eating[k] = false;
            End Select;
        WHILE (true);
// end of the Table_Waiter process and the Dining_Table module
// code for the client processes is similar to that given in the RPC example

7. CONCLUSIONS

In this paper we have considered many of the basic principles behind concurrent/distributed computing and discussed how to apply them in the context of the dining philosophers problem. This kind of "one stop shopping" approach often expedites the learning process by presenting all of the different techniques with a single example. We encourage readers to substitute other practical problems for the DPP, such as the bounded-buffer problem, the readers-writers problem, and resource sharing problems. To simplify the presentation, we have used pseudo-code to illustrate the programs and procedures, without language- or tool-specific details and syntax.

REFERENCES

1. Andrews, G. R., Foundations of Multithreaded, Parallel, and Distributed Programming, Addison-Wesley, 2000.
2. Chandy, K. M., Misra, J., Parallel Program Design: A Foundation, Addison-Wesley, 1988.
3. Dijkstra, E. W., Cooperating sequential processes, in Programming Languages, Academic Press, 1968.
4. Nutt, G., Operating Systems: A Modern Perspective, Addison-Wesley, 1990.