Chapter 5 Concurrency (I)


Chapter 5 Concurrency (I)

The central themes of an OS are all concerned with the management of processes and threads: multiprogramming, multiprocessing, and distributed processing. The concept of concurrency is fundamental to all these areas. It covers a whole collection of design issues, including communication among processes, sharing of and competing for resources, synchronization of the activities of multiple processes, and allocation of processor time to processes. In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. In a multiprocessing environment, we can also overlap processes to achieve real parallel processing.

Interleaving and overlapping are both examples of concurrent processing, and they suffer from the same problems. The fundamental issue is that the relative speed of process execution is unpredictable. In the single-processor case, the sharing of global resources is a major one: if two processes read and write the same (global) variable, then the order in which the various reads and writes occur is critical. Depositing to and withdrawing from a shared account is a good example. It is also hard to manage the allocation of resources. For example, one process may request use of, and be granted access to, a particular I/O device, and then be suspended before using it. It may be problematic to simply lock this device to prevent its use by other processes, since that may lead to deadlock. Pepper and salt is a good example here.

Finally, it becomes very difficult to locate a programming error in such an environment, since results are typically neither deterministic nor reproducible. All of these difficulties are present in a multiprocessing system as well, which must additionally deal with the problems caused by the simultaneous execution of multiple processes.

A simple example

Given the following code:

char chin, chout;

void echo(){
    chin = getchar();
    chout = chin;
    putchar(chout);
}

Any program can call this procedure to accept a user's input and echo it back. Assume we have a single-processor multiprogramming system supporting one user, who can jump from application to application, calling the same procedure, using the same input and output devices. It makes sense to share a single copy of the code among these applications to save space.

A problem

Such code sharing can lead to problems, e.g., when the following sequence occurs:

1. P1 calls echo and is interrupted as soon as getchar() is done. Assume that at this point, the most recently entered character is x.
2. P2 is activated and calls the echo procedure, which runs all the way to completion, inputting and then outputting y on the screen.
3. P1 resumes. But at this point, the value x stored in chin has been overwritten with y. Hence, what P1 outputs is another y.

The essence of this problem is that the global variable, chin, is shared, and accessed by multiple processes.

A solution

On the other hand, if we allow only one process to access chin at a time, then we have the following:

1. P1 calls echo and is interrupted right after getchar() completes. At this point, chin holds x.
2. P2 is activated and calls echo as well. However, since P1 is still inside echo, although suspended for the moment, P2 has to be blocked from entering echo. Thus, P2 is actually suspended, waiting for the availability of echo.
3. At some point, P1 is resumed, goes all the way through, and prints out x.
4. Now echo is available, so P2 can be resumed; it calls echo, and gets, and sends out, y.

Homework: Problem 5.2.

Multiprocessing case

Consider the following situation, in which P1 and P2 each execute on a separate processor, and both call echo:

          P1                        P2
  t1      chin = getchar();         -
  t2      -                         chin = getchar();
          chout = chin;             chout = chin;
  t3      putchar(chout);           -
  t4      -                         putchar(chout);

Again, we have the problem that the input to P1 gets lost before it is displayed(?).

The same solution

We can again enforce that only one process may be executing echo at a time. Thus:

1. P1 and P2 are both executing, each on a separate processor. P1 calls echo first.
2. While P1 is inside echo, P2 tries to call echo as well, but it has to be blocked. Therefore, it is suspended, waiting for the availability of echo.
3. At a later time, P1 completes the execution of echo and makes the procedure available again. P2 then resumes its execution and starts to execute echo.

What is in common?

In the uniprocessor case, the problem is that an interrupt can stop execution at any time; in the multiprocessor case, two processes execute simultaneously and both try to access the same global variable. The solution is the same in both cases: control access to the shared resource, which could be either a data space, e.g., the variable chin, or an actual program segment, e.g., the echo() procedure.

Race condition

A race condition occurs when multiple processes or threads read and write data items in such a way that the final result depends on the order of execution. For example, assume two processes, P1 and P2, share a global variable a. At some point, P1 updates a to 1, and at some point, P2 updates a to 2. Thus, the two tasks are in a race to write into a; in this case, the loser of the race (the process that writes last) determines the final value of a. As another example, P3 and P4 share b and c, with their initial values being 1 and 2, respectively. At some point, P3 might do b=b+c; later, P4 might do c=b+c;. Although they update different variables, the final values of the variables depend on the relative order of these two operations.
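As a concrete illustration (not from the slides), a minimal POSIX-threads program exhibits the same race: two threads increment a shared counter without any protection, and because counter++ is a non-atomic read-modify-write, increments get lost. The iteration count and names are illustrative.

/* Race-condition demo: compile with  gcc -pthread race.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

long counter = 0;                   /* shared, unprotected */

void *worker(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                  /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000; with the race the result is usually smaller. */
    printf("counter = %ld\n", counter);
    return 0;
}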

Operational concerns

1. The OS must be able to keep track of the various processes, using PCBs.
2. The OS must allocate and deallocate resources for the various processes, such as processor time, memory, files, and I/O devices.
3. It must protect the data and physical resources of each process from unintended interference.
4. The results of a process must be independent of its execution speed relative to the speed of the other concurrent processes. This is referred to as speed independence.

To understand this issue better, let's consider the many ways in which processes can interact with each other.

Process interaction

1. When processes are completely independent of each other, they are not intended to work together. They may nevertheless compete with each other for the same resources, e.g., the same file or the same printer. The OS must regulate these accesses.
2. Processes might not know each other directly, e.g., by their respective IDs, but they may share access to the same object, e.g., the same I/O buffer. Such processes cooperate with each other by sharing.
3. Finally, processes might communicate with each other, since they are designed to work jointly. These processes also exhibit cooperation.

Competing processes

To manage competing processes, three control problems have to be dealt with. One is mutual exclusion. Assume two or more processes require access to a nonsharable resource, such as a printer. We will refer to such a resource as a critical resource, and to the portion of the program that uses a critical resource as a critical section. It is important that only one program at a time be allowed in such a critical section. For example, we want any individual process to have total control of the printer while it prints its entire output. The enforcement of mutual exclusion may lead to further problems. One of them is deadlock: possessing R1, P1 may request R2; while P2, possessing R2 exclusively, may want R1. P1 and P2 are then deadlocked. (Recall the Pepper and Salt problem.)

Another problem is starvation. Assume that P1, P2 and P3 all want resource R, and P1 currently holds it; thus both P2 and P3 are delayed. When P1 exits its critical section for R, assume that the OS gives R to P3. Further assume that P1 again asks for R before P3 exits its critical section, and that the OS decides to give it back to P1. If this situation continues, then P2 will never get R; it is starved. Solutions to these problems involve both the OS and the processes: the OS is fundamental in allocating resources, while the processes must be able to lock up their resources using the locking mechanisms provided by the OS.

A general framework

In the following program, the parbegin construct suspends the execution of main, initiates the concurrent execution of P1, ..., Pn, and, once all of them are done, resumes main. Each process includes a critical section and a remainder. Each function takes the name of the required resource, an integer, as its argument. Any process that attempts to enter its critical section while another process is in its critical section for the same resource is made to wait, or blocked.

The code

const int n = /* number of processes */;

void P(int i){
    while (true) {
        entercritical(i);
        /* critical section */;
        exitcritical(i);
        /* remainder */;
    }
}

void main(){
    parbegin(P(R1), ..., P(Rn));
}

We will discuss the implementation of the two locking functions, entercritical() and exitcritical(), later.

Sharing and cooperating

Multiple processes may have access to shared data, and may use and update them without reference to other processes, while knowing of their existence. Thus, those processes must cooperate with each other to ensure that the shared data are properly managed. Again, since all these data are stored in resources, the problems of mutual exclusion, deadlock, and starvation may occur, together with a new problem: data coherence.

Homework: Problem 5.3.

An example

Assume that two pieces of data, a and b, have to be maintained such that a = b always holds. Now consider the following two processes:

P1:  a = a + 1;  b = b + 1;
P2:  b = 2 * b;  a = 2 * a;

If the state is initially consistent, and each process is executed separately, the resulting state is also consistent. On the other hand, the following concurrent execution of the two processes leaves the state inconsistent afterwards:

a = a + 1;  b = 2 * b;  b = b + 1;  a = 2 * a;

A solution

Clearly, the above problem can be avoided if we require that the whole sequence of operations on the shared a and b becomes the critical section. Thus, the concept of critical section is also essential in the cooperating-process case.

Communicating processes

When processes cooperate by communicating with each other, they participate in an effort that links all of them. The communication itself provides a way to synchronize, i.e., coordinate, the activities involved. Communication is usually carried out by passing messages; the primitives for sending and receiving messages may be provided either by the programming language or by the OS kernel. Since nothing is shared between the processes in this category, mutual exclusion is not a needed mechanism. But the problems of deadlock and starvation persist; for example, two processes might each be waiting for a message from the other.

The mutual exclusion requirements

Any facility that is to support mutual exclusion must meet the following requirements:

1. Only one process at a time is allowed to enter a critical section.
2. A process that halts in its non-critical section must not interfere with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely (no starvation).
4. When no process is in a critical section, any process that requests entry should be permitted to enter without delay (no deadlock).
5. No assumptions should be made about relative process execution speeds (speed independence).
6. A process remains in its critical section only for a finite amount of time (no deadlock).

Hardware approaches

On a uniprocessor machine, concurrent processes cannot be overlapped, only interleaved. A process, once started, will continue to run until it requests an OS service or is interrupted. Hence, to guarantee mutual exclusion, it suffices to prevent a running process from being interrupted. This can be done with primitives, provided by the kernel, for enabling and disabling interrupts. The basic pattern is then the following:

while (true) {
    /* disable interrupts */;
    /* critical section */;
    /* enable interrupts */;
    /* remainder */;
}

Since the critical section can't be interrupted, or rather, since the running process cannot be interrupted while the critical section is being executed, no other process has a chance to get into the critical section at the same time. Hence, mutual exclusion is upheld. However, execution efficiency is degraded, since this approach limits the processor's ability to interleave programs. A second problem is that this does not work in a multiprocessor scenario, where it is still possible for a process running on a different processor to enter the critical section for the same resource.

Special instructions

In a multiprocessor system, multiple processors work together, and independently, in a peer relationship. There is no interrupt mechanism between processors on which mutual exclusion can be based. Instead, building on the mutual exclusion that holds at the memory-location level, a few approaches have been suggested at the instruction level. These instructions carry out two actions, such as reading and writing, or reading and testing, of a single memory location in a single instruction cycle, and are thus not subject to interference from other instructions. We will discuss two approaches based on such instructions.

Test and set

This instruction can be defined as follows:

boolean testset(int i){
    if (i == 0) {
        i = 1;
        return true;
    }
    else
        return false;
}

The idea is that this entire procedure is hard-wired as a single, atomic, instruction.

An application

Below is a mutual exclusion protocol based on the above test-and-set instruction:

const int n = /* number of processes */;
int bolt;

void P(int i){
    while (true) {
        while (!testset(bolt))
            /* do nothing */;
        /* critical section */;
        bolt = 0;
        /* remainder */;
    }
}

void main(){
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}

How come?

When the first process tries to get in, the value of the shared variable bolt is 0; thus this process gets into the critical section after flipping bolt to 1. All the other processes, perhaps organized as a queue, stay in the while loop, waiting for the lucky process to exit the critical section and reset bolt to 0, at which point the process at the front of the queue is allowed into the critical section.

Homework: Study the other approach, namely the exchange instruction, and answer the following questions: 1) How does it accomplish mutual exclusion? 2) Why does the equation bolt + Σᵢ keyᵢ = n hold?
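As a concrete illustration (not from the slides), modern C exposes an atomic test-and-set through C11's atomic_flag. Here is a minimal spinlock sketch; the entercritical/exitcritical names are borrowed from the earlier framework, and note that atomic_flag_test_and_set returns the flag's previous value, so it returns false exactly when the caller acquires the lock (the mirror image of testset above):

#include <stdatomic.h>

atomic_flag bolt = ATOMIC_FLAG_INIT;    /* clear <=> bolt == 0 */

void entercritical(void) {
    /* Spin until test-and-set finds the flag clear. */
    while (atomic_flag_test_and_set(&bolt))
        ;                               /* do nothing: busy waiting */
}

void exitcritical(void) {
    atomic_flag_clear(&bolt);           /* bolt = 0 */
}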

The exchange instruction

This atomic instruction can be defined as follows:

void exchange(int register, int memory){
    int temp;
    temp = memory;
    memory = register;
    register = temp;
}

It exchanges the contents of a register with those of a memory location. During its execution, that memory location is blocked from access by any other instruction.

An application

int const n = /* number of processes */;
int bolt;

void P(int i){
    int keyi = 1;
    while (true) {
        keyi = 1;
        do exchange(keyi, bolt)
        while (keyi != 0);
        /* critical section */;
        exchange(keyi, bolt);
        /* remainder */;
    }
}

void main(){
    bolt = 0;
    parbegin(P(1), P(2), ..., P(n));
}

What happens?

To implement the mutual exclusion mechanism, a shared variable, bolt, is initialized to 0. Each process uses a local variable keyi, initialized to 1. The only process that may enter its critical section is the one that finds bolt equal to 0. That process then excludes all other processes by setting bolt to 1, the value of its local variable keyi. When a process exits its critical section, it resets bolt back to 0, which then admits the next looping process. At any moment, the following invariant holds:

bolt + Σᵢ keyᵢ = n.

If bolt is 0, no process is in its critical section. If bolt is 1, exactly one process is in its critical section, namely the one whose key value is 0.
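For comparison, here is a sketch of the same protocol using C11's atomic_exchange (again not from the slides; the function names are borrowed from the earlier framework). Since keyi always holds 1 at the moment a process tries to enter, atomically swapping in the constant 1 and examining the value swapped out is equivalent to the exchange(keyi, bolt) loop above:

#include <stdatomic.h>

atomic_int bolt = 0;

void entercritical(void) {
    int key;
    /* Keep swapping 1 into bolt until we swap a 0 out. */
    do {
        key = atomic_exchange(&bolt, 1);
    } while (key != 0);
}

void exitcritical(void) {
    atomic_exchange(&bolt, 0);   /* equivalent to storing 0 into bolt */
}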

Properties

Besides being simple, and thus easy to verify, this approach is applicable to any number of processes, on either a single-processor machine or a multiprocessor machine with shared memory. It can also be used to support multiple critical sections, each associated with a separate bolt. However, while a process is waiting for access, it still consumes processor time (busy waiting). Also, when the critical section becomes available again, the selection of the next process is arbitrary; thus some process may never get in (starvation). Finally, deadlock is also possible. For example, P1 is interrupted after entering a critical section, and the processor is given to a more important P2. P2 can't get into the same critical section(?), so it has to wait; on the other hand, P1 can't exit the section, since it has to wait for P2 to finish first.

Homework: Problem 5.4.

A software approach

Mutual exclusion can also be implemented by a purely software approach, for concurrent processes that execute on a single processor or on a multiprocessor machine with shared main memory. It is usually assumed that mutual exclusion holds at the memory-access level, i.e., simultaneous accesses to the same location in main memory are serialized by some mechanism, although the order of access is not specified. Beyond that, no support from the hardware, the OS, or the programming language is assumed.

Semaphores

The semaphore is one of the mechanisms provided by operating systems and programming languages to support concurrency. The basic idea is that two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specific place until it has received a specific signal. For signaling, special variables called semaphores are used. To send a signal via a semaphore s, a process executes the primitive semsignal(s); to receive such a signal, it executes semwait(s). If the expected signal has not yet been transmitted, the process is suspended until the transmission happens.

A few requirements

To achieve this effect, a semaphore is defined as a variable with an integer value, together with the following operations:

1. A semaphore may be initialized to a nonnegative value.
2. The semwait operation decrements the value. If the value becomes negative, the process that executed semwait is blocked.
3. The semsignal operation increments the value. If the value is less than or equal to 0 after the increment, then a process blocked by a semwait operation is released from the blocked state.

There is no other way to inspect or manipulate a semaphore.

Homework: Problem 5.8.

Semaphore code

struct semaphore {
    int count;
    queuetype queue;
};

void semwait(semaphore s){
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process */;
    }
}

void semsignal(semaphore s){
    s.count++;
    if (s.count <= 0) {
        /* remove a process from s.queue */;
        /* place it on the ready list */;
    }
}

Binary semaphore

The following binary semaphore is easier to implement, but has the same power as the general one:

struct binary_semaphore {
    enum {zero, one} value;
    queuetype queue;
};

void semwaitb(binary_semaphore s){
    if (s.value == 1)
        s.value = 0;
    else {
        /* place this process in s.queue */;
        /* block this process */;
    }
}

void semsignalb(binary_semaphore s){
    if (s.queue.is_empty())
        s.value = 1;
    else {
        /* remove a process from s.queue */;
        /* place it on the ready list */;
    }
}

Homework: Problem 5.13.

Strong and weak semaphores

A semaphore uses a queue to hold the processes waiting on it. The question arises of the order in which waiting processes are removed when the semaphore becomes available again. The fairest policy is first-in-first-out: the process that has been blocked longest is released first. A semaphore defined this way is called a strong semaphore; otherwise, it is a weak semaphore. Both can be used to implement mutual exclusion. However, while a strong semaphore prevents starvation, a weak semaphore cannot. Hence, we will assume the strong version.

An example

The accompanying figure (not reproduced in this transcription) shows an execution of a strong semaphore, in which processes A, B and C depend on a result produced by process D.

Supporting mutual exclusion

Below is a straightforward implementation of mutual exclusion:

const int n = /* number of processes */;
semaphore s = 1;

void P(int i){
    while (true) {
        semwait(s);
        /* critical section */;
        semsignal(s);
        /* remainder */;
    }
}

void main(){
    parbegin(P(1), P(2), ..., P(n));
}

How does it work?

Each process executes a semwait operation before entering its critical section. If the value of s becomes negative, the process is suspended. On the other hand, if the value was 1, it is decremented to 0, and the process enters the critical section immediately. Because s is now 0, no further process can enter the critical section, since any such process must execute a semwait operation first, which makes s negative. The semaphore is initialized to 1. Hence, the first process is able to enter the critical section immediately, setting s to 0. Any number of further processes will keep decrementing s and be put into the queue.

When the process that initially entered the critical section completes and leaves, it executes a semsignal operation, which increments s by 1. As a result, one of the blocked processes is dequeued and put into the Ready state. Thus, when it is scheduled next, it may enter the critical section.
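The same protocol can be written with real POSIX semaphores, where sem_wait and sem_post play the roles of semwait and semsignal. A minimal sketch follows; the four-thread setup is an illustrative assumption, not from the slides:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t s;

void *P(void *arg) {
    sem_wait(&s);                    /* semwait(s)   */
    printf("in critical section\n"); /* critical section */
    sem_post(&s);                    /* semsignal(s) */
    /* remainder */
    return NULL;
}

int main(void) {
    sem_init(&s, 0, 1);              /* initialize s to 1 */
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, P, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&s);
    return 0;
}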

The semaphore program for mutual exclusion can also handle the case in which more than one process is allowed in its critical section at a time. This is done simply by initializing s to the specified number. Thus, at any time, the value of s.count can be interpreted as follows:

1. When s.count is nonnegative, it is the number of processes that can execute semwait(s) without suspension.
2. Otherwise, the magnitude of s.count is the number of processes suspended in s.queue.

The producer/consumer problem

This is one of the most common problems in concurrent processing: one or more producers generate data and put them into a buffer, and a single consumer takes items out of the buffer one at a time. The requirement is that only one agent, either a consumer or a producer, may access the buffer at a time, since we want to prevent overlapping buffer operations. We will look at a few solutions to illustrate both the power and the pitfalls of semaphores.

To begin with

Assume that the buffer is infinite and consists of a linear array of elements. Then the two functions can be defined as follows:

producer:
while (true) {
    /* produce item v */;
    b[in++] = v;
}

consumer:
while (true) {
    while (in <= out)
        /* do nothing */;
    w = b[out++];
    /* consume w */;
}

The structure of the buffer b is sketched below (out marks the next item to be consumed, in the next free slot). The producer generates items and stores them in b at its own speed; whenever it puts something into the buffer, the index in is incremented. The consumer proceeds in the same fashion, but must make sure that it does not try to read from an empty buffer.

b[1]  b[2]  b[3]  b[4]  b[5]  ...
        out               in

Now let's try to implement a solution using binary semaphores.

The first attempt

Instead of using both in and out, we can keep track of n, the number of items in the buffer, i.e., the difference between in and out. We also use a binary semaphore s to enforce mutual exclusion, and another one, delay, to force the consumer to wait if the buffer is empty. The producer is free to add to the buffer at any time. It performs semwaitb(s) before adding and semsignalb(s) afterwards, to make sure that the consumer does not try to take something out while an item is being added. The producer also increments n. If n == 1, the buffer was empty before this addition, so the producer also executes semsignalb(delay) to alert the consumer.

Now let's look at the code:

int n;
binary_semaphore s = 1;
binary_semaphore delay = 0;

void producer(){
    while (true) {
        produce();
        semwaitb(s);
        append();
        n++;
        if (n == 1) semsignalb(delay);
        semsignalb(s);
    }
}

void consumer(){
    semwaitb(delay);
    while (true) {
        semwaitb(s);
        take();
        n--;
        semsignalb(s);
        consume();
        if (n == 0) semwaitb(delay);
    }
}

void main(){
    n = 0;
    parbegin(producer, consumer);
}

The consumer begins by waiting for the first item to be produced, using semwaitb(delay). It then takes an item and decrements n in its critical section. If the producer is able to produce items fast enough, the consumer will rarely block on delay, since n will usually be positive; hence both run smoothly. Otherwise, when the consumer exhausts the buffer, it has to reset delay and is forced to wait until the producer generates more items. However, in some cases the above code produces an incorrect result.

Homework: Figure out the error by checking through Table 5.3.

The above problem cannot be fixed by simply moving the test into the consumer's critical section, since that may lead to a deadlock (?). A fix is to use an auxiliary variable, set inside the consumer's critical section, to carry the needed information out, as shown below:

void consumer(){
    int m;
    semwaitb(delay);
    while (true) {
        semwaitb(s);
        take();
        n--;
        m = n;
        semsignalb(s);
        consume();
        if (m == 0) semwaitb(delay);
    }
}

Homework: Draw a figure similar to Table 5.3 to show that, with the revised code, the problem is solved; but that if we move the test n == 0 into the critical section, a deadlock will occur.

Yet another solution

With general semaphores, we can have a cleaner solution, shown below:

semaphore n = 0;
semaphore s = 1;

void producer(){
    while (true) {
        produce();
        semwait(s);
        append();
        semsignal(s);
        semsignal(n);
    }
}

void consumer(){
    while (true) {
        semwait(n);
        semwait(s);
        take();
        semsignal(s);
        consume();
    }
}

What about...

1. Reversing semsignal(s) and semsignal(n)? Then semsignal(n) would be included in the critical section of the producer. This does not matter as far as the consumer is concerned, since it has to go through both semaphores before taking anything out.
2. Reversing semwait(n) and semwait(s)? This would be a serious problem. Assume the consumer enters its critical section while the buffer is empty; it then gets stuck there, without releasing the s semaphore. As a result, no producer is able to enter its critical section to add an item. This leads to a deadlock.

Get real

In reality, the buffer is certainly finite. It is treated like a circular queue, which requires modular arithmetic on the indices. For example:

producer:
while (true) {
    /* produce item v */;
    while ((in + 1) % n == out)
        /* do nothing */;
    b[in] = v;
    in = (in + 1) % n;
}

A solution

In the following code, semaphore e keeps track of the number of empty slots:

const int sizeofbuffer = /* buffer size */;
semaphore n = 0;
semaphore s = 1;
semaphore e = sizeofbuffer;

void producer(){
    while (true) {
        produce();
        semwait(e);
        semwait(s);
        append();
        semsignal(s);
        semsignal(n);
    }
}

void consumer(){
    while (true) {
        semwait(n);
        semwait(s);
        take();
        semsignal(s);
        semsignal(e);
        consume();
    }
}
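For concreteness, here is a runnable sketch of this bounded-buffer solution using POSIX semaphores and two pthreads; the buffer size and the fixed item count are illustrative assumptions, not from the slides:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define SIZEOFBUFFER 5

int b[SIZEOFBUFFER];
int in = 0, out = 0;

sem_t n;    /* items in the buffer, initially 0            */
sem_t s;    /* mutual exclusion on the buffer, initially 1 */
sem_t e;    /* empty slots, initially SIZEOFBUFFER         */

void *producer(void *arg) {
    for (int v = 0; v < 20; v++) {     /* produce item v */
        sem_wait(&e);                  /* wait for an empty slot */
        sem_wait(&s);
        b[in] = v;                     /* append() */
        in = (in + 1) % SIZEOFBUFFER;
        sem_post(&s);
        sem_post(&n);                  /* signal: one more item */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&n);                  /* wait for an item */
        sem_wait(&s);
        int w = b[out];                /* take() */
        out = (out + 1) % SIZEOFBUFFER;
        sem_post(&s);
        sem_post(&e);                  /* signal: one more empty slot */
        printf("consumed %d\n", w);    /* consume w */
    }
    return NULL;
}

int main(void) {
    sem_init(&n, 0, 0);
    sem_init(&s, 0, 1);
    sem_init(&e, 0, SIZEOFBUFFER);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}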

Semaphore implementation

The key is to implement both semwait and semsignal as atomic operations: only one process at a time may manipulate a semaphore with either a semwait or a semsignal operation. Any of the software mutual exclusion schemes would work, but at a large overhead. The alternative is to use one of the hardware schemes. For example, we can use the test-and-set instruction to implement such a semaphore.

One implementation

semwait(s){
    while (!testset(s.flag))
        /* do nothing */;
    s.count--;
    if (s.count < 0) {
        /* place this process in s.queue */;
        /* block this process, and set s.flag to 0 */;
    }
    else
        s.flag = 0;
}

semsignal(s){
    while (!testset(s.flag))
        /* do nothing */;
    s.count++;
    if (s.count <= 0) {
        /* remove a process from s.queue */;
        /* place it on the Ready list */;
    }
    s.flag = 0;
}

Homework: Problems 5.8 and 5.11.

Other mechanisms

Semaphores provide a primitive, but powerful and flexible, way to enforce mutual exclusion and to coordinate processes. However, they are not easy to use: the wait and signal operations may be scattered all over a program, and their overall effect can be hard to see. There are other mechanisms as well. The monitor is a programming-language construct that provides functionality equivalent to that of semaphores but is easier to use. It is essentially a software module consisting of a number of procedures, an initialization sequence, and local data. The monitor concept has been implemented in several languages, including Java.

Monitors all share the following properties:

1. The local variables are accessible only by the monitor's procedures, not by any external procedure.
2. A process enters the monitor by invoking one of its procedures.
3. Only one process may be executing in the monitor at any time; any other process that has invoked the monitor is suspended, waiting for the monitor to become available.

Obviously, the third property enforces exactly the mutual exclusion we are expecting.

An example

Let's look at a solution to the bounded-buffer producer/consumer problem in terms of a monitor. Here, we define two condition variables, notfull and notempty, which play roles similar to semaphores:

monitor boundedbuffer;
char buffer[N];              /* space for N items */
int nextin, nextout;         /* buffer indices */
int count;                   /* number of items in the buffer */
cond notfull, notempty;      /* condition variables */

void append(char x){
    if (count == N) cwait(notfull);
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    csignal(notempty);
}

The take procedure checks whether there is anything left to take. If not, it waits on the notempty condition. Otherwise, it takes an item and decrements the counter:

void take(char x){
    if (count == 0) cwait(notempty);
    x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    csignal(notfull);
}

{ nextin = nextout = count = 0; }    /* initialization code */

The producer and consumer can only use the defined append and take to add items to and remove items from the buffer:

void producer(){
    char x;
    while (true) {
        produce(x);
        append(x);
    }
}

void consumer(){
    char x;
    while (true) {
        take(x);
        consume(x);
    }
}

void main(){
    parbegin(producer, consumer);
}
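Monitors are not a C-language feature, but the same structure can be sketched (an assumption of this illustration, not the slides' notation) with a pthread mutex as the monitor lock and pthread condition variables as notfull/notempty. One design difference: pthread_cond_wait releases the lock while waiting and re-acquires it afterwards, and since pthreads signaling is Mesa-style (the awakened thread runs later, not immediately), the condition is re-checked in a while loop rather than an if:

#include <pthread.h>

#define N 10

char buffer[N];
int nextin = 0, nextout = 0, count = 0;

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;   /* monitor entry */
pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;
pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

void append(char x) {
    pthread_mutex_lock(&lock);           /* enter the monitor */
    while (count == N)                   /* cwait(notfull) */
        pthread_cond_wait(&notfull, &lock);
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    count++;
    pthread_cond_signal(&notempty);      /* csignal(notempty) */
    pthread_mutex_unlock(&lock);         /* leave the monitor */
}

char take(void) {
    pthread_mutex_lock(&lock);
    while (count == 0)                   /* cwait(notempty) */
        pthread_cond_wait(&notempty, &lock);
    char x = buffer[nextout];
    nextout = (nextout + 1) % N;
    count--;
    pthread_cond_signal(&notfull);       /* csignal(notfull) */
    pthread_mutex_unlock(&lock);
    return x;
}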

Message passing

When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. The former serves to enforce mutual exclusion; the latter is needed for processes to cooperate. One way to meet both requirements is to pass messages between processes. This mechanism has an additional advantage: besides working on uniprocessors and on multiprocessors with shared memory, it can also be used in a distributed system.

Homework: Self-study Section 5.5, and answer the following questions: 1) How does message passing accomplish synchronization? 2) How is mutual exclusion achieved with message passing?
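As a small illustration (not from the slides), a POSIX pipe provides this kind of message passing between a parent and a child process: the child's read() blocks until the parent writes, so the receive operation both delivers data and synchronizes the two processes. The message content is an arbitrary placeholder.

#include <unistd.h>
#include <stdio.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                      /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {             /* child: the receiver */
        char msg[32];
        close(fd[1]);
        /* Blocks until the parent sends: receive doubles as a signal. */
        ssize_t k = read(fd[0], msg, sizeof msg - 1);
        msg[k > 0 ? k : 0] = '\0';
        printf("child received: %s\n", msg);
        return 0;
    }
    close(fd[0]);                  /* parent: the sender */
    write(fd[1], "go", 2);         /* send the message */
    close(fd[1]);
    wait(NULL);
    return 0;
}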