Lecture 6. Process Synchronization


Lecture Contents
1. Principles of Concurrency
2. Hardware Support
3. Semaphores
4. Monitors
5. Readers/Writers Problem

1. Principles of Concurrency

A key OS design issue is concurrency: how to ensure the orderly execution of cooperating processes that share a logical address space (data and code) so that data consistency is maintained.

Difficulty of Concurrency. Sharing of global resources: if two processes both make use of the same global variable and both perform reads and writes on that variable, then the order in which the reads and writes are executed is critical.

A Simple Example: on a Uniprocessor System. chin is a shared global variable; the procedure echo() is shared by P1 and P2.

A Simple Example: on a Uniprocessor System. Consider a single-processor multiprogramming system supporting a single user. The user runs several applications, each of which needs the procedure echo(). It makes sense for echo() to be a shared procedure loaded into a region of memory global to all applications: only a single copy of echo() is then needed, saving space.

Sharing main memory among processes is useful because it permits efficient and close interaction among them. However, such sharing can lead to problems, e.g., data inconsistency caused by the order in which cooperating processes execute.

Consider the following sequence:
1. Process P1 invokes echo() and is interrupted immediately after getchar() returns its value and stores it in chin. At this point the most recently entered character, x, is stored in chin.
2. Process P2 is activated and invokes echo(), which runs to completion, inputting and then displaying a single character, y, on the screen.
3. Process P1 is resumed. By this time the value x in chin has been overwritten and is lost; chin now contains y, which is transferred to chout and displayed. Because P1 and P2 are interleaved, data inconsistency occurs.

Thus the first character, x, is lost and the second character, y, is displayed twice. The essence of the problem is the shared global variable chin: multiple processes have access to it, and if one process updates it and is then interrupted, another process may alter it before the first process can use its value. Solution: permit only one process at a time to be inside the echo() procedure (i.e., mutual exclusion).

A Simple Example: on a Multiprocessor System. P1 and P2 execute concurrently (i.e., their executions overlap), each on its own CPU.

On a multiprocessor system the same problem of protecting shared resources arises, and the same solution (mutual exclusion) works. Suppose first that there is no mechanism for controlling access to the shared global variable:
1. Processes P1 and P2 are both executing, each on a separate processor, and both invoke the echo() procedure.
2. The following events occur, with events on the same line taking place in parallel: the character input to P1 (x) is lost before being displayed, and the character input to P2 (y) is displayed by both P1 and P2. Solution: only one process at a time may be inside echo() (i.e., mutual exclusion).

Solution to the Data Inconsistency Problem. If we enforce a rule that only one process may enter the shared function at a time (single access), then:
1. P1 and P2 run in parallel.
2. P1 enters echo() first; P2 tries to enter but is blocked and suspended.
3. P1 completes execution; P2 then resumes and executes echo().

Race Condition. A race condition occurs when multiple processes or threads read and write shared data items and the final result depends on the order in which the processes execute.

In other words, a race condition is a situation in which several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place. To guard against race conditions, we must ensure that only one process at a time manipulates the shared data (mutual exclusion). To make such a guarantee, the processes must be synchronized in some way.

Competition among Processes for Resources. The OS must deal with three main control problems:
- Mutual exclusion: ensures that when one process is executing in its critical section, no other process is allowed to execute in its critical section. (The critical section of a process is the portion of its code that accesses a shared resource.)
- Deadlock: a situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something; the processes are hung up waiting for each other.
- Starvation: a situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never selected for execution (e.g., because of low priority).

The Critical-Section Problem (CSP). The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section; the section of code implementing this request is the entry section. The critical section may be followed by an exit section, and the remaining code is the remainder section. The general structure of a typical process Pi is thus a loop: entry section, critical section, exit section, remainder section.

Requirements for a Solution to the CSP:
1. Mutual exclusion. Only one process at a time is allowed in the critical section for a shared resource: if a process is executing in its critical section, no other process may be executing in its critical section.
2. Progress. A process requiring access to a critical section must not be delayed indefinitely: when no process is in the critical section, a process that requests entry to its critical section must be permitted to enter without delay.
3. Bounded waiting. There is a bound on the number of times other processes may enter their critical sections after a process has requested entry to its critical section and before that request is granted: a process requesting entry cannot be made to wait on other processes indefinitely.

2. Hardware Support

1. Disable Interrupts (on a uniprocessor). Uniprocessor systems allow only interleaving, not overlapped (parallel) execution. A process runs until it invokes an OS service or is interrupted, so disabling interrupts guarantees mutual exclusion. This does not work on a multiprocessor, where disabling interrupts on one processor has no effect on the others.

Mutual Exclusion Enforcement. A process can enforce mutual exclusion as follows:

while (true) {
    disable interrupts    // entry section
    // critical section
    enable interrupts     // exit section
    // remainder section
}

2. Use Special Machine Instructions. On a multiprocessor there is no interrupt mechanism spanning the processors on which mutual exclusion can be based. Processor designers have therefore provided machine instructions that carry out two actions (e.g., read and write, or read and test) atomically, i.e., indivisibly and uninterruptibly. The term atomic means the instruction is treated as a single step that cannot be interrupted.

Mutual Exclusion with TestAndSet(). Assume process Pi executes the following code to enter its critical section:

lock = FALSE;    // shared lock, initialized once
do {
    while (TestAndSet(&lock))
        ;                 // do nothing: spin
    // critical section
    lock = FALSE;         // exit section
    // remainder section
} while (TRUE);

TestAndSet() atomically returns the old value of the lock and sets it to TRUE:

boolean TestAndSet(boolean *lock) {
    boolean rv = *lock;
    *lock = TRUE;
    return rv;
}

Mutual Exclusion with Swap(). Each process has a local variable key:

lock = FALSE;    // shared lock, initialized once
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Swap() atomically exchanges the contents of two variables:

void Swap(boolean *lock, boolean *key) {
    boolean temp = *lock;
    *lock = *key;
    *key = temp;
}

Advantages of the Hardware Solution:
- Applicable to any number of processes, on either a single processor or multiple processors sharing main memory.
- Simple and therefore easy to verify.
- Can support multiple critical sections, each defined by its own lock variable.

Disadvantages of the Hardware Solution:
- Busy waiting is employed: while a process waits for access to a critical section, it continues to consume CPU time testing the lock variable to gain entrance, wasting processor time.
- Starvation is possible: when a process leaves its critical section and more than one process is waiting, the selection of the next process is arbitrary, so some process could be denied access indefinitely.
- Deadlock is possible. Example (on a uniprocessor): process P1 executes the special instruction and enters its critical section; P1 is then interrupted to give the processor to P2, which has higher priority. If P2 now attempts to use the same resource as P1, it is denied access by the mutual exclusion mechanism and goes into a busy-waiting loop. But P1 is never dispatched, because it has lower priority than the ready process P2, so neither makes progress.
- Implementing atomic special machine instructions is not a trivial task.

3. Semaphores

Software Solutions to Mutual Exclusion. Three higher-level approaches to process synchronization (concurrency control) are semaphores, monitors, and message passing. Unlike the hardware approaches above, these can avoid busy waiting. Busy waiting (or spin waiting) is a technique in which a process that cannot yet enter its critical section does no useful work, but continues to execute instructions that test the appropriate variable until it gains entrance.

Busy-Waiting Semaphore, or Spinlock. A busy-waiting semaphore is also called a spinlock, because the process spins while waiting for the lock. A spinlock has the advantage that no context switch is required when a process must wait on a lock, and a context switch may take considerable time; spinlocks are therefore useful when locks are expected to be held only briefly.

Semaphore. A semaphore s (also called a counting or general semaphore) is an integer variable used for signalling among processes. Only three operations may be performed on a semaphore, all of them atomic (indivisible):
1. Initialization. A semaphore may be initialized to a nonnegative integer value (e.g., s = 0 or 1). A positive semaphore value is the number of resources available; if the value is negative, its magnitude is the number of processes waiting on the semaphore.
2. semwait(s) decrements the semaphore value by one (s--). If the value becomes negative, the process executing semwait(s) is blocked; otherwise the process continues execution.
3. semsignal(s) increments the semaphore value by one (s++). If the resulting value is less than or equal to zero, a process blocked by a semwait(s) operation, if any, is unblocked.

Definition of Semaphore Primitives.

struct semaphore {
    int count;
    queuetype queue;    // waiting queue
};

void semwait(semaphore s) {
    s.count--;
    if (s.count < 0) {
        // place this process in s.queue;
        // block this process;
    }
}

void semsignal(semaphore s) {
    s.count++;
    if (s.count <= 0) {
        // remove a process P from s.queue;
        // place process P in the ready queue;
    }
}

An equivalent definition using pointers:

typedef struct {
    int value;
    struct process *list;
} semaphore;

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);    // resume the blocked process P
    }
}

Binary Semaphore. A binary semaphore may take only the values 0 and 1 and is defined by the following three operations:
1. A binary semaphore may be initialized to 0 or 1.
2. semwaitb(s) checks the semaphore value. If the value is one, it is changed to zero and the process continues execution; if the value is zero, the process executing semwaitb(s) is blocked.
3. semsignalb(s) checks whether any processes are blocked on the semaphore. If no processes are blocked, the value of the semaphore is set to one; otherwise one process blocked by a semwaitb(s) operation is unblocked.

Definition of Binary Semaphore Primitives.

struct binary_semaphore {
    enum {zero, one} value;
    queuetype queue;
};

void semwaitb(binary_semaphore s) {
    if (s.value == one)
        s.value = zero;
    else {                   // s.value == zero
        // place this process in s.queue;
        // block this process;
    }
}

void semsignalb(binary_semaphore s) {
    if (s.queue is empty)    // no blocked process
        s.value = one;
    else {                   // some process is blocked
        // remove a process P from s.queue;
        // place process P in the ready queue;
    }
}

System Calls Used for Semaphores on UNIX (POSIX):
1. int sem_init(sem_t *sem, int pshared, unsigned int value);
2. int sem_wait(sem_t *sem);
3. int sem_post(sem_t *sem);
4. int sem_destroy(sem_t *sem);

Binary Semaphore vs. Mutex. On some systems a binary semaphore is known as a mutex (mutual exclusion) lock. A key difference between the two is that the process that locks a mutex (sets the value to zero) must be the one to unlock it (set the value to one); in contrast, it is possible for one process to lock a binary semaphore and for another to unlock it.

System Calls Used for Mutexes on UNIX (Pthreads):
1. int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
2. int pthread_mutex_lock(pthread_mutex_t *mutex);
3. int pthread_mutex_unlock(pthread_mutex_t *mutex);
4. int pthread_mutex_destroy(pthread_mutex_t *mutex);

Strong and Weak Semaphores. For both counting and binary semaphores, a queue holds the processes waiting on the semaphore. In what order are processes removed from the queue? A strong semaphore uses FIFO order, which avoids starvation; a weak semaphore does not specify the order of removal, so starvation may occur.

Processes Using a Semaphore (figure not reproduced).

Disadvantages of Semaphores. The key difficulty with semaphores is that wait() and signal() operations may be scattered throughout a program, so it is not easy to see the overall effect of these operations on the semaphores they affect. Programmers can easily introduce errors by using semaphores incorrectly to solve the critical-section problem, and such errors are difficult to detect.

Some Errors Caused by Incorrect Use of Semaphores. If a process interchanges the order of the wait() and signal() operations on semaphore mutex:

signal(mutex);
... critical section ...
wait(mutex);

then several processes may execute in their critical sections simultaneously, violating the mutual-exclusion requirement.

If a process replaces signal(mutex) with wait(mutex):

wait(mutex);
... critical section ...
wait(mutex);

a deadlock will occur.

The Producer-Consumer Problem (PCP). One or more producers generate data items (records, characters) and place them in a buffer; a single consumer takes items out of the buffer one at a time. The system must prevent overlapping buffer operations: only one agent (producer or consumer) may access the buffer at any one time. The problem is to ensure that the producer does not try to add data to a full buffer and that the consumer does not try to remove data from an empty buffer.

Finite Circular Buffer for the PCP. With buffer pointers in and out:
- Full: ((in + 1) % BUFFER_SIZE) == out
- Empty: in == out

Bounded-Buffer PCP: Producer (busy-waiting version)

while (true) {
    // produce item v
    while ((in + 1) % N == out)
        ;                 // buffer full, do nothing
    b[in] = v;
    in = (in + 1) % N;
}

Bounded-Buffer PCP: Consumer (busy-waiting version)

while (true) {
    while (in == out)
        ;                 // buffer empty, do nothing
    w = b[out];
    out = (out + 1) % N;
    // consume item w
}

A Solution to the Bounded-Buffer PCP Using Semaphores.

// program boundedbuffer
const int sizeofbuffer = N;
char buffer[N];
binary_semaphore s = 1;     // s: protects the buffer
semaphore n = 0;            // n: number of items in buffer
                            //    (n = (in - out) modulo N)
semaphore e = N;            // e: number of empty slots

void producer() {           // initially e = N, s = 1, n = 0
    while (true) {          // s is a binary semaphore
        produce(x);
        semwait(e);         // e--
        semwaitb(s);        // s = 0: enter critical section
        append(x);
        semsignalb(s);      // s = 1 if no process is blocked
        semsignal(n);       // n++
    }
}

void consumer() {
    while (true) {
        semwait(n);         // n--
        semwaitb(s);        // s = 0: enter critical section
        take(x);
        semsignalb(s);      // s = 1 if no process is blocked
        semsignal(e);       // e++
        consume(x);
    }
}

void main() {
    parbegin(producer, consumer);   // run producer and consumer concurrently
}

4. Monitors

Monitors. A monitor is a programming-language construct that provides functionality equivalent to that of semaphores but is easier to control. In other words, a monitor is a high-level programming-language construct that provides an abstract data type together with mutually exclusive access to a set of procedures. Monitors have been implemented in a number of programming languages, including Pascal-Plus, C#, and Java.

Syntax of a Monitor.

monitor monitor_name {
    // shared variable declarations
    procedure P1(...) { ... }
    procedure P2(...) { ... }
    ...
    procedure Pn(...) { ... }
    initialization code(...) { ... }
}

Schematic View of a Monitor (figure: shared data, procedures/methods, initialization code).

Main Characteristics:
- The data variables in the monitor can be accessed by only one process at a time; a shared data structure can therefore be protected by placing it in a monitor.
- Local data variables of a monitor are accessible only by the monitor's procedures (methods), not by any external procedure; a monitor is thus an abstract data type.
- A process enters the monitor by invoking one of its procedures.
- The monitor construct ensures that only one process at a time is active within the monitor; the monitor thus provides a mutual exclusion facility directly.

Monitor with a Synchronization Mechanism. A monitor supports synchronization through condition variables (e.g., notfull, notempty) that are contained within the monitor and accessible only inside it. Condition variables are a special data type in monitors, operated on by two functions:
- cwait(c): suspend execution of the calling process on condition c (e.g., notfull or notempty). The process invoking cwait(c) is suspended until another process invokes csignal(c); meanwhile the monitor is available for use by other processes.
- csignal(c): resume execution of some process blocked after a cwait() on the same condition c. If there are several such processes, choose one of them; if there is none, do nothing.

Monitor with Two Condition Variables (figure not reproduced).

Structure of a Monitor. A monitor has a single entry point, and only one process may be active inside it at a time; other processes that attempt to enter join a queue of processes blocked waiting for the monitor to become available.

Solution to the Bounded-Buffer PCP Using a Monitor.

// program producerconsumer
monitor boundedbuffer;
char buffer[N];                  // space for N items
int nextin = 0, nextout = 0;     // buffer pointers
int n = 0;                       // number of items in buffer
cond notempty, notfull;          // condition variables for synchronization

void append(char x) {
    if (n == N)
        cwait(notfull);          // buffer is full; avoid overflow
    buffer[nextin] = x;
    nextin = (nextin + 1) % N;
    n++;                         // one more item in buffer
    csignal(notempty);           // resume any waiting consumer
}

void take(char x) {
    if (n == 0)
        cwait(notempty);         // buffer is empty; avoid underflow
    x = buffer[nextout];
    nextout = (nextout + 1) % N;
    n--;                         // one fewer item in buffer
    csignal(notfull);            // resume any waiting producer
}

{   // monitor body: initialization
    nextin = 0; nextout = 0; n = 0;   // buffer initially empty
}

void producer() {
    char x;
    while (true) {
        produce(x);
        append(x);
    }
}

void consumer() {
    char x;
    while (true) {
        take(x);
        consume(x);
    }
}

void main() {
    parbegin(producer, consumer);
}

The monitor construct itself enforces mutual exclusion: it is not possible for two processes (e.g., both a producer and a consumer) to access the shared buffer simultaneously. For synchronization, the programmer must still use the monitor's cwait() and csignal() primitives appropriately to prevent processes from adding data to a full buffer or removing data from an empty one.

With semaphores, by contrast, both mutual exclusion (no two processes access the shared buffer simultaneously) and synchronization (no process adds to a full buffer or removes from an empty one) are the responsibility of the programmer.

Bounded-Buffer Monitor Code for Mesa-Style Monitors: csignal() is replaced with cnotify(), which notifies a waiting process without immediately transferring control to it, so each waiter must recheck its condition (a while loop instead of an if).

5. Readers/Writers Problem

Recap: the Producer-Consumer Problem (PCP). The PCP is one of the most common problems in concurrent processing. Its conditions: (1) one or more producers generate data and place it in a buffer; (2) one consumer takes items out of the buffer one at a time; (3) only one producer or consumer may access the buffer at any one time.

Readers-Writers Problem (RWP). The RWP is defined as follows: there is a data area (e.g., memory, a file, a set of processor registers) shared among many processes. Some processes only read the data area (readers) and some only write to it (writers). The conditions to be satisfied:
1. Any number of readers may simultaneously read from the shared data area.
2. Only one writer at a time may write to the shared data area.
3. While a writer is writing to the shared data area, no reader may read from it.

Solution to the RWP Using Semaphores.

// program readersandwriters
int readcount = 0;
semaphore x = 1, wsem = 1;

void main() {
    readcount = 0;
    parbegin(reader, writer);
}

void writer() {
    while (true) {
        semwait(wsem);
        WRITEUNIT();
        semsignal(wsem);
    }
}

void reader() {
    while (true) {
        semwait(x);
        readcount++;
        if (readcount == 1)
            semwait(wsem);    // first reader locks out writers
        semsignal(x);
        READUNIT();
        semwait(x);
        readcount--;
        if (readcount == 0)
            semsignal(wsem);  // last reader lets writers in
        semsignal(x);
    }
}

While a writer is accessing the shared data area, no other writers and no readers may access it. The reader process also uses wsem to enforce mutual exclusion, but in a way that allows multiple readers: only the first reader that attempts to read waits on wsem; once at least one reader is reading, subsequent readers need not wait before entering. The global variable readcount keeps track of the number of readers, and the semaphore x ensures that readcount is updated properly.

Summary. Concurrent execution of multiple processes, on single-processor multiprogramming systems, multiprocessors, or distributed systems, can cause inconsistency of shared resources. Mutual exclusion solves the data inconsistency problem by enforcing single access at a time. Approaches to mutual exclusion:
- Hardware: disabling interrupts; special-purpose atomic machine instructions.
- Software: semaphores, monitors.
Read the references for the barbershop problem, another classic concurrency problem that can be solved using semaphores.