Chapter 6: Synchronization


Module 6: Synchronization
Background
The Critical-Section Problem
Peterson's Solution
Synchronization Hardware
Semaphores
Classic Problems of Synchronization
Monitors
Synchronization Examples
Atomic Transactions
6.2 Silberschatz, Galvin and Gagne 2005

Background
Concurrent access to shared data may result in data inconsistency.
Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
Suppose we want a solution to the producer-consumer problem that fills all the buffers. We can do so by having an integer count that keeps track of the number of full buffers. Initially, count is set to 0. It is incremented by the producer after it fills a buffer and decremented by the consumer after it consumes a buffer.

Producer

while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}

Race Condition
count++ could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1
count-- could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2

Race Condition
Consider this execution interleaving, with count = 5 initially:
S0: producer executes register1 = count           {register1 = 5}
S1: producer executes register1 = register1 + 1   {register1 = 6}
S2: consumer executes register2 = count           {register2 = 5}
S3: consumer executes register2 = register2 - 1   {register2 = 4}
S4: producer executes count = register1           {count = 6}
S5: consumer executes count = register2           {count = 4}

Race Condition
Race condition: a situation where several processes access and manipulate shared data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.
To prevent race conditions, concurrent processes must be synchronized.

The Critical-Section Problem
n processes all compete to use some shared data.
Each process has a code segment, called its critical section, in which the shared data is accessed.
Problem: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Solution to Critical-Section Problem
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual Exclusion - If process P_i is executing in its critical section, then no other process can be executing in its critical section.
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision of which will enter next, and this selection cannot be postponed indefinitely.

Solution to Critical-Section Problem
3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

General structure of a typical process P_i

do {
    entry section
        critical section
    exit section
        remainder section
} while (true);

Peterson's Solution
Two-process solution.
Assume that the LOAD and STORE instructions are atomic; that is, they cannot be interrupted.
The two processes share two variables:
    int turn;
    boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate if a process is ready to enter the critical section: flag[i] = true implies that process P_i is ready.

Algorithm for Process P_i

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (TRUE);

Algorithm 1
Shared variables:
    int turn; initially turn = 0
    turn == i means P_i can enter its critical section
Process P_i:

do {
    while (turn != i)
        ;
    // critical section
    turn = j;
    // remainder section
} while (true);

Satisfies mutual exclusion, but not progress.
Forces strict alternation: P_i, P_j, P_i, P_j, ...

Algorithm 2
Shared variables:
    boolean flag[2]; initially flag[0] = flag[1] = false
    flag[i] == true means P_i is ready to enter its critical section
Process P_i:

do {
    flag[i] = true;
    while (flag[j])
        ;
    // critical section
    flag[i] = false;
    // remainder section
} while (true);

Satisfies mutual exclusion, but not the progress requirement.

Algorithm 2
May loop infinitely: both processes can set their flags and then spin forever.
If the order is changed to
    while (flag[j])
        ;
    flag[i] = true;
then mutual exclusion no longer holds.

Synchronization Hardware
Many systems provide hardware support for critical-section code.
Uniprocessors could disable interrupts: the currently running code would execute without preemption. This is generally too inefficient on multiprocessor systems; operating systems that rely on it are not broadly scalable.
Modern machines provide special atomic hardware instructions (atomic = non-interruptible):
    either test a memory word and set its value,
    or swap the contents of two memory words.

TestAndSet Instruction
Definition:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:

do {
    while (TestAndSet(&lock))
        ;   /* do nothing */
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Swap Instruction
Definition:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution using Swap
Shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.
Solution:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);

Bounded-waiting mutual exclusion with TestAndSet()
The hardware-based solutions above do not satisfy the bounded-waiting requirement.
The following algorithm, also using the TestAndSet() instruction, satisfies all the CS requirements:
    boolean waiting[n];   /* all initialized to false */
    boolean lock;         /* initialized to false */
A process P_i can enter its critical section only if either waiting[i] == false or key == false.

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    // remainder section
} while (TRUE);

Semaphore
Synchronization tool that does not require busy waiting.
Semaphore S: an integer variable.
Two standard operations modify S: wait() and signal(), originally called P() (test) and V() (increment).
Less complicated than the hardware-based solutions.

Semaphore
Can only be accessed via two indivisible (atomic) operations:

wait(S) {
    while (S <= 0)
        ;   // no-op (busy waiting)
    S--;
}

signal(S) {
    S++;
}

Semaphore as General Synchronization Tool
Counting semaphore: the integer value can range over an unrestricted domain.
Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement. Also known as a mutex lock.
A counting semaphore S can be implemented using binary semaphores.
Provides mutual exclusion:
    Semaphore S;   // initialized to 1
    wait(S);
        // critical section
    signal(S);

A more complicated example (statements S1..S7 ordered by a precedence graph; the semaphores a..g encode its edges):

begin
  parbegin   /* initially, all semaphores a .. g are 0 */
    begin S1; signal(a); signal(b); end;
    begin wait(a); S2; S4; signal(c); signal(d); end;
    begin wait(b); S3; signal(e); end;
    begin wait(c); S5; signal(f); end;
    begin wait(d); wait(e); S6; signal(g); end;
    begin wait(f); wait(g); S7; end;
  parend
end

Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
Thus the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section.
This could reintroduce busy waiting in the critical section of the implementation, but the implementation code is short, so there is little busy waiting if the critical section is rarely occupied.
Note that applications may spend lots of time in their own critical sections, so busy waiting is not a good solution for them.

Semaphore Implementation
The main disadvantage of the mutual-exclusion solutions so far is busy waiting (wasting CPU cycles spinning in the entry section). The semaphore defined before has the same problem; this type of semaphore is also called a spinlock (the process spins while waiting for the lock).
To overcome the need for busy waiting, we can modify the definition of P and V, using block and wakeup operations, and define a semaphore as a record:

typedef struct {
    int value;
    struct process *list;
} semaphore;

Semaphore Implementation
Semaphore operations are now defined as:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

Semaphore Implementation
signal(S) and wait(S) must be atomic. This is itself a CS problem: no two processes may execute wait() and signal() on the same semaphore simultaneously.
Solutions:
    uniprocessor system: inhibit interrupts during the execution of the signal and wait operations.
    multiprocessor system: disabling interrupts does not work here; use spinlocks to ensure that wait and signal are performed atomically.

Semaphore Implementation
Note that busy waiting has not been completely eliminated. It is removed from the entry to the critical sections of application programs and limited to the critical sections of the signal and wait operations themselves, whose code is short (< 10 instructions). Thus this critical section is almost never occupied, and busy waiting occurs rarely, and only for a short time.

Semaphore as a General Synchronization Tool
Execute B in P_j only after A has executed in P_i.
Use a semaphore flag initialized to 0.
Code:
    P_i:                 P_j:
        A                    wait(flag)
        signal(flag)         B

Deadlock and Starvation
Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:
    P_0:            P_1:
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);
Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended. Example: a semaphore queue served in LIFO order.

Classical Problems of Synchronization
Bounded-Buffer Problem
Readers-Writers Problem
Dining-Philosophers Problem
These three problems are important because they are examples of a large class of concurrency-control problems, and they are used for testing nearly every newly proposed synchronization scheme. Semaphores are used for synchronization in our solutions.

Bounded-Buffer Problem
A buffer of size n, a producer, and a consumer.
Shared data:
    semaphore full, empty, mutex;
Initially:
    full = 0;    /* # of full slots */
    empty = n;   /* # of empty slots */
    mutex = 1;

Bounded-Buffer Problem (Cont.)
The structure of the producer process:

do {
    // produce an item
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (true);

Bounded-Buffer Problem (Cont.)
The structure of the consumer process:

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer
    signal(mutex);
    signal(empty);
    // consume the removed item
} while (true);

Readers-Writers Problem
A data set is shared among a number of concurrent processes.
Readers only read the data set; they do not perform any updates.
Writers can both read and write.
Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.
Shared data:
    the data set
    semaphore mutex initialized to 1
    semaphore wrt initialized to 1
    integer readcount initialized to 0

Readers-Writers Problem
Two variations (priorities of readers vs. writers):
1. No reader is kept waiting unless a writer has already obtained permission to use the shared object. (Thus no reader should wait for other readers to finish just because a writer is waiting.) Writers may starve.
2. Once a writer is ready, it performs its write as soon as possible. (Thus, if a writer is waiting to access the object, no new readers may start reading.) Readers may starve.

Readers-Writers Problem
The structure of a writer process:

do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (true);

Readers-Writers Problem
The structure of a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (true);

Readers-Writers Problem
If a writer is in the CS and n readers are waiting, then the first reader is queued on wrt and the other n-1 are queued on mutex.
When a writer executes signal(wrt), either the waiting readers or a single waiting writer is resumed; the selection is made by the scheduler.

Dining-Philosophers Problem
Shared data:
    bowl of rice (data set)
    semaphore chopstick[5], all initialized to 1
(Figure: five philosophers around a table, one chopstick between each adjacent pair.)

Dining-Philosophers Problem (Cont.)
The structure of philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (true);

(May deadlock!)

Dining-Philosophers Problem
Possible solutions to the deadlock problem:
- Allow at most four philosophers to be sitting simultaneously at the table.
- Allow a philosopher to pick up her chopsticks only if both chopsticks are available (note that she must pick them up in a critical section).
- Use an asymmetric solution: an odd philosopher picks up the left chopstick first, then the right; an even philosopher picks up the right first, then the left.
Besides deadlock, any satisfactory solution to the dining-philosophers problem must also avoid starvation.

Problems with Semaphores
Semaphores provide a convenient and effective mechanism for process synchronization, but incorrect use may result in timing errors. Examples:

    signal(mutex); ... critical section ... wait(mutex);
        incorrect order (mutual exclusion violated)

    wait(mutex); ... critical section ... wait(mutex);
        typing error (deadlock)

    omitting the wait(mutex) or the signal(mutex), or both
        forgotten operation (mutual exclusion violated, or deadlock)

Monitor: A High-level Language Construct
A monitor is an approach to synchronizing two or more processes that use a shared resource, usually a hardware device or a set of variables.
With monitor-based concurrency, the compiler or interpreter transparently inserts locking and unlocking code around appropriately designated procedures, instead of the programmer having to call concurrency primitives explicitly.

Monitor: A High-level Language Construct
The representation of a monitor type consists of:
- declarations of variables whose values define the state of an instance of the type, and
- procedures or functions that implement operations on the type.
A procedure within a monitor can access only the variables defined in the monitor and its formal parameters. The local variables of a monitor can be used only by the local procedures.
The monitor construct ensures that only one process at a time can be active within the monitor.

Monitors
High-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.

monitor monitor-name {
    // shared variable declarations

    procedure body P1(...) { ... }
    procedure body P2(...) { ... }
    ...
    procedure body Pn(...) { ... }

    { initialization code }
}

Monitors - condition variables
To avoid busy waiting, processes must be able to signal each other about events of interest. Monitors provide this capability through condition variables.
When a monitor function requires a particular condition to be true before it can proceed, it waits on an associated condition variable. By waiting, it gives up the lock and is removed from the set of runnable processes.
Any process that subsequently causes the condition to become true may then use the condition variable to notify a process waiting for the condition. A process that has been notified regains the lock and can proceed.

Monitors - condition variables
To allow a process to wait within the monitor, condition variables must be declared:
    condition x, y;
A condition variable can only be used with the operations wait and signal.
The operation x.wait() means that the process invoking it is suspended until another process invokes x.signal().
The x.signal() operation resumes exactly one suspended process. If no process is suspended, the signal operation has no effect.

Schematic view of a Monitor (figure)

Monitor with Condition Variables (figure)

Monitor With Condition Variables
Suppose Q is suspended on x.wait and P executes x.signal. Both P and Q are now conceptually in the monitor, so one must wait. Two possibilities for the implementation:
(a) Signal and wait: P either waits until Q leaves the monitor, or waits for another condition.
(b) Signal and continue: Q either waits until P leaves the monitor, or waits for another condition.
Possibility (b) seems more reasonable; however, if Q waits, the logical condition for which Q was waiting may no longer hold by the time Q is resumed.
Concurrent Pascal uses a compromise: when P executes x.signal, it leaves the monitor immediately, and Q is resumed immediately.

Dining Philosophers Example (deadlock-free)

monitor dp {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i);
    void putdown(int i);
    void test(int i);

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

void pickup(int i) {
    state[i] = HUNGRY;
    test(i);                    // try to eat
    if (state[i] != EATING)
        self[i].wait();
}

void putdown(int i) {
    state[i] = THINKING;
    // allow the left and right neighbors to eat
    test((i + 4) % 5);          // left neighbor
    test((i + 1) % 5);          // right neighbor
}

// try to let P_i eat (if it is hungry)
void test(int i) {
    if ((state[(i + 4) % 5] != EATING) &&
        (state[i] == HUNGRY) &&
        (state[(i + 1) % 5] != EATING)) {
        state[i] = EATING;
        self[i].signal();   // if P_i is suspended, resume it; otherwise no effect
    }
}

An illustration
P0, P3, and P4 are thinking; P1 is eating; P2 is hungry and suspended on self[2].wait().
dp.pickup(1): P1 picked up its chopsticks and is eating.
dp.putdown(1): runs test(0) and test(2); in test(2), if P3 is not eating, self[2].signal() resumes P2.
This solution does not prevent starvation!

Monitor Implementation Using Semaphores
Variables:
    semaphore mutex;     // (initially = 1)
    semaphore next;      // (initially = 0)
    int next_count = 0;  // number of processes suspended on next

Each external procedure F is replaced by the compiler with:

wait(mutex);
    body of F;
if (next_count > 0)
    signal(next);    // wake up a suspended process
else
    signal(mutex);   // allow others to enter the monitor

Mutual exclusion within a monitor is ensured.

Monitor Implementation Using Semaphores
For each condition variable x, we have:
    semaphore x_sem;   // (initially = 0)
    int x_count = 0;

The operation x.wait can be implemented as:

x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

A signaling process must wait until the resumed process either leaves or waits; the semaphore next is where the signaling processes suspend themselves.

Monitor Implementation
The operation x.signal can be implemented as:

if (x_count > 0) {       // no effect if no process is waiting
    next_count++;
    signal(x_sem);       // wake up a waiting process
    wait(next);
    next_count--;
}

monitor ResourceAllocator {
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = TRUE;
    }

    void release() {
        busy = FALSE;
        x.signal();
    }

    initialization_code() {
        busy = FALSE;
    }
}

Usage:
    R.acquire(t);
    ...
    access the resource;
    ...
    R.release();

Monitor ResourceAllocator
Conditional-wait construct: x.wait(c);
    c is an integer expression evaluated when the wait operation is executed.
    The value of c (a priority number) is stored with the name of the suspended process.
    When x.signal is executed, the process with the smallest associated priority number is resumed next.
Two conditions must be checked to establish the correctness of the system:
    User processes must always make their calls on the monitor in a correct sequence.
    An uncooperative process must not be able to ignore the mutual-exclusion gateway provided by the monitor and access the shared resource directly, without using the access protocols.

Synchronization Examples
Solaris
Windows XP
Linux
Pthreads

Solaris Synchronization
Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing.
Uses adaptive mutexes for efficiency when protecting data from short code segments.
Uses condition variables and readers-writers locks when longer sections of code need access to data.
Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a reader-writer lock.

Windows XP Synchronization
Uses interrupt masks to protect access to global resources on uniprocessor systems.
Uses spinlocks on multiprocessor systems.
Also provides dispatcher objects, which may act as either mutexes or semaphores.
Dispatcher objects may also provide events; an event acts much like a condition variable.

Linux Synchronization
Linux disables interrupts to implement short critical sections.
Linux provides:
    semaphores
    spin locks

Pthreads Synchronization
The Pthreads API is OS-independent.
It provides:
    mutex locks
    condition variables
Non-portable extensions include:
    read-write locks
    spin locks

Serializability and Concurrency-Control Algorithms
Serializability
Locking protocol
Timestamp-based protocols

Serializability
Serial schedule: a schedule where each transaction is executed atomically.
Nonserial schedule: allows two transactions to overlap their execution.

Schedule A (serial):
    T0: read(A); write(A); read(B); write(B);
    T1: read(A); write(A); read(B); write(B);

Schedule B (nonserial):
    T0: read(A); write(A);
    T1: read(A); write(A);
    T0: read(B); write(B);
    T1: read(B); write(B);

Serializability
Operations O_i and O_j conflict if they access the same data item and at least one of them is a write.
Conflict serializable: a schedule S can be transformed into a serial schedule S' by a series of swaps of nonconflicting operations.
Starting from Schedule B:
    Swap the read(B) of T0 with the write(A) of T1.
    Swap the read(B) of T0 with the read(A) of T1.
    Swap the write(B) of T0 with the write(A) of T1.
    Swap the write(B) of T0 with the read(A) of T1.
The result is the serial Schedule A, so Schedule B produces the same final state as some serial schedule.

Locking Protocol
One way to ensure serializability is to associate a lock with each data item and require that each transaction follow a locking protocol that governs how locks are acquired and released.
Two lock modes:
    Shared: allows read but not write.
    Exclusive: allows both read and write.
Two-phase locking protocol: requires that each transaction issue its lock and unlock requests in two phases, growing and shrinking.
The execution order between every pair of conflicting transactions is determined at execution time.

Locking Protocol
Growing phase: a transaction may obtain locks, but may not release any lock.
Shrinking phase: a transaction may release locks, but may not obtain any new lock.
Initially, a transaction is in the growing phase and acquires locks as needed. Once the transaction releases a lock, it enters the shrinking phase, and no more lock requests can be issued.
The two-phase locking protocol ensures conflict serializability. It does not, however, ensure freedom from deadlock.

Timestamp-based Protocols
Select the order among transactions in advance.
Timestamps could come from:
    the system clock
    a logical counter
Implementation:
    W-timestamp(Q): the largest timestamp of any transaction that executed write(Q) successfully.
    R-timestamp(Q): the largest timestamp of any transaction that executed read(Q) successfully.
These timestamps are updated whenever a new read(Q) or write(Q) instruction is executed.

Timestamp-based Protocols
Suppose T_i issues read(Q):
    If TS(T_i) < W-timestamp(Q), T_i would read a value of Q that was already overwritten: read(Q) is rejected, and T_i is rolled back.
    Otherwise, read(Q) is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(T_i).
Suppose T_i issues write(Q):
    If TS(T_i) < R-timestamp(Q), the value of Q that T_i is producing was needed previously, and T_i assumed it would never be produced: write(Q) is rejected, and T_i is rolled back.
    If TS(T_i) < W-timestamp(Q), T_i is attempting to write an obsolete value: write(Q) is rejected, and T_i is rolled back.
    Otherwise, write(Q) is executed.

Timestamp-Based Protocols
A transaction that is rolled back by the concurrency-control scheme as a result of issuing either a read or a write is assigned a new timestamp and is restarted.
Example with TS(T2) < TS(T3) (T2 started at 10:00, T3 at 10:02):
    T2: read(B)                     read(A)
    T3:          read(B)  write(B)           read(A)  write(A)
The timestamp-ordering protocol ensures conflict serializability, and it ensures freedom from deadlock because no transaction ever waits.

Homework
Exercises 1, 3, 7, 9, 11, 13, 18
Term Project: Producer-Consumer Problem (Due: Dec. 31, 2007)

End of Chapter 6

Bakery Algorithm
Critical section for n processes.
Before entering its critical section, a process receives a number; the holder of the smallest number enters the critical section.
If processes P_i and P_j receive the same number: if i < j, then P_i is served first; else P_j is served first.
The numbering scheme always generates numbers in increasing order of enumeration; e.g., 1, 2, 3, 3, 3, 3, 4, 5, ...

Bakery Algorithm
Notation: lexicographical order on (ticket #, process id #):
    (a, b) < (c, d) if a < c, or if a == c and b < d
    max(a_0, ..., a_{n-1})
Shared data:
    boolean choosing[n];   // initialized to false
    int number[n];         // initialized to 0

Bakery Algorithm

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;   /* wait while P_j is choosing its number */
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))
            ;   /* smallest (number, id) pair goes first */
    }
    // critical section
    number[i] = 0;
    // remainder section
} while (1);

Bakery Algorithm
Key for showing correctness: if P_i is in its CS, then every other P_k has either number[k] == 0 or (number[i], i) < (number[k], k).
    Mutual exclusion: OK.
    Progress: OK (smallest first).
    Bounded waiting: OK. Processes enter their CSs on an FCFS basis, so a process waits at most n-1 turns.