Chapter 6: Process Synchronization


Outline
6.1 Background
6.2 The Critical-Section Problem
6.3 Peterson's Solution
6.4 Synchronization Hardware
6.5 Mutex Locks
6.6 Semaphores
6.7 Classic Problems of Synchronization
6.8 Monitors
6.9 Synchronization Examples
6.10 Alternative Approaches

6.1 Background

Concurrent access to shared data may result in data inconsistency. Maintaining data consistency requires mechanisms that ensure the orderly execution of cooperating processes.

Suppose we want a solution to the producer-consumer problem that uses all the buffers. We can do so with an integer counter that keeps track of the number of full buffers. Initially, counter is set to 0; it is incremented by the producer after it fills a new buffer and decremented by the consumer after it empties one.

Producer:

    while (true) {
        /* produce an item and put it in nextproduced */
        while (counter == BUFFER_SIZE)
            ;   /* do nothing */
        buffer[in] = nextproduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

Consumer:

    while (true) {
        while (counter == 0)
            ;   /* do nothing */
        nextconsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in nextconsumed */
    }

Race Condition

counter++ could be implemented as

    register1 = counter
    register1 = register1 + 1
    counter = register1

counter-- could be implemented as

    register2 = counter
    register2 = register2 - 1
    counter = register2

Consider this execution interleaving with counter = 5 initially:

    T0: producer executes register1 = counter       {register1 = 5}
    T1: producer executes register1 = register1 + 1 {register1 = 6}
    T2: consumer executes register2 = counter       {register2 = 5}
    T3: consumer executes register2 = register2 - 1 {register2 = 4}
    T4: producer executes counter = register1       {counter = 6}
    T5: consumer executes counter = register2       {counter = 4}

Race condition: several processes access and manipulate the same data concurrently, and the outcome depends on the particular order in which the accesses take place. The result is data inconsistency: different processes see different counter values. Process synchronization ensures that only one process at a time manipulates the shared data, so the processes do not interfere with each other.
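
The lost update above can be reproduced directly. The following is a minimal sketch (not from the slides) assuming POSIX threads: two threads increment a shared counter ITERATIONS times each with no synchronization, so the final value is usually less than 2 * ITERATIONS.

    /* Compile with: gcc -pthread race.c */
    #include <pthread.h>
    #include <stdio.h>

    #define ITERATIONS 1000000

    int counter = 0;                 /* shared, unprotected */

    void *increment(void *arg) {
        for (int i = 0; i < ITERATIONS; i++)
            counter++;               /* load, add, store: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 2000000, but the printed value is usually smaller. */
        printf("counter = %d\n", counter);
        return 0;
    }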

6.2 The Critical-Section Problem

Critical section: a segment of code in which a process may be changing common variables, updating a table, writing a file, and so on. When one process is executing in its critical section, no other process is allowed to execute in its critical section; that is, no two processes execute in their critical sections at the same time.

Example: access to a shared kernel data structure, handled differently by (a) preemptive kernels and (b) nonpreemptive kernels.

General structure of a typical process Pi (Figure 6.3):

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (TRUE);

Requirements for a Solution to the Critical-Section Problem

1. Mutual Exclusion: if process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter next, and this selection cannot be postponed indefinitely.
3. Bounded Waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Assume that each process executes at a nonzero speed; no assumption is made concerning the relative speed of the N processes.

6.3 Peterson's Solution

A classic software-based two-process solution. Assume that the LOAD and STORE instructions are atomic, i.e., cannot be interrupted.

The two processes share two variables:

    int turn;
    boolean flag[2];

The variable turn indicates whose turn it is to enter the critical section: if turn == i, process Pi is allowed to execute in its critical section. The flag array indicates whether a process is ready to enter its critical section: flag[i] = true implies that process Pi is ready.

Algorithm for process Pi:

    while (true) {
        flag[i] = TRUE;
        turn = j;
        while (flag[j] && turn == j)
            ;
        // critical section
        flag[i] = FALSE;
        // remainder section
    }

The algorithm for process Pj is symmetric, with i and j exchanged.
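
As an illustration, the algorithm can be run with POSIX threads. This is a sketch, not the slide's own code: C11 atomics stand in for the atomic LOAD and STORE the slide assumes, since on modern out-of-order hardware plain variables would also need memory barriers.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <pthread.h>
    #include <stdio.h>

    atomic_bool flag[2];
    atomic_int  turn;
    int shared = 0;                    /* protected by Peterson's algorithm */

    void *worker(void *arg) {
        int i = *(int *)arg, j = 1 - i;
        for (int k = 0; k < 100000; k++) {
            atomic_store(&flag[i], true);        /* I want to enter */
            atomic_store(&turn, j);              /* but you may go first */
            while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
                ;                                /* busy wait */
            shared++;                            /* critical section */
            atomic_store(&flag[i], false);       /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("shared = %d (expected 200000)\n", shared);
        return 0;
    }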

6.4 Synchronization Hardware

Many systems provide hardware support for critical-section code.

Uniprocessors could simply disable interrupts (e.g., CLI/STI on Intel IA-32): the currently running code would execute without preemption. This is generally too inefficient on multiprocessor systems, so operating systems that rely on it do not scale broadly.

Modern machines instead provide special atomic (non-interruptible) hardware instructions:
TestAndSet: test a memory word and set its value.
Swap: swap the contents of two memory words.

Solution Using TestAndSet

TestAndSet instruction definition:

    boolean TestAndSet (boolean *target) {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Atomic: two TestAndSet instructions issued simultaneously are executed sequentially in some arbitrary order, so only one process can find FALSE in target.

Solution, using a shared Boolean variable lock initialized to FALSE:

    while (true) {
        while (TestAndSet(&lock))
            ;   /* do nothing */
        // critical section
        lock = FALSE;
        // remainder section
    }
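
On real compilers the abstract TestAndSet is usually reached through a builtin or intrinsic. A sketch of the same lock using the GCC/Clang builtin __atomic_test_and_set (an assumption about the toolchain, not part of the slide):

    #include <stdbool.h>

    static bool lock = false;                    /* shared lock word */

    void acquire(void) {
        /* atomically set lock to true and return its old value;
           spin until the old value was false */
        while (__atomic_test_and_set(&lock, __ATOMIC_ACQUIRE))
            ;                                    /* busy wait */
    }

    void release(void) {
        __atomic_clear(&lock, __ATOMIC_RELEASE);
    }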

Solution Using Swap

Swap instruction definition:

    void Swap (boolean *a, boolean *b) {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

A shared Boolean variable lock is initialized to FALSE; each process has a local Boolean variable key. Solution:

    while (true) {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);
        // critical section
        lock = FALSE;
        // remainder section
    }
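
For completeness, the same spinlock can be sketched with an atomic exchange; here the GCC/Clang builtin __atomic_exchange_n stands in for the Swap instruction (again a toolchain assumption, not the slide's own primitive):

    #include <stdbool.h>

    static bool lock = false;

    void acquire(void) {
        bool key = true;
        while (key == true)
            /* atomically swap lock and key: key receives the old lock value */
            key = __atomic_exchange_n(&lock, true, __ATOMIC_ACQUIRE);
    }

    void release(void) {
        __atomic_store_n(&lock, false, __ATOMIC_RELEASE);
    }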

6.5 Mutex Locks

OS designers build software tools to solve the critical-section problem. The simplest is the mutex lock: a critical section is protected by first calling acquire() on the lock and then release() when done. A Boolean variable indicates whether the lock is available. Calls to acquire() and release() must be atomic, and are usually implemented via hardware atomic instructions.

A process acquires the lock before entering a critical section and releases the lock when it exits the critical section:

    do {
        acquire lock
            critical section
        release lock
            remainder section
    } while (TRUE);
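
In application code the mutex lock is typically the POSIX pthread_mutex. A minimal sketch of the acquire/release pattern (the deposit/balance example is illustrative, not from the slide):

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int balance = 0;                      /* shared data */

    void deposit(int amount) {
        pthread_mutex_lock(&lock);        /* acquire: entry section */
        balance += amount;                /* critical section */
        pthread_mutex_unlock(&lock);      /* release: exit section */
    }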

6.6 Semaphores

Hardware-based solutions are complicated for application programmers to use. A semaphore is a higher-level, less complicated synchronization tool that abstracts the hardware complexity.

A semaphore S is an integer variable that, apart from initialization, can only be accessed via two indivisible (atomic) operations, wait() and signal(), originally called P() and V():

    wait (S) {
        while (S <= 0)
            ;   // no-op
        S--;
    }

    signal (S) {
        S++;
    }

Semaphore as a General Synchronization Tool

Counting semaphore: the integer value can range over an unrestricted domain.
Binary semaphore: the integer value can range only between 0 and 1; it can be simpler to implement and is also known as a mutex lock. A counting semaphore S can be implemented using binary semaphores.

Usage I: provide mutual exclusion.

    Semaphore mutex;   // initialized to 1

    do {
        wait (mutex);
        // critical section
        signal (mutex);
        // remainder section
    } while (TRUE);

Usage II: synchronize processes, e.g., ensure that S2 is executed only after S1 has completed.

    Semaphore synch;   // initialized to 0

    in process P1:
        S1;
        signal (synch);

    in process P2:
        wait (synch);
        S2;
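
Usage II maps directly onto POSIX semaphores. A small runnable sketch (the thread functions and printed statements are illustrative) in which S2 always runs after S1:

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t synch;                          /* initialized to 0 in main */

    void *p1(void *arg) {
        printf("S1\n");                   /* statement S1 */
        sem_post(&synch);                 /* signal(synch) */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&synch);                 /* wait(synch) */
        printf("S2\n");                   /* statement S2: always after S1 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&synch);
        return 0;
    }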

Semaphore Implementation

We must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time. Thus, the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section.

Spinlock: a semaphore whose critical-section implementation uses busy waiting.
Disadvantage: wastes CPU cycles, and applications may spend lots of time in critical sections.
Advantage: no context switch, which takes considerable time; good when locks are expected to be held for short times (short critical sections).

Semaphore Implementation with No Busy Waiting

With each semaphore there is an associated waiting queue. Each semaphore has two data items:
value (of type integer); when negative, its magnitude gives the number of waiting processes
a pointer to the list of waiting processes

Two operations:
block: place the process invoking the operation on the appropriate waiting queue.
wakeup: remove one of the processes from the waiting queue and place it in the ready queue.

Semaphore Implementation with No Busy Waiting (Cont.)

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

Implementation of wait:

    wait (semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to the waiting queue S->list;
            block();
        }
    }

Implementation of signal:

    signal (semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from the waiting queue S->list;
            wakeup(P);
        }
    }

Deadlock and Starvation

Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.

Let S and Q be two semaphores initialized to 1:

    P0:                      P1:
        wait (S);                wait (Q);
        wait (Q);                wait (S);
        ...                      ...
        signal (S);              signal (Q);
        signal (Q);              signal (S);

Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.
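
One common way to avoid this particular deadlock is to have every process acquire the semaphores in the same global order. A sketch with POSIX semaphores (the ordering rule is a standard technique, not taken from this slide):

    #include <semaphore.h>

    sem_t S, Q;                           /* both initialized to 1 elsewhere */

    void p0(void) {
        sem_wait(&S);                     /* global order: always S before Q */
        sem_wait(&Q);
        /* ... use both resources ... */
        sem_post(&Q);
        sem_post(&S);
    }

    void p1(void) {
        sem_wait(&S);                     /* same order as p0, not Q first */
        sem_wait(&Q);
        /* ... */
        sem_post(&Q);
        sem_post(&S);
    }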

6.7 Classical Problems of Synchronization

Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem

N buffers, each of which can hold one item.
Semaphore mutex initialized to the value 1.
Semaphore full initialized to the value 0.
Semaphore empty initialized to the value N.

The structure of the producer process:

    while (true) {
        // produce an item
        wait (empty);
        wait (mutex);
        // add the item to the buffer
        signal (mutex);
        signal (full);
    }

The structure of the consumer process:

    while (true) {
        wait (full);
        wait (mutex);
        // remove an item from the buffer
        signal (mutex);
        signal (empty);
        // consume the removed item
    }
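
A runnable version of this solution with POSIX semaphores might look as follows; the buffer size, item values, and loop counts are illustrative choices, not from the slide.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    int buffer[N];
    int in = 0, out = 0;

    sem_t empty, full;                    /* counting semaphores */
    sem_t mutex;                          /* binary semaphore for the buffer */

    void *producer(void *arg) {
        for (int item = 0; item < 100; item++) {
            sem_wait(&empty);             /* wait for a free slot */
            sem_wait(&mutex);
            buffer[in] = item;            /* add the item to the buffer */
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);              /* one more full slot */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        for (int i = 0; i < 100; i++) {
            sem_wait(&full);              /* wait for a full slot */
            sem_wait(&mutex);
            int item = buffer[out];       /* remove an item from the buffer */
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);             /* one more empty slot */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }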

Readers-Writers Problem

A data set is shared among a number of concurrent processes. Readers only read the data set; they do not perform any updates. Writers can both read and write.

Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.

First readers-writers problem: no reader is kept waiting unless a writer has already obtained permission to use the shared object; no reader should wait for other readers to finish.

Second readers-writers problem: once a writer is ready, that writer performs its write as soon as possible; if a writer is waiting to access the object, no new readers may start reading.

The solution to the first readers-writers problem uses the following shared data:
the data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0

Readers-Writers Problem (Cont.)

The structure of a writer process:

    while (true) {
        wait (wrt);
        // writing is performed
        signal (wrt);
    }

The structure of a reader process:

    while (true) {
        wait (mutex);
        readcount++;
        if (readcount == 1)
            wait (wrt);
        signal (mutex);
        // reading is performed
        wait (mutex);
        readcount--;
        if (readcount == 0)
            signal (wrt);
        signal (mutex);
    }

If a writer is in the critical section and n readers are waiting, then one reader is queued on wrt and n-1 readers are queued on mutex.
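
In practice, POSIX exposes reader-writer locking directly through pthread_rwlock. The sketch below shows the same reader/writer pattern with that API; it is an alternative to the semaphore solution above, not the slide's own code.

    #include <pthread.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared_data = 0;

    int reader(void) {
        pthread_rwlock_rdlock(&rw);       /* many readers may hold this */
        int v = shared_data;              /* reading is performed */
        pthread_rwlock_unlock(&rw);
        return v;
    }

    void writer(int v) {
        pthread_rwlock_wrlock(&rw);       /* exclusive: only one writer */
        shared_data = v;                  /* writing is performed */
        pthread_rwlock_unlock(&rw);
    }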

Dining-Philosophers Problem

Five philosophers alternately think and eat. Five single chopsticks are placed between them. Before eating, a philosopher picks up the two chopsticks closest to her, one at a time.

Shared data:
a bowl of rice (the data set)
Semaphore chopstick[5], each element initialized to 1

Dining-Philosophers Problem (Cont.)

The structure of philosopher i:

    while (true) {
        wait (chopstick[i]);
        wait (chopstick[(i + 1) % 5]);
        // eat
        signal (chopstick[i]);
        signal (chopstick[(i + 1) % 5]);
        // think
    }

Deadlock: if all five philosophers become hungry at the same time and each picks up her first chopstick, each then tries to grab the remaining chopstick and waits forever. A deadlock-free solution is shown in the monitor section below.

Problems with Semaphores

Correct use of semaphore operations for mutual exclusion, with mutex initialized to 1:

    wait (mutex);  ... critical section ...  signal (mutex);

Timing errors caused by incorrect use are difficult to detect, since they appear only for particular execution sequences. Typical mistakes:

    signal (mutex);  ...  wait (mutex);    // mutual exclusion violated
    wait (mutex);  ...  wait (mutex);      // deadlock
    omitting wait (mutex) or signal (mutex), or both

6.8 Monitors

A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization. A monitor type presents a set of programmer-defined operations that are provided mutual exclusion within the monitor: only one process may be active within the monitor at a time.

    monitor monitor-name {
        // shared variable declarations

        procedure P1 (...) { ... }
        ...
        procedure Pn (...) { ... }

        initialization code (...) { ... }
    }

(Figure: schematic view of a monitor.)

Condition Variables

    condition x, y;

Two operations on a condition variable:
x.wait(): the process that invokes the operation is suspended.
x.signal(): resumes one of the processes (if any) that invoked x.wait().

(Figure: monitor with condition variables.)
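
POSIX condition variables provide essentially x.wait() and x.signal(), paired with a mutex that plays the role of the monitor lock. A minimal sketch (the ready flag and function names are illustrative); note that the waited-on condition is re-checked in a loop, unlike the idealized monitor.

    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* the monitor lock */
    pthread_cond_t  x = PTHREAD_COND_INITIALIZER;    /* condition variable x */
    int ready = 0;

    void wait_for_ready(void) {
        pthread_mutex_lock(&m);
        while (!ready)                     /* x.wait(): suspend, releasing m */
            pthread_cond_wait(&x, &m);
        /* ... proceed inside the monitor ... */
        pthread_mutex_unlock(&m);
    }

    void make_ready(void) {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&x);           /* x.signal(): resume one waiter */
        pthread_mutex_unlock(&m);
    }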

Solution to Dining Philosophers

    monitor DP {
        enum { THINKING, HUNGRY, EATING } state[5];
        condition self[5];

        void pickup (int i) {
            state[i] = HUNGRY;
            test(i);
            if (state[i] != EATING)
                self[i].wait();
        }

        void putdown (int i) {
            state[i] = THINKING;
            // test left and right neighbors
            test((i + 4) % 5);
            test((i + 1) % 5);
        }

        void test (int i) {
            if ((state[(i + 4) % 5] != EATING) &&
                (state[i] == HUNGRY) &&
                (state[(i + 1) % 5] != EATING)) {
                state[i] = EATING;
                self[i].signal();
            }
        }

        initialization_code() {
            for (int i = 0; i < 5; i++)
                state[i] = THINKING;
        }
    }

Solution to Dining Philosophers (Cont.)

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

    dp.pickup(i);
    // eat
    dp.putdown(i);

Monitor Implementation Using Semaphores

Variables:

    semaphore mutex;      // (initially = 1)
    semaphore next;       // (initially = 0)
    int next_count = 0;   // number of processes suspended on next

Each procedure F is replaced by:

    wait(mutex);
        ...
        body of F
        ...
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);

Mutual exclusion within a monitor is thereby ensured.

Monitor Implementation (Cont.)

For each condition variable x, we have:

    semaphore x_sem;      // (initially = 0)
    int x_count = 0;

The operation x.wait() can be implemented as:

    x_count++;
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x_sem);
    x_count--;

The operation x.signal() can be implemented as:

    if (x_count > 0) {
        next_count++;
        signal(x_sem);
        wait(next);
        next_count--;
    }

Summary

6.1 Background
6.2 The Critical-Section Problem
6.3 Peterson's Solution
6.4 Synchronization Hardware
6.5 Mutex Locks
6.6 Semaphores
6.7 Classic Problems of Synchronization
6.8 Monitors
6.9 Synchronization Examples
6.10 Alternative Approaches