Chapter 6: Synchronization. Operating System Concepts, 8th Edition


Chapter 6: Synchronization, Silberschatz, Galvin and Gagne 2009

Outline
Background
The Critical-Section Problem
Peterson's Solution
Synchronization Hardware
Semaphores
Classic Problems of Synchronization

Background
Concurrent access to shared data may result in data inconsistency. A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place is called a race condition. Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
Example: the shared-memory solution to the bounded-buffer problem (Chapter 3) allows at most N-1 items in the buffer at the same time. A solution where all N buffers are used is not simple. Suppose we modify the producer-consumer code by adding a variable count, initialized to 0, that is:
incremented each time a new item is added to the buffer
decremented each time an item is removed from the buffer

Producer

while (true) {
    /* produce an item and put it in nextProduced */
    while (count == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer

while (true) {
    while (count == 0)
        ;   // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume the item in nextConsumed */
}
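To see the race on count concretely, the following is a minimal sketch, not from the slides, using POSIX threads; the thread and variable names are illustrative. Because count++ and count-- each compile to a separate load, modify, and store, interleaving the two threads can lose updates and leave count different from 0.

/* race.c - assumed example: two threads update a shared counter
 * without synchronization, so the final value is usually not 0. */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static int count = 0;               /* shared, unprotected */

static void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count++;                    /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        count--;                    /* interleaves with the increment */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("count = %d (expected 0)\n", count);   /* typically nonzero */
    return 0;
}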

Critical Section
A section of code, common to n cooperating processes, in which the processes may be accessing common variables.
A critical-section environment contains:
Entry section: code requesting entry into the critical section.
Critical section: code in which only one process can execute at any one time.
Exit section: the end of the critical section, releasing the section or allowing others in.
Remainder section: the rest of the code AFTER the critical section.

The Critical-Section Problem
N processes all competing to use shared data. Each process has a code segment, called the critical section, in which the shared data is accessed.
Typical structure of process Pi:

do {
    entry section        /* enter critical section */
        critical section     /* access shared variables */
    exit section         /* leave critical section */
        remainder section    /* do other work */
} while (TRUE);

Requirement: ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Solution to the Critical-Section Problem: Requirements
Mutual Exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
Progress: if no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
Bounded Waiting: a bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
Assume that each process executes at a nonzero speed; no assumption is made about the relative speed of the n processes.

Peterson's Solution
A software-based, two-process solution.
Assumes that the LOAD and STORE instructions are atomic, that is, they cannot be interrupted.
The two processes share two variables:
    int turn;
    boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section.
The flag array is used to indicate whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.

Algorithm for Process Pi

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;                     // busy wait
        critical section
    flag[i] = FALSE;
        remainder section
} while (TRUE);
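The following is a minimal runnable sketch of Peterson's solution for two threads; it is an assumed example, not from the slides. Plain int loads and stores can be reordered on modern CPUs, which would break the algorithm's atomicity assumption, so the sketch uses C11 sequentially consistent atomics for flag and turn; thread and variable names are illustrative.

/* peterson.c - sketch of Peterson's two-thread solution */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic bool flag[2];     /* flag[i]: thread i wants to enter   */
static _Atomic int  turn;        /* whose turn it is to defer          */
static int shared_counter = 0;   /* protected by the Peterson protocol */

static void *worker(void *arg) {
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);          /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                  /* busy wait */
        shared_counter++;                      /* critical section */
        atomic_store(&flag[i], false);         /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("shared_counter = %d (expected 200000)\n", shared_counter);
    return 0;
}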

Synchronization Hardware
Software-based solutions are not guaranteed to work on modern computer architectures.
More generally, any solution to the critical-section problem requires a simple tool: a lock. Race conditions are prevented by requiring that critical regions be protected by locks.

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);

Fig. 6.3: Solution to the critical-section problem using locks.

Synchronization Hardware (Cont.)
There are several solutions to the critical-section problem, ranging from hardware support to software tools available to application programmers, all based on the premise of locking.
Many systems provide hardware support for critical-section code.
Uniprocessors could disable interrupts while a shared variable is being modified, so the currently running code executes without preemption.
On multiprocessors, disabling interrupts is too inefficient (time consuming), so modern machines provide special atomic hardware instructions.
Atomic = non-interruptible.
Either test a memory word and set its value: TestAndSet()
Or swap the contents of two memory words: Swap()


TestAndSet Instruction
Definition:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Solution Using TestAndSet
Shared boolean variable lock, initialized to FALSE.
Solution:

do {
    while (TestAndSet(&lock))
        ;   // do nothing
        // critical section
    lock = FALSE;
        // remainder section
} while (TRUE);
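As a concrete counterpart, C11 exposes this kind of instruction through atomic_flag_test_and_set(). The sketch below, an assumed example rather than the textbook's code, builds a simple spinlock from it and uses it to protect a shared counter.

/* tas_spinlock.c - spinlock sketch built on C11 test-and-set */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;  /* clear == unlocked */
static int counter = 0;

static void acquire(void) {
    /* set the flag and return its old value in one atomic step,
     * exactly like TestAndSet(&lock) on the slide */
    while (atomic_flag_test_and_set(&lock))
        ;                                    /* spin until it was clear */
}

static void release(void) {
    atomic_flag_clear(&lock);                /* lock = FALSE */
}

static void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        acquire();
        counter++;                           /* critical section */
        release();
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}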

Swap Instruction
Definition:

void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Solution Using Swap
Shared boolean variable lock, initialized to FALSE. Each process has a local boolean variable key.
Solution:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
        // critical section
    lock = FALSE;
        // remainder section
} while (TRUE);
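The same spinning lock can be written with an atomic exchange. A minimal sketch, assuming C11's atomic_exchange() as the stand-in for the Swap() instruction; the names are illustrative:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic bool lock = false;            /* false == unlocked */

static void acquire(void) {
    /* atomic_exchange() stores true and returns the previous value
     * in one step; keep exchanging until the old value was false. */
    while (atomic_exchange(&lock, true))
        ;                                    /* spin */
}

static void release(void) {
    atomic_store(&lock, false);
}

These two functions can be dropped into the TestAndSet spinlock sketch above in place of its acquire() and release(); the observable behaviour is the same.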

Semaphore
To spare application programmers the complexity of the hardware-based solutions, we can use a synchronization tool called a semaphore.
A semaphore S is an integer variable used to represent the number of available abstract resources. It can only be accessed via two indivisible (atomic) operations:

wait(S):   while S <= 0 do no-op;
           S := S - 1;

signal(S): S := S + 1;

P (wait) is used to acquire a resource and decrements the count.
V (signal) releases a resource and increments the count.
If P is performed on a count <= 0, the process must wait for a V, i.e., for the release of a resource.
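In practice, POSIX exposes counting semaphores directly. The sketch below, an assumed example not taken from the slides, shows the same wait/signal pairing using sem_wait() and sem_post(); with an initial count of 1 the semaphore acts as a mutex around the shared variable.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem;                 /* counting semaphore */
static int shared = 0;

static void *worker(void *arg) {
    sem_wait(&sem);               /* P: acquire, blocks if count == 0 */
    shared++;                     /* critical section */
    sem_post(&sem);               /* V: release, increments the count */
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);         /* initial count 1 -> behaves as a mutex */
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("shared = %d\n", shared);
    sem_destroy(&sem);
    return 0;
}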

Semaphore as a General Synchronization Tool
Operating systems distinguish between two types of semaphore:
Counting semaphore: the integer value can range over an unrestricted domain.
Binary semaphore: the integer value can range only between 0 and 1; can be simpler to implement. Also known as a mutex lock.
A counting semaphore S can be implemented using binary semaphores.
Providing mutual exclusion:

Semaphore mutex;   // initialized to 1
do {
    wait(mutex);
        // critical section
    signal(mutex);
        // remainder section
} while (TRUE);

Semaphore Implementation
Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
The main disadvantage of this semaphore definition is that it requires busy waiting: while a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.
Busy waiting wastes CPU cycles that some other process might be able to use productively.
This type of semaphore is called a spinlock. It is good for short waits, since it avoids a context switch, but it is not good for longer waits (see book, p. 236).
To overcome busy waiting we:
associate a waiting queue with each semaphore, and
modify the definitions of P and V so that processes can block and resume (see book, p. 236).

Semaphore Implementation with No Busy Waiting
Each semaphore has an integer value and a list of processes:

typedef struct {
    int value;
    struct process *list;   /* waiting queue */
} semaphore;

Two operations:
block: place the process into the waiting queue associated with the semaphore, and switch the process state to waiting.
wakeup: change the process state from waiting to ready, remove the process from the semaphore's waiting queue, and place it in the ready queue.

Semaphore Implementation with No Busy Waiting (Cont.)
Implementation of wait:

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Implementation of signal:

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
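A user-level approximation of this blocking behaviour can be built from a pthread mutex and a condition variable, which stand in for the waiting list, block(), and wakeup(). This is a minimal sketch, an assumed example rather than the kernel implementation from the slides; it keeps value non-negative instead of letting it go negative to count waiters.

#include <pthread.h>

typedef struct {
    int value;                   /* number of available resources  */
    pthread_mutex_t lock;        /* protects value                 */
    pthread_cond_t  cond;        /* plays the role of the wait list */
} ksem_t;

void ksem_init(ksem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
}

void ksem_wait(ksem_t *s) {      /* P / wait */
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)
        pthread_cond_wait(&s->cond, &s->lock);   /* block(): sleep, no spinning */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void ksem_signal(ksem_t *s) {    /* V / signal */
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->cond);               /* wakeup(): wake one waiter */
    pthread_mutex_unlock(&s->lock);
}

A waiting thread sleeps inside pthread_cond_wait() rather than spinning, which is the point of the block/wakeup design.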

Deadlock and Starvation
Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:

    P0              P1
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);

Starvation: indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
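A two-thread sketch of this ordering problem with POSIX semaphores, as an assumed example; the sleep() calls simply widen the window so the bad interleaving is likely, and whether a given run deadlocks still depends on scheduling.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

static sem_t S, Q;               /* both initialized to 1 */

static void *p0(void *arg) {
    sem_wait(&S);
    sleep(1);
    sem_wait(&Q);                /* blocks if P1 already holds Q */
    puts("P0 got S and Q");
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

static void *p1(void *arg) {
    sem_wait(&Q);
    sleep(1);
    sem_wait(&S);                /* blocks if P0 already holds S -> deadlock */
    puts("P1 got Q and S");
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

int main(void) {
    sem_init(&S, 0, 1);
    sem_init(&Q, 0, 1);
    pthread_t a, b;
    pthread_create(&a, NULL, p0, NULL);
    pthread_create(&b, NULL, p1, NULL);
    pthread_join(a, NULL);       /* with the sleeps, this join typically hangs */
    pthread_join(b, NULL);
    return 0;
}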

Classical Problems of Synchronization
Bounded-Buffer Problem
Readers-Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
The problem is:
to make sure that the producer won't try to add data to the buffer if it is full, and that the consumer won't try to remove data from an empty buffer;
to make sure that only one producer or consumer manipulates the buffer at a time when there are multiple producers and consumers.
Solution using semaphores. We assume a pool of N buffers, each able to hold one item. Shared data:
Semaphore mutex (provides mutual exclusion for accesses to the buffer pool), initialized to 1.
Semaphore full (counts the number of full buffers), initialized to 0.
Semaphore empty (counts the number of empty buffers), initialized to N.

Bounded-Buffer Problem (Cont.)
The structure of the producer process:

do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add the item to the buffer
    signal(mutex);
    signal(full);
} while (TRUE);

Bounded-Buffer Problem (Cont.)
The structure of the consumer process:

do {
    wait(full);
    wait(mutex);
    // remove an item from the buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
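Putting both halves together, here is a minimal runnable sketch using POSIX semaphores; it is an assumed example, and the buffer size, item count, and names are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                       /* number of buffer slots */
#define ITEMS 20

static int buffer[N];
static int in = 0, out = 0;
static sem_t empty, full, mutex;  /* empty = N, full = 0, mutex = 1 */

static void *producer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&empty);         /* wait for a free slot         */
        sem_wait(&mutex);         /* lock the buffer              */
        buffer[in] = i;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);          /* announce one more full slot  */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);          /* wait for an item             */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);         /* announce one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}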

Readers-Writers Problem
A data set is shared among a number of concurrent processes:
Readers only read the data set; they do not perform any updates.
Writers can both read and write.
Problem: allow multiple readers to read at the same time, but only a single writer may access the shared data at any time.
Shared data:
the data set
Semaphore mutex, initialized to 1
Semaphore wrt (common to both reader and writer processes), initialized to 1
Integer readcount (how many processes are currently reading the object), initialized to 0

Readers-Writers Problem (Cont.)
The structure of a writer process:

do {
    wait(wrt);
    // writing is performed
    signal(wrt);
} while (TRUE);

Readers-Writers Problem (Cont.)
The structure of a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);        // first reader locks out writers
    signal(mutex);
    // reading is performed
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);      // last reader lets writers in
    signal(mutex);
} while (TRUE);
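The same first-readers-writers scheme translated to POSIX semaphores, as a minimal assumed sketch; thread counts and names are illustrative. Note that this variant favours readers and can starve writers.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex, wrt;          /* both initialized to 1 */
static int readcount = 0;
static int shared_value = 0;      /* the shared data set   */

static void *reader(void *arg) {
    sem_wait(&mutex);
    if (++readcount == 1)
        sem_wait(&wrt);           /* first reader locks out writers */
    sem_post(&mutex);

    printf("reader sees %d\n", shared_value);   /* reading is performed */

    sem_wait(&mutex);
    if (--readcount == 0)
        sem_post(&wrt);           /* last reader lets writers in */
    sem_post(&mutex);
    return NULL;
}

static void *writer(void *arg) {
    sem_wait(&wrt);
    shared_value++;               /* writing is performed */
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
    pthread_t r[3], w;
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}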

Dining-Philosophers Problem
Shared data:
Bowl of rice (data set)
Semaphore chopstick[5], each element initialized to 1

Dining-Philosophers Problem (Cont.)
The structure of philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);
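This structure can deadlock if all five philosophers pick up their left chopstick at the same time. The sketch below is an assumed example, not the slides' code: it keeps the semaphore-per-chopstick idea but has the last philosopher pick up the chopsticks in the opposite order, one common way to break the circular wait.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5

static sem_t chopstick[N];        /* each initialized to 1 */

static void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    /* last philosopher reverses the pickup order to break the cycle */
    if (i == N - 1) { first = (i + 1) % N; second = i; }

    for (int round = 0; round < 3; round++) {
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        printf("philosopher %d eats\n", i);     /* eat */
        sem_post(&chopstick[first]);
        sem_post(&chopstick[second]);
        /* think */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}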

End of Chapter 6, Silberschatz, Galvin and Gagne 2009