CS420: Operating Systems. Process Synchronization

Process Synchronization
James Moscola
Department of Engineering & Computer Science, York College of Pennsylvania
Based on Operating System Concepts, 9th Edition, by Silberschatz, Galvin, and Gagne

Background

Concurrent access to shared data may result in data inconsistency.
- Multiple threads/processes changing the same data without synchronization can lead to unpredictable results.

Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes:
- Identifying critical sections of code
- Semaphores (or mutexes) to prevent simultaneous access to shared data

Consider the Following Producer + Consumer

Both threads of execution may be trying to access (read or write) the value of counter concurrently.
- When executed individually, the following code blocks operate correctly.
- When executed concurrently, all bets are off.

Depending on the order of execution, the value of counter after these blocks of code run could be several different values.

Producer:

while (true) {
    /* produce an item and put it in nextproduced */
    while (counter == BUFFER_SIZE)
        ;   // do nothing
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer:

while (true) {
    while (counter == 0)
        ;   // do nothing
    nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextconsumed */
}

Race Condition

At the hardware level, an addition or subtraction may require several steps.

counter++ is implemented as:
    register1 = counter
    register1 = register1 + 1
    counter = register1

counter-- is implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2

These steps may get interleaved when executing on a processor and cause incorrect results.
- Consider the following interleaved execution, with counter = 5 initially:

    T0: producer executes register1 = counter        { register1 = 5 }
    T1: producer executes register1 = register1 + 1  { register1 = 6 }
    T2: consumer executes register2 = counter        { register2 = 5 }
    T3: consumer executes register2 = register2 - 1  { register2 = 4 }
    T4: producer executes counter = register1        { counter = 6 }
    T5: consumer executes counter = register2        { counter = 4 }
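This lost-update effect is easy to reproduce. Below is a minimal sketch (not from the slides) using POSIX threads: one thread increments and another decrements an unprotected shared counter, and the final value often differs from 0 and varies from run to run. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;   // shared and deliberately unprotected

static void *producer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;         // read-modify-write, not atomic
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;         // interleaves with the increments
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);   // often nonzero
    return 0;
}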

Race Condition

A race condition is a situation in which several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place.

Critical-Section Problem

Consider a system of n processes {P0, P1, ..., Pn-1}. Each process has a critical-section segment of code.
- The process may be changing common variables, updating a table, writing a file, etc.
- Example: if two threads modify a shared variable, the portion of code that modifies that variable is a critical section.
- When one process is in its critical section, no other process may be in its critical section.

The critical-section problem: what kind of protocol could be implemented that would allow processes to cooperate when accessing shared data/resources? Each process must ask permission to enter its critical section.

Solution to the Critical-Section Problem

A solution to the critical-section problem must satisfy the following three requirements:

(1) Mutual Exclusion - If process Pi is executing in its critical section, then no other process can be executing in its critical section (need to be able to lock sections of code).

(2) Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.

(3) Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Possible Solutions to the Critical-Section Problem

Synchronization Hardware
- A hardware-based solution that provides a lock that can be set, allowing only a single process at a time to execute in a critical section

Semaphores
- Arguably the most common approach for synchronizing access to shared data
- Programmer friendly

Synchronization Hardware

Many systems provide hardware support for critical-section code.

On a uniprocessor system, interrupts could be disabled while in the critical section.
- The currently running code would execute without preemption.
- Generally too inefficient on multiprocessor systems; operating systems that rely on this approach are not broadly scalable.

Modern machines provide special atomic hardware instructions that can be used to implement a lock (atomic = non-interruptible):
- Either test a memory word and set its value
- Or swap the contents of two memory words
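As an illustration of the "test a memory word and set its value" idea, here is a minimal spinlock sketch (not from the slides) using C11's atomic_flag, which the compiler maps onto the hardware's atomic test-and-set or exchange instruction. The function names spin_lock/spin_unlock are made up for this example.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   // clear = unlocked

static void spin_lock(void) {
    // Atomically sets the flag and returns its previous value;
    // loop until the previous value was clear (i.e., we acquired the lock).
    while (atomic_flag_test_and_set(&lock))
        ;   // busy-wait (spin)
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);   // release the lock
}

/* Usage:
 *     spin_lock();
 *     // critical section
 *     spin_unlock();
 */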

Solution to the Critical-Section Problem Using Locks

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);

The critical section of code should be as small as possible; only lock sections that access/modify shared data.

Using the TestAndSet Instruction to Lock

Initialize the value of the shared lock variable to FALSE. Each process loops until TestAndSet returns FALSE:

do {
    while (TestAndSet(&lock))
        ;   // do nothing

    // critical section

    lock = FALSE;

    // remainder section
} while (TRUE);

TestAndSet gets the current value of the lock variable from memory, sets the value in memory to TRUE, and returns the previous value, all as a single atomic operation:

boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Bounded-Waiting Mutual Exclusion with TestAndSet

do {
    waiting[i] = TRUE;               // Pi is waiting for the critical section
    key = TRUE;
    while (waiting[i] && key)        // Pi is waiting and doesn't have the lock
        key = TestAndSet(&lock);     // returns FALSE when Pi gets the lock
    waiting[i] = FALSE;              // Pi is no longer waiting :-)

    // critical section

    j = (i + 1) % n;                 // scan the waiting array to find
    while ((j != i) && !waiting[j])  // others waiting for the lock
        j = (j + 1) % n;

    if (j == i)
        lock = FALSE;                // looped all the way back to self:
                                     // no one else is waiting, release the lock
    else
        waiting[j] = FALSE;          // otherwise, keep lock set to TRUE
                                     // and just tell the next P to stop waiting

    // remainder section
} while (TRUE);

Semaphores

A synchronization tool that does not require busy waiting (a while loop that runs until conditions are satisfied).

A semaphore is represented as an integer variable. Two standard atomic operations are used to modify a semaphore:
- wait() - attempts to decrement the value of the semaphore
- signal() - increments the value of the semaphore

Semaphores

There are two different kinds of semaphores:

(1) Binary Semaphore
- Integer value can be either 0 or 1
- Also called a mutex lock
- Provides mutual exclusion

(2) Counting Semaphore
- Integer value can range over an unrestricted domain
- Can be used to control access to a system resource with a finite number of instances (see the sketch below):
  - Initialize to the number of resources available
  - Decrement each time wait() is called; when the value of the semaphore reaches 0, no more resources are available
  - Increment each time signal() is called
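A counting semaphore of this kind is available directly in POSIX. The sketch below (not from the slides) initializes a semaphore to the number of available resource instances and has each thread acquire one with sem_wait() and release it with sem_post(); the resource count of 3 and the thread count of 5 are arbitrary choices for the example.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define NUM_RESOURCES 3    // assumed number of identical resource instances

static sem_t resources;    // counting semaphore

static void *worker(void *arg) {
    sem_wait(&resources);  // wait(): blocks if all instances are in use
    printf("thread %ld is using a resource\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&resources);  // signal(): release the instance
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&resources, 0, NUM_RESOURCES);   // initialize to the instance count
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}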

Semaphore as a Synchronization Tool

Provides mutual exclusion for a critical section of code:

Semaphore mutex;    // initialized to 1

do {
    wait(mutex);    // acquire lock

    // critical section

    signal(mutex);  // release lock

    // remainder section
} while (true);
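Applied to the earlier producer/consumer example, a binary semaphore can bracket the updates of the shared state so the read-modify-write steps can no longer interleave. A minimal sketch (not from the slides), using a POSIX semaphore initialized to 1 as the mutex; the function names produce_one/consume_one are made up for this example.

#include <semaphore.h>

#define BUFFER_SIZE 10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0, counter = 0;
static sem_t mutex;              // binary semaphore; call sem_init(&mutex, 0, 1) first

/* Producer: insert one item, with the shared updates protected */
static void produce_one(int nextproduced) {
    sem_wait(&mutex);            // acquire lock
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;                   // the read-modify-write can no longer interleave
    sem_post(&mutex);            // release lock
}

/* Consumer: remove one item, with the shared updates protected */
static int consume_one(void) {
    sem_wait(&mutex);            // acquire lock
    int nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    sem_post(&mutex);            // release lock
    return nextconsumed;
}

Note that this only protects the shared variables; the caller still has to handle the full/empty buffer conditions, either with the original busy-wait loops or with additional counting semaphores for full and empty slots.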

Semaphore Implementation

Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.
- The semaphore value is shared data that only one process should be able to access and modify at any given time.

Thus, the implementation itself becomes a critical-section problem, where the code for wait() and signal() is the critical section.
- Could implement with busy waiting, since the wait() and signal() code is very short (it simply increments or decrements the semaphore value).
- But busy waiting wastes CPU cycles.

Semaphore Implementation with No Busy Waiting

Associate a wait queue with each semaphore. Each semaphore has two data items:
- A value (of type integer) that is incremented/decremented by signal/wait
- A pointer to a list of PCBs waiting on the semaphore

typedef struct {
    int value;
    struct process *list;
} semaphore;

Two new system calls facilitate semaphore functionality without busy waiting:
- block() suspends the process that invokes it
- wakeup(P) resumes the execution of a blocked process P (i.e., returns it to the ready queue)

Semaphore Implementation with No Busy Waiting

A process calls wait() when it wants access to a critical region or resource. If the resource is unavailable, the process blocks.

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

Some other process will eventually call signal() to release the semaphore (hopefully). If another process was waiting, it is removed from the waiting list and added to the ready queue.

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
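block() and wakeup() are kernel operations, but the same structure can be approximated at user level. Below is a minimal sketch (not from the slides) of a blocking counting semaphore built from a pthread mutex and condition variable; the names ksem_t, ksem_init, ksem_wait, and ksem_signal are made up for this example, and the condition variable's own queue plays the role of S->list.

#include <pthread.h>

typedef struct {
    int value;                 // semaphore count
    pthread_mutex_t lock;      // protects value (wait/signal are critical sections)
    pthread_cond_t  nonzero;   // waiters sleep here instead of spinning
} ksem_t;

void ksem_init(ksem_t *s, int initial) {
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void ksem_wait(ksem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                          // nothing available:
        pthread_cond_wait(&s->nonzero, &s->lock);  // block (lock is released while asleep)
    s->value--;                                    // claim one unit
    pthread_mutex_unlock(&s->lock);
}

void ksem_signal(ksem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;                                    // release one unit
    pthread_cond_signal(&s->nonzero);              // wake up one waiter, if any
    pthread_mutex_unlock(&s->lock);
}

Unlike the slide pseudocode, the value here never goes negative; the set of waiters is tracked implicitly by the condition variable rather than by an explicit list.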

Deadlock and Starvation

Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.

Example: let S and Q be two semaphores initialized to 1.

    P0              P1
    -----------     -----------
    wait(S);        wait(Q);
    wait(Q);        wait(S);
    ...             ...
    signal(S);      signal(Q);
    signal(Q);      signal(S);

What happens when P0 executes wait(S) and P1 executes wait(Q)?
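In that interleaving, P0 holds S and waits for Q while P1 holds Q and waits for S, so neither can ever proceed. Here is a small sketch (not from the slides) that reproduces the same circular wait with POSIX semaphores; the usual fix, noted in the comment, is to acquire the semaphores in the same order in both threads.

#include <semaphore.h>
#include <pthread.h>

static sem_t S, Q;   // both initialized to 1: sem_init(&S, 0, 1); sem_init(&Q, 0, 1);

static void *p0(void *arg) {
    sem_wait(&S);    // P0 holds S
    sem_wait(&Q);    // ... and now waits for Q, which P1 may hold
    /* work using both resources */
    sem_post(&Q);
    sem_post(&S);
    return NULL;
}

static void *p1(void *arg) {
    sem_wait(&Q);    // P1 holds Q
    sem_wait(&S);    // ... and now waits for S, which P0 may hold -> deadlock
    /* work using both resources */
    sem_post(&S);
    sem_post(&Q);
    return NULL;
}

/* Fix: have both threads acquire the semaphores in the same global order,
 * e.g. always sem_wait(&S) before sem_wait(&Q); then no circular wait can form. */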

Problems with Semaphores

Incorrect use of semaphore operations (i.e., bad programming):
- signal(mutex) ... wait(mutex) -- wrong order
- wait(mutex) ... wait(mutex) -- decrements the semaphore twice
- Omitting wait(mutex) or signal(mutex) (or both) -- UGH

Deadlock and Starvation

Starvation (indefinite blocking): a process may never be removed from the semaphore queue in which it is suspended.

Priority Inversion: a scheduling problem that occurs when a lower-priority process holds a lock needed by a higher-priority process.
- Solved via a priority-inheritance protocol, whereby the lower-priority process temporarily inherits the priority of the higher-priority process that needs the resource it holds.
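On POSIX systems that support it, priority inheritance can be requested when a mutex is created. A minimal sketch (not from the slides); availability of PTHREAD_PRIO_INHERIT depends on the platform, and init_pi_mutex is a made-up helper name.

#include <pthread.h>

static pthread_mutex_t lock;

static void init_pi_mutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    // A low-priority thread holding this mutex temporarily inherits the
    // priority of any higher-priority thread that blocks on it.
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
}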