Semaphore. Originally called P() and V()



Semaphore
Semaphore S: integer variable.
Two standard operations modify S: wait() and signal(). Originally called P() and V().
Can only be accessed via two indivisible (atomic) operations:

    wait(S) {
        while (S <= 0)
            ;   // no-op (busy wait)
        S--;
    }

    signal(S) {
        S++;
    }

Semaphore as General Synchronization Tool
Counting semaphore: integer value can range over an unrestricted domain.
Binary semaphore: integer value can range only between 0 and 1; can be simpler to implement. Also known as mutex locks.
Can implement a counting semaphore S as a binary semaphore.
Provides mutual exclusion:

    Semaphore mutex;    // initialized to 1
    do {
        wait(mutex);
        // critical section
        signal(mutex);
        // remainder section
    } while (TRUE);
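As a concrete illustration of this mutual-exclusion pattern, here is a minimal runnable sketch using POSIX semaphores (sem_init, sem_wait, sem_post); the shared counter, the number of threads, and the iteration count are illustrative assumptions, not part of the slides.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t mutex;          // binary semaphore, initialized to 1
    static long counter = 0;     // shared data protected by the semaphore

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);    // wait(): decrement, block if already 0
            counter++;           // critical section
            sem_post(&mutex);    // signal(): increment, wake a waiter
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&mutex, 0, 1);                 // value 1 -> acts as a mutex lock
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);     // expected 400000 with the mutex in place
        sem_destroy(&mutex);
        return 0;
    }

Compile with -pthread; without the sem_wait/sem_post pair the final count would typically be lower because of lost updates.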

Semaphore Implementation with no Busy Waiting
With each semaphore there is an associated waiting queue. Each entry in a waiting queue has two data items:
value (of type integer)
pointer to next record in the list
Two operations:
block: place the process invoking the operation on the appropriate waiting queue.
wakeup: remove one of the processes in the waiting queue and place it in the ready queue.

Semaphore Implementation with no Busy Waiting (Cont.)
Implementation of wait:

    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            block();
        }
    }

Implementation of signal:

    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
        }
    }
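For comparison, a user-space approximation of the same idea can be built from a Pthreads mutex and condition variable standing in for the kernel's waiting list and block()/wakeup(). This is only a sketch: unlike the slide's version, it keeps value non-negative (a common user-space formulation rather than the waiter-counting one above), and my_sem_t, my_wait, and my_signal are illustrative names.

    #include <pthread.h>

    typedef struct {
        int value;                 // semaphore count (kept non-negative here)
        pthread_mutex_t lock;      // protects value
        pthread_cond_t  queue;     // stands in for the waiting list
    } my_sem_t;

    void my_sem_init(my_sem_t *s, int value) {
        s->value = value;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->queue, NULL);
    }

    void my_wait(my_sem_t *s) {
        pthread_mutex_lock(&s->lock);
        while (s->value == 0)                     // nothing available: block, no busy waiting
            pthread_cond_wait(&s->queue, &s->lock);
        s->value--;
        pthread_mutex_unlock(&s->lock);
    }

    void my_signal(my_sem_t *s) {
        pthread_mutex_lock(&s->lock);
        s->value++;
        pthread_cond_signal(&s->queue);           // wake up one waiting thread
        pthread_mutex_unlock(&s->lock);
    }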

Deadlock and Starvation
Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
Let S and Q be two semaphores initialized to 1:

    P0:              P1:
      wait(S);         wait(Q);
      wait(Q);         wait(S);
      ...              ...
      signal(S);       signal(Q);
      signal(Q);       signal(S);

Starvation: indefinite blocking; a process may never be removed from the semaphore queue in which it is suspended.
Priority inversion: scheduling problem when a lower-priority process holds a lock needed by a higher-priority process.
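A small runnable sketch of this interleaving with POSIX semaphores follows; the sleep() calls are an illustrative assumption used only to make the unlucky interleaving likely, so the program is expected to hang rather than finish.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t S, Q;               // both initialized to 1

    static void *p0(void *arg) {
        (void)arg;
        sem_wait(&S);
        sleep(1);                    // give P1 time to grab Q
        sem_wait(&Q);                // blocks forever if P1 holds Q
        sem_post(&Q);
        sem_post(&S);
        return NULL;
    }

    static void *p1(void *arg) {
        (void)arg;
        sem_wait(&Q);
        sleep(1);                    // give P0 time to grab S
        sem_wait(&S);                // blocks forever if P0 holds S -> deadlock
        sem_post(&S);
        sem_post(&Q);
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        sem_init(&S, 0, 1);
        sem_init(&Q, 0, 1);
        pthread_create(&t0, NULL, p0, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t0, NULL);      // with the sleeps, this join never returns
        pthread_join(t1, NULL);
        puts("finished (only if the deadlock did not occur)");
        return 0;
    }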

Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
N buffers, each can hold one item.
Semaphore mutex initialized to the value 1.
Semaphore full initialized to the value 0.
Semaphore empty initialized to the value N.

Bounded-Buffer Problem (Cont.)
The structure of the producer process:

    do {
        // produce an item in nextp
        wait(empty);
        wait(mutex);
        // add the item to the buffer
        signal(mutex);
        signal(full);
    } while (TRUE);

Bounded-Buffer Problem (Cont.)
The structure of the consumer process:

    do {
        wait(full);
        wait(mutex);
        // remove an item from buffer to nextc
        signal(mutex);
        signal(empty);
        // consume the item in nextc
    } while (TRUE);
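To make the producer/consumer pattern concrete, here is a minimal runnable sketch using POSIX semaphores and a circular buffer; the buffer size, the item count, and the names buffer, in, and out are illustrative assumptions rather than part of the slides.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5                         // number of buffer slots

    static int buffer[N];
    static int in = 0, out = 0;         // next free slot / next full slot
    static sem_t empty, full, mutex;    // empty slots, full slots, buffer lock

    static void *producer(void *arg) {
        (void)arg;
        for (int item = 1; item <= 20; item++) {
            sem_wait(&empty);           // wait for a free slot
            sem_wait(&mutex);
            buffer[in] = item;          // add the item to the buffer
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);            // one more full slot
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < 20; i++) {
            sem_wait(&full);            // wait for a full slot
            sem_wait(&mutex);
            int item = buffer[out];     // remove an item from the buffer
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);           // one more empty slot
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&empty, 0, N);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }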

Readers-Writers Problem
A data set is shared among a number of concurrent processes:
Readers only read the data set; they do not perform any updates.
Writers can both read and write.
Problem: allow multiple readers to read at the same time, while only a single writer can access the shared data at any one time.
Shared data:
Data set
Semaphore mutex initialized to 1
Semaphore wrt initialized to 1
Integer readcount initialized to 0

Readers-Writers Problem (Cont.)
The structure of a writer process:

    do {
        wait(wrt);
        // writing is performed
        signal(wrt);
    } while (TRUE);

Readers-Writers Problem (Cont.)
The structure of a reader process:

    do {
        wait(mutex);
        readcount++;
        if (readcount == 1)
            wait(wrt);
        signal(mutex);
        // reading is performed
        wait(mutex);
        readcount--;
        if (readcount == 0)
            signal(wrt);
        signal(mutex);
    } while (TRUE);
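The same structure maps directly onto POSIX semaphores. The sketch below is illustrative: the shared_data variable, the thread counts, and the usleep() pacing are assumptions added only to make the program runnable and its interleaving visible.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t mutex, wrt;        // protect readcount / give writers exclusive access
    static int readcount = 0;
    static int shared_data = 0;     // the data set

    static void *writer(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
            sem_wait(&wrt);
            shared_data++;                            // writing is performed
            printf("writer wrote %d\n", shared_data);
            sem_post(&wrt);
            usleep(1000);
        }
        return NULL;
    }

    static void *reader(void *arg) {
        (void)arg;
        for (int i = 0; i < 3; i++) {
            sem_wait(&mutex);
            if (++readcount == 1)
                sem_wait(&wrt);                       // first reader locks out writers
            sem_post(&mutex);

            printf("reader saw %d\n", shared_data);   // reading is performed

            sem_wait(&mutex);
            if (--readcount == 0)
                sem_post(&wrt);                       // last reader lets writers back in
            sem_post(&mutex);
            usleep(1000);
        }
        return NULL;
    }

    int main(void) {
        pthread_t r[3], w;
        sem_init(&mutex, 0, 1);
        sem_init(&wrt, 0, 1);
        pthread_create(&w, NULL, writer, NULL);
        for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
        pthread_join(w, NULL);
        for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
        return 0;
    }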

Dining-Philosophers Problem
Shared data:
Bowl of rice (data set)
Semaphore chopstick[5], each element initialized to 1

Dining-Philosophers Problem (Cont.)
The structure of philosopher i:

    do {
        wait(chopstick[i]);
        wait(chopstick[(i + 1) % 5]);
        // eat
        signal(chopstick[i]);
        signal(chopstick[(i + 1) % 5]);
        // think
    } while (TRUE);
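A minimal runnable sketch of this structure with POSIX semaphores is shown below; the round count and print statements are illustrative. Note that, like the slide's solution, it can deadlock if every philosopher picks up their left chopstick at the same time, which is exactly the situation described under Deadlock and Starvation above.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define PHILOSOPHERS 5

    static sem_t chopstick[PHILOSOPHERS];   // each initialized to 1

    static void *philosopher(void *arg) {
        int i = (int)(long)arg;
        for (int round = 0; round < 3; round++) {
            sem_wait(&chopstick[i]);                        // pick up left chopstick
            sem_wait(&chopstick[(i + 1) % PHILOSOPHERS]);   // pick up right chopstick
            printf("philosopher %d eats\n", i);             // eat
            sem_post(&chopstick[i]);
            sem_post(&chopstick[(i + 1) % PHILOSOPHERS]);
            // think
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[PHILOSOPHERS];
        for (int i = 0; i < PHILOSOPHERS; i++)
            sem_init(&chopstick[i], 0, 1);
        for (long i = 0; i < PHILOSOPHERS; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < PHILOSOPHERS; i++)
            pthread_join(t[i], NULL);
        return 0;
    }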

Problems with Semaphores
Incorrect use of semaphore operations:
signal(mutex) ... wait(mutex)
wait(mutex) ... wait(mutex)
Omitting wait(mutex) or signal(mutex) (or both)

Linux Synchronization
Prior to kernel version 2.6, Linux disabled interrupts to implement short critical sections.
Version 2.6 and later: fully preemptive.
Linux provides: semaphores, spinlocks.

Pthreads Synchronization
The Pthreads API is OS-independent.
It provides: mutex locks, condition variables.
Non-portable extensions include: read-write locks, spinlocks.

Pthreads Mutex
Mutex is an abbreviation for "mutual exclusion". Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. A mutex variable acts like a "lock" protecting access to a shared data resource.
The basic concept of a mutex as used in Pthreads is that only one thread can lock (or own) a mutex variable at any given time. Thus, even if several threads try to lock a mutex, only one thread will be successful. No other thread can own that mutex until the owning thread unlocks it.
Very often the action performed by a thread owning a mutex is the updating of global variables. This is a safe way to ensure that when several threads update the same variable, the final value is the same as what it would be if only one thread performed the update.
When protecting shared data, it is the programmer's responsibility to make sure every thread that needs to use a mutex does so. For example, if four threads are updating the same data but only one uses a mutex, the data can still be corrupted.
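A minimal sketch of this "protect a global with a mutex" pattern follows; the balance variable, the deposit function, and the thread/iteration counts are illustrative assumptions.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long balance = 0;     // shared global updated by several threads

    static void *deposit(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    // only one thread may own the mutex at a time
            balance += 1;                 // update the protected global
            pthread_mutex_unlock(&lock);  // release so other threads can proceed
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, deposit, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        printf("balance = %ld\n", balance);   // 400000 when every thread uses the mutex
        return 0;
    }

If even one of the updating threads skipped the lock/unlock pair, the final value could be lower, which is the corruption scenario described above.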

Pthreads Condition Variables
While mutexes implement synchronization by controlling thread access to data, condition variables allow threads to synchronize based upon the actual value of data.
Without condition variables, the programmer would need to have threads continually polling to check if the condition is met. A condition variable is a way to achieve the same goal without polling.
A condition variable is always used in conjunction with a mutex lock.

Main Thread
    Declare and initialize global data/variables which require synchronization.
    Declare and initialize a condition variable object.
    Declare and initialize an associated mutex.
    Create threads A and B to do work.
Thread A
    Do work up to the point where a certain condition must occur.
    Lock the associated mutex and check the value of a global variable.
    Call pthread_cond_wait() to perform a blocking wait for a signal from Thread B. Note that a call to pthread_cond_wait() automatically and atomically unlocks the associated mutex variable so that it can be used by Thread B.
    When signalled, wake up. The mutex is automatically and atomically locked.
    Explicitly unlock the mutex.
    Continue.
Thread B
    Do work.
    Lock the associated mutex.
    Change the value of the global variable that Thread A is waiting upon.
    Check the value of that global variable; if it fulfills the desired condition, signal Thread A.
    Unlock the mutex.
    Continue.
Main Thread
    Join / continue.
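The sequence above maps onto Pthreads roughly as in the following minimal sketch; the ready flag and the thread_a/thread_b function names are illustrative, not prescribed by the slides.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int ready = 0;        // the global variable Thread A waits upon

    /* Thread A: waits until Thread B sets ready. */
    static void *thread_a(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                        // re-check the condition after every wakeup
            pthread_cond_wait(&cond, &lock);  // atomically unlocks, blocks, relocks on wakeup
        printf("Thread A: condition met\n");
        pthread_mutex_unlock(&lock);          // explicitly unlock and continue
        return NULL;
    }

    /* Thread B: changes the global and signals Thread A. */
    static void *thread_b(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        ready = 1;                            // change the value Thread A is waiting upon
        pthread_cond_signal(&cond);           // condition fulfilled: signal Thread A
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }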

End of Chapter 5 Silberschatz, Galvin and Gagne