Semaphores

May 10, 2000

1 Introduction

Mutual exclusion with shared variables is difficult (e.g. Dekker's solution). Generalising to an arbitrary number of processes is also nontrivial (e.g. Lamport's bakery algorithm). Using busy-waiting is wasteful of CPU time. A blocking solution is preferable.

2 Semaphores

Semaphores are an abstract data type (ADT), and thus consist of (i) data; and (ii) operations on the data. The data encapsulated by a semaphore is a non-negative integer. The operations are wait and signal (sometimes referred to as P and V respectively, after their original Dutch names). If the integer data is only allowed to take the values 0 and 1 then the semaphore is referred to as a binary semaphore. If the integer data is allowed to take any non-negative value then the semaphore is referred to as a general semaphore.

2.1 The wait operation

wait(s) (where s is a semaphore) means decrement the value of s when the result would be non-negative. In pseudocode:

    if (s > 0) then
        s := s - 1
    else
        block process on s

A process that tries to decrement a semaphore with a value of 0 is blocked until the value becomes positive.
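As a concrete illustration of the ADT, here is a minimal sketch of a blocking semaphore in C, assuming POSIX threads; the names my_sem, my_init, my_wait and my_signal are illustrative choices rather than anything from the notes, and the sketch previews the signal operation described in the next subsection.

    /* A minimal sketch of a blocking general semaphore using POSIX threads.
       The names my_sem, my_init, my_wait and my_signal are illustrative only. */
    #include <pthread.h>

    typedef struct {
        int value;                    /* the non-negative integer data */
        pthread_mutex_t lock;         /* makes wait and signal indivisible */
        pthread_cond_t  nonzero;      /* blocked processes wait here */
    } my_sem;

    void my_init(my_sem *s, int initial) {
        s->value = initial;
        pthread_mutex_init(&s->lock, NULL);
        pthread_cond_init(&s->nonzero, NULL);
    }

    void my_wait(my_sem *s) {                        /* wait, a.k.a. P */
        pthread_mutex_lock(&s->lock);
        while (s->value == 0)                        /* block until the value is positive */
            pthread_cond_wait(&s->nonzero, &s->lock);
        s->value = s->value - 1;                     /* decrement: result is non-negative */
        pthread_mutex_unlock(&s->lock);
    }

    void my_signal(my_sem *s) {                      /* signal, a.k.a. V */
        pthread_mutex_lock(&s->lock);
        s->value = s->value + 1;
        pthread_cond_signal(&s->nonzero);            /* unblock one waiting process, if any */
        pthread_mutex_unlock(&s->lock);
    }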

wait is an atomic action, i.e. it cannot be interrupted/interleaved once called. It is possible, but inefficient and rare, to use busy-waiting instead of blocking.

2.2 The signal operation

signal(s) (where s is a semaphore) means increment the value of s. In pseudocode:

    if processes are blocked on s then
        unblock one of them
    else
        s := s + 1

signal is also indivisible.

Which process to unblock?

- FIFO (strongly fair): no starvation.
- random (weakly fair): starvation possible.
- block until s = 0 (rare).

2.3 Mutual exclusion with semaphores

    entry_protocol:  wait(s)
    exit_protocol:   signal(s)

What should the initial value for the semaphore s be? Rule of thumb: the initial value for a semaphore should be equal to the number of resources that must be shared among the processes. For mutual exclusion this implies that we use s = 1 as the initial value. Note that the solution above works for an arbitrary number of processes.

2.4 Semaphore invariants

    (i)  s >= 0
    (ii) s = s_0 + #signals - #completed_waits

Using the semaphore invariants we can provide a proof of correctness for mutual exclusion using semaphores:

    #cs = #wait(s) - #signal(s)         (from the code)
    s   = 1 + #signal(s) - #wait(s)     (semaphore invariant, with s_0 = 1)
    =>  s = 1 - #cs
    =>  s + #cs = 1
    =>  #cs = 1 - s <= 1                (since s >= 0)

Therefore there can be at most one process in the critical section at a time.
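To make this concrete, here is a sketch in C of the mutual-exclusion pattern of section 2.3, using a POSIX semaphore (sem_wait and sem_post play the roles of wait and signal) initialised to 1; the shared counter and the thread and loop counts are arbitrary illustrative choices, not taken from the notes.

    /* Sketch: mutual exclusion with a POSIX semaphore initialised to 1.
       counter, NTHREADS and NLOOPS are arbitrary illustrative choices. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NLOOPS   100000

    static sem_t s;              /* the semaphore s, initial value 1 */
    static long  counter = 0;    /* shared variable updated in the critical section */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < NLOOPS; i++) {
            sem_wait(&s);        /* entry protocol: wait(s) */
            counter++;           /* critical section */
            sem_post(&s);        /* exit protocol: signal(s) */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        sem_init(&s, 0, 1);      /* one "resource": the critical section */
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("counter = %ld (expected %d)\n", counter, NTHREADS * NLOOPS);
        sem_destroy(&s);
        return 0;
    }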

3 Pascal-FC

In Pascal-FC, semaphore is a primitive type:

    var
      s        : semaphore;
      s_array  : array[1..10] of semaphore;
      s_record : record
                   i : integer;
                   s : semaphore;
                 end;

    type
      mytype = array[1..10] of semaphore;

Semaphores can only be declared in the main program declaration (semaphores cannot be used as local variables). When used as routine parameters, semaphores must be declared as var parameters.

4 Semaphore implementation

The addition of semaphores to the language has shifted responsibility for providing mutual exclusion from the applications programmer to the semaphore implementor. The pseudocode presented earlier for wait and signal must be executed as an atomic action. There are three ways to implement semaphores:

1. Use one of the n-process shared-variable solutions.
2. Disable interrupts. This only works on a uniprocessor and is not ideal.
3. Use a special microprocessor instruction (typically called test-and-set).

4.1 Test and Set

The test-and-set (TAS) instruction carries out an indivisible operation that is suitable for ensuring mutual exclusion.

1. It tests a memory location and sets the condition register to reflect whether the contents of that memory location were zero or non-zero.
2. It then sets the contents of the memory location to be non-zero.
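The same idea is available portably through C11's atomic_flag, whose atomic_flag_test_and_set operation behaves like TAS. The following is only a sketch (the identifiers lockbyte, lock_acquire and lock_release are illustrative, not from the notes); the assembly-level idiom from the notes appears just after it.

    /* Sketch: a busy-waiting lock built on test-and-set, using C11 <stdatomic.h>.
       The identifiers lockbyte, lock_acquire and lock_release are illustrative. */
    #include <stdatomic.h>

    static atomic_flag lockbyte = ATOMIC_FLAG_INIT;   /* clear = unlocked */

    static void lock_acquire(void) {
        /* atomic_flag_test_and_set atomically sets the flag and returns its
           previous value: keep spinning while it was already set. */
        while (atomic_flag_test_and_set(&lockbyte))
            ;                                         /* corresponds to LOOP: TAS / BNZ LOOP */
    }

    static void lock_release(void) {
        atomic_flag_clear(&lockbyte);                 /* corresponds to CLR lockbyte */
    }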

A typical use at the assembly level would be:

    LOOP: TAS  lockbyte
          BNZ  LOOP
          ;critical section
          CLR  lockbyte

Used in this way TAS is effectively an assembly-level semaphore, with TAS corresponding to the wait operation and CLR to the signal operation. The TAS instruction can be implemented at the bus level so that it still works on a multiprocessor (the bus is locked for the duration of the TAS instruction).

So why would we not use TAS + CLR to provide mutual exclusion in the first place?

- Writing parts of the code in assembly and the rest in a high-level language adds complexity to the program (and consequently adds bugs).
- It is overkill. An application's critical section may be of arbitrary length, whereas a semaphore's critical section is only a few instructions. Locking the bus for the relatively short time it takes a semaphore's critical section to execute is acceptable; locking the bus for the duration of an application's critical section is another matter.

5 General vs Binary semaphores

These have the same expressive power, i.e. they are logically equivalent. Binary semaphores can simulate general semaphores:

    general_wait(s):
        wait(s.delay);
        wait(s.mutex);
        s.count := s.count - 1;
        if (s.count > 0) then
            signal(s.delay);
        signal(s.mutex);

where s is a record with the following declaration:

    record
      mutex : binary_semaphore;
      delay : binary_semaphore;
      count : integer;
    end;

The corresponding signal operation is:

    general_signal(s):
        wait(s.mutex);
        s.count := s.count + 1;
        if (s.count = 1) then
            signal(s.delay);
        signal(s.mutex);

6 The Producer-Consumer problem

Consider two processes, a producer process and a consumer process. The producer creates items and puts them into a buffer (to begin with we assume this buffer is unbounded, and examine finite buffers later). The consumer process takes items from the buffer. We will extend this problem to multiple producers and consumers shortly. We want to make sure that the consumer cannot overtake the producer: item i must be produced before it can be consumed.

Basic rules:

1. The producer may produce an item at any time.
2. The consumer may consume only when the buffer is non-empty.
3. The buffer is FIFO.
4. All items are (eventually) consumed.

6.1 Semaphore solution A

    Producer:
        ... deposit item in buffer ...
        signal(itemsready);

    Consumer:
        wait(itemsready);
        ... remove item from buffer ...

where itemsready is a general semaphore with an initial value of 0.

6.1.1 Proof of correctness

Assume that the consumer overtakes the producer. Then:

    #waits > #signals                    (from the code)
    #signals - #waits < 0
    itemsready < 0                       (semaphore invariant)

This is a contradiction, which proves that the consumer cannot overtake the producer. There still remains a possible problem with multiple producers and consumers concurrently accessing the buffer, so we need to protect the buffer with a semaphore.
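Before moving on to solutions B and C, here is a sketch of solution A in C for the single-producer, single-consumer case, with a POSIX semaphore itemsready initialised to 0. A large fixed array stands in for the unbounded FIFO buffer, and NITEMS is an arbitrary choice; neither appears in the notes.

    /* Sketch of semaphore solution A: one producer, one consumer.
       A large fixed array stands in for the unbounded buffer (assumption);
       NITEMS and the item values are arbitrary. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define NITEMS 1000

    static int   buffer[NITEMS];         /* stands in for the unbounded FIFO buffer */
    static int   in = 0, out = 0;        /* producer / consumer indices */
    static sem_t itemsready;             /* general semaphore, initial value 0 */

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < NITEMS; i++) {
            buffer[in++] = i;            /* produce item i and deposit it */
            sem_post(&itemsready);       /* signal(itemsready) */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        long sum = 0;
        for (int i = 0; i < NITEMS; i++) {
            sem_wait(&itemsready);       /* wait(itemsready): block if buffer empty */
            sum += buffer[out++];        /* remove the next item and consume it */
        }
        printf("consumed %d items, sum = %ld\n", NITEMS, sum);
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&itemsready, 0, 0);     /* initial value 0: buffer starts empty */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&itemsready);
        return 0;
    }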

6.2 Semaphore solution B

The buffer accesses are now protected as just described; the signalling between producer and consumer is unchanged:

    Producer:
        ... deposit item in buffer ...
        signal(itemsready);

    Consumer:
        wait(itemsready);
        ... remove item from buffer ...

Now suppose that there is only a limited number of spaces available in the buffer: a bounded buffer. Now we need to prevent overproduction as well as overconsumption, and to do this we introduce another general semaphore, spacesleft, that counts the number of empty spaces available in the buffer. The producer will decrement the number of empty spaces, so the producer will need to execute a wait on spacesleft. The consumer increments the number of empty spaces and so will execute a signal on spacesleft. Note the symmetry between the producer and consumer with respect to the semaphores itemsready and spacesleft.

6.3 Semaphore solution C

    Producer:
        wait(spacesleft);
        ... deposit item in buffer ...
        signal(itemsready);

    Consumer:
        wait(itemsready);
        ... remove item from buffer ...
        signal(spacesleft);
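Finally, a sketch of solution C in C for multiple producers and consumers: itemsready and spacesleft are POSIX counting semaphores, and a third semaphore buflock (initialised to 1) protects the buffer as suggested at the end of section 6.1.1. The buffer size, thread counts and item values are arbitrary illustrative choices.

    /* Sketch of semaphore solution C: a bounded buffer shared by several
       producers and consumers. BUFSIZE, PER_THREAD, NPROD and NCONS are
       arbitrary illustrative choices. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define BUFSIZE    8
    #define PER_THREAD 100
    #define NPROD      2
    #define NCONS      2

    static int   buffer[BUFSIZE];
    static int   in = 0, out = 0;     /* FIFO indices, wrapping around BUFSIZE */
    static sem_t itemsready;          /* counts full slots, initially 0 */
    static sem_t spacesleft;          /* counts empty slots, initially BUFSIZE */
    static sem_t buflock;             /* mutual exclusion on the buffer, initially 1 */

    static void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < PER_THREAD; i++) {
            sem_wait(&spacesleft);    /* wait(spacesleft): claim an empty slot */
            sem_wait(&buflock);
            buffer[in] = i;           /* deposit item */
            in = (in + 1) % BUFSIZE;
            sem_post(&buflock);
            sem_post(&itemsready);    /* signal(itemsready) */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < PER_THREAD; i++) {
            sem_wait(&itemsready);    /* wait(itemsready): wait for a full slot */
            sem_wait(&buflock);
            int item = buffer[out];   /* remove item */
            out = (out + 1) % BUFSIZE;
            sem_post(&buflock);
            sem_post(&spacesleft);    /* signal(spacesleft) */
            (void)item;               /* consume item */
        }
        return NULL;
    }

    int main(void) {
        pthread_t prod[NPROD], cons[NCONS];
        sem_init(&itemsready, 0, 0);
        sem_init(&spacesleft, 0, BUFSIZE);
        sem_init(&buflock, 0, 1);
        for (int i = 0; i < NPROD; i++) pthread_create(&prod[i], NULL, producer, NULL);
        for (int i = 0; i < NCONS; i++) pthread_create(&cons[i], NULL, consumer, NULL);
        for (int i = 0; i < NPROD; i++) pthread_join(prod[i], NULL);
        for (int i = 0; i < NCONS; i++) pthread_join(cons[i], NULL);
        printf("all items produced and consumed\n");
        return 0;
    }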