Dealing with Issues for Interprocess Communication


Reference: Tanenbaum, Section 2.3.

Overview. Processes frequently need to communicate with other processes. In a shell pipeline, for example, the output of one process is passed as the input of the next. There are three issues here: how can we pass information from one process to another; how do we make sure two processes don't get in each other's way when engaged in a critical activity; and how do we enforce ordering when process B depends on process A producing material before B consumes it. We will look at these issues here.

Co-operation Needs Synchronization. Threads and processes cooperate in multi-threaded/multi-process environments in order to access shared state and to coordinate their execution. For correctness, we have to control this cooperation. We must assume that threads interleave their executions arbitrarily and at different rates, since scheduling is not under the application's control. We control cooperation using synchronization.

Shared Resources. Basic problem: two concurrent threads are accessing a shared variable. If the variable is read, modified, and written by both threads, then access to the variable must be controlled; otherwise, unexpected results may occur. We'll look at: mechanisms to control access to shared resources, both low-level mechanisms (locks) and higher-level mechanisms (mutexes, semaphores, monitors, and condition variables); and patterns for coordinating access to shared resources (bounded buffer, producer-consumer, and so on).

A classic problem. Suppose we have to implement a function to withdraw money from a bank account:

    int withdraw(account, amount) {
        balance = get_balance(account);
        balance = balance - amount;
        put_balance(account, balance);
        return balance;
    }

Now suppose that you and your brother share a bank account with a balance of 100.00. What happens if you both go to separate ATM machines and simultaneously withdraw 10.00 from the account?

Example continued. Represent the situation by creating a separate thread for each person to do the withdrawals, and have both threads run on the same bank mainframe. What's the problem with this? What are the possible balance values after this runs?

Interleaved Schedules. The problem is that the execution of the two threads can be interleaved, assuming pre-emptive scheduling. What's the account balance after this sequence? Who's happy, the bank or you?
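
The lost update can be reproduced deterministically by hand, without real threads, by replaying one bad interleaving of the read-modify-write steps. This is a sketch (the variable names are illustrative, not from the slides):

```python
# Simulate two ATM withdrawals of 10.00 from a 100.00 balance,
# interleaved so that both read the balance before either writes it.
balance = 100

atm1_view = balance       # ATM 1: get_balance() -> 100
atm2_view = balance       # ATM 2: get_balance() -> 100 (about to go stale)
balance = atm1_view - 10  # ATM 1: put_balance(90)
balance = atm2_view - 10  # ATM 2: put_balance(90) -- ATM 1's withdrawal is lost

print(balance)  # 90, not the expected 80
```

The bank is happy; you are not: one of the two withdrawals simply vanishes.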

Introducing the Race Condition

The crux of the matter. The problem is that two concurrent threads (or processes) access a shared resource (the account) without any synchronization. This creates a race condition: the output is non-deterministic and depends on timing. We need mechanisms for controlling access to shared resources in the face of concurrency, so that we can reason about the operation of programs; essentially, we are re-introducing determinism. Synchronization is necessary for any shared data structure: buffers, queues, lists, hash tables, and so on.

Race Condition: A Low-Level View. The producer's ++count at the register level:

    register1 = count;
    register1 = register1 + 1;
    count = register1;

The consumer's --count at the register level:

    register2 = count;
    register2 = register2 - 1;
    count = register2;

With count initially 5, the producer expects count = 6 and the consumer expects count = 4. One execution sequence as seen by the CPU:

    S0: producer executes register1 = count          (register1 = 5, count = 5)
    S1: producer executes register1 = register1 + 1  (register1 = 6)
    S2: consumer executes register2 = count          (register2 = 5)
    S3: consumer executes register2 = register2 - 1  (register2 = 4)
    S4: producer executes count = register1          (count = 6)
    S5: consumer executes count = register2          (count = 4)

The final value of count depends entirely on the interleaving.

When Are Resources Shared? Local variables are not shared: they refer to data on the stack, each thread has its own stack, and you should never pass, share, or store a pointer to a local variable on another thread's stack. Global variables are shared: they are stored in the static data segment and accessible by any thread. Dynamic objects are shared: they are stored in the heap, and shared if you can name them. In C you can conjure up the pointer, e.g. void *x = (void *) 0xDEADBEEF; in Java, strong typing prevents this, and you must pass references explicitly.

Race condition. When several threads (or processes) access or manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which access takes place, this is called a race condition. We want to avoid race conditions! We need mutual exclusion, which means we need synchronization of the threads.

Mutual Exclusion

Critical Region

Critical Section Requirements. Critical sections have the following requirements: mutual exclusion (at most one thread is in the critical section at a time); progress (if thread T is outside the critical section, then T cannot prevent thread S from entering it); bounded waiting, i.e. no starvation (if thread T is waiting on the critical section, then T will eventually enter it; this assumes threads eventually leave critical sections); and performance (the overhead of entering and exiting the critical section is small with respect to the work being done within it).

Mechanisms for Enforcing Mutual Exclusion in Critical Regions. Hardware solutions. Locks: very primitive, minimal semantics; used to build the others. Semaphores: basic, easy to get the hang of, hard to program with. Monitors: high level, requires language support, implicit operations; easy to program with; Java's synchronized() is an example. Ref 2.3.3 Tanenbaum.

Letting H/W Help Us. Hardware solutions are available to help us with the critical-section problem. Solution 1: disallow interrupts. While a shared variable is being modified we disallow interrupts (no preemption or context switching allowed). Operating systems themselves use this technique, but is it a good idea for user processes? What happens if they forget to re-enable interrupts? In a multi-CPU machine, disabling interrupts on one CPU will not prevent processes on another CPU from entering the critical region. Would it be reasonable to disable interrupts on all CPUs? Solution 2: provide a H/W instruction that allows us to (a) test and modify the contents of a word atomically, or (b) swap the contents of two words atomically.

Locks. This is a software solution. We create a single, shared (lock) variable. If a process wants to enter a critical region, it tests the lock: if the lock is 0 (open), it sets it to 1 and enters the region; if the lock is 1, it waits until the lock becomes 0.

Locks. A lock is an object (in memory) that provides the following two operations: acquire(), which a thread calls before entering a critical section, and release(), which a thread calls after leaving a critical section. Threads pair up calls to acquire() and release(): between acquire() and release(), the thread holds the lock. acquire() does not return until the caller holds the lock, and at most one thread can hold a lock at a time (usually). So: what can happen if the calls aren't paired? There are two basic flavours of locks: spinlocks (i.e. those that use busy waiting) and blocking locks (a.k.a. mutexes).
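
The acquire()/release() pairing can be sketched with Python's threading.Lock around the bank-account example. This is illustrative only; the withdraw function and thread setup are not from the slides:

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    lock.acquire()      # enter the critical section
    b = balance         # read
    b = b - amount      # modify
    balance = b         # write
    lock.release()      # leave the critical section

# Two "ATMs" withdraw 10 each, concurrently.
threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # always 80: the read-modify-write can no longer interleave
```

Because at most one thread holds the lock at a time, the read-modify-write sequence executes as a unit and the lost update disappears.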

Using Locks. What happens when green tries to acquire the lock? Remember that acquire() will not return until the lock is available; this means that green will block until pink releases the lock.

Spinlocks are locks that employ busy waiting. How do we implement locks? Here's one attempt, the spinlock:

Implementing locks (continued). The problem is that the implementation of locks has critical sections, too! The acquire/release must be atomic; atomic means it executes as though it could not be interrupted, i.e. the code executes all or nothing. How to solve this? (a) We could use Peterson's software method, or (b) we could enlist help from the hardware: atomic instructions (test-and-set, compare-and-swap) or disabling/re-enabling interrupts to prevent context switches.

Peterson's solution (Tanenbaum p. 106). But we still have busy waiting.
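
A sketch of Peterson's two-thread algorithm in Python. Note the caveat: Peterson's solution assumes sequentially consistent memory; CPython's global interpreter lock makes this demonstration behave, but on real hardware with weak memory ordering it would need memory barriers. The worker/counter scaffolding is illustrative:

```python
import threading

flag = [False, False]  # flag[i]: thread i wants to enter
turn = 0               # whose turn it is to yield on a tie
count = 0              # shared state protected by the algorithm

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True                        # announce interest
    turn = other                          # give way if both want in
    while flag[other] and turn == other:
        pass                              # busy wait (the remaining drawback)

def leave(i):
    flag[i] = False

def worker(i):
    global count
    for _ in range(5000):
        enter(i)
        count += 1                        # critical section
        leave(i)

ts = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(count)  # 10000: mutual exclusion held, but CPU time was burned spinning
```

Mutual exclusion is achieved in pure software, yet a waiting thread still spins, which is exactly the problem the slide points out.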

(b) Getting Help from the H/W for Locks

Getting Help from the H/W for Locks. Some instruction sets assist us in implementing mutual exclusion. The instruction is commonly called Test and Set Lock (TSL). It reads the contents of a memory location, stores it in a register, and then stores a non-zero value at the address. It is guaranteed to be indivisible.
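
Python has no TSL instruction, but Lock.acquire(blocking=False) behaves like one: it atomically tests the lock word and sets it, reporting whether it succeeded. A spinlock sketch built on that stand-in (an emulation of the idea, not real hardware TSL; the SpinLock class is illustrative):

```python
import threading

class SpinLock:
    """Spinlock built on a TSL-style atomic test-and-set primitive."""
    def __init__(self):
        self._flag = threading.Lock()  # stands in for the lock word in memory

    def acquire(self):
        # TSL loop: atomically test-and-set until we find the lock free.
        while not self._flag.acquire(blocking=False):
            pass                       # busy wait

    def release(self):
        self._flag.release()           # store 0 back into the lock word

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(5000):
        lock.acquire()
        counter += 1                   # critical section
        lock.release()

ts = [threading.Thread(target=worker) for _ in range(2)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(counter)  # 10000
```

The indivisible test-and-set closes the window between testing the lock and setting it that made the naive shared-variable lock unsafe.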

Problems with Spinlocks. Spinning is horribly wasteful! If the thread that wants the lock has higher priority than the thread holding the lock, then while it spins the lock holder cannot make progress; this is called priority inversion. How did the lock holder yield the CPU in the first place? Either it called yield() or sleep(), or there was an involuntary context switch. We only want spinlocks as primitives with which to build higher-level synchronization constructs. The major problem with spinlocks is busy waiting.


Semaphores. A semaphore is a synchronization primitive at a higher level than locks, invented by Dijkstra in 1968 as part of the THE operating system. A semaphore is an integer variable, manipulated atomically through two operations. P(S), also called wait, down, test, lock, or decrement: if it can, it decrements the semaphore value (which must stay >= 0) and continues; if not, it blocks (sleeps) until the semaphore permits it. V(S), also called signal, up, unlock, or increment: it increments the semaphore value, allowing another process to enter. We don't have busy-waiting processes; we have sleeping processes, which do not use the CPU. (Note: P and V relate to Dutch words.)

Blocking in Semaphores. Each semaphore has an associated queue of processes/threads. When P(S) or wait() is called by a thread, the counter is decremented: if the semaphore is open (counter >= 0 after the decrement), the thread continues; if the semaphore is closed (counter < 0), the thread blocks and waits on the queue. V(S) or signal() opens the semaphore: if threads are waiting on the queue, one thread is unblocked; if no threads are on the queue, the signal is remembered for the next time wait() is called. In other words, a semaphore has history, and this history is the counter: if the counter falls below 0 (after a decrement), the semaphore is closed.

Two types of semaphores. A binary semaphore (a.k.a. mutex semaphore) guarantees mutually exclusive access to a resource: only one thread/process is allowed entry at a time, and the counter is initialized to 1. A counting semaphore (a.k.a. counted semaphore) represents a resource with many units available: it allows threads/processes to enter as long as more units are available, and the counter is initialized to N, the number of units available.
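
A counting-semaphore sketch using Python's threading.Semaphore: N = 3 units of a resource, ten threads competing for them. The peak/active bookkeeping is illustrative instrumentation, not part of the semaphore itself:

```python
import threading
import time

N = 3                               # units of the resource
sem = threading.Semaphore(N)        # counting semaphore, counter initialized to N
active = 0                          # how many threads currently hold a unit
peak = 0                            # highest value of active ever observed
state = threading.Lock()            # protects the bookkeeping counters

def use_resource():
    global active, peak
    sem.acquire()                   # P(): take one unit, or block if none left
    with state:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)                # hold the unit briefly
    with state:
        active -= 1
    sem.release()                   # V(): return the unit

ts = [threading.Thread(target=use_resource) for _ in range(10)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(peak)  # never exceeds N
```

Initializing the counter to 1 instead of N turns the same object into a binary semaphore guarding a single critical section.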

Problems with Semaphores. They can be used to solve any of the traditional synchronization problems, but: semaphores are essentially shared global variables that can be accessed from anywhere (bad software engineering); there is no connection between the semaphore and the data being controlled by it; they are used both for critical sections (mutual exclusion) and for coordination (scheduling); and there is no control over their use and no guarantee of proper usage. Thus, they are prone to bugs.

Problems with Semaphores: Deadlock and Starvation. Deadlock: two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes. Let S and Q be two semaphores initialized to 1:

    P0:        P1:
    P(S);      P(Q);
    P(Q);      P(S);
    ...        ...
    V(S);      V(Q);
    V(Q);      V(S);

If P0 takes S and P1 takes Q, each then blocks waiting for the semaphore the other holds. Starvation: indefinite blocking; a process may never be removed from the semaphore queue in which it is suspended.
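
One standard fix for this kind of deadlock is to make every process acquire the semaphores in the same global order, so no circular wait can form. A sketch of the repaired protocol (the p0/p1 functions and the done list are illustrative):

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
done = []

def p0():
    S.acquire()        # both processes take S first...
    Q.acquire()        # ...then Q, so no circular wait can form
    done.append("P0")  # work with both resources
    Q.release()
    S.release()

def p1():
    S.acquire()        # same global order as P0 (the fix)
    Q.acquire()
    done.append("P1")
    Q.release()
    S.release()

ts = [threading.Thread(target=p0), threading.Thread(target=p1)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(sorted(done))  # ['P0', 'P1'] -- both finish, no deadlock
```

With the opposite orders from the slide, a run in which P0 holds S and P1 holds Q hangs forever; with a single agreed order, whichever process gets S first also gets Q, finishes, and releases both.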


Monitors. Semaphores can be used incorrectly by programmers, so we need high-level language support. A monitor is a programming-language construct that supports controlled access to shared data: the synchronization code is added by the compiler and enforced at runtime. Why does this help? A monitor is a software module that encapsulates shared data structures, the procedures that operate on the shared data, and the synchronization between concurrent processes that invoke those procedures. The monitor protects the data from unstructured access: it guarantees that the data can only be accessed through its procedures, hence in legitimate ways.

Monitors

Monitor facilities. Mutual exclusion: only one process can be executing inside the monitor at any time, so synchronization is implicitly associated with the monitor; if a second process tries to enter a monitor procedure, it blocks until the first has left the monitor. This is more restrictive than semaphores, but easier to use most of the time. Once inside, a process may discover it can't continue, and may wish to sleep or to allow some other waiting process to continue. Condition variables are provided within the monitor for this: processes can wait on them or signal others to continue, and a condition variable can only be accessed from inside the monitor.
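
The monitor pattern can be sketched with Python's threading.Condition: one lock gives the implicit mutual exclusion, and condition variables let threads sleep inside it. The BoundedBuffer class below is illustrative, with enter()/remove() named after the bounded-buffer operations used later in these slides:

```python
import threading

class BoundedBuffer:
    """Monitor-style bounded buffer: the lock provides mutual exclusion,
    the condition variables provide wait/signal inside the monitor."""
    def __init__(self, size):
        self.items = []
        self.size = size
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def enter(self, item):                    # producer
        with self.lock:                       # enter the monitor
            while len(self.items) == self.size:
                self.not_full.wait()          # sleep until a slot frees up
            self.items.append(item)
            self.not_empty.notify()           # wake a waiting consumer

    def remove(self):                         # consumer
        with self.lock:                       # enter the monitor
            while not self.items:
                self.not_empty.wait()         # sleep until an item arrives
            item = self.items.pop(0)
            self.not_full.notify()            # wake a waiting producer
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(
    target=lambda: results.extend(buf.remove() for _ in range(5)))
consumer.start()
for i in range(5):
    buf.enter(i)      # blocks whenever the 2-slot buffer is full
consumer.join()

print(results)  # [0, 1, 2, 3, 4]
```

Note the while (not if) around each wait(): a woken thread re-checks its condition before proceeding, which is the standard discipline for condition variables.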

Classical Problems of Synchronization

Classical Problems of Synchronization. The Bounded-Buffer Problem (2 examples): a producer places items in the buffer by calling enter() and a consumer removes items by calling remove(). The Readers and Writers Problem: a database is shared among several concurrent threads, some of which may want to read the database (readers) while others may want to update it (writers). We can't let a reader thread and a writer thread at the same piece of data without problems, but it is fine for two readers to read the same data at the same time. The Dining-Philosophers Problem.
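
A sketch of the first readers-writers solution built from a mutex and a lock, in the spirit of the classic two-semaphore formulation: the first reader locks writers out, the last reader lets them back in. The class and the increment workload are illustrative:

```python
import threading

class ReadersWriterLock:
    """First readers-writers sketch: many readers share the database,
    a writer needs exclusive access. Writers can starve in this variant."""
    def __init__(self):
        self.readers = 0
        self.mutex = threading.Lock()  # protects the reader count
        self.db = threading.Lock()     # held by a writer, or by readers as a group

    def start_read(self):
        with self.mutex:
            self.readers += 1
            if self.readers == 1:
                self.db.acquire()      # first reader locks out writers

    def end_read(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:
                self.db.release()      # last reader lets writers in

    def start_write(self):
        self.db.acquire()              # exclusive access

    def end_write(self):
        self.db.release()

value = 0
rw = ReadersWriterLock()

def writer():
    global value
    for _ in range(1000):
        rw.start_write()
        v = value                      # read-modify-write is safe:
        value = v + 1                  # writers are mutually exclusive
        rw.end_write()

def reader():
    for _ in range(1000):
        rw.start_read()
        _ = value                      # many readers may be here at once
        rw.end_read()

ts = [threading.Thread(target=f) for f in (writer, writer, reader, reader)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(value)  # 2000: no writer update was lost
```

This variant favours readers; a steady stream of readers can starve a writer, which is why "second" and fair readers-writers variants exist.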

Dining-Philosophers Problem. When you think, you don't eat. When you want to eat, you take 2 chopsticks and eat without interruption. When you finish eating, you put the chopsticks down. This introduces complex synchronization problems when deadlock and starvation are not allowed. Consider each chopstick to be a semaphore: Semaphore chopstick[] = new Semaphore[5];
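
The chopsticks-as-semaphores idea can be sketched in Python, with deadlock avoided by having each philosopher pick up the lower-numbered chopstick first (one of several standard fixes: a single global acquisition order rules out circular wait). The meals list is illustrative instrumentation; the slide's Java array becomes a Python list:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]  # one semaphore per chopstick
meals = [0] * N

def philosopher(i):
    left, right = i, (i + 1) % N
    # Acquire in global order (lower index first) so no circular wait can form.
    first, second = min(left, right), max(left, right)
    for _ in range(10):
        chopstick[first].acquire()
        chopstick[second].acquire()
        meals[i] += 1                  # eat
        chopstick[second].release()
        chopstick[first].release()     # put the chopsticks down, then think

ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in ts:
    t.start()
for t in ts:
    t.join()

print(meals)  # every philosopher ate 10 times; no deadlock occurred
```

If instead every philosopher grabbed the left chopstick first, all five could each hold one chopstick and wait forever for the other: exactly the deadlock the slide warns about.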