CS 361 Concurrent programming, Drexel University, Fall 2004, Lecture 9. Midterm next Tuesday, May 4.

CS 361 Concurrent programming, Drexel University, Fall 2004, Lecture 9. Bruce Char and Vera Zaychik. All rights reserved by the authors. Permission is given to students enrolled in CS361 Fall 2004 to reproduce these notes for their own use.

Midterm next Tuesday, May 4

Covers chapters 1-3, plus section 4.1, of the Hartley book; sections 2.1-2.6, 3.1, 3.6, 3.7, 4.1, 5.1-5.4, and 5.6 of the Silberschatz Applied Operating Systems textbook; and lecture notes 1-10. Closed book. Kinds of questions:
- Describe scenarios and consequences of race conditions.
- Analyze code for properties (mutual exclusion, starvation, deadlock, etc.).
- Assertions: what must be true at a certain point in a program? What might be true? What is impossible?
- Describe/define terms carefully.
- Describe the meaning of code; describe the consequences of changing code.
- Describe the operation of a binary semaphore.

Lamport's bakery algorithm for mutual exclusion

Basic idea: a thread wishing to enter its critical section computes the next ticket number and waits its turn, analogous to customers entering a bakery or deli and drawing a number from the ticket dispenser.

Getting ticket numbers that work concurrently is the crux of the problem. We can't guarantee that a thread will get a unique number when it tries, due to a race condition in reading the present number. Some computers have a single uninterruptable memory get-and-increment instruction, but we are assuming a load/store-only memory architecture, so we have no way of drawing a ticket without introducing a race condition.

Calculating ticket numbers

Have each thread compute the next ticket number it will use as (the maximum of all outstanding ticket numbers) + 1. Two or more threads might compute the same number with this scheme, so a unique constant numeric identifier associated with each thread is used to break ties.
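The rule just described can be stated as two small pure functions (class and function names here are mine, not the lecture's): the next ticket is one more than the maximum outstanding ticket, and ties go to the smaller thread id.

```java
// Sketch of the ticket-number rule: next ticket = 1 + max of all
// outstanding tickets, with ties broken by the lower thread id.
public class TicketRule {
    // Compute the next ticket a thread would draw.
    static int nextTicket(int[] tickets) {
        int mx = 0;
        for (int t : tickets) if (t > mx) mx = t;
        return mx + 1;
    }

    // True if thread i goes before thread j: the smaller ticket wins,
    // and equal tickets fall back to the smaller thread id.
    static boolean goesFirst(int ticketI, int i, int ticketJ, int j) {
        return ticketI < ticketJ || (ticketI == ticketJ && i < j);
    }

    public static void main(String[] args) {
        int[] outstanding = {0, 3, 2};  // thread 1 holds 3, thread 2 holds 2
        System.out.println(nextTicket(outstanding));  // 4
        System.out.println(goesFirst(3, 0, 3, 1));    // tie broken by id: true
    }
}
```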
Lamport's bakery algorithm for N threads

    class Ticket {
        public volatile int value = 0;
    }

    class Arbitrator {
        // Lamport's bakery ticket algorithm.
        private static final int NUM_NODES = 2; // for two nodes only
        private Ticket[] ticket = new Ticket[NUM_NODES]; // array of tickets

Lamport's algorithm, continued

    // Continuation of the Arbitrator class.

        // Arbitrator constructor; takes the number of contenders (= 2) as argument.
        public Arbitrator(int numNodes) {
            if (numNodes != NUM_NODES) {
                System.err.println("Arbitrator: numNodes=" + numNodes
                                   + " which is != " + NUM_NODES);
                System.exit(1);
            }
            // Initialize the array of tickets.
            for (int i = 0; i < NUM_NODES; i++)
                ticket[i] = new Ticket();
        }

        // The other method in the Arbitrator class:
        private int other(int i) {
            return (i + 1) % NUM_NODES;
        }

Pre- and post-CS protocols for the Lamport algorithm

        public void wantToEnterCS(int i) { // pre-protocol
            ticket[i].value = 1;
            ticket[i].value = ticket[other(i)].value + 1; // compute next ticket
            while (!(ticket[other(i)].value == 0
                     || ticket[i].value < ticket[other(i)].value
                     || (ticket[i].value == ticket[other(i)].value
                         && i == 0))) // break a tie
                Thread.currentThread().yield(); // busy wait
        }

        public void finishedInCS(int i) { // post-protocol
            ticket[i].value = 0;
        }
    }

Explanation of the ticket condition

Another way to state the while condition: proceed into the critical section if any of the following is true:
- the other thread's ticket is zero, or
- your ticket's value is less than the other thread's ticket, or
- your ticket's value is equal to the other thread's ticket and you're thread #0.

Work out what happens:
- Try to deadlock.
- Try to defeat mutual exclusion.
- Is there starvation in the absence of contention?
- Try to achieve starvation in the presence of contention.
- Why don't the standard problems with race conditions arise?
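The two-thread protocol above can be exercised end to end. This is a demo sketch, not the lecture's code (the class name and harness are mine): two threads repeatedly enter the critical section to bump a shared, otherwise unprotected counter, with the two tickets held in volatile ints as in the Ticket class. If mutual exclusion holds, no increments are lost.

```java
// Two-thread bakery demo: both threads bump a shared, non-volatile
// counter inside the critical section guarded by the bakery protocol.
public class BakeryDemo {
    static final int ITERS = 20000;
    static volatile int t0 = 0, t1 = 0; // the two tickets
    static int counter = 0;             // protected only by the protocol

    static int mine(int i)  { return i == 0 ? t0 : t1; }
    static int other(int i) { return i == 0 ? t1 : t0; }

    static void enter(int i) { // pre-protocol
        if (i == 0) { t0 = 1; t0 = t1 + 1; }
        else        { t1 = 1; t1 = t0 + 1; }
        while (!(other(i) == 0
                 || mine(i) < other(i)
                 || (mine(i) == other(i) && i == 0))) // break a tie
            Thread.yield(); // busy wait
    }

    static void exit(int i) { // post-protocol
        if (i == 0) t0 = 0; else t1 = 0;
    }

    static int run() {
        Thread a = new Thread(() -> {
            for (int k = 0; k < ITERS; k++) { enter(0); counter++; exit(0); }
        });
        Thread b = new Thread(() -> {
            for (int k = 0; k < ITERS; k++) { enter(1); counter++; exit(1); }
        });
        a.start(); b.start();
        try { a.join(); b.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(run()); // expect 2 * ITERS = 40000
    }
}
```

The volatile tickets give the sequentially consistent reads and writes the algorithm assumes; the unprotected counter is the witness that mutual exclusion actually held.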

No race conditions

Each thread updates a distinct set of variables, so we can't have the read/write/inspect interleaving by multiple threads.

Lamport's algorithm with more than two threads

    class Ticket {
        public volatile int value = 0;
    }

    class Arbitrator {
        // Lamport's bakery ticket algorithm.
        private int numNodes = 0;
        private Ticket[] ticket = null;

        public Arbitrator(int numNodes) {
            this.numNodes = numNodes;
            ticket = new Ticket[numNodes];
            for (int i = 0; i < numNodes; i++)
                ticket[i] = new Ticket();
        }

        private int maxx(Ticket[] ticket) { // find the maximum of all tickets
            int mx = ticket[0].value;
            for (int i = 1; i < ticket.length; i++)
                if (ticket[i].value > mx) mx = ticket[i].value;
            return mx;
        }

        public void wantToEnterCS(int i) { // pre-protocol
            ticket[i].value = 1;
            ticket[i].value = 1 + maxx(ticket); // compute next ticket
            for (int j = 0; j < numNodes; j++)
                if (j != i)
                    while (!(ticket[j].value == 0
                             || ticket[i].value < ticket[j].value
                             || (ticket[i].value == ticket[j].value
                                 && i < j))) // break a tie
                        Thread.currentThread().yield(); // busy wait
        }

        public void finishedInCS(int i) { // post-protocol
            ticket[i].value = 0;
        }
    }

Notes on the condition

So now we proceed into the critical section if, for all nodes j != i, at least one of the following is true:
- thread j's ticket is zero, or
- thread j's ticket value is higher than that of thread i, or
- thread j's ticket value is equal to that of thread i, but i is the smallest thread number of all those tied (tie breaker).

Nice properties of Lamport's algorithm

Lamport's bakery algorithm has all the good properties: it enforces mutual exclusion, it does not deadlock, it does not livelock, it prevents starvation in the absence of contention, and it prevents starvation in the presence of contention.

How to enforce mutual exclusion via hardware

Method #1: turn off interrupts. This works on uniprocessor machines; you also need to hardware-lock the memory bus. Turning off interrupts means that time-slicing can't happen, so a running thread will continue to run until it yields:
- No cycles are wasted on busy-waiting.
- Some OSs running in privileged (kernel) mode allow this.
- But it makes the machine unresponsive to anything else while the thread is in its critical section.
- It doesn't work well with multiprocessors.
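Before moving on to hardware support, the N-thread proceed condition from the slides above can be factored into a pure function and checked on fixed ticket arrays (the class and function names here are mine, not the lecture's).

```java
// The N-thread proceed test as a pure function: may thread i enter its
// critical section, given everyone's tickets? A ticket of 0 means
// "not contending".
public class ProceedRule {
    static boolean mayProceed(int[] ticket, int i) {
        for (int j = 0; j < ticket.length; j++) {
            if (j == i) continue;
            boolean ok = ticket[j] == 0
                      || ticket[i] < ticket[j]
                      || (ticket[i] == ticket[j] && i < j); // break a tie
            if (!ok) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] t = {2, 2, 0};                  // threads 0 and 1 tied, 2 idle
        System.out.println(mayProceed(t, 0)); // true: lowest id wins the tie
        System.out.println(mayProceed(t, 1)); // false: must wait for thread 0
    }
}
```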

Method #2: a test-and-set instruction

TS(R, addr, value): retrieves the value of addr into register R, and then stores the specified value into addr. This is an atomic, uninterruptable assembly-language instruction found on many computers. From it we can build a software procedure useful for mutual exclusion:

    boolean testAndSet(boolean flag) {
        Register R;
        TS(R, flag, true); // get the current value of flag, and set the flag
                           // to true, in one uninterruptable operation
        return R;
    }

Using testAndSet to implement mutual exclusion

    boolean lockFlag = false;

    wantToEnterCS(int i) {
        while (testAndSet(lockFlag))
            ; // busy wait
    }

    finishedInCS(int i) {
        lockFlag = false;
    }

- Works for any number of threads.
- Does not deadlock.
- Does not suffer from starvation in the absence of contention.
- There may be starvation in the presence of contention, because some thread could be locked out indefinitely.

Solutions that use blocking

Busy waiting burns CPU cycles during the wait, particularly in a multiprocessor situation. Can we use features such as Java's wait and notify to avoid needless use of cycles?

Ideal methods delay() and wakeup()

delay() removes its thread from the runnable state and places it at the end of a queue of delayed threads; its state is changed from running to blocked. wakeup() (invoked by a running thread, not the delayed one, obviously) moves the thread at the head of the delay queue back to the ready queue, and its state is changed from blocked to runnable. If the delay queue is empty, wakeup has no effect. Threads in the delay queue avoid busy waiting.

Does this work?

    public void wantToEnterCS(int i) {
        desireCS[i].value = true;
        last = i;
        while (desireCS[other(i)].value && last == i)
            delay();
    }

    public void finishedInCS(int i) {
        desireCS[i].value = false;
        wakeup();
    }

A problem with this approach: a missed signal.
If there is a context switch away from a thread executing wantToEnterCS between the time it does the while test and the time it does the delay, and the other thread does its wakeup before the thread in the pre-protocol does the delay, then the delayed thread misses the wakeup and can hang forever.
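This missed-signal race is exactly what Java's monitor methods are designed to close: inside a synchronized method, the condition test and the wait happen while the object's lock is held, and wait releases that lock atomically as the thread blocks, so a notify cannot slip in between the test and the wait. A minimal one-slot handoff sketch (class and method names are mine, not from the lecture):

```java
// One-slot handoff using synchronized/wait/notify. Unlike the
// delay()/wakeup() sketch, the emptiness test and the wait happen while
// holding the monitor lock, so a wakeup cannot be missed.
public class Handoff {
    private double slot;
    private boolean full = false;

    public synchronized void put(double v) throws InterruptedException {
        while (full)     // re-check the condition after every wakeup
            wait();      // atomically releases the lock and blocks
        slot = v;
        full = true;
        notifyAll();     // wake any waiting fetcher
    }

    public synchronized double get() throws InterruptedException {
        while (!full)
            wait();
        full = false;
        notifyAll();     // wake any waiting putter
        return slot;
    }

    // Demo helper: hand one value from a producer thread to the caller.
    static double roundTrip(double v) {
        Handoff h = new Handoff();
        Thread producer = new Thread(() -> {
            try { h.put(v); } catch (InterruptedException e) { }
        });
        producer.start();
        try {
            double r = h.get();
            producer.join();
            return r;
        } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42.0)); // 42.0
    }
}
```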

Bounded buffer code, revisited

Producer/consumer code (Class 3.19, p. 84):

    public void deposit(double value) {
        if (buffer[putin].occupied) {
            Thread producer = Thread.currentThread();
            buffer[putin].thread = producer;
            producer.suspend(); // bad context switch here?
            buffer[putin].thread = null;
        }
        buffer[putin].value = value;
        buffer[putin].occupied = true;
        Thread consumer = buffer[putin].thread;
        putin = (putin + 1) % numSlots;
        if (consumer != null)
            consumer.resume(); // a consumer is waiting
    }

    public double fetch() {
        double value;
        if (!buffer[takeout].occupied) {
            Thread consumer = Thread.currentThread();
            buffer[takeout].thread = consumer;
            consumer.suspend(); // bad context switch?
            buffer[takeout].thread = null;
        }
        value = buffer[takeout].value;
        buffer[takeout].occupied = false;
        Thread producer = buffer[takeout].thread;
        takeout = (takeout + 1) % numSlots;
        if (producer != null)
            producer.resume(); // a producer is waiting
        return value;
    }

What goes wrong with this?

This has the same problem as the original pseudocode: a missed wakeup, because the resume can occur before the suspend.

A busy-waiting solution to the bounded buffer problem

    class BufferItem {
        public volatile double value = 0;         // multiple threads access
        public volatile boolean occupied = false; // so make these `volatile'
    }

Bounded buffer class

    class BoundedBuffer {
        // Designed for a single producer thread and a single consumer thread.
        private int numSlots = 0;
        private BufferItem[] buffer = null;
        private int putin = 0, takeout = 0;
        // private int count = 0;

        public BoundedBuffer(int numSlots) {
            if (numSlots <= 0)
                throw new IllegalArgumentException("numSlots <= 0");
            this.numSlots = numSlots;
            buffer = new BufferItem[numSlots];
            for (int i = 0; i < numSlots; i++)
                buffer[i] = new BufferItem();
        }

Busy-waiting bounded buffer producer

        public void deposit(double value) {
            while (buffer[putin].occupied)
                Thread.currentThread().yield(); // busy wait
            buffer[putin].value = value;    // A
            buffer[putin].occupied = true;  // B
            putin = (putin + 1) % numSlots;
            // count++; // race condition!!!
        }

Busy-waiting bounded buffer consumer

        public double fetch() {
            double value;
            while (!buffer[takeout].occupied)
                Thread.currentThread().yield(); // busy wait
            value = buffer[takeout].value;      // C
            buffer[takeout].occupied = false;   // D
            takeout = (takeout + 1) % numSlots;
            // count--; // race condition!!!
            return value;
        }
    }

What could go wrong here?

- It uses value and occupied, which must be declared volatile if the consumer is to see updates in the same order as the producer makes them (and vice versa).
- If the consumer thread has a higher priority than the producer, the consumer could busy-wait forever and prevent the producer from ever getting a time slice in which to put something into the buffer. This is a form of starvation even though there is no critical section, and it is always a danger on a system that doesn't have time-slicing.

Semaphores

A counting semaphore is an abstract data type with two atomic (uninterruptable) operations, P and V. The data field known as the value of the semaphore is an integer that is supposed to take on non-negative values (0, 1, 2, ...). When created, the value of the semaphore is initialized by the constructor, e.g.

    Semaphore S = new Semaphore(1);

S.P(), or P(S), is known as "down" (the P is Dutch: passeren, to pass). S.V(), or V(S), is known as "up" (vrijgeven, to release).

Semaphore semantics

The P operation decrements S in an atomic (noninterruptable) action if S > 0; the thread doing the P operation then proceeds. Otherwise, if the value of the semaphore is already 0, the thread doing the P operation waits.

If a thread invokes the V operation and no thread is waiting (from doing a P), then the value of the semaphore is incremented and the thread doing the V operation proceeds. If a thread invokes the V operation and another thread is waiting (from doing a P operation on that semaphore), then one of the waiting threads is released and the thread doing the V operation proceeds.
The value of the semaphore is not incremented in this case.

Semaphore notes

Do some scenarios. Remember that P and V are atomic (uninterruptable) operations. A program can have several semaphores S1, S2, S3, etc.; doing S1.P() or S1.V() doesn't affect operations on S2, S3, etc.

Another notation for describing P and V, with s holding the value of the semaphore:

    P: < await (s > 0); s = s - 1; >
    V: < s = s + 1; >
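The awaits-and-assignments notation above can be sketched with a Java monitor. This is an illustration, not the lecture's code (the class name is mine; Java's own counting semaphore is java.util.concurrent.Semaphore). One deliberate difference from the slide's semantics: V here always increments and notify lets an awakened waiter re-decrement, which is observably equivalent.

```java
// A counting semaphore per the < await (s > 0); s = s - 1 > / < s = s + 1 >
// semantics, built on a Java monitor.
public class CountingSemaphore {
    private int value;

    public CountingSemaphore(int initial) {
        if (initial < 0) throw new IllegalArgumentException("initial < 0");
        value = initial;
    }

    public synchronized void P() {
        while (value == 0) {            // await (s > 0)
            try { wait(); }
            catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        value--;                        // s = s - 1
    }

    public synchronized void V() {
        value++;                        // s = s + 1
        notify();                       // release one waiter, if any
    }

    public synchronized int value() { return value; }

    public static void main(String[] args) {
        CountingSemaphore s = new CountingSemaphore(1);
        s.P();                          // value drops to 0
        System.out.println(s.value());  // 0
        new Thread(() -> s.V()).start(); // another thread releases us
        s.P();                          // blocks until the V above, then proceeds
        s.V();
        System.out.println(s.value()); // 1
    }
}
```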

Binary semaphores

A binary semaphore is limited to the values 0 and 1. A V operation applied to a semaphore whose value is already 1 has no effect. Binary semaphores are also called mutex locks. In some implementations of binary semaphores, P is named lock and V is named unlock.

Locks

Sometimes the concept of a lock is refined to include the idea that only the holder of a lock is allowed to release it. This makes sense when the lock gives the holder exclusive access to a resource. However, sometimes a binary semaphore is used to block a thread until some event caused by another thread has occurred. For that reason, binary semaphores place no restrictions on who does the down and who does the up.

Back to general binary semaphores

Binary semaphores are used for mutual exclusion synchronization (enforcing mutual exclusion on critical sections) and for condition synchronization (blocking threads until some condition becomes true or some event occurs, as with producer/consumer).

How to do mutual exclusion with binary semaphores

Use a shared binary semaphore mutex with initial value 1:

    mutex.P(); // pre-protocol
    critical section;
    mutex.V(); // post-protocol

Do some scenarios

    Semaphore S = new Semaphore(1); // initialize to 1
    S.P();
    (critical section)
    S.V();

1. Thread A does S.P(): decrements the semaphore to 0, returns from the call to P, and proceeds into its critical section.
2. Thread B does S.P(): blocks.
3. Thread C does S.P(): blocks.
4. Thread A finishes its CS and does S.V(): instead of incrementing the value of the semaphore, it unblocks a thread, say B.
5. What happens if thread A does an S.P() at this point? (It blocks.)
6. Thread B finishes its CS and does S.V(): unblocks a thread, say A.
7. A finishes and does S.V(): unblocks C.
8. C finishes its CS and does S.V(): the semaphore is set to 1.
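Java's library semaphore lets you play these scenarios out directly: java.util.concurrent.Semaphore's acquire() is P and release() is V, and initialized to 1 it acts as the mutex above. The demo harness and names below are mine; several threads bump a shared counter under the mutex, so no increments are lost.

```java
import java.util.concurrent.Semaphore;

// The mutual-exclusion pattern with java.util.concurrent.Semaphore:
// acquire() plays P (pre-protocol), release() plays V (post-protocol).
public class SemMutexDemo {
    static final Semaphore mutex = new Semaphore(1); // binary use: init to 1
    static int counter = 0;

    static int run(int nThreads, int iters) {
        Thread[] ts = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            ts[t] = new Thread(() -> {
                for (int k = 0; k < iters; k++) {
                    try { mutex.acquire(); }          // P: pre-protocol
                    catch (InterruptedException e) { return; }
                    try { counter++; }                // critical section
                    finally { mutex.release(); }      // V: post-protocol
                }
            });
            ts[t].start();
        }
        try { for (Thread t : ts) t.join(); }
        catch (InterruptedException e) { throw new RuntimeException(e); }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(run(4, 5000)); // expect 4 * 5000 = 20000
    }
}
```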

Binary semaphores also handle the producer/consumer problem

In general, the way to do condition synchronization with binary semaphores is, in one thread:

    if (condition) P(S); // blocks

For example, in the consumer thread: if the buffer is empty, do a P(S), which blocks if the semaphore is initially zero. In the producer thread: as soon as a product is placed in the buffer, do V(S), giving the other thread permission to unblock. No missed signals: we can do P(S) followed by V(S), or vice versa.

Producer/consumer code with binary semaphores

    int N, count = 0;
    BinarySemaphore S = new BinarySemaphore(0),
                    mutex = new BinarySemaphore(1);

    public void producer() {
        while (true) {
            produceItem();
            if (count == N) P(S); // block if buffer full
            enterItem();
            P(mutex); count++; V(mutex); // critical section,
                                         // protected for mutual exclusion
            if (count == 1) V(S);
        }
    }

Consumer code with binary semaphores

    public void consumer() {
        while (true) {
            if (count == 0) P(S); // delay if buffer empty
            removeItem();
            P(mutex); count--; V(mutex); // critical section
            if (count == N - 1) V(S); // release the producer if it's waiting
            consumeItem();
        }
    }

Questions about this code

- Go through a scenario of what happens when the producer produces into a buffer that isn't full, and into a full buffer.
- Why are two semaphores needed, and not only one?
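For contrast with the binary-semaphore version above (where count is tested outside any protection), the textbook-standard solution uses two counting semaphores, one counting free slots and one counting filled slots, plus a mutex for the indices, so no counter is ever examined unprotected. This is a sketch, and all names in it are mine, not the lecture's.

```java
import java.util.concurrent.Semaphore;

// Bounded buffer with counting semaphores: `empty` counts free slots,
// `full` counts filled slots, `mutex` guards the buffer indices.
public class SemBuffer {
    private final double[] buf;
    private int putin = 0, takeout = 0;
    private final Semaphore empty, full, mutex = new Semaphore(1);

    SemBuffer(int n) {
        buf = new double[n];
        empty = new Semaphore(n); // n free slots initially
        full = new Semaphore(0);  // no filled slots initially
    }

    void deposit(double v) throws InterruptedException {
        empty.acquire();          // P(empty): wait for a free slot
        mutex.acquire();          // P(mutex): protect putin
        buf[putin] = v;
        putin = (putin + 1) % buf.length;
        mutex.release();          // V(mutex)
        full.release();           // V(full): one more filled slot
    }

    double fetch() throws InterruptedException {
        full.acquire();           // P(full): wait for a filled slot
        mutex.acquire();          // P(mutex): protect takeout
        double v = buf[takeout];
        takeout = (takeout + 1) % buf.length;
        mutex.release();          // V(mutex)
        empty.release();          // V(empty): one more free slot
        return v;
    }

    // Demo helper: produce 1..n on another thread, consume here, sum.
    static double run(int n) {
        SemBuffer b = new SemBuffer(3);
        Thread producer = new Thread(() -> {
            try { for (int i = 1; i <= n; i++) b.deposit(i); }
            catch (InterruptedException e) { }
        });
        producer.start();
        double sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += b.fetch();
            producer.join();
        } catch (InterruptedException e) { throw new RuntimeException(e); }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(run(100)); // 1 + 2 + ... + 100 = 5050.0
    }
}
```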