CS 350 Operating Systems Course Notes

Synchronization

Synchronization 1: Synchronization

Key concepts: critical sections, mutual exclusion, test-and-set, spinlocks,
blocking and blocking locks, semaphores, condition variables, deadlocks.

Reading: Three Easy Pieces, Chapters 28-32.

Synchronization 2: Thread Synchronization

All threads in a concurrent program share access to the program's global
variables and the heap. The part of a concurrent program in which a shared
object is accessed is called a critical section. What happens if several
threads try to access the same global variable or heap object at the same
time?

Synchronization 3: Critical Section Example

    /* Note the use of volatile */
    int volatile total = 0;

    void add() {
        int i;
        for (i = 0; i < N; i++) {
            total++;
        }
    }

    void sub() {
        int i;
        for (i = 0; i < N; i++) {
            total--;
        }
    }

If one thread executes add and another executes sub, what is the value of
total when they have finished?

Synchronization 4: Critical Section Example (assembly detail)

    /* Note the use of volatile */
    int volatile total = 0;

    void add() {
        loadaddr R8 total
        for (i = 0; i < N; i++) {
            lw   R9 0(R8)
            add  R9 1
            sw   R9 0(R8)
        }
    }

    void sub() {
        loadaddr R10 total
        for (i = 0; i < N; i++) {
            lw   R11 0(R10)
            sub  R11 1
            sw   R11 0(R10)
        }
    }

Synchronization 5: Critical Section Example (Trace 1)

    Thread 1                          Thread 2
    loadaddr R8 total
    lw   R9 0(R8)      R9=0
    add  R9 1          R9=1
    sw   R9 0(R8)      total=1
    <INTERRUPT>
                                      loadaddr R10 total
                                      lw   R11 0(R10)    R11=1
                                      sub  R11 1         R11=0
                                      sw   R11 0(R10)    total=0

One possible order of execution. Final value of total is 0.

Synchronization 6: Critical Section Example (Trace 2)

    Thread 1                          Thread 2
    loadaddr R8 total
    lw   R9 0(R8)      R9=0
    add  R9 1          R9=1
    <INTERRUPT and context switch>
                                      loadaddr R10 total
                                      lw   R11 0(R10)    R11=0
                                      sub  R11 1         R11=-1
                                      sw   R11 0(R10)    total=-1
                                      ...
    <INTERRUPT and context switch>
    sw   R9 0(R8)      total=1

One possible order of execution. Final value of total is 1.

Synchronization 7: Critical Section Example (Trace 3)

    Thread 1                          Thread 2
    loadaddr R8 total                 loadaddr R10 total
    lw   R9 0(R8)      R9=0          lw   R11 0(R10)    R11=0
    add  R9 1          R9=1          sub  R11 1         R11=-1
    sw   R9 0(R8)      total=1       sw   R11 0(R10)    total=-1

Another possible order of execution, this time on two processors. Final
value of total is -1.

Synchronization 8: About volatile

    /* What if we DO NOT use volatile? */
    int total = 0;   /* volatile removed */

    void add() {
        loadaddr R8 total
        lw   R9 0(R8)
        for (i = 0; i < N; i++) {
            add  R9 1
        }
        sw   R9 0(R8)
    }

    void sub() {
        loadaddr R10 total
        lw   R11 0(R10)
        for (i = 0; i < N; i++) {
            sub  R11 1
        }
        sw   R11 0(R10)
    }

Without volatile, the compiler could optimize the code, for example by
loading total into a register once before the loop and storing it back once
after the loop. volatile forces the compiler to load and store the value on
every use.

Synchronization 9: Another Critical Section Example (Part 1)

    int list_remove_front(list *lp) {
        int num;
        list_element *element;
        assert(!is_empty(lp));
        element = lp->first;
        num = lp->first->item;
        if (lp->first == lp->last) {
            lp->first = lp->last = NULL;
        } else {
            lp->first = element->next;
        }
        lp->num_in_list--;
        free(element);
        return num;
    }

The list_remove_front function is a critical section. It may not work
properly if two threads call it at the same time on the same list. (Why?)

Synchronization 10: Another Critical Section Example (Part 2)

    void list_append(list *lp, int new_item) {
        list_element *element = malloc(sizeof(list_element));
        element->item = new_item;
        assert(!is_in_list(lp, new_item));
        if (is_empty(lp)) {
            lp->first = element;
            lp->last = element;
        } else {
            lp->last->next = element;
            lp->last = element;
        }
        lp->num_in_list++;
    }

The list_append function is part of the same critical section as
list_remove_front. It may not work properly if two threads call it at the
same time, or if a thread calls it while another has called
list_remove_front.

Synchronization 11: Mutual Exclusion

    int volatile total = 0;

    void add() {
        int i;
        for (i = 0; i < N; i++) {
            /* ----- mutual exclusion start ----- */
            total++;
            /* ----- mutual exclusion end ------- */
        }
    }

    void sub() {
        int i;
        for (i = 0; i < N; i++) {
            /* ----- mutual exclusion start ----- */
            total--;
            /* ----- mutual exclusion end ------- */
        }
    }

To prevent race conditions, we can enforce mutual exclusion on critical
sections in the code.

Synchronization 12: Enforcing Mutual Exclusion With Locks

    int volatile total = 0;
    /* lock for total: false => free, true => locked */
    bool volatile total_lock = false;

    void add() {
        int i;
        for (i = 0; i < N; i++) {
            Acquire(&total_lock);
            total++;
            Release(&total_lock);
        }
    }

    void sub() {
        int i;
        for (i = 0; i < N; i++) {
            Acquire(&total_lock);
            total--;
            Release(&total_lock);
        }
    }

Acquire/Release must ensure that only one thread at a time can hold the
lock, even if both attempt to Acquire at the same time. If a thread cannot
Acquire the lock immediately, it must wait until the lock is available.

Synchronization 13: Lock Acquire and Release - First Try

    Acquire(bool *lock) {
        while (*lock == true) ;  /* spin until lock is free */
        *lock = true;            /* grab the lock */
    }

    Release(bool *lock) {
        *lock = false;           /* give up the lock */
    }

This simple approach does not work! (Why?)

Synchronization 14: Hardware-Specific Synchronization Instructions

- used to implement synchronization primitives like locks
- provide a way to test and set a lock in a single atomic (indivisible)
  operation
- example: the x86 xchg instruction:

      xchg src, addr

  where src is typically a register, and addr is a memory address. The
  value in register src is written to memory at address addr, and the old
  value at addr is placed into src.
- the logical behavior of xchg can be thought of as an atomic function
  that behaves like this:

      Xchg(value, addr) {
          old = *addr;
          *addr = value;
          return(old);
      }

Synchronization 15: Lock Acquire and Release with Xchg

    Acquire(bool *lock) {
        while (Xchg(true, lock) == true) ;
    }

    Release(bool *lock) {
        *lock = false;  /* give up the lock */
    }

If Xchg returns true, the lock was already set, and we must continue to
loop. If Xchg returns false, then the lock was free, and we have now
acquired it. This construct is known as a spin lock, since a thread
busy-waits (loops) in Acquire until the lock is free.

Synchronization 16: Other Synchronization Instructions

SPARC cas instruction (Compare-And-Swap):

    cas addr, R1, R2

If the value at addr matches the value in R1, then swap the contents of
addr and R2.

    CompareAndSwap(addr, expectedval, newval) {
        old = *addr;    /* get old value at addr */
        if (old == expectedval)
            *addr = newval;
        return old;
    }

MIPS load-linked and store-conditional: load-linked returns the current
value of a memory location, while a subsequent store-conditional to the
same memory location will store a new value only if no updates have
occurred to that location since the load-linked.

Synchronization 17: Spinlocks in OS/161

    struct spinlock {
        volatile spinlock_data_t lk_lock;
        struct cpu *lk_holder;
    };

    void spinlock_init(struct spinlock *lk);
    void spinlock_acquire(struct spinlock *lk);
    void spinlock_release(struct spinlock *lk);

spinlock_acquire calls spinlock_data_testandset in a loop until the lock is
acquired.

Synchronization 18: Using Load-Linked / Store-Conditional

    /* return value 0 indicates lock was acquired */
    spinlock_data_t
    spinlock_data_testandset(volatile spinlock_data_t *sd)
    {
        spinlock_data_t x, y;
        y = 1;
        asm volatile(
            ".set push;"      /* save assembler mode */
            ".set mips32;"    /* allow MIPS32 instructions */
            ".set volatile;"  /* avoid unwanted optimization */
            "ll %0, 0(%2);"   /* x = *sd */
            "sc %1, 0(%2);"   /* *sd = y; y = success? */
            ".set pop"        /* restore assembler mode */
            : "=r" (x), "+r" (y)
            : "r" (sd));
        if (y == 0) {
            return 1;
        }
        return x;
    }

Synchronization 19: OS/161 Locks

In addition to spinlocks, OS/161 also has locks. Like spinlocks, locks are
used to enforce mutual exclusion.

    struct lock *mylock = lock_create("LockName");

    lock_acquire(mylock);
      critical section   /* e.g., call to list_remove_front */
    lock_release(mylock);

Spinlocks spin, locks block:

- a thread that calls spinlock_acquire spins until the lock can be acquired
- a thread that calls lock_acquire blocks until the lock can be acquired

Synchronization 20: Thread Blocking

Sometimes a thread will need to wait for something, e.g.:

- wait for a lock to be released by another thread
- wait for data from a (relatively) slow device
- wait for input from a keyboard
- wait for a busy device to become idle

When a thread blocks, it stops running:

- the scheduler chooses a new thread to run
- a context switch from the blocking thread to the new thread occurs
- the blocking thread is queued in a wait queue (not on the ready list)

Eventually, a blocked thread is signaled and awakened by another thread.

Synchronization 21: Wait Channels in OS/161

Wait channels are used to implement thread blocking in OS/161.

    void wchan_sleep(struct wchan *wc);

- blocks the calling thread on wait channel wc
- causes a context switch, like thread_yield

    void wchan_wakeall(struct wchan *wc);

- unblocks all threads sleeping on wait channel wc

    void wchan_wakeone(struct wchan *wc);

- unblocks one thread sleeping on wait channel wc

    void wchan_lock(struct wchan *wc);

- prevents operations on wait channel wc (more on this later!)

There can be many different wait channels, holding threads that are blocked
for different reasons.

Synchronization 22: Thread States, Revisited

A very simple thread state transition diagram. The states:

- running: currently executing
- ready: ready to execute
- blocked: waiting for something, so not ready to execute

The transitions: a running thread moves to ready on preemption or
thread_yield(), and to blocked when it needs a resource or event
(wchan_sleep()); a ready thread moves to running when it is dispatched; a
blocked thread moves back to ready when it gets its resource or event
(wchan_wakeone/all()).

Ready threads are queued on the ready queue; blocked threads are queued on
wait channels.

Synchronization 23: Semaphores

A semaphore is a synchronization primitive that can be used to enforce
mutual exclusion requirements. It can also be used to solve other kinds of
synchronization problems.

A semaphore is an object that has an integer value, and that supports two
operations:

- P: if the semaphore value is greater than 0, decrement the value.
  Otherwise, wait until the value is greater than 0 and then decrement it.
- V: increment the value of the semaphore.

By definition, the P and V operations of a semaphore are atomic.

Synchronization 24: Mutual Exclusion Using a Semaphore

    volatile int total = 0;
    struct semaphore *total_sem;
    total_sem = sem_create("total mutex", 1);  /* initial value is 1 */

    void add() {
        int i;
        for (i = 0; i < N; i++) {
            P(total_sem);
            total++;
            V(total_sem);
        }
    }

    void sub() {
        int i;
        for (i = 0; i < N; i++) {
            P(total_sem);
            total--;
            V(total_sem);
        }
    }

Synchronization 25: Producer/Consumer Synchronization with Bounded Buffer

- suppose we have threads (producers) that add items to a buffer and
  threads (consumers) that remove items from the buffer
- suppose we want to ensure that consumers do not consume if the buffer is
  empty - instead they must wait until the buffer has something in it
- similarly, suppose the buffer has a finite capacity (N), and we need to
  ensure that producers must wait if the buffer is full
- this requires synchronization between consumers and producers
- semaphores can provide the necessary synchronization

Synchronization 26: Bounded Buffer Producer/Consumer Synchronization with
Semaphores

    struct semaphore *Items, *Spaces;
    Items = sem_create("Buffer Items", 0);   /* initially = 0 */
    Spaces = sem_create("Buffer Spaces", N); /* initially = N */

Producer's pseudo-code:

    P(Spaces);
    add item to the buffer
    V(Items);

Consumer's pseudo-code:

    P(Items);
    remove item from the buffer
    V(Spaces);

Synchronization 27: Condition Variables

OS/161 supports another common synchronization primitive: condition
variables. Each condition variable is intended to work together with a
lock: condition variables are only used from within the critical section
that is protected by the lock.

Three operations are possible on a condition variable:

- wait: causes the calling thread to block, and releases the lock
  associated with the condition variable. Once the thread is unblocked, it
  reacquires the lock.
- signal: if threads are blocked on the signaled condition variable, then
  one of those threads is unblocked.
- broadcast: like signal, but unblocks all threads that are blocked on the
  condition variable.

Synchronization 28: Using Condition Variables

Condition variables get their name because they allow threads to wait for
arbitrary conditions to become true inside of a critical section. Normally,
each condition variable corresponds to a particular condition that is of
interest to an application. For example, in the bounded buffer
producer/consumer example on the following slides, the two conditions are:

- count > 0 (there are items in the buffer)
- count < N (there is free space in the buffer)

When a condition is not true, a thread can wait on the corresponding
condition variable until it becomes true. When a thread detects that a
condition is true, it uses signal or broadcast to notify any threads that
may be waiting.

Note that signaling (or broadcasting to) a condition variable that has no
waiters has no effect. Signals do not accumulate.

Synchronization 29: Waiting on Condition Variables

When a blocked thread is unblocked (by signal or broadcast), it reacquires
the lock before returning from the wait call. A thread is in the critical
section when it calls wait, and it will be in the critical section when
wait returns. However, in between the call and the return, while the caller
is blocked, the caller is out of the critical section, and other threads
may enter.

In particular, the thread that calls signal (or broadcast) to wake up the
waiting thread will itself be in the critical section when it signals. The
waiting thread will have to wait (at least) until the signaller releases
the lock before it can unblock and return from the wait call.

This describes Mesa-style condition variables, which are used in OS/161.
There are alternative condition variable semantics (Hoare semantics), which
differ from the semantics described here.

Synchronization 30: Bounded Buffer Producer Using Locks and Condition
Variables

    int volatile count = 0;         /* must initially be 0 */
    struct lock *mutex;             /* for mutual exclusion */
    struct cv *notfull, *notempty;  /* condition variables */

    /* Initialization note: the lock and cv's must be created
     * using lock_create() and cv_create() before Produce()
     * and Consume() are called */

    Produce(itemType item) {
        lock_acquire(mutex);
        while (count == N) {
            cv_wait(notfull, mutex);  /* wait until buffer is not full */
        }
        add item to buffer (call list_append())
        count = count + 1;
        cv_signal(notempty, mutex);   /* signal that buffer is not empty */
        lock_release(mutex);
    }

Synchronization 31: Bounded Buffer Consumer Using Locks and Condition
Variables

    itemType Consume() {
        lock_acquire(mutex);
        while (count == 0) {
            cv_wait(notempty, mutex);  /* wait until buffer is not empty */
        }
        remove item from buffer (call list_remove_front())
        count = count - 1;
        cv_signal(notfull, mutex);     /* signal that buffer is not full */
        lock_release(mutex);
        return(item);
    }

Both Produce() and Consume() call cv_wait() inside of a while loop. Why?

Synchronization 32: Deadlocks

Suppose there are two threads and two locks, lockA and lockB, both
initially unlocked. Suppose the following sequence of events occurs:

1. Thread 1 does lock_acquire(lockA).
2. Thread 2 does lock_acquire(lockB).
3. Thread 1 does lock_acquire(lockB) and blocks, because lockB is held by
   thread 2.
4. Thread 2 does lock_acquire(lockA) and blocks, because lockA is held by
   thread 1.

These two threads are deadlocked - neither thread can make progress.
Waiting will not resolve the deadlock. The threads are permanently stuck.

Synchronization 33: Two Techniques for Deadlock Prevention

- No Hold and Wait: prevent a thread from requesting resources if it
  currently has resources allocated to it. A thread may hold several
  resources, but to do so it must make a single request for all of them.
- Resource Ordering: order (e.g., number) the resource types, and require
  that each thread acquire resources in increasing resource-type order.
  That is, a thread may make no requests for resources of type less than
  or equal to i if it is holding resources of type i.