CS 318 Principles of Operating Systems

CS 318 Principles of Operating Systems, Fall 2017
Lecture 7: Semaphores and Monitors
Ryan Huang

Higher-Level Synchronization
We looked at using locks to provide mutual exclusion. Locks work, but they have limited semantics:
- They just provide mutual exclusion
Instead, we want synchronization mechanisms that:
- Block waiters and leave interrupts enabled inside critical sections
- Provide semantics beyond mutual exclusion
We will look at two common high-level mechanisms:
- Semaphores: binary (mutex) and counting
- Monitors: mutexes and condition variables
We will use them to solve common synchronization problems.

Semaphores
An abstract data type that provides mutual exclusion to critical sections:
- Described by Dijkstra in the THE system in 1968
Semaphores can also be used as atomic counters (more later).
Semaphores are integers that support two operations:
- Semaphore::P(): decrement, and block until the semaphore is open; named after the Dutch word proberen (to try), also called Wait()
- Semaphore::V(): increment, allowing another thread to enter; named after the Dutch word verhogen (increment), also called Signal()
- That's it! There are no other operations, not even reading the current value.
Semaphore safety property: the semaphore value is always greater than or equal to 0.
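As a concrete point of reference (my addition, not from the slides), the same P()/V() pair appears in POSIX as sem_wait()/sem_post(); a minimal sketch:

#include <semaphore.h>

sem_t s;

void sem_demo_init(void) { sem_init(&s, 0, 1); }  /* pshared = 0: shared among threads of one process; initial value 1 */
void enter(void)         { sem_wait(&s); }        /* P(): decrement, block while the value is 0 */
void leave(void)         { sem_post(&s); }        /* V(): increment, wake one waiter if any is blocked */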

Blocking in Semaphores
Associated with each semaphore is a queue of waiting processes.
When P() is called by a thread:
- If the semaphore is open, the thread continues
- If the semaphore is closed, the thread blocks on the queue
Then V() opens the semaphore:
- If a thread is waiting on the queue, it is unblocked
- If no threads are waiting on the queue, the signal is remembered for the next thread
In other words, V() has history (cf. condition variables later). This history is a counter.
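A tiny sketch of this history property (my illustration, using POSIX semaphore calls): a V() issued before anyone waits is not lost, because it is recorded in the count.

#include <semaphore.h>

void history_demo(void) {
    sem_t s;
    sem_init(&s, 0, 0);  /* closed: value 0, no waiters */
    sem_post(&s);        /* V() with nobody waiting: value becomes 1, the signal is remembered */
    sem_wait(&s);        /* P() finds the semaphore open and returns immediately */
    sem_destroy(&s);
}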

Semaphore Types
Semaphores come in two types.
Mutex semaphore (or binary semaphore):
- Represents single access to a resource
- Guarantees mutual exclusion to a critical section
Counting semaphore (or general semaphore):
- Represents a resource with many units available, or a resource that allows certain kinds of unsynchronized concurrent access (e.g., reading)
- Multiple threads can pass the semaphore
- The number of threads is determined by the semaphore count: a mutex has count = 1, a counting semaphore has count = N

Using Semaphores
Use is similar to our locks, but the semantics are different.

struct Semaphore {
    int value;
    Queue q;
} S;

withdraw(account, amount) {
    wait(S);
    balance = get_balance(account);
    balance = balance - amount;
    put_balance(account, balance);
    signal(S);
    return balance;
}

The critical section is the code between wait(S) and signal(S). The slide's diagram shows three threads calling withdraw concurrently: the first enters the critical section while the other two block in wait(S), and each signal(S) lets one blocked thread proceed. It is undefined which thread runs after a signal.

Semaphores in Pintos

void sema_down(struct semaphore *sema) {
    enum intr_level old_level;
    old_level = intr_disable();
    while (sema->value == 0) {
        list_push_back(&sema->waiters, &thread_current()->elem);
        thread_block();
    }
    sema->value--;
    intr_set_level(old_level);
}

void sema_up(struct semaphore *sema) {
    enum intr_level old_level;
    old_level = intr_disable();
    if (!list_empty(&sema->waiters))
        thread_unblock(list_entry(list_pop_front(&sema->waiters),
                                  struct thread, elem));
    sema->value++;
    intr_set_level(old_level);
}

To reference the current thread: thread_current().
thread_block() assumes interrupts are disabled.
- Note that interrupts are disabled only to enter/leave the critical section
- How can it sleep with interrupts disabled?
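A common way this gets used in Pintos projects (a sketch I am adding, not from the slides; the child/parent names are illustrative): a semaphore initialized to 0 lets one thread wait for another to finish a step.

#include "threads/synch.h"
#include "threads/thread.h"

static struct semaphore done;

static void child(void *aux) {
    (void) aux;
    /* ... do the work ... */
    sema_up(&done);               /* V(): signal completion */
}

void wait_for_child(void) {
    sema_init(&done, 0);          /* starts closed */
    thread_create("child", PRI_DEFAULT, child, NULL);
    sema_down(&done);             /* P(): blocks until the child calls sema_up() */
}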

Interrupts Disabled During Context Switch

thread_yield() {
    Disable interrupts;
    add current thread to ready_list;
    schedule();            // context switch
    Enable interrupts;
}

sema_down() {
    Disable interrupts;
    while (value == 0) {
        add current thread to waiters;
        thread_block();
    }
    value--;
    Enable interrupts;
}

The slide's timeline shows these calls interleaving across two threads: a thread disables interrupts and switches away inside thread_yield() or sema_down(); the switch happens with interrupts still disabled, and interrupts are re-enabled only when the thread that is switched to returns from schedule() and executes its own "Enable interrupts" step.

Using Semaphores
We've looked at a simple example of using synchronization: mutual exclusion while accessing a bank account.
Now we're going to use semaphores to look at more interesting examples:
- Readers/Writers
- Bounded Buffers

Readers/Writers Problem
- An object is shared among several threads
- Some threads only read the object, others only write it
- We can allow multiple readers but only one writer
Let #r be the number of readers and #w the number of writers.
Safety: (#r ≥ 0) ∧ (0 ≤ #w ≤ 1) ∧ ((#r > 0) ⇒ (#w = 0))
How can we use semaphores to control access to the object to implement this protocol?
Use three variables:
- int readcount: number of threads reading the object
- Semaphore mutex: controls access to readcount
- Semaphore w_or_r: exclusive writing or reading

Readers/Writers

int readcount = 0;        // number of readers
Semaphore mutex = 1;      // mutual exclusion to readcount
Semaphore w_or_r = 1;     // exclusive writer or reader

writer {
    wait(w_or_r);         // lock out readers
    Write;
    signal(w_or_r);       // up for grabs
}

reader {
    wait(mutex);          // lock readcount
    readcount += 1;       // one more reader
    if (readcount == 1)
        wait(w_or_r);     // synch with writers
    signal(mutex);        // unlock readcount
    Read;
    wait(mutex);          // lock readcount
    readcount -= 1;       // one less reader
    if (readcount == 0)
        signal(w_or_r);   // up for grabs
    signal(mutex);        // unlock readcount
}

Readers/Writers Notes
w_or_r provides mutual exclusion between readers and writers:
- The writer waits/signals on it; a reader waits/signals on it only when readcount goes from 0 to 1 or from 1 to 0
If a writer is writing, where will readers be waiting?
Once a writer exits, all readers can fall through:
- Which reader gets to go first?
- Is it guaranteed that all readers will fall through?
If readers and writers are waiting, and a writer exits, who goes first?
Why do readers use mutex? Why don't writers use mutex?
What if the signal(mutex) is moved above the if (readcount == 1) check?

Bounded Buffer
Problem: there is a set of resource buffers shared by producer and consumer threads.
- The producer inserts resources into the buffer set (output, disk blocks, memory pages, processes, etc.)
- The consumer removes resources from the buffer set (whatever is generated by the producer)
Producer and consumer execute at different rates:
- No serialization of one behind the other
- Tasks are independent (easier to think about)
- The buffer set allows each to run without explicit handoff
Safety:
- The sequence of consumed values is a prefix of the sequence of produced values
- If nc is the number consumed, np the number produced, and N the size of the buffer, then 0 ≤ np - nc ≤ N

Bounded Buffer (2)
0 ≤ np - nc ≤ N, equivalently 0 ≤ (nc - np) + N ≤ N
Use three semaphores:
- empty: count of empty buffers; counting semaphore; empty = (nc - np) + N
- full: count of full buffers; counting semaphore; full = np - nc
- mutex: mutual exclusion to the shared set of buffers; binary semaphore

Bounded Buffer (3)

Semaphore mutex = 1;   // mutual exclusion to shared set of buffers
Semaphore empty = N;   // count of empty buffers (all empty to start)
Semaphore full = 0;    // count of full buffers (none full to start)

producer {
    while (1) {
        Produce new resource;
        wait(empty);       // wait for an empty buffer
        wait(mutex);       // lock buffer list
        Add resource to an empty buffer;
        signal(mutex);     // unlock buffer list
        signal(full);      // note a full buffer
    }
}

consumer {
    while (1) {
        wait(full);        // wait for a full buffer
        wait(mutex);       // lock buffer list
        Remove resource from a full buffer;
        signal(mutex);     // unlock buffer list
        signal(empty);     // note an empty buffer
        Consume resource;
    }
}
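For comparison (my sketch, not the lecture's code; the buffer indexing and names are assumptions), the same pattern with POSIX semaphores and a pthread mutex:

#include <pthread.h>
#include <semaphore.h>

#define N 8

static int buf[N];
static int in = 0, out = 0;                 /* next slot to fill / drain */
static sem_t empty, full;                   /* counts of empty and full slots */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void buffer_init(void) {
    sem_init(&empty, 0, N);                 /* all slots empty to start */
    sem_init(&full, 0, 0);                  /* no slots full to start */
}

void produce(int item) {
    sem_wait(&empty);                       /* wait for an empty slot */
    pthread_mutex_lock(&mutex);             /* lock buffer indices */
    buf[in] = item; in = (in + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&full);                        /* note a full slot */
}

int consume(void) {
    int item;
    sem_wait(&full);                        /* wait for a full slot */
    pthread_mutex_lock(&mutex);
    item = buf[out]; out = (out + 1) % N;
    pthread_mutex_unlock(&mutex);
    sem_post(&empty);                       /* note an empty slot */
    return item;
}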

Bounded Buffer (4)
Why do we need the mutex at all? Where are the critical sections?
What has to hold for deadlock to occur?
- empty = 0 and full = 0
- i.e., (nc - np) + N = 0 and np - nc = 0
- which implies N = 0
What happens if the operations on mutex and full/empty are switched around?
The pattern of signal/wait on full/empty is a common construct, often called an interlock.
Producer-Consumer and Bounded Buffer are classic synchronization problems.

Semaphore Questions
Are there any problems that can be solved with counting semaphores that cannot be solved with mutex semaphores?
Does it matter which thread is unblocked by a signal operation?
- Hint: consider three threads sharing a semaphore mutex that is initially 1, each executing:

while (1) {
    wait(mutex);
    // in critical section
    signal(mutex);
}

Semaphore Summary
Semaphores can be used to solve any of the traditional synchronization problems.
However, they have some drawbacks:
- They are essentially shared global variables, which can potentially be accessed anywhere in the program
- There is no connection between the semaphore and the data being controlled by the semaphore
- They are used both for critical sections (mutual exclusion) and for coordination (scheduling); note that we had to use comments in the code to distinguish the two
- There is no control over, or guarantee of, proper usage
They are sometimes hard to use and prone to bugs.
Another approach: use programming language support.

Monitors
A monitor is a programming-language construct that controls access to shared data:
- Synchronization code is added by the compiler and enforced at runtime
- Why is this an advantage?
A monitor is a module that encapsulates:
- Shared data structures
- Procedures that operate on the shared data structures
- Synchronization between concurrent threads that invoke the procedures
A monitor protects its data from unstructured access. It guarantees that threads accessing its data through its procedures interact only in legitimate ways.

Monitor Semantics
A monitor guarantees mutual exclusion:
- Only one thread can execute any monitor procedure at any time (that thread is "in the monitor")
- If a second thread invokes a monitor procedure while a first thread is already executing one, it blocks, so the monitor has to have a wait queue
- If a thread within the monitor blocks, another one can enter
What are the implications in terms of parallelism in a monitor?

Account Example

Monitor account {
    double balance;

    double withdraw(amount) {
        balance = balance - amount;
        return balance;
    }
}

Hey, that was easy! Threads block waiting to get into the monitor; when the first thread exits, another can enter. Which one gets in is undefined. The slide's diagram shows three threads calling withdraw(amount): the first runs "balance = balance - amount; return balance" and exits, then each of the others enters and does the same in turn.
But what if a thread wants to wait inside the monitor?
- Such as waiting because the buffer is empty, as the reader (consumer) must in the bounded buffer?
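To make the "synchronization code added by the compiler" idea concrete, here is a hand-written C equivalent (my sketch, assuming pthreads): the module's single lock is acquired on entry to each procedure and released on exit, which is exactly what a monitor automates.

#include <pthread.h>

static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the implicit monitor lock */
static double balance;

double withdraw(double amount) {
    pthread_mutex_lock(&monitor_lock);     /* "enter the monitor" */
    balance = balance - amount;
    double result = balance;
    pthread_mutex_unlock(&monitor_lock);   /* "exit the monitor" */
    return result;
}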

Monitors, Monitor Invariants and Condition Variables
A monitor invariant is a safety property associated with the monitor, expressed over the monitored variables. It holds whenever a thread enters or exits the monitor.
A condition variable is associated with a condition needed for a thread to make progress once it is inside the monitor.
- The alternative is busy waiting, which is bad.

Monitor M {
    ... monitored variables ...
    Condition c;

    void entermonitor(...) {
        if (extra property not true)
            wait(c);       // waits outside of the monitor's mutex
        do what you have to do;
        if (extra property true)
            signal(c);     // brings in one thread waiting on the condition
    }
}

Condition Variables
Condition variables support three operations:
- Wait: release the monitor lock and wait for the condition variable to be signaled (so condition variables have wait queues, too)
- Signal: wake up one waiting thread
- Broadcast: wake up all waiting threads
Condition variables are not boolean objects:
- "if (condition_variable) then ..." does not make sense
- "if (num_resources == 0) then wait(resources_available)" does
- An example will make this more clear

Monitor Bounded Buffer

Monitor bounded_buffer {
    Resource buffer[n];
    // variables for indexing the buffer
    // the monitor invariant involves these variables
    Condition not_full;    // space in buffer
    Condition not_empty;   // value in buffer

    void put_resource(Resource R) {
        while (buffer array is full)
            wait(not_full);
        Add R to buffer array;
        signal(not_empty);
    }

    Resource get_resource() {
        while (buffer array is empty)
            wait(not_empty);
        Get resource R from buffer array;
        signal(not_full);
        return R;
    }
} // end monitor

What happens if no threads are waiting when signal is called?
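The same monitor written out by hand with pthreads (my sketch; the indexing variables and the buffer size are assumptions). A pthread mutex plays the role of the monitor lock, and the while loops match the Mesa semantics discussed on the next slides.

#include <pthread.h>

#define N 8

static int buf[N], count = 0, in = 0, out = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;    /* the monitor lock */
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

void put_resource(int r) {
    pthread_mutex_lock(&lock);
    while (count == N)                        /* buffer array is full */
        pthread_cond_wait(&not_full, &lock);
    buf[in] = r; in = (in + 1) % N; count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int get_resource(void) {
    pthread_mutex_lock(&lock);
    while (count == 0)                        /* buffer array is empty */
        pthread_cond_wait(&not_empty, &lock);
    int r = buf[out]; out = (out + 1) % N; count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return r;
}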

Monitor Queues
The slide's diagram of the bounded_buffer monitor shows where threads can be queued: waiting to enter the monitor, waiting on the condition variables not_full and not_empty (inside put_resource and get_resource), or executing inside the monitor (at most one thread at a time).

Condition Vars != Semaphores
Condition variables != semaphores:
- Although their operations have the same names, they have entirely different semantics (such is life; worse is yet to come)
- However, each can be used to implement the other
Access to the monitor is controlled by a lock:
- wait() blocks the calling thread and gives up the lock
  - To call wait, the thread has to be in the monitor (hence holds the lock)
  - Semaphore::wait just blocks the thread on the queue
- signal() causes a waiting thread to wake up
  - If there is no waiting thread, the signal is lost
  - Semaphore::signal increases the semaphore count, allowing future entry even if no thread is waiting
- Condition variables have no history

Signal Semantics
There are two flavors of monitors that differ in the scheduling semantics of signal():
- Hoare monitors (original)
  - signal() immediately switches from the caller to a waiting thread
  - The condition that the waiter was anticipating is guaranteed to hold when the waiter executes
  - The signaler must restore the monitor invariants before signaling
- Mesa monitors (Mesa, Java)
  - signal() places a waiter on the ready queue, but the signaler continues inside the monitor
  - The condition is not necessarily true when the waiter runs again
  - Returning from wait() is only a hint that something changed; must recheck the conditional case

Hoare vs. Mesa Monitors

Hoare:  if (empty) wait(condition);
Mesa:   while (empty) wait(condition);

Tradeoffs:
- Mesa monitors are easier to use and more efficient: fewer context switches, and broadcast is easy to support
- Hoare monitors leave less to chance: it is easier to reason about the program

Monitor Readers and Writers
Using Mesa monitor semantics. We will have four methods: StartRead, StartWrite, EndRead, and EndWrite.
Monitored data: nr (number of readers) and nw (number of writers), with the monitor invariant
    (nr ≥ 0) ∧ (0 ≤ nw ≤ 1) ∧ ((nr > 0) ⇒ (nw = 0))
Two conditions:
- canread: nw = 0
- canwrite: (nr = 0) ∧ (nw = 0)

Monitor Readers and Writers
Write with just wait():
- It will be safe, but maybe not live. Why?

Monitor RW {
    int nr = 0, nw = 0;
    Condition canread, canwrite;

    void StartRead() {
        while (nw != 0) wait(canread);
        nr++;
    }
    void EndRead() {
        nr--;
    }
    void StartWrite() {
        while (nr != 0 || nw != 0) wait(canwrite);
        nw++;
    }
    void EndWrite() {
        nw--;
    }
} // end monitor

Monitor Readers and Writers
Add signal() and broadcast():

Monitor RW {
    int nr = 0, nw = 0;
    Condition canread, canwrite;

    void StartRead() {
        while (nw != 0) wait(canread);
        nr++;
        // can we put a signal here?
    }
    void EndRead() {
        nr--;
        if (nr == 0) signal(canwrite);
    }
    void StartWrite() {
        while (nr != 0 || nw != 0) wait(canwrite);
        nw++;
        // can we put a signal here?
    }
    void EndWrite() {
        nw--;
        broadcast(canread);
        signal(canwrite);
    }
} // end monitor

Monitor Readers and Writers
Is there any priority between readers and writers?
What if you wanted to ensure that a waiting writer has priority over new readers?

Condition Vars & Locks
Condition variables are also used without monitors, in conjunction with locks:
- void cond_init(cond_t *, ...);
- void cond_wait(cond_t *c, mutex_t *m);
  - Atomically unlock m and sleep until c is signaled
  - Then re-acquire m and resume executing
- void cond_signal(cond_t *c);
- void cond_broadcast(cond_t *c);
  - Wake one/all threads waiting on c

Condition Vars & Locks
Condition variables are also used without monitors, in conjunction with locks.
A monitor is a module whose state includes a condition variable and a lock:
- The difference is syntactic; with monitors, the compiler adds the code
- It is just as if each procedure in the module called acquire() on entry and release() on exit
- But with explicit locks this can be done anywhere in the procedure, at finer granularity
With condition variables, the module's methods may wait and signal on independent conditions.

Condition Vars & Locks
Why must cond_wait both release the mutex_t and sleep?
- void cond_wait(cond_t *c, mutex_t *m);
Why not separate mutexes and condition variables?

while (count == BUFFER_SIZE) {
    mutex_unlock(&mutex);
    cond_wait(&not_full);
    mutex_lock(&mutex);
}

Condition Vars & Locks
Why must cond_wait both release the mutex_t and sleep?
- void cond_wait(cond_t *c, mutex_t *m);
Why not separate mutexes and condition variables?

Producer:
while (count == BUFFER_SIZE) {
    mutex_unlock(&mutex);
    // the consumer's signal can arrive here, before the producer waits, and is lost
    cond_wait(&not_full);
    mutex_lock(&mutex);
}

Consumer:
mutex_lock(&mutex);
...
count--;
cond_signal(&not_full);
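With a combined cond_wait that atomically releases the mutex and sleeps, that window disappears; a corrected sketch of the producer's check (mine, using the pthread API and the same variable names as above):

#include <pthread.h>

#define BUFFER_SIZE 8
static int count;
static pthread_mutex_t mutex    = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full = PTHREAD_COND_INITIALIZER;

void producer_step(void) {
    pthread_mutex_lock(&mutex);
    while (count == BUFFER_SIZE)                 /* recheck on every wakeup (Mesa semantics) */
        pthread_cond_wait(&not_full, &mutex);    /* atomically: unlock mutex, sleep, relock on wakeup */
    count++;                                     /* ... add the item ... */
    pthread_mutex_unlock(&mutex);
}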

Using Cond Vars & Locks
Alternation of two threads (ping-pong). Each executes the following:

Lock lock;
Condition cond;

void ping_pong() {
    acquire(lock);
    while (1) {
        printf("ping or pong\n");
        signal(cond, lock);
        wait(cond, lock);
    }
    release(lock);
}

You must acquire the lock before you can wait (similar to needing interrupts disabled to call thread_block in Pintos).
wait atomically releases the lock and blocks until signal().
wait atomically re-acquires the lock before it returns.
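A runnable pthread version of the same alternation (my sketch; the setup in main is illustrative):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void *ping_pong(void *msg) {
    pthread_mutex_lock(&lock);              /* must hold the lock before waiting */
    for (;;) {
        printf("%s\n", (const char *) msg);
        pthread_cond_signal(&cond);         /* wake the other thread */
        pthread_cond_wait(&cond, &lock);    /* atomically release the lock and block */
    }
    return NULL;                            /* never reached */
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, ping_pong, "ping");
    pthread_create(&b, NULL, ping_pong, "pong");
    pthread_join(a, NULL);                  /* runs forever; join just keeps main alive */
    return 0;
}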

Monitors and Java
A lock and condition variable are in every Java object:
- There are no explicit classes for locks or condition variables
Every object is/has a monitor:
- At most one thread can be inside an object's monitor
- A thread enters an object's monitor by:
  - Executing a method declared synchronized (you can mix synchronized and unsynchronized methods in the same class)
  - Executing the body of a synchronized statement (supports finer-grained locking than an entire procedure; identical to the Modula-2 "LOCK (m) DO" construct)
- The compiler generates code to acquire the object's lock at the start of the method and release it just before returning
- The lock itself is implicit; programmers do not worry about it

Monitors and Java
Every object can be treated as a condition variable:
- Half of Object's methods are for synchronization! Take a look at the Java Object class:
- Object.wait(*) is Condition::wait()
- Object.notify() is Condition::signal()
- Object.notifyAll() is Condition::broadcast()

Summary
Semaphores:
- wait()/signal() implement blocking mutual exclusion
- Also used as atomic counters (counting semaphores)
- Can be inconvenient to use
Monitors:
- Synchronize execution within procedures that manipulate encapsulated data shared among procedures (only one thread can execute within a monitor at a time)
- Rely upon high-level language support
Condition variables:
- Used by threads as a synchronization point to wait for events
- Inside monitors, or outside with locks

Next Time
Read Chapter 7.