CS 5523 Operating Systems: Concurrency and Synchronization


(Thanks to Dr. Dakai Zhu and Dr. Palden Lama for providing their slides.)

Lecture Outline
- Problems with concurrent access to shared data
  - Race conditions and critical sections
  - General structure for enforcing critical sections
- Synchronization mechanisms
  - Hardware-supported instructions: e.g., TestAndSet
  - Software solutions: e.g., semaphores
- Classical synchronization problems
- High-level synchronization construct: the monitor
- Case studies of synchronization
  - Pthread library: mutexes and condition variables
  - Java's built-in monitors and condition variables

Objectives
- To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data
- To present both software and hardware solutions to the critical-section problem
- To introduce the concept of an atomic transaction and describe mechanisms to ensure atomicity

Shared Data
- In the same logical address space
- In different address spaces, shared through messages (later, in distributed systems)

Example: two threads, with initial state a = 0; b = 0;

  Thread 1:                 Thread 2:
  T1-1: if (b == 0)         T2-1: if (a == 0)
  T1-2:     a = 1;          T2-2:     b = 1;

Possible outcomes, with the frequencies shown on the slide:
  a = 1, b = 0   (99.43%)
  a = 0, b = 1   (0.56%)
  a = 1, b = 1   (0.01%)
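The three outcomes above can be derived mechanically. The following sketch (function name `enumerate_outcomes` is illustrative, not from the slides) enumerates all six interleavings of the four statements in which each thread's two statements stay in program order, and records which (a, b) results are possible; (a = 0, b = 0) can never occur, because the first condition evaluated is always true.

```c
// Enumerate the six legal interleavings of T1-1, T1-2, T2-1, T2-2
// (each thread's two steps stay in order) and record the outcomes.
// Returns a bitmask: bit 0 = (a=1,b=0) seen, bit 1 = (a=0,b=1) seen,
// bit 2 = (a=1,b=1) seen, bit 3 = (a=0,b=0) seen.
int enumerate_outcomes(void) {
    // the six orders: 0 = Thread 1's next step, 1 = Thread 2's next step
    static const int orders[6][4] = {
        {0,0,1,1}, {0,1,0,1}, {0,1,1,0}, {1,0,0,1}, {1,0,1,0}, {1,1,0,0}
    };
    int mask = 0;
    for (int i = 0; i < 6; i++) {
        int a = 0, b = 0;       // shared variables, initial state
        int c1 = 0, c2 = 0;     // whether each thread's condition held
        int s1 = 0, s2 = 0;     // next step index per thread
        for (int j = 0; j < 4; j++) {
            if (orders[i][j] == 0) {            // Thread 1's next step
                if (s1++ == 0) c1 = (b == 0);   // T1-1: if (b == 0)
                else if (c1) a = 1;             // T1-2:     a = 1;
            } else {                            // Thread 2's next step
                if (s2++ == 0) c2 = (a == 0);   // T2-1: if (a == 0)
                else if (c2) b = 1;             // T2-2:     b = 1;
            }
        }
        if (a == 1 && b == 0) mask |= 1;
        if (a == 0 && b == 1) mask |= 2;
        if (a == 1 && b == 1) mask |= 4;
        if (a == 0 && b == 0) mask |= 8;
    }
    return mask;   // 7: the first three outcomes occur, (0,0) never does
}
```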

Concurrent Access to Shared Data
Two threads A and B have access to a shared variable Balance:
  Thread A: Balance = Balance + 100
  Thread B: Balance = Balance - 200

In machine instructions:
  A1. LOAD  R1, BALANCE        B1. LOAD  R1, BALANCE
  A2. ADD   R1, 100            B2. SUB   R1, 200
  A3. STORE BALANCE, R1        B3. STORE BALANCE, R1

What is the problem then? Observe: in a time-shared system, the exact instruction execution order cannot be predicted.

Scenario 1:
  A1, A2, A3 -> context switch -> B1, B2, B3
  Sequential, correct execution: Balance is effectively decreased by 100.

Scenario 2:
  B1, B2 -> context switch -> A1, A2, A3 -> context switch -> B3
  Mixed, wrong execution: Balance is effectively decreased by 200!

Race Conditions
When multiple processes/threads read and write shared data and the outcome depends on the particular order in which they access the shared data, we have a race condition.
- A serious problem for concurrent systems using shared variables!
How do we solve the problem? We need to make sure that certain high-level code sections are executed atomically.
- An atomic operation completes in its entirety, without interruption by any other potentially conflict-causing process.

What is Synchronization?
Cooperating processes/threads share data and have effects on each other; with non-deterministic execution speeds, executions are NOT reproducible.
Concurrent execution:
- Single processor: achieved by time slicing
- Parallel/distributed systems: truly simultaneous
Synchronization means getting processes/threads to work together in a coordinated manner.
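Scenario 2 can be replayed deterministically without real threads: give each "thread" its own register variable and execute the six instructions in the bad order. The starting balance of 1000 and the function name `replay_scenario2` are illustrative choices, not from the slides.

```c
// Deterministic replay of Scenario 2: each thread keeps its own copy of
// R1, and the instructions are interleaved in the order B1 B2 A1 A2 A3 B3.
static int balance;

// Returns the final balance starting from 1000. The correct result would
// be 1000 + 100 - 200 = 900; this interleaving yields 800, because
// Thread A's update is overwritten (a lost update).
int replay_scenario2(void) {
    balance = 1000;
    int ra, rb;               // per-thread "registers"

    rb = balance;             // B1. LOAD  R1, BALANCE
    rb = rb - 200;            // B2. SUB   R1, 200
                              //     --- context switch ---
    ra = balance;             // A1. LOAD  R1, BALANCE
    ra = ra + 100;            // A2. ADD   R1, 100
    balance = ra;             // A3. STORE BALANCE, R1
                              //     --- context switch ---
    balance = rb;             // B3. STORE BALANCE, R1 (clobbers A's store)
    return balance;
}
```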

Critical-Section (CS) Problem
Multiple processes/threads compete to access shared data.
Critical section (critical region): a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution.
Problem: ensure that only one process/thread is allowed to execute in its critical section (for the same shared data) at any time. The execution of critical sections must be mutually exclusive in time.

General Structure for Critical Sections
In the entry section, the process requests permission to enter.

  do {
      entry section
          critical section
      exit section
          remainder statements
  } while (1);

Solving the Critical-Section Problem
- Mutual Exclusion: no two processes can be in the critical section simultaneously.
- Bounded Waiting: no process should wait forever to enter its critical section.
- Progress: a process outside its critical section cannot block a process trying to enter the critical section.
- Relative Speed: no assumption can be made about the relative speeds of different processes (though all processes have a non-zero speed).

Solutions for the CS Problem
- Software based:
  - Peterson's solution
  - Semaphores
  - Monitors
- Hardware based:
  - Locks
  - Disabling interrupts
  - Atomic instructions: TestAndSet and Swap

Simple Solution for Two Threads Only
T0 and T1 alternate between CS and remainder.
Assumption: LOAD and STORE are atomic. (This may not hold on modern architectures, e.g., with speculation.)
Shared variable between T0 and T1: int turn; (indicates whose turn it is to enter the CS)

  Thread 0:                        Thread 1:
  while (true) {                   while (true) {
      while (turn != 0) ;              while (turn != 1) ;
          critical section                 critical section
      turn = 1;                        turn = 0;
          remainder section                remainder section
  }                                }

Problems? (1) Two threads only. (2) Busy waiting.

Peterson's Solution
Three shared variables: turn and flag[2].
How does this solution satisfy the CS requirements (mutual exclusion, progress, bounded waiting)?

Peterson's Solution (cont.)
Mutual exclusion. Question: when process/thread 0 is in its critical section, can process/thread 1 get into its critical section? Thread 0 being in its CS implies flag[0] = 1, and either flag[1] = 0 or turn = 0. Thus, for thread 1: if flag[1] = 0, it is in its remainder section; if turn = 0, it waits before entering its CS.
Progress: can P0 be prevented from entering the CS, e.g., stuck in the while loop?
Bounded waiting?
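The slides name Peterson's solution but its code did not survive transcription, so here is a runnable sketch. It uses C11 sequentially consistent atomics precisely because, as noted above, plain LOAD/STORE assumptions do not hold on modern hardware; the function names (`peterson_lock`, `run_peterson_demo`) and iteration count are illustrative.

```c
// Peterson's solution for two threads, with C11 atomics (seq_cst ordering)
// so the algorithm's memory-ordering assumptions hold on modern CPUs.
#include <pthread.h>
#include <stdatomic.h>

static atomic_int flag[2];   // flag[i] == 1: thread i wants to enter
static atomic_int turn;      // which thread yields priority
static long counter;         // shared data protected by the lock

static void peterson_lock(int self) {
    int other = 1 - self;
    atomic_store(&flag[self], 1);      // declare interest
    atomic_store(&turn, other);        // give priority to the other thread
    // wait while the other thread is interested AND it has priority
    while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
        ;                              // busy wait
}

static void peterson_unlock(int self) {
    atomic_store(&flag[self], 0);      // withdraw interest
}

static void *worker(void *arg) {
    int self = (int)(long)arg;
    for (int i = 0; i < 100000; i++) {
        peterson_lock(self);
        counter++;                     // critical section
        peterson_unlock(self);
    }
    return NULL;
}

// Runs two threads; returns the final counter value (200000 when mutual
// exclusion holds and no increments are lost).
long run_peterson_demo(void) {
    counter = 0;
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counter;
}
```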

Hardware Solution: Disable Interrupts
Uniprocessors could disable interrupts: the currently running code would execute without preemption. This is inefficient on multiprocessor systems.

  do {
      DISABLE INTERRUPTS
          critical section
      ENABLE INTERRUPTS
          remainder statements
  } while (1);

What are the problems with this solution?
1. Time consuming, decreasing efficiency
2. Clock problems (timer interrupts are lost)
3. Machine dependent

Hardware Support for Synchronization
Synchronization needs to test and set a value atomically. IA-32 hardware provides a special instruction, xchg: when the instruction accesses memory, the bus is locked during its execution, and no other process/thread can access memory until it completes. Other variations: xchgb, xchgw, xchgl.
Other hardware instructions: TestAndSet(a) and Swap(a, b).

Hardware Instruction: TestAndSet
The TestAndSet instruction tests and modifies the content of a word atomically (non-interruptibly): it sets the lock to true and returns the old value.

  bool TestAndSet(bool *target) {
      bool m = *target;
      *target = true;
      return m;
  }

  bool lock = false;                  // shared

  while (true) {
      while (TestAndSet(&lock)) ;     // spin until the old value was false
          critical section
      lock = false;                   // release the lock
          remainder section
  }

Another Hardware Instruction: Swap
Swap exchanges the contents of two memory words atomically.

  void Swap(bool *a, bool *b) {
      bool temp = *a;
      *a = *b;
      *b = temp;
  }

  bool lock = false;                  // shared

  while (true) {
      bool key = true;
      while (key == true)             // lock == false frees the lock
          Swap(&key, &lock);
          critical section
      lock = false;                   // release the lock
          remainder section
  }

What's the problem with both?
1. Busy waiting wastes CPU.
2. Hardware dependent.
3. Not bounded waiting.
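C11 exposes exactly this primitive portably: `atomic_flag_test_and_set` behaves like the TestAndSet instruction above. The following sketch builds the spinlock from it; names (`spin_lock`, `run_spinlock_demo`) and thread/iteration counts are illustrative.

```c
// A test-and-set spinlock built on C11's atomic_flag.
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter;

static void spin_lock(void) {
    // Keep setting the flag; proceed only when the OLD value was false.
    while (atomic_flag_test_and_set(&lock))
        ;  // busy wait
}

static void spin_unlock(void) {
    atomic_flag_clear(&lock);
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        spin_lock();
        counter++;          // critical section
        spin_unlock();
    }
    return NULL;
}

// Runs 4 threads; returns the final counter (400000 if mutual exclusion held).
long run_spinlock_demo(void) {
    counter = 0;
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return counter;
}
```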

Semaphores
Synchronization without busy waiting.
Motivation: avoid busy waiting by blocking a process's execution until some condition is satisfied.
A semaphore S is an integer variable with two indivisible (atomic) operations (how to make them atomic comes later):
- wait(S): also called P(S), down(S), or acquire()
- signal(S): also called V(S), up(S), or release()
These are the user-visible operations on a semaphore; they are easy to generalize and less complicated for application programmers.

Semaphores without Busy Waiting
The idea:
- Once a process/thread needs to wait, remove it from the CPU.
- The process/thread goes into a special queue waiting for the semaphore (like an I/O waiting queue).
- The OS/runtime manages this queue (e.g., in FIFO manner) and removes a process/thread from it when a signal occurs.
A semaphore consists of an integer (S.value) and a linked list of waiting processes/threads (S.list). If the integer is 0 or negative, its magnitude is the number of processes/threads waiting for this semaphore. Start with an empty list; the integer is normally initialized to 0.

Implementing Semaphores

  typedef struct {
      int value;
      struct process *list;
  } semaphore;

Two operations implemented via system calls:
- block(): place the current process in the waiting queue.
- wakeup(P): remove process P from the waiting queue and place it in the ready queue.

  void wait(semaphore *s) {
      s->value--;
      if (s->value < 0) {
          enlist(s->list);       // insert the caller into s's waiting queue
          block();
      }
  }

  void signal(semaphore *s) {
      s->value++;
      if (s->value <= 0) {
          P = delist(s->list);   // remove one process from s's waiting queue
          wakeup(P);
      }
  }

Is this implementation free of busy waiting?
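A user-level version of this blocking semaphore can be sketched with a pthread mutex and condition variable; here the waiting queue is managed implicitly by `pthread_cond_wait`, and the value never goes negative (a common variation of the design above). The type and function names (`csem_t`, `csem_wait`, `run_csem_demo`) are illustrative.

```c
// A blocking (no busy-wait) counting semaphore built from a pthread
// mutex and condition variable.
#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
} csem_t;

void csem_init(csem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void csem_wait(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)                          // block, don't spin:
        pthread_cond_wait(&s->nonzero, &s->lock);  // sleep, releasing lock
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);              // wake one waiter, if any
    pthread_mutex_unlock(&s->lock);
}

// Demo: use the semaphore (initialized to 1) as a mutex around a counter.
static csem_t mutex_sem;
static long counter;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        csem_wait(&mutex_sem);
        counter++;                                 // critical section
        csem_signal(&mutex_sem);
    }
    return NULL;
}

long run_csem_demo(void) {
    csem_init(&mutex_sem, 1);
    counter = 0;
    pthread_t t[2];
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return counter;
}
```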

Semaphore Usage
Counting semaphore: can be used to control access to a given resource with a finite number of instances. Initialize S to the number of resources:

  while (1) {
      wait(S);
          use one of the S resources
      signal(S);
          remainder section
  }

Binary semaphore: the integer value can range only between 0 and 1; also known as a mutex lock. Initialize mutex to 1:

  while (1) {
      wait(mutex);
          critical section
      signal(mutex);
          remainder section
  }

Attacking the CS Problem with Semaphores
Shared data: semaphore mutex = 1; /* initially mutex = 1 */
Any process/thread:

  do {
      wait(mutex);
          critical section
      signal(mutex);
          remainder section
  } while (1);

Revisit the Balance Update Problem
Shared data: int Balance; semaphore mutex; // initially mutex = 1

  Thread A:                        Thread B:
  ...                              ...
  wait(mutex);                     wait(mutex);
  Balance = Balance + 100;         Balance = Balance - 200;
  signal(mutex);                   signal(mutex);

Semaphores for General Synchronization
Goal: execute code B in P_j only after code A has executed in P_i.
Use a semaphore flag. What is its initial value? (0)

  P_i:               P_j:
  ...                ...
  A                  wait(flag);
  signal(flag);      B
  ...                ...

What about 2 threads waiting for 1 thread? Or 1 thread waiting for 2 threads?
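The balance example above can be run with POSIX semaphores (`sem_init`/`sem_wait`/`sem_post` from `<semaphore.h>`, the real API behind the slide's wait/signal). The starting balance of 1000 and the function names are illustrative. Note that a `sem_t` must be initialized with `sem_init`, not by assignment.

```c
// Protecting the shared Balance with a binary POSIX semaphore.
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;
static int balance;

static void *deposit(void *arg) {
    (void)arg;
    sem_wait(&mutex);            // entry section
    balance = balance + 100;     // critical section
    sem_post(&mutex);            // exit section
    return NULL;
}

static void *withdraw(void *arg) {
    (void)arg;
    sem_wait(&mutex);
    balance = balance - 200;
    sem_post(&mutex);
    return NULL;
}

// Returns the final balance starting from 1000: 1000 + 100 - 200 = 900,
// regardless of how the two threads interleave.
int run_balance_demo(void) {
    balance = 1000;
    sem_init(&mutex, 0, 1);      // binary semaphore, initially 1
    pthread_t a, b;
    pthread_create(&a, NULL, deposit, NULL);
    pthread_create(&b, NULL, withdraw, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&mutex);
    return balance;
}
```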

Semaphores for General Synchronization (cont.)
A, B, C are code blocks in threads T1, T2, T3, and we want the order A -> B -> C.

Version 1 (a chain):
  Semaphore f1 = 0, f2 = 0;
  T1: A; signal(f1);
  T2: wait(f1); B; signal(f2);
  T3: wait(f2); C;

Version 2 (C waits for both T1 and T2):
  Semaphore f1 = 0, f2 = 0;
  T1: A; signal(f1); signal(f2);
  T2: wait(f1); B; signal(f2);
  T3: wait(f2); wait(f2); C;

Classical Synchronization Problems
- Producer-Consumer Problem
  - Shared bounded buffer
  - The producer puts items into the buffer; it waits if the buffer is full
  - The consumer consumes items from the buffer; it waits if the buffer is empty
- Readers-Writers Problem
  - Multiple readers can access the data concurrently
  - Writers are mutually exclusive with other writers and with readers
- Dining-Philosophers Problem
  - Multiple resources; acquire one at a time

Producer-Consumer Problem with a Bounded Buffer
[Figure: producer and consumer loops operating on a shared bounded buffer.]
Need to make sure that:
- The producer and the consumer do not access the buffer area and related variables at the same time.
- No item is made available to the consumer if all the buffer slots are empty.
- No slot in the buffer is made available to the producer if all the buffer slots are full.
For the shared variable counter, what may go wrong?
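Version 1 above can be checked directly: two POSIX semaphores force the order A -> B -> C even when the threads are started in the opposite order. The recorded-order string and function names are illustrative.

```c
// Ordering three threads with two semaphores (Version 1: a chain).
#include <pthread.h>
#include <semaphore.h>
#include <string.h>

static sem_t f1, f2;
static char order[4];
static int pos;

static void *tA(void *arg) {
    (void)arg;
    order[pos++] = 'A';      // A
    sem_post(&f1);           // signal(f1)
    return NULL;
}
static void *tB(void *arg) {
    (void)arg;
    sem_wait(&f1);           // wait(f1)
    order[pos++] = 'B';      // B
    sem_post(&f2);           // signal(f2)
    return NULL;
}
static void *tC(void *arg) {
    (void)arg;
    sem_wait(&f2);           // wait(f2)
    order[pos++] = 'C';      // C
    return NULL;
}

// Returns the observed execution order; always "ABC".
const char *run_order_demo(void) {
    sem_init(&f1, 0, 0);
    sem_init(&f2, 0, 0);
    pos = 0;
    memset(order, 0, sizeof order);
    pthread_t t1, t2, t3;
    pthread_create(&t3, NULL, tC, NULL);   // started in "wrong" order on purpose
    pthread_create(&t2, NULL, tB, NULL);
    pthread_create(&t1, NULL, tA, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_join(t3, NULL);
    return order;
}
```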

What Semaphores Are Needed?
  semaphore mutex, full, empty;
What are the initial values?
  full  = 0   /* the number of full buffer slots  */
  empty = n   /* the number of empty buffer slots */
  mutex = 1   /* controls mutually exclusive access to the buffer pool */

Producer-Consumer Code

  Producer:                             Consumer:
  do {                                  do {
      produce an item in nextp              wait(full);
      wait(empty);                          wait(mutex);
      wait(mutex);                          remove an item from buffer to nextc
      add nextp to buffer                   signal(mutex);
      signal(mutex);                        signal(empty);
      signal(full);                         consume the item in nextc
  } while (1);                          } while (1);

What will happen if we change the order of the wait() calls? (If the producer does wait(mutex) before wait(empty), it may hold mutex while waiting for an empty slot, and the consumer can never enter to free one: deadlock.)

Dining-Philosophers Problem
[Figure: five philosophers (0-4) around a circular table with five chopsticks.]
Five philosophers share a common circular table. There are five chopsticks and a bowl of rice in the middle. When a philosopher gets hungry, he tries to pick up the closest chopsticks. A philosopher may pick up only one chopstick at a time, and cannot pick up a chopstick already in use. When done, he puts down both of his chopsticks, one after the other.
Shared data: semaphore chopstick[5]; initially all semaphore values are 1.
This is a classic example of a synchronization problem: allocating several resources among several processes in a deadlock-free and starvation-free manner.

Monitors
A monitor is a high-level synchronization construct (implemented in different languages) that provides mutual exclusion within the monitor AND the ability to wait for a certain condition to become true.

  monitor monitor-name {
      shared variable declarations

      procedure body P1 (...) { ... }
      procedure body P2 (...) { ... }
      ...
      procedure body Pn (...) { ... }

      { initialization code }
  }
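The producer-consumer pseudo-code above is runnable almost verbatim with POSIX semaphores. In this sketch, buffer size, item count, and function names are arbitrary demo choices; `empty` counts free slots, `full` counts filled slots, and `mutex` protects the buffer indices.

```c
// Bounded-buffer producer-consumer with POSIX semaphores.
#include <pthread.h>
#include <semaphore.h>

#define N 4            // buffer slots
#define ITEMS 1000     // items to transfer

static int buffer[N];
static int in, out;                 // producer / consumer positions
static sem_t empty, full, mutex;
static long consumed_sum;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= ITEMS; i++) {
        sem_wait(&empty);           // wait for a free slot
        sem_wait(&mutex);
        buffer[in] = i;             // add nextp to buffer
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);            // one more filled slot
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) {
        sem_wait(&full);            // wait for a filled slot
        sem_wait(&mutex);
        int item = buffer[out];     // remove an item from buffer to nextc
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);           // one more free slot
        consumed_sum += item;       // consume the item
    }
    return NULL;
}

// Returns the sum of consumed items: 1 + 2 + ... + 1000 = 500500.
long run_bounded_buffer_demo(void) {
    in = out = 0;
    consumed_sum = 0;
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return consumed_sum;
}
```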

Problems with Using Semaphores
Let S and Q be two semaphores initialized to 1:

  T0:               T1:
  wait(S);          wait(Q);
  wait(Q);          wait(S);
  ...               ...
  signal(S);        signal(Q);
  signal(Q);        signal(S);

What is the problem with the above code? (If T0 acquires S while T1 acquires Q, each then waits forever for the semaphore the other holds: deadlock.)

Problems with Using Semaphores (cont.)
Semaphores impose strict requirements on the sequence of operations:
- Correct:   wait(mutex) ... signal(mutex)
- Incorrect: signal(mutex) ... wait(mutex)
- Incorrect: wait(mutex) ... wait(mutex)
Usage is complicated, and incorrect usage can lead to deadlock.

Monitors vs. Semaphores
A monitor:
- An object designed to be accessed across threads.
- Its member functions enforce mutual exclusion.
A semaphore:
- A low-level object.
- We can use semaphores to implement a monitor.

Schematic View of a Monitor
The monitor construct ensures that at most one thread can be active within the monitor at a given time. The shared data (local variables) of the monitor can be accessed only by its local procedures.

Monitor vs. Explicit Lock: an Account Example
Monitor version (pseudo-code):

  monitor class Account {
      private int balance := 0
      invariant balance >= 0

      public method boolean withdraw(int amount)
          precondition amount >= 0
      {
          if balance < amount then return false
          else { balance := balance - amount; return true }
      }

      public method deposit(int amount)
          precondition amount >= 0
      {
          balance := balance + amount
      }
  }

Equivalent version with an explicit lock:

  class Account {
      private lock mylock;
      private int balance := 0
      invariant balance >= 0

      public method boolean withdraw(int amount) {
          boolean ret;
          mylock.acquire();
          if balance < amount then ret := false
          else { balance := balance - amount; ret := true }
          mylock.release();
          return ret;
      }

      public method deposit(int amount) {
          mylock.acquire();
          balance := balance + amount
          mylock.release();
      }
  }

Case Study: the Pthread Library (OS-independent)
- Mutex locks
- Condition variables
- Barriers

Synchronization in the Pthread Library
- Mutex variables: pthread_mutex_t
- Condition variables: pthread_cond_t
All POSIX thread functions have the form pthread[_object]_operation. Most POSIX thread library functions return 0 on success and a non-zero error number on failure.

Mutex Variables: Mutual Exclusion
A mutex variable can be either locked or unlocked.
- Declaration: pthread_mutex_t lock; // lock is a mutex variable
- Initialization with default attributes: pthread_mutex_init(&lock, NULL);
- Lock operation: pthread_mutex_lock(&lock);
- Unlock operation: pthread_mutex_unlock(&lock);
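A minimal complete use of the four pthread mutex calls listed above: two threads increment a shared counter under the mutex (thread count, iteration count, and function names are illustrative).

```c
// Two threads incrementing a shared counter under a pthread mutex.
#include <pthread.h>

static pthread_mutex_t lock;
static long cnt;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    // enter critical section
        cnt++;
        pthread_mutex_unlock(&lock);  // leave critical section
    }
    return NULL;
}

// Returns the final count: 200000 (no updates lost).
long run_mutex_demo(void) {
    pthread_mutex_init(&lock, NULL);
    cnt = 0;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);
    return cnt;
}
```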

Semaphore vs. Mutex Lock
Binary semaphore version:

  volatile int cnt = 0;
  sem_t mutex;                     // initialize to 1: sem_init(&mutex, 0, 1);

  for (i = 0; i < niters; i++) {
      sem_wait(&mutex);
      cnt++;
      sem_post(&mutex);
  }

Mutex lock version:

  volatile int cnt = 0;
  pthread_mutex_t mutex;           // initialize to unlocked:
  pthread_mutex_init(&mutex, NULL);

  for (i = 0; i < niters; i++) {
      pthread_mutex_lock(&mutex);
      cnt++;
      pthread_mutex_unlock(&mutex);
  }

Binary semaphore vs. mutex lock:
- Binary semaphore: no ownership.
- Mutex lock: only the owner of a lock can release it.
  - Priority-inversion safety: the system can temporarily promote the lock-holding task.
  - Deletion safety: a task owning a lock can't be deleted.

Why Synchronization?
Synchronization serves two purposes:
- Ensure safety for shared updates: avoid race conditions.
- Coordinate the actions of threads: parallel computation, event notification.
ALL interleavings must be correct; there are lots of interleavings of events, so constrain as little as possible.

Synchronization Operations
- Safety: locks provide mutual exclusion.
- But we need more than just mutual exclusion of critical regions.
- Coordination: condition variables provide ordering.

Synchronization Problem: a Thread-Safe Queue
Suppose we have a queue with operations insert(item), remove(), and empty(); access must be protected with locks. Options for remove() when the queue is empty:
- Return a special error value (e.g., NULL).
- Wait for something to appear in the queue.

Three possible ways to wait:

  Spin with the lock held (does it work?):
      lock();
      while (empty()) { }      // no: we hold the lock, so no producer
      v = remove();            // can ever insert -- deadlock
      unlock();

  Check without the lock, then acquire (does it work?):
      while (empty()) { }      // no: the queue may become empty again
      lock();                  // between the check and the lock()
      v = remove();
      unlock();

  Release and re-acquire around each check:
      lock();
      while (empty()) { unlock(); lock(); }
      v = remove();
      unlock();
  This works, but does lots of checking (busy waiting).

Solution: Sleep!
Sleep = don't run me until something happens. What about this?

  Dequeue() {                     Enqueue() {
      lock();                         lock();
      if (queue empty)                insert item;
          sleep();                    if (thread waiting)
      take one item;                      wake up dequeuer();
      unlock();                       unlock();
  }                               }

Problem: we cannot hold the lock while sleeping!

Condition Variables
A special pthread data structure that makes it possible/easy to go to sleep. Atomically: release the lock, put the thread on the wait queue, and go to sleep. Each CV has a queue of waiting threads.
Do we need to worry about threads that have been put on the wait queue but have NOT yet gone to sleep? No, because those two actions are atomic.
Each condition variable is associated with one lock.

Condition Variable Operations
- wait(lock &l, cv &c): wait for an event, atomically releasing the lock and going to sleep. You must be holding the lock when you call it; the lock is re-acquired when the thread is awakened (pthreads do this).
- signal(cv &c): wakes up one waiting thread, if any.
- broadcast(cv &c): wakes up all waiting threads.

Condition Variable Exercise: Producer-Consumer
Implement producer-consumer where one thread enqueues and another dequeues:

  void *consumer(void *arg) {
      while (true) {
          pthread_mutex_lock(&l);
          while (q.empty())
              pthread_cond_wait(&nempty, &l);
          cout << q.back() << endl;
          q.pop_back();
          pthread_mutex_unlock(&l);
      }
  }

  void *producer(void *arg) {
      while (true) {
          pthread_mutex_lock(&l);
          q.push_front(1);
          pthread_cond_signal(&nempty);
          pthread_mutex_unlock(&l);
      }
  }

Question: can I use if instead of while to check the condition? No: with multiple consumers or spurious wakeups, the condition may no longer hold when the thread wakes, so it must be re-checked in a loop.

Barrier
A barrier is used to order a group of threads (a synchronization point): any thread/process must stop at the barrier and cannot proceed until all other threads/processes reach it.
Where can it be used? E.g., friends watching a movie wait until everyone has arrived.
[Figure: Threads 1-3 advance through Epoch 0, Epoch 1, Epoch 2 over time, synchronizing at a barrier between epochs.]
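A complete C version of the exercise above can be sketched as a fixed-size ring buffer guarded by a pthread mutex and two condition variables (`not_empty` and `not_full`, so the producer also sleeps when the buffer fills); capacity, item count, and names are illustrative.

```c
// Producer-consumer on a ring buffer, using pthread condition variables.
#include <pthread.h>

#define QCAP 8
static int q[QCAP];
static int count, head, tail;
static pthread_mutex_t l = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static long total;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= 1000; i++) {
        pthread_mutex_lock(&l);
        while (count == QCAP)                  // buffer full: sleep
            pthread_cond_wait(&not_full, &l);
        q[tail] = i;
        tail = (tail + 1) % QCAP;
        count++;
        pthread_cond_signal(&not_empty);       // wake a sleeping consumer
        pthread_mutex_unlock(&l);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&l);
        while (count == 0)                     // buffer empty: sleep;
            pthread_cond_wait(&not_empty, &l); // releases l atomically
        int item = q[head];
        head = (head + 1) % QCAP;
        count--;
        pthread_cond_signal(&not_full);        // wake a sleeping producer
        pthread_mutex_unlock(&l);
        total += item;                         // consume outside the lock
    }
    return NULL;
}

// Returns the sum of the 1000 consumed items: 1 + ... + 1000 = 500500.
long run_cv_queue_demo(void) {
    count = head = tail = 0;
    total = 0;
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return total;
}
```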

Barrier Interfaces
Barriers are used when some parallel computations need to "meet up" at certain points before continuing. The Pthreads extension includes barriers as synchronization objects (available in the Single UNIX Specification); enable them with #define _XOPEN_SOURCE 600 at the start of the file.

  // Initialize a barrier for count threads
  int pthread_barrier_init(pthread_barrier_t *barrier,
                           const pthread_barrierattr_t *attr, unsigned count);
  // Each thread waits on a barrier by calling
  int pthread_barrier_wait(pthread_barrier_t *barrier);
  // Destroy a barrier
  int pthread_barrier_destroy(pthread_barrier_t *barrier);

Barrier Example

  pthread_barrier_t barrier;
  time_t now;

  void *thread1(void *not_used) {
      time(&now);
      printf("T1 starting at %s", ctime(&now));
      sleep(20);
      pthread_barrier_wait(&barrier);
      time(&now);
      printf("T1: after barrier at %s", ctime(&now));
      return NULL;
  }

  void *thread2(void *not_used) {
      time(&now);
      printf("T2 starting at %s", ctime(&now));
      sleep(20);
      pthread_barrier_wait(&barrier);
      time(&now);
      printf("T2: after barrier at %s", ctime(&now));
      return NULL;
  }

  int main() {
      // create a barrier object with a count of 3
      pthread_barrier_init(&barrier, NULL, 3);
      // start up two threads, thread1 and thread2
      pthread_t t1, t2;
      pthread_create(&t1, NULL, thread1, NULL);
      pthread_create(&t2, NULL, thread2, NULL);
      time(&now);
      printf("main() before barrier at %s", ctime(&now));
      pthread_barrier_wait(&barrier);
      // now all three threads have reached the barrier
      time(&now);
      printf("main() after barrier at %s", ctime(&now));
      pthread_exit(NULL);
      return EXIT_SUCCESS;
  }

Summary
- Problems with concurrent access to shared data
  - Race conditions and critical sections
  - General structure for enforcing critical sections
- Synchronization mechanisms
  - Hardware-supported instructions
  - Software solutions: semaphores
- Classical synchronization problems
- High-level synchronization construct: the monitor
- Case study: Pthread vs. Java monitors
  http://www.fit.vutbr.cz/~vojnar/publications/fittr-2010-03.pdf
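The barrier guarantee above can be checked directly (assuming a platform that provides `pthread_barrier_*`, e.g. Linux): each thread records its arrival before waiting, so after `pthread_barrier_wait` returns, every thread must observe all arrivals. Counts and function names are illustrative.

```c
// Verifying the barrier guarantee: after the wait returns, every thread
// can rely on all 3 threads having arrived.
#define _XOPEN_SOURCE 600
#include <pthread.h>
#include <stdatomic.h>

static pthread_barrier_t barrier;
static atomic_int arrived;        // threads that reached the barrier
static atomic_int ok;             // threads that saw all 3 arrivals

static void *worker(void *arg) {
    (void)arg;
    atomic_fetch_add(&arrived, 1);        // record arrival
    pthread_barrier_wait(&barrier);       // block until all 3 arrive
    if (atomic_load(&arrived) == 3)       // guaranteed after the barrier
        atomic_fetch_add(&ok, 1);
    return NULL;
}

// Returns how many threads observed all 3 arrivals after the barrier: 3.
int run_barrier_demo(void) {
    atomic_store(&arrived, 0);
    atomic_store(&ok, 0);
    pthread_barrier_init(&barrier, NULL, 3);
    pthread_t t[3];
    for (int i = 0; i < 3; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 3; i++) pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return ok;
}
```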
Java monitor http://www.fit.vutbr.cz/~vojnar/publications/fittr-2010-03.pdf 59 60 15