CS 31: Introduction to Computer Systems : Threads & Synchronization April 16-18, 2019


1 CS 31: Introduction to Computer Systems 22-23: Threads & Synchronization April 16-18, 2019

2 Making Programs Run Faster We all like how fast computers are. In the old days (1980s): algorithm too slow? Wait for HW to catch up. Modern CPUs exploit parallelism for speed: they execute multiple instructions at once and reorder instructions on the fly.

3 Processor Design Trends [Figure: trends over time in Transistors (x10^3), Clock Speed (MHz), Power (W), and ILP (instruction-level parallelism, IPC). From Herb Sutter, Dr. Dobb's Journal]

4 The Multi-Core Era Today, we can't make a single core go much faster: limits on clock speed, heat, energy consumption. Instead, use extra transistors to put multiple CPU cores on the chip. Exciting: the CPU is capable of doing a lot more! Problem: it's up to the programmer to take advantage of multiple cores, and humans are bad at thinking in parallel.

5 Parallel Abstraction To speed up a job, must divide it across multiple cores. A process contains both execution information and memory/resources. What if we want to separate the execution information to give us parallelism in our programs?

6 Which components of a process might we replicate to take advantage of multiple CPU cores? A. The entire address space (memory) B. Parts of the address space (memory) C. OS resources (open files, etc.) D. Execution state (PC, registers, etc.) E. More than one of these (which?)

7 Which components of a process might we replicate to take advantage of multiple CPU cores? A. The entire address space (no: memory is not duplicated) B. Parts of the address space (yes: each thread gets its own stack) C. OS resources (no: open files, etc. are not duplicated) D. Execution state (yes: PC, registers, etc.) E. More than one of these (B and D)

8 Threads Modern OSes separate the concepts of processes and threads. The process defines the address space and general process attributes (e.g., open files). The thread defines a sequential execution stream within a process (PC, SP, registers). A thread is bound to a single process; a process, however, can have multiple threads. Each process has at least one thread (e.g., main).

9 Threads This is the picture we've been using all along: a process with a single thread, which has execution state (registers) and a stack. [Figure: OS view of Process 1 with Thread 1 (PC1, SP1) and memory regions Text, Data, Heap, Stack 1]

10 Threads We can add a thread to the process. New threads share all memory (VAS) with the other threads; each new thread gets private registers and a local stack. [Figure: Process 1 with Thread 1 (PC1, SP1) and Thread 2 (PC2, SP2); memory regions Text, Data, Heap, Stack 2, Stack 1]

11 Threads A third thread added. Note: they're all executing the same program (shared instructions in text), though they may be at different points in the code. [Figure: Process 1 with Thread 1 (PC1, SP1), Thread 2 (PC2, SP2), Thread 3 (PC3, SP3); memory regions Text, Data, Heap, Stack 3, Stack 2, Stack 1]

12 Why Use Threads? Separating threads and processes makes it easier to support parallel applications: creating multiple paths of execution does not require creating new processes (less state to store and initialize; threads are sometimes called "lightweight processes"). Low-overhead sharing between threads in the same process (threads share page tables, access the same memory). Concurrency (multithreading) can be very useful.

13 Concurrency? Several computations or threads of control are executing simultaneously, and potentially interacting with each other. We can multitask! Why does that help? Taking advantage of multiple CPUs / cores Overlapping I/O with computation Improving program structure

14 Recall: Processes [Figure: Processes 1..n, each with Text, Data, Stack; the Kernel below provides system calls (fork, read, write), system management, context switching, and scheduling]

15 Scheduling Threads We have basically two options: 1. Kernel explicitly selects among threads in a process. 2. Hide threads from the kernel, and have a user-level scheduler inside each multi-threaded process. Why do we care? Think about the overhead of switching between threads. Who decides which thread in a process should go first? What about blocking system calls?

16 User-Level Threads [Figure: Processes 1..n, each with Text, Data, Stack, and an in-process thread context-switch + scheduler library that divides the stack region; the Kernel provides system calls (fork, read, write), process context switching, process scheduling, and system management] Threads are invisible to the kernel.

17 Kernel-Level Threads [Figure: Processes 1..n, each with Text, Data, and explicitly mapped regions for Stack 1, Stack 2, ...; the Kernel provides system calls (fork, read, write), thread context switching, thread + process scheduling, and system management] The kernel context switches over threads; each process has explicitly mapped regions for its stacks.

18 If you call thread_create() on a modern OS (Linux/Mac/Windows), which type of thread would you expect to receive? (Why? Which would you pick?) A. Kernel threads B. User threads C. Some other sort of threads

19 Kernel vs. User Threads Kernel-level threads: integrated with the OS (informed scheduling), but slower to create, manipulate, and synchronize, since each operation involves the OS, which means changing context (relatively expensive). User-level threads: faster to create, manipulate, and synchronize, but not integrated with the OS (uninformed scheduling): if one thread makes a blocking syscall, all of them block, because the OS doesn't distinguish between the threads.

20 Threads & Sharing Code (text) is shared by all threads in a process. Global variables and static objects are shared: stored in the static data segment, accessible by any thread. Dynamic objects and other heap objects are shared: allocated from the heap with malloc/free or new/delete. Local variables should NOT be shared: they refer to data on the stack, and each thread has its own stack. Never pass/share/store a pointer to a local variable on another thread's stack!!

21 Threads & Sharing Local variables should not be shared: they refer to data on the stack, and each thread has its own stack. Never pass/share/store a pointer to a local variable on another thread's stack. [Figure: the shared heap holds int *x pointing at local variable Z in function B on Thread 1's stack; Thread 2 can dereference x to access Z, even after function B returns and Z's frame is gone]

22 Threads & Sharing Local variables should not be shared: they refer to data on the stack, and each thread has its own stack. Never pass/share/store a pointer to a local variable on another thread's stack. [Figure: Z is now allocated on the shared heap, with int *x pointing at it; shared data on the heap! Thread 2 can safely dereference x to access Z]

23 Thread-level Parallelism Speed up an application by assigning portions to CPUs/cores that process in parallel. Requires: partitioning responsibilities (e.g., a parallel algorithm) and managing their interaction. Example: Game of Life (next lab). [Figure: the grid computed by one core vs. split across three cores]

24 If one CPU core can run a program at a rate of X, how quickly will the program run on two cores? Why? A. Slower than one core (<X) B. The same speed (X) C. Faster than one core, but not double (X-2X) D. Twice as fast (2X) E. More than twice as fast (>2X)

25 If one CPU core can run a program at a rate of X, how quickly will the program run on two cores? Why? A. Slower than one core (<X) B. The same speed (X) C. Faster than one core, but not double (X-2X) D. Twice as fast (2X) E. More than twice as fast (>2X)

26 Parallel Speedup Performance benefit of parallel threads depends on many factors: algorithm divisibility communication overhead memory hierarchy and locality implementation quality For most programs, more threads means more communication, diminishing returns.

27 Summary Physical limits to how much faster we can make a single core run. Use transistors to provide more cores. Parallelize applications to take advantage. OS abstraction: thread Shares most of the address space with other threads in same process Gets private execution context (registers) + stack Coordinating threads is challenging!

28 Recap To speed up a job, must divide it across multiple cores. Thread: abstraction for execution within process. Threads share process memory. Threads may need to communicate to achieve goal Thread communication: To solve task (e.g., neighbor GOL cells) To prevent bad interactions (synchronization)

29 Synchronization Synchronize: to (arrange events to) happen at the same time (or ensure that they don't). Thread synchronization: when one thread has to wait for another; events in threads that occur at the same time. Uses of synchronization: prevent race conditions; wait for resources to become available.

30 Synchronization Example [Figure: the Game of Life grid on one core vs. split across four cores] Coordination required: threads in different regions must work together to compute new values for boundary cells. Threads might not run at the same speed (depends on the OS scheduler). Can't let one region get too far ahead.

31 Thread Ordering (Why threads require care. Humans aren't good at reasoning about this.) As a programmer you have no idea when threads will run: the OS schedules them, and the schedule will vary across runs. It might decide to context switch from one thread to another at any time. Your code must be prepared for this! Ask yourself: would something bad happen if we context switched here?

32 Example: The Credit/Debit Problem Say you have $1000 in your bank account You deposit $100 You also withdraw $100 How much should be in your account? What if your deposit and withdrawal occur at the same time, at different ATMs?

33 Credit/Debit Problem: Race Condition
Thread T0: Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); PrintReceipt (b); }
Thread T1: Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); }

34 Credit/Debit Problem: Race Condition Say T0 runs first: Read $1000 into b.
Thread T0: Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); PrintReceipt (b); }
Thread T1: Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); }

35 Credit/Debit Problem: Race Condition Say T0 runs first: Read $1000 into b. Switch to T1: Read $1000 into b; Debit by $100; Write $900.
Thread T0: Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); PrintReceipt (b); }
Thread T1: Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); }

36 Credit/Debit Problem: Race Condition Say T0 runs first: Read $1000 into b. Switch to T1: Read $1000 into b; Debit by $100; Write $900. Switch back to T0: b still holds $1000; Credit $100; Write $1100. Bank gave you $100! What went wrong?
Thread T0: Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); PrintReceipt (b); }
Thread T1: Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); }

37 Critical Section Badness if a context switch occurs anywhere between ReadBalance () and WriteBalance ()! Bank gave you $100! What went wrong?
Thread T0: Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); PrintReceipt (b); }
Thread T1: Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); }

38 To Avoid Race Conditions [Figure: two threads, each with a marked critical section] 1. Identify critical sections. 2. Use synchronization to enforce mutual exclusion: only one thread active in a critical section at a time.

39 What Are Critical Sections? Sections of code executed by multiple threads Access shared variables, often making local copy Places where order of execution or thread interleaving will affect the outcome Must run atomically with respect to each other Atomicity: runs as an entire unit or not at all. Cannot be divided into smaller parts.

40 Which code region is a critical section? (Regions A-E are marked on the slide.)
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); a += 1; return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); a += 1; return a; }
Shared memory: s = 40;

41 Which code region is a critical section?
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); a += 1; return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); a += 1; return a; }
Shared memory: s = 40;
(The code from getshared() through saveshared() in each thread is the critical section: it reads and writes shared s. a += 1 and return a touch only locals.)

42 Which values might the shared s variable hold after both threads finish?
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; }
Shared memory: s = 40;
A. 30 B. 20 or 30 C. 20, 30, or 50 D. Another set of values

43 If A runs first
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; }
Shared memory: s = 50;

44 B runs after A Completes
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; }
Shared memory: s = 30;

45 What about interleaving?
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; }
Shared memory: s = 40;
(If both threads read s = 40 before either writes, one update is lost: s ends up 20 or 50, depending on who writes last.)

46 Is there a race condition? Suppose count is a global variable, multiple threads increment it: count++; A. Yes, there s a race condition (count++ is a critical section). B. No, there s no race condition (count++ is not a critical section). C. Cannot be determined How about if compiler implements it as: movl (%edx), %eax // read count value addl $1, %eax // modify value movl %eax, (%edx) // write count How about if compiler implements it as: incl (%edx) // increment value

47 Four Rules for Mutual Exclusion 1. No two threads can be inside their critical sections at the same time. 2. No thread outside its critical section may prevent others from entering their critical sections. 3. No thread should have to wait forever to enter its critical section (no starvation). 4. No assumptions can be made about speeds or number of CPUs.

48 How to Achieve Mutual Exclusion? < entry code > < critical section > < exit code > < entry code > < critical section > < exit code > Surround critical section with entry/exit code Entry code should act as a gate If another thread is in critical section, block Otherwise, allow thread to proceed Exit code should release other entry gates

49 Possible Solution: Spin Lock? shared int lock = OPEN;
T0: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
T1: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
Lock indicates whether any thread is in the critical section. Note: the while loop has no body; it keeps checking the condition as quickly as possible until it becomes false (it "spins").

50 Possible Solution: Spin Lock? shared int lock = OPEN;
T0: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
T1: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
Lock indicates whether any thread is in the critical section. Is there a problem here? A: Yes, this is broken. B: No, this ought to work.

51 Possible Solution: Spin Lock? shared int lock = OPEN;
T0: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
T1: while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN;
What if a context switch occurs between the while test (seeing the lock OPEN) and setting lock = CLOSED? Both threads can end up in the critical section.

52 Possible Solution: Take Turns? shared int turn = T0;
T0: while (turn != T0); < critical section > turn = T1;
T1: while (turn != T1); < critical section > turn = T0;
Alternate which thread can enter the critical section. Is there a problem? A: Yes, this is broken. B: No, this ought to work.

53 Possible Solution: Take Turns? shared int turn = T0;
T0: while (turn != T0); < critical section > turn = T1;
T1: while (turn != T1); < critical section > turn = T0;
Broken: if one thread never wants to enter again, the other waits forever on its turn. Rule #2: No thread outside its critical section may prevent others from entering their critical sections.

54 Possible Solution: State Intention? shared boolean flag[2] = {FALSE, FALSE};
T0: flag[T0] = TRUE; while (flag[T1]); < critical section > flag[T0] = FALSE;
T1: flag[T1] = TRUE; while (flag[T0]); < critical section > flag[T1] = FALSE;
Each thread states it wants to enter the critical section. Is there a problem? A: Yes, this is broken. B: No, this ought to work.

55 Possible Solution: State Intention? shared boolean flag[2] = {FALSE, FALSE};
T0: flag[T0] = TRUE; while (flag[T1]); < critical section > flag[T0] = FALSE;
T1: flag[T1] = TRUE; while (flag[T0]); < critical section > flag[T1] = FALSE;
What if the threads context switch between setting their own flag and testing the other's? Both flags are TRUE and both spin forever. Rule #3: No thread should have to wait forever to enter its critical section.

56 Peterson's Solution shared int turn; shared boolean flag[2] = {FALSE, FALSE};
T0: flag[T0] = TRUE; turn = T1; while (flag[T1] && turn == T1); < critical section > flag[T0] = FALSE;
T1: flag[T1] = TRUE; turn = T0; while (flag[T0] && turn == T0); < critical section > flag[T1] = FALSE;
If there is competition, take turns; otherwise, enter. Is there a problem? A: Yes, this is broken. B: No, this ought to work.

57 Spinlocks are Wasteful If a thread is spinning on a lock, it's using the CPU without making progress. On a single-core system, spinning prevents the lock holder from executing. On a multi-core system, it wastes core time when something else could be running. Ideal: if a thread can't enter the critical section, schedule something else; consider it blocked.


59 Atomicity How do we get away from having to know about all other interested threads? The implementation of acquiring/releasing the critical section must be atomic. An atomic operation is one which executes as though it could not be interrupted: code that executes "all or nothing." How do we make them atomic? Atomic instructions (e.g., test-and-set, compare-and-swap), which let us build the semaphore OS abstraction.

60 Semaphores Semaphore: OS synchronization variable. Has an integer value and a list of waiting threads. Works like a gate: if sem > 0, the gate is open, and the value equals the number of threads that can enter; else the gate is closed, possibly with waiting threads. [Figure: threads passing through a gate into the critical section as sem counts down 3, 2, 1, 0]

61 Semaphores Associated with each semaphore is a queue of waiting threads When wait() is called by a thread: If semaphore is open, thread continues If semaphore is closed, thread blocks on queue Then signal() opens the semaphore: If a thread is waiting on the queue, the thread is unblocked If no threads are waiting on the queue, the signal is remembered for the next thread

62 Semaphore Operations sem s = n; // declare and initialize wait (sem s) // Executes atomically decrement s; if s < 0, block thread (and associate with s); signal (sem s) // Executes atomically increment s; if blocked threads, unblock (any) one of them;

63 Semaphore Operations sem s = n; // declare and initialize wait (sem s) // Executes atomically decrement s; if s < 0, block thread (and associate with s); signal (sem s) // Executes atomically increment s; if blocked threads, unblock (any) one of them; Based on what you know about semaphores, should a process be able to check beforehand whether wait(s) will cause it to block? A. Yes, it should be able to check. B. No, it should not be able to check.

64 Semaphore Operations sem s = n; // declare and initialize wait (sem s): decrement s; if s < 0, block thread (and associate with s); signal (sem s): increment s; if blocked threads, unblock (any) one of them; No other operations allowed. In particular, the semaphore's value can't be tested! No thread can tell the value of s.

65 Mutual Exclusion with Semaphores sem mutex = 1; T 0 wait (mutex); < critical section > signal (mutex); T 1 wait (mutex); < critical section > signal (mutex); Use a mutex semaphore initialized to 1 Only one thread can enter critical section Simple, works for any number of threads Is there any busy-waiting?

66 Locking Abstraction One way to implement critical sections is to lock the door on the way in and unlock it again on the way out. A lock typically exports a nicer user-space interface than raw semaphores. A lock is an object in memory providing two operations: acquire()/lock(), called before entering the critical section, and release()/unlock(), called after leaving it. Threads pair calls to acquire() and release(); between them, the thread holds the lock. acquire() does not return until any previous holder releases. What can happen if the calls are not paired?

67 Using Locks
Thread A: main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; }
Thread B: main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; }
Shared memory: s = 40;

68 Using Locks
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 40; Lock l; Held by: Nobody

69 Using Locks (Thread A calls acquire(l).)
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 40; Lock l; Held by: Thread A

70 Using Locks (Thread A executes its critical section.)
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 40; Lock l; Held by: Thread A

71 Using Locks (Thread B calls acquire(l): lock already owned, must wait!)
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 40; Lock l; Held by: Thread A

72 Using Locks (Thread A writes s = 50 and releases the lock.)
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 50; Lock l; Held by: Nobody

73 Using Locks (Thread B acquires the lock and runs its critical section.)
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 30; Lock l; Held by: Thread B

74 Using Locks
Thread A: main () { int a,b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; }
Thread B: main () { int a,b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; }
Shared memory: s = 30; Lock l; Held by: Nobody
No matter how we order threads or when we context switch, the result will always be 30, like we expected (and probably wanted).

75 Summary We have no idea when OS will schedule or context switch our threads. Code must be prepared, tough to reason about. Threads often must synchronize To safely communicate / transfer data, without races Synchronization primitives help programmers Kernel-level semaphores: limit # of threads that can do something, provides atomicity User-level locks: built upon semaphore, provides mutual exclusion (usually part of thread library)

76 Recap To speed up a job, must divide it across multiple cores. Thread: abstraction for execution within process. Threads share process memory. Threads may need to communicate to achieve goal Thread communication: To solve task (e.g., neighbor GOL cells) To prevent bad interactions (synchronization)

77 Synchronization Synchronize: to (arrange events to) happen at same time (or ensure that they don t) Thread synchronization When one thread has to wait for another Events in threads that occur at the same time Uses of synchronization Prevent race conditions Wait for resources to become available

78 Synchronization Example One core: Four cores: Coordination required: Threads in different regions must work together to compute new value for boundary cells. Threads might not run at the same speed (depends on the OS scheduler). Can t let one region get too far ahead.

79 Thread Ordering (Why threads require care. Humans aren t good at reasoning about this.) As a programmer you have no idea when threads will run. The OS schedules them, and the schedule will vary across runs. It might decide to context switch from one thread to another at any time. Your code must be prepared for this! Ask yourself: Would something bad happen if we context switched here?

80 Example: The Credit/Debit Problem Say you have $1000 in your bank account You deposit $100 You also withdraw $100 How much should be in your account? What if your deposit and withdrawal occur at the same time, at different ATMs?

81 Credit/Debit Problem: Race Condition Thread T 0 Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); Thread T 1 Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); PrintReceipt (b);

82 Credit/Debit Problem: Race Condition Say T 0 runs first Read $1000 into b Thread T 0 Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); Thread T 1 Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); PrintReceipt (b);

83 Credit/Debit Problem: Race Condition Say T 0 runs first Read $1000 into b Thread T 0 Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); Switch to T 1 Read $1000 into b Debit by $100 Write $900 Thread T 1 Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); PrintReceipt (b);

84 Credit/Debit Problem: Race Condition Say T 0 runs first Read $1000 into b Thread T 0 Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); Switch to T 1 Read $1000 into b Debit by $100 Write $900 Thread T 1 Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); PrintReceipt (b); Switch back to T 0 Read $1000 into b Credit $100 Write $1100 Bank gave you $100! What went wrong?

85 Critical Section Thread T 0 Credit (int a) { int b; b = ReadBalance (); b = b + a; WriteBalance (b); Badness if context switch here! Thread T 1 Debit (int a) { int b; b = ReadBalance (); b = b - a; WriteBalance (b); PrintReceipt (b); PrintReceipt (b); Bank gave you $100! What went wrong?

86 To Avoid Race Conditions Thread Critical - - Section Thread Critical - - Section Identify critical sections 2. Use synchronization to enforce mutual exclusion Only one thread active in a critical section

87 What Are Critical Sections? Sections of code executed by multiple threads Access shared variables, often making local copy Places where order of execution or thread interleaving will affect the outcome Must run atomically with respect to each other Atomicity: runs as an entire unit or not at all. Cannot be divided into smaller parts.

88 Which code region is a critical section? Thread A main () { int a,b; Thread B main () { int a,b; A B a = getshared(); b = 10; a = a + b; saveshared(a); C D E a = getshared(); b = 20; a = a - b; saveshared(a); a += 1 a += 1 return a; shared memory return a; s = 40;

89 Which code region is a critical section? Thread A main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); a += 1 Thread B main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); a += 1 return a; shared memory return a; s = 40;

90 Which values might the shared s variable hold after both threads finish? Thread A main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); Thread B main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; shared memory return a; A. 30 B. 20 or 30 C. 20, 30, or 50 D. Another set of values s = 40;

91 If A runs first Thread A main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); Thread B main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; shared memory return a; s = 50;

92 B runs after A Completes Thread A main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); Thread B main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; shared memory return a; s = 30;

93 What about interleaving? Thread A main () { int a,b; a = getshared(); b = 10; a = a + b; saveshared(a); Thread B main () { int a,b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; shared memory return a; s = 40;

94 Is there a race condition? Suppose count is a global variable, multiple threads increment it: count++; A. Yes, there s a race condition (count++ is a critical section). B. No, there s no race condition (count++ is not a critical section). C. Cannot be determined How about if compiler implements it as: movl (%edx), %eax // read count value addl $1, %eax // modify value movl %eax, (%edx) // write count How about if compiler implements it as: incl (%edx) // increment value

95 Four Rules for Mutual Exclusion 1. No two threads can be inside their critical sections at the same time. 2. No thread outside its critical section may prevent others from entering their critical sections. 3. No thread should have to wait forever to enter its critical section. (Starvation) 4. No assumptions can be made about speeds or number of CPU s.

96 How to Achieve Mutual Exclusion? < entry code > < critical section > < exit code > < entry code > < critical section > < exit code > Surround critical section with entry/exit code Entry code should act as a gate If another thread is in critical section, block Otherwise, allow thread to proceed Exit code should release other entry gates

97 Possible Solution: Spin Lock? shared int lock = OPEN; T 0 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; T 1 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; Lock indicates whether any thread is in critical section. Note: While loop has no body. Keeps checking the condition as quickly as possible until it becomes false. (It spins )

98 Possible Solution: Spin Lock? shared int lock = OPEN; T 0 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; T 1 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; Lock indicates whether any thread is in critical section. Is there a problem here? A: Yes, this is broken. B: No, this ought to work.

99 Possible Solution: Spin Lock? shared int lock = OPEN; T 0 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; T 1 while (lock == CLOSED); lock = CLOSED; < critical section > lock = OPEN; What if a context switch occurs at this point?

100 Possible Solution: Take Turns? shared int turn = T 0 ; T 0 while (turn!= T 0 ); < critical section > turn = T 1 ; T 1 while (turn!= T 1 ); < critical section > turn = T 0 ; Alternate which thread can enter critical section Is there a problem? A: Yes, this is broken. B: No, this ought to work.

101 Possible Solution: Take Turns? shared int turn = T 0 ; T 0 while (turn!= T 0 ); < critical section > turn = T 1 ; T 1 while (turn!= T 1 ); < critical section > turn = T 0 ; Rule #2: No thread outside its critical section may prevent others from entering their critical sections.

102 Possible Solution: State Intention? shared boolean flag[2] = {FALSE, FALSE; T 0 flag[t 0 ] = TRUE; while (flag[t 1 ]); < critical section > flag[t 0 ] = FALSE; T 1 flag[t 1 ] = TRUE; while (flag[t 0 ]); < critical section > flag[t 1 ] = FALSE; Each thread states it wants to enter critical section Is there a problem? A: Yes, this is broken. B: No, this ought to work.

103 Possible Solution: State Intention? shared boolean flag[2] = {FALSE, FALSE; T 0 flag[t 0 ] = TRUE; while (flag[t 1 ]); < critical section > flag[t 0 ] = FALSE; T 1 flag[t 1 ] = TRUE; while (flag[t 0 ]); < critical section > flag[t 1 ] = FALSE; What if threads context switch between these two lines? Rule #3: No thread should have to wait forever to enter its critical section.

104 Peterson s Solution shared int turn; shared boolean flag[2] = {FALSE, FALSE; T 0 flag[t 0 ] = TRUE; turn = T 1 ; while (flag[t 1 ] && turn==t 1 ); < critical section > flag[t 0 ] = FALSE; T 1 flag[t 1 ] = TRUE; turn = T 0 ; while (flag[t 0 ] && turn==t 0 ); < critical section > flag[t 1 ] = FALSE; If there is competition, take turns; otherwise, enter Is there a problem? A: Yes, this is broken. B: No, this ought to work.

105 Spinlocks are Wasteful If a thread is spinning on a lock, it's using the CPU without making progress. On a single-core system, spinning prevents the lock holder from executing. On a multi-core system, spinning wastes core time when something else could be running. Ideal: if a thread can't enter its critical section, schedule something else and consider the thread blocked.


107 Atomicity How do we get away from having to know about all other interested threads? The implementation of acquiring/releasing the critical section must be atomic. An atomic operation is one that executes as though it could not be interrupted: code that executes all or nothing. How do we make operations atomic? Atomic instructions (e.g., test-and-set, compare-and-swap) allow us to build the semaphore OS abstraction.

108 Semaphores Semaphore: OS synchronization variable. Has an integer value and a list of waiting threads. Works like a gate: if sem > 0, the gate is open, and the value equals the number of threads that can enter; else, the gate is closed, possibly with waiting threads. (Diagram: threads pass through the gate into the critical section as sem counts down 3, 2, 1, 0.)

109 Semaphores Associated with each semaphore is a queue of waiting threads. When wait() is called by a thread: if the semaphore is open, the thread continues; if the semaphore is closed, the thread blocks on the queue. Then signal() opens the semaphore: if a thread is waiting on the queue, that thread is unblocked; if no threads are waiting, the signal is remembered for the next thread (the value stays incremented).

110 Semaphore Operations sem s = n; // declare and initialize. wait(sem s) // executes atomically: decrement s; if s < 0, block the thread (and associate it with s). signal(sem s) // executes atomically: increment s; if there are blocked threads, unblock (any) one of them.

111 Semaphore Operations sem s = n; // declare and initialize. wait(sem s) // executes atomically: decrement s; if s < 0, block the thread (and associate it with s). signal(sem s) // executes atomically: increment s; if there are blocked threads, unblock (any) one of them. Based on what you know about semaphores, should a process be able to check beforehand whether wait(s) will cause it to block? A. Yes, it should be able to check. B. No, it should not be able to check.

112 Semaphore Operations sem s = n; // declare and initialize. wait(sem s): decrement s; if s < 0, block the thread (and associate it with s). signal(sem s): increment s; if there are blocked threads, unblock (any) one of them. No other operations are allowed. In particular, the semaphore's value can't be tested! No thread can tell the value of s.

113 Mutual Exclusion with Semaphores sem mutex = 1; T0: wait(mutex); <critical section> signal(mutex); T1: wait(mutex); <critical section> signal(mutex); Use a mutex semaphore initialized to 1. Only one thread can enter the critical section. Simple, and works for any number of threads. Is there any busy-waiting?

114 Locking Abstraction One way to implement critical sections is to lock the door on the way in and unlock it again on the way out. Locks typically export a nicer user-space interface on top of semaphores. A lock is an object in memory providing two operations: acquire()/lock(), called before entering the critical section, and release()/unlock(), called after leaving it. Threads pair calls to acquire() and release(). Between acquire() and release(), the thread holds the lock. acquire() does not return until any previous holder releases. What can happen if the calls are not paired?

115 Using Locks Thread A: main() { int a, b; a = getshared(); b = 10; a = a + b; saveshared(a); return a; } Thread B: main() { int a, b; a = getshared(); b = 20; a = a - b; saveshared(a); return a; } shared memory: s = 40;

116 Using Locks Thread A: main() { int a, b; acquire(l); a = getshared(); b = 10; a = a + b; saveshared(a); release(l); return a; } Thread B: main() { int a, b; acquire(l); a = getshared(); b = 20; a = a - b; saveshared(a); release(l); return a; } shared memory: s = 40; Lock l; Held by: Nobody

117 Using Locks (code as on slide 116) shared memory: s = 40; Lock l; Held by: Thread A

118 Using Locks (code as on slide 116) shared memory: s = 40; Lock l; Held by: Thread A

119 Using Locks (code as on slide 116) Thread B at acquire(l): Lock already owned. Must wait! shared memory: s = 40; Lock l; Held by: Thread A

120 Using Locks (code as on slide 116) shared memory: s = 50; Lock l; Held by: Nobody

121 Using Locks (code as on slide 116) shared memory: s = 30; Lock l; Held by: Thread B

122 Using Locks (code as on slide 116) shared memory: s = 30; Lock l; Held by: Nobody. No matter how we order the threads or when we context switch, the result will always be 30, like we expected (and probably wanted).

123 Summary We have no idea when the OS will schedule or context switch our threads. Code must be prepared for this, which is tough to reason about. Threads often must synchronize to safely communicate and transfer data, without races. Synchronization primitives help programmers: kernel-level semaphores limit the number of threads that can do something and provide atomicity; user-level locks, built upon semaphores, provide mutual exclusion (usually as part of the thread library).


More information

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Multiple Processes OS design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed processing

More information

CSE 374 Programming Concepts & Tools

CSE 374 Programming Concepts & Tools CSE 374 Programming Concepts & Tools Hal Perkins Fall 2017 Lecture 22 Shared-Memory Concurrency 1 Administrivia HW7 due Thursday night, 11 pm (+ late days if you still have any & want to use them) Course

More information

Synchronization. Dr. Yingwu Zhu

Synchronization. Dr. Yingwu Zhu Synchronization Dr. Yingwu Zhu Synchronization Threads cooperate in multithreaded programs To share resources, access shared data structures Threads accessing a memory cache in a Web server To coordinate

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

Lesson 6: Process Synchronization

Lesson 6: Process Synchronization Lesson 6: Process Synchronization Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks Semaphores Classic Problems of Synchronization

More information

[537] Locks. Tyler Harter

[537] Locks. Tyler Harter [537] Locks Tyler Harter Review: Threads+Locks CPU 1 CPU 2 running thread 1 running thread 2 RAM PageDir A PageDir B CPU 1 CPU 2 running thread 1 running thread 2 RAM PageDir A PageDir B Virt Mem (PageDir

More information

Synchronization. CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han

Synchronization. CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han Synchronization CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han Announcements HW #3 is coming, due Friday Feb. 25, a week+ from now PA #2 is coming, assigned about next Tuesday Midterm is tentatively

More information

CSE Traditional Operating Systems deal with typical system software designed to be:

CSE Traditional Operating Systems deal with typical system software designed to be: CSE 6431 Traditional Operating Systems deal with typical system software designed to be: general purpose running on single processor machines Advanced Operating Systems are designed for either a special

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

CS4411 Intro. to Operating Systems Exam 1 Fall points 9 pages

CS4411 Intro. to Operating Systems Exam 1 Fall points 9 pages CS4411 Intro. to Operating Systems Exam 1 Fall 2009 1 CS4411 Intro. to Operating Systems Exam 1 Fall 2009 150 points 9 pages Name: Most of the following questions only require very short answers. Usually

More information

Midterm Exam Amy Murphy 19 March 2003

Midterm Exam Amy Murphy 19 March 2003 University of Rochester Midterm Exam Amy Murphy 19 March 2003 Computer Systems (CSC2/456) Read before beginning: Please write clearly. Illegible answers cannot be graded. Be sure to identify all of your

More information

Synchronization COMPSCI 386

Synchronization COMPSCI 386 Synchronization COMPSCI 386 Obvious? // push an item onto the stack while (top == SIZE) ; stack[top++] = item; // pop an item off the stack while (top == 0) ; item = stack[top--]; PRODUCER CONSUMER Suppose

More information

The concept of concurrency is fundamental to all these areas.

The concept of concurrency is fundamental to all these areas. Chapter 5 Concurrency(I) The central themes of OS are all concerned with the management of processes and threads: such as multiprogramming, multiprocessing, and distributed processing. The concept of concurrency

More information

CS510 Advanced Topics in Concurrency. Jonathan Walpole

CS510 Advanced Topics in Concurrency. Jonathan Walpole CS510 Advanced Topics in Concurrency Jonathan Walpole Threads Cannot Be Implemented as a Library Reasoning About Programs What are the valid outcomes for this program? Is it valid for both r1 and r2 to

More information

Concurrency: Mutual Exclusion (Locks)

Concurrency: Mutual Exclusion (Locks) Concurrency: Mutual Exclusion (Locks) Questions Answered in this Lecture: What are locks and how do we implement them? How do we use hardware primitives (atomics) to support efficient locks? How do we

More information

CS 537 Lecture 11 Locks

CS 537 Lecture 11 Locks CS 537 Lecture 11 Locks Michael Swift 10/17/17 2004-2007 Ed Lazowska, Hank Levy, Andrea and Remzi Arpaci-Dussea, Michael Swift 1 Concurrency: Locks Questions answered in this lecture: Review: Why threads

More information

CSCI 447 Operating Systems Filip Jagodzinski

CSCI 447 Operating Systems Filip Jagodzinski Filip Jagodzinski Announcements Reading Task Should take 30 minutes-ish maximum Homework 2 Book questions including two custom ones will be posted to the course website today Programming tasks will be

More information

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling.

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling. Background The Critical-Section Problem Background Race Conditions Solution Criteria to Critical-Section Problem Peterson s (Software) Solution Concurrent access to shared data may result in data inconsistency

More information

Memory Management. Kevin Webb Swarthmore College February 27, 2018

Memory Management. Kevin Webb Swarthmore College February 27, 2018 Memory Management Kevin Webb Swarthmore College February 27, 2018 Today s Goals Shifting topics: different process resource memory Motivate virtual memory, including what it might look like without it

More information

Semaphores. Otto J. Anshus University of {Tromsø, Oslo}

Semaphores. Otto J. Anshus University of {Tromsø, Oslo} Semaphores Otto J. Anshus University of {Tromsø, Oslo} Input sequence f Output sequence g Concurrency: Double buffering /* Fill s and empty t concurrently */ Get (s,f) s Put (t,g) t /* Copy */ t := s;

More information

Learning Outcomes. Concurrency and Synchronisation. Textbook. Concurrency Example. Inter- Thread and Process Communication. Sections & 2.

Learning Outcomes. Concurrency and Synchronisation. Textbook. Concurrency Example. Inter- Thread and Process Communication. Sections & 2. Learning Outcomes Concurrency and Synchronisation Understand concurrency is an issue in operating systems and multithreaded applications Know the concept of a critical region. Understand how mutual exclusion

More information

CS 5523 Operating Systems: Midterm II - reivew Instructor: Dr. Tongping Liu Department Computer Science The University of Texas at San Antonio

CS 5523 Operating Systems: Midterm II - reivew Instructor: Dr. Tongping Liu Department Computer Science The University of Texas at San Antonio CS 5523 Operating Systems: Midterm II - reivew Instructor: Dr. Tongping Liu Department Computer Science The University of Texas at San Antonio Fall 2017 1 Outline Inter-Process Communication (20) Threads

More information

Midterm Exam Amy Murphy 6 March 2002

Midterm Exam Amy Murphy 6 March 2002 University of Rochester Midterm Exam Amy Murphy 6 March 2002 Computer Systems (CSC2/456) Read before beginning: Please write clearly. Illegible answers cannot be graded. Be sure to identify all of your

More information