CS3733: Operating Systems


Outline

CS3733: Operating Systems
Topics: Synchronization, Critical Sections and Semaphores (SGG Chapter 6)
Instructor: Dr. Tongping Liu

Ø Memory model of multithreaded programs
Ø Synchronization for coordinated processes/threads
Ø The producer/consumer and bounded-buffer problem
Ø Critical sections and the requirements on their solutions
Ø Peterson's solution for two processes/threads
Ø Synchronization hardware: atomic instructions
Ø Semaphores: the two operations (wait and signal) and their standard usages
Ø The bounded-buffer problem with semaphores: initial values and order of operations

Threads Memory Model

Conceptual model:
Ø Multiple threads run in the same context of a process
Ø Each thread has its own separate thread context
  ü Thread ID, stack, stack pointer, PC, and general-purpose registers
Ø All threads share the remaining process context
  ü Code, data, heap, and shared library segments
  ü Open files and installed handlers

Operationally, this model is not strictly enforced:
Ø Register values are truly separate and protected, but
Ø Any thread can read and write the stack of any other thread

Example Program to Illustrate Sharing (sharing.c)

Peer threads reference the main thread's stack indirectly through the global ptr variable.

    char **ptr; /* global */

    int main()
    {
        int i;
        pthread_t tid;
        char *msgs[2] = {"Hello from foo", "Hello from bar"};

        ptr = msgs;
        for (i = 0; i < 2; i++)
            Pthread_create(&tid, NULL, thread, (void *)i);
        Pthread_exit(NULL);
    }

    /* thread routine */
    void *thread(void *vargp)
    {
        int myid = (int)vargp;
        static int cnt = 0;
        printf("[%d]: %s (svar=%d)\n", myid, ptr[myid], ++cnt);
    }

Mapping Variable Instances to Memory

Global variables
Ø Def: variable declared outside of a function
Ø Virtual memory contains exactly one instance of any global variable

Local variables
Ø Def: variable declared inside a function without the static attribute
Ø Each thread stack contains one instance of each local variable

Local static variables
Ø Def: variable declared inside a function with the static attribute
Ø Virtual memory contains exactly one instance of any local static variable

For sharing.c:
Ø Global var: 1 instance (ptr [data])
Ø Local vars: 1 instance each (i.m, msgs.m [main thread's stack])
Ø Local var: 2 instances (myid.t0 [peer thread 0's stack], myid.t1 [peer thread 1's stack])
Ø Local static var: 1 instance (cnt [data])

    Variable instance   Referenced by main thread?   By peer thread 0?   By peer thread 1?
    ptr                 yes                          yes                 yes
    cnt                 no                           yes                 yes
    i.m                 yes                          no                  no
    msgs.m              yes                          yes                 yes
    myid.t0             no                           yes                 no
    myid.t1             no                           no                  yes

What is a possible output?
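The Pthread_create and Pthread_exit calls above are the error-checking wrappers used by the course's textbook code, so the listing does not build on its own. The following is a minimal, self-contained sketch of the same program using the plain POSIX calls; the file name sharing_standalone.c, the pthread_join loop, and the (long) cast used to pass the thread index are assumptions added here, not part of the slide.

    /* sharing_standalone.c -- a self-contained sketch of sharing.c above,
     * using plain POSIX calls instead of the course's Pthread_* wrappers.
     * Build: gcc sharing_standalone.c -o sharing -pthread */
    #include <stdio.h>
    #include <pthread.h>

    char **ptr;                         /* global: shared by all threads */

    void *thread(void *vargp)
    {
        int myid = (int)(long)vargp;    /* thread index, passed by value */
        static int cnt = 0;             /* one instance, shared by both peers */
        printf("[%d]: %s (svar=%d)\n", myid, ptr[myid], ++cnt);
        return NULL;
    }

    int main(void)
    {
        long i;
        pthread_t tid[2];
        char *msgs[2] = { "Hello from foo", "Hello from bar" };  /* on main's stack */

        ptr = msgs;                     /* peers reach main's stack via this global */
        for (i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, thread, (void *)i);
        for (i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

Because ++cnt is an unsynchronized read-modify-write on a variable shared by both peer threads, the svar values can differ from run to run, which is exactly what the Different Outputs listing on the next page illustrates.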

Different Outputs of sharing.c

    [0]: Hello from foo (svar=1)
    [1]: Hello from bar (svar=2)

    [1]: Hello from bar (svar=1)
    [0]: Hello from foo (svar=2)

    [1]: Hello from bar (svar=1)
    [0]: Hello from foo (svar=1)

    [0]: Hello from foo (svar=1)
    [1]: Hello from bar (svar=1)

Concurrent Access to Shared Data

Two threads A and B have access to a shared variable Balance.

    Thread A: Balance = Balance + 100      Thread B: Balance = Balance - 200

    A1. LOAD  R1, BALANCE                  B1. LOAD  R1, BALANCE
    A2. ADD   R1, 100                      B2. SUB   R1, 200
    A3. STORE BALANCE, R1                  B3. STORE BALANCE, R1

What is the problem then? Observe that in a time-shared system, the exact instruction execution order cannot be predicted.

Scenario 1:
    A1. LOAD  R1, BALANCE
    A2. ADD   R1, 100
    A3. STORE BALANCE, R1
    --- context switch ---
    B1. LOAD  R1, BALANCE
    B2. SUB   R1, 200
    B3. STORE BALANCE, R1
Sequential, correct execution: Balance is effectively decreased by 100.

Scenario 2:
    B1. LOAD  R1, BALANCE
    B2. SUB   R1, 200
    --- context switch ---
    A1. LOAD  R1, BALANCE
    A2. ADD   R1, 100
    A3. STORE BALANCE, R1
    --- context switch ---
    B3. STORE BALANCE, R1
Mixed, wrong execution: Balance is effectively decreased by 200!

Race Conditions

Situations in which multiple processes/threads read and write shared data, and the outcome depends on the particular order of access to that shared data, are called race conditions.
Ø A serious problem for any concurrent system using shared variables!

How do we solve the problem? We need to make sure that certain high-level code sections are executed atomically.
Ø An atomic operation completes in its entirety, without interruption by any other potentially conflict-causing process.

What is Synchronization?

Cooperating processes/threads share data and affect each other; with non-deterministic execution speeds, their executions are NOT reproducible.

Concurrent executions
Ø Single processor → achieved by time slicing
Ø Parallel/distributed systems → truly simultaneous

Synchronization → getting processes/threads to work together in a coordinated manner
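The interleavings above can be reproduced with a small test program. The sketch below is not from the slides: the file name race_demo.c, the iteration count, and the deposit/withdraw helpers are illustrative assumptions, and both threads move 100 per iteration (rather than the slide's 100 and 200) so the expected total is zero. balance is declared volatile only so the compiler keeps every load and store; volatile does not make the update atomic.

    /* race_demo.c -- a minimal sketch of the lost-update race on Balance.
     * Build: gcc race_demo.c -o race_demo -pthread */
    #include <stdio.h>
    #include <pthread.h>

    volatile long balance = 0;       /* volatile keeps the load/store pair; it is NOT a fix */

    void *deposit(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            balance = balance + 100; /* load, add, store: not atomic */
        return NULL;
    }

    void *withdraw(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            balance = balance - 100; /* interleaves with deposit's load/store */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, deposit, NULL);
        pthread_create(&b, NULL, withdraw, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* Expected 0, but lost updates usually leave a nonzero value. */
        printf("final balance = %ld\n", balance);
        return 0;
    }

Run a few times, the final balance is usually nonzero even though the deposits and withdrawals match, because interleaved load/store sequences lose updates exactly as in Scenario 2.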

Producer/Consumer Problem

The producer/consumer problem
Ø A producer process/thread produces/generates data
Ø A consumer process/thread consumes/uses data
Ø A buffer is normally used to exchange data between them

Bounded buffer: shared by the producer and the consumer
Ø The buffer has a finite amount of space
Ø The producer must wait if no buffer space is available
Ø The consumer must wait if all buffers are empty and no data is available

Bounded Buffer: A Simple Implementation

Shared circular buffer of size n:

    item buffer[n];
    int in, out, counter;

Ø in points to the next free buffer slot
Ø out points to the first full buffer slot
Ø counter contains the number of full buffer slots

No data is available when counter == 0; no space is available for additional data when counter == n.

Producer/Consumer Loops

For the shared variable counter, what may go wrong? (See the loop reconstruction sketched at the end of this page.)

++/-- Operations in Hardware

Ø Data in memory cannot be operated on directly
Ø Data in memory must first be loaded into cache and registers
Ø Operations can be applied only to registers
Ø Data in registers can then be stored back to memory

Execution of these instructions can be interrupted anywhere. What is the potential problem?

Mutual Exclusion: Critical Sections

Critical section (CS)
Ø A section of code that modifies shared variables/data and therefore must be executed mutually exclusively in time

General structure for processes/threads with a CS
Ø entry section: the code that requests permission to enter the critical section
Ø critical section: as above
Ø exit section: the code that removes the mutual exclusion
Ø remainder section: everything else
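The producer and consumer loop bodies appear on the slides only as images. The sketch below reconstructs them in the style of SGG Chapter 6, which the outline cites: buffer, in, out, and counter come from the slide, while BUFFER_SIZE and the produce_item/consume_item helpers are assumptions added so the file compiles as a translation unit. Treat it as a reconstruction, not the slide's exact code.

    /* Reconstruction of the unsynchronized counter-based loops (SGG style). */
    #define BUFFER_SIZE 10           /* the slide's n slots */
    typedef int item;

    extern item produce_item(void);  /* assumed helper: creates the next item */
    extern void consume_item(item);  /* assumed helper: uses a consumed item  */

    item buffer[BUFFER_SIZE];
    int in = 0, out = 0;
    int counter = 0;                 /* number of full slots */

    void producer(void)
    {
        item nextp;
        while (1) {
            nextp = produce_item();          /* produce an item in nextp */
            while (counter == BUFFER_SIZE)
                ;                            /* busy-wait: buffer is full */
            buffer[in] = nextp;
            in = (in + 1) % BUFFER_SIZE;
            counter++;                       /* NOT atomic: load, add, store */
        }
    }

    void consumer(void)
    {
        item nextc;
        while (1) {
            while (counter == 0)
                ;                            /* busy-wait: buffer is empty */
            nextc = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            counter--;                       /* NOT atomic: races with counter++ */
            consume_item(nextc);             /* consume the item in nextc */
        }
    }

counter++ and counter-- each expand to a load, an arithmetic instruction, and a store, which is the ++/-- hardware issue raised above: if the two threads interleave between those instructions, one update is lost and counter no longer matches the number of full slots.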

General Structure for Critical Sections

    do {
        entry section
            critical section
        exit section
            remainder section
    } while (1);

In the entry section, the process requests permission to enter its critical section.

Requirements for CS Solutions

Mutual Exclusion
Ø At most one process/thread is in its CS at any time

Progress
Ø If all other processes/threads are in their remainder sections, a process/thread is allowed to enter its CS
Ø Only those processes that are not in their remainder sections may take part in deciding which process enters its CS next, and this decision cannot be postponed indefinitely

Bounded Waiting
Ø Once a process has made a request to enter its CS, other processes can enter their CSs only a bounded number of times before it does

A Simple (Wrong) Solution

int turn indicates whose turn it is to enter the CS; T0 and T1 alternate between CS and remainder section.

What is the problem with this solution? Mutual exclusion? Progress? Bounded waiting?

Peterson's Solution

Three shared variables: turn and flag[2] (see the sketch at the end of this page).

How does this solution satisfy the CS requirements (mutual exclusion, progress, bounded waiting)?

Peterson's Solution (cont.)

Mutual Exclusion
Ø Q: when process/thread 0 is in its critical section, can process/thread 1 get into its critical section?
Ø While thread 0 is in its CS: flag[0] = 1, and either flag[1] = 0 or turn = 0
Ø Thus for process/thread 1: flag[0] = 1; if flag[1] = 0, it is in its remainder section; if turn = 0, it waits before its CS

Progress: can P0 be prevented from entering its CS, e.g., stuck in the while loop?

Bounded Waiting

Peterson's Solution (cont.)

Toggle the first two lines of the entry section for one process/thread: how does this variant break the CS requirements (mutual exclusion, progress, bounded waiting)?
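The slides show Peterson's algorithm itself as an image. The sketch below is the standard two-thread formulation using the shared variables named on the slide (flag[2] and turn); the enter_cs/leave_cs wrappers are naming assumptions made here. On modern out-of-order processors and optimizing compilers this classical version additionally needs atomics or memory fences, so read it as the textbook algorithm rather than production locking code.

    /* Peterson's two-thread solution (classical formulation). */
    int flag[2] = {0, 0};   /* flag[i] == 1 means thread i wants to enter its CS */
    int turn = 0;           /* which thread yields when both want to enter */

    /* entry section for thread i; the other thread is j = 1 - i */
    void enter_cs(int i)
    {
        int j = 1 - i;
        flag[i] = 1;                 /* I want to enter */
        turn = j;                    /* give the other thread priority */
        while (flag[j] && turn == j)
            ;                        /* busy-wait while the other is interested
                                        and it is its turn */
    }

    /* exit section for thread i */
    void leave_cs(int i)
    {
        flag[i] = 0;                 /* no longer interested */
    }

Note that the order in the entry section matters: if one thread sets turn = j before raising its flag (the "toggle the first two lines" variant asked about above), there is a window in which both threads can pass the while test and enter their critical sections together, violating mutual exclusion.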

Hardware Solution: Disable Interrupts

Uniprocessors could simply disable interrupts:
Ø The currently running code would execute without preemption
Ø Inefficient on multiprocessor systems

    do {
        DISABLE INTERRUPTS
            critical section
        ENABLE INTERRUPTS
            remainder statements
    } while (1);

What are the problems with this solution?
1. Time consuming, decreasing efficiency
2. Clock problems (timer interrupts are lost)
3. Machine dependent

Hardware Support for Synchronization

Synchronization needs to test and set a value atomically. IA32 hardware provides a special instruction for this: xchg
Ø When the instruction accesses memory, the bus is locked during its execution, and no other process/thread can access memory until it completes
Ø Other variations: xchgb, xchgw, xchgl

Other hardware instructions
Ø TestAndSet(a)
Ø Swap(a, b)

Hardware Instruction: TestAndSet

The TestAndSet instruction tests and modifies the content of a word atomically (non-interruptibly): it sets the lock to true and returns the old value.

    bool TestAndSet(bool *target)
    {
        bool m = *target;
        *target = true;
        return m;
    }

    bool lock = FALSE;

    while (true) {
        while (TestAndSet(&lock))
            ;                      /* spin until the old value was FALSE */
        /* critical section */
        lock = FALSE;              /* release the lock */
        /* remainder section */
    }

What is the problem?
1. Busy waiting wastes CPU
2. Hardware dependent, and bounded waiting is not guaranteed

Another Hardware Instruction: Swap

Swap the contents of two memory words atomically:

    void Swap(bool *a, bool *b)
    {
        bool temp = *a;
        *a = *b;
        *b = temp;
    }

    bool lock = FALSE;

    while (true) {
        bool key = TRUE;
        while (key == TRUE)        /* keep swapping until lock was FALSE (free) */
            Swap(&key, &lock);
        /* critical section */
        lock = FALSE;              /* release the lock */
        /* remainder section */
    }

What is the problem?
1. Busy waiting wastes CPU
2. Hardware dependent, and bounded waiting is not guaranteed

Semaphores

Synchronization without busy waiting
Ø Motivation: avoid busy waiting by blocking a process's execution until some condition is satisfied

Semaphore S: an integer variable with two indivisible (atomic) operations
Ø wait(S), also called P(S), down(S), or acquire()
Ø signal(S), also called V(S), up(S), or release()
Ø These are the only user-visible operations on a semaphore
Ø Easy to generalize, and less complicated for application programmers

Semaphore Operations

    wait(s):   /* wait until s.value > 0 */
               s.value--;          /* executed atomically */

Ø A process that executes wait on a semaphore with value <= 0 is blocked
Ø Blocked means non-runnable: the process yields the CPU to others
Ø The value of s can become negative; its magnitude is then the number of waiters

    signal(s): s.value++;          /* executed atomically */

Ø signal wakes up at most one blocked process; which one?

An implementation that spins instead of blocking wastes CPU but avoids context switches.
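As a hedged illustration (not from the slides) of how an xchg-style test-and-set appears in real code, C11's <stdatomic.h> provides atomic_flag_test_and_set, which atomically sets a flag and returns its old value and is typically compiled to a locked exchange on x86. The file name, the worker function, and the balance workload below are assumptions made for the example.

    /* spinlock_c11.c -- a test-and-set spinlock built on C11 atomics.
     * Build: gcc spinlock_c11.c -o spinlock -pthread */
    #include <stdio.h>
    #include <stdatomic.h>
    #include <pthread.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;   /* the "lock" word, initially clear */
    long balance = 0;
    long inc = +100, dec = -100;

    void *worker(void *arg)
    {
        long delta = *(long *)arg;
        for (int i = 0; i < 1000000; i++) {
            while (atomic_flag_test_and_set(&lock))
                ;                          /* spin: old value was "set" (locked) */
            balance += delta;              /* critical section */
            atomic_flag_clear(&lock);      /* lock = FALSE: release */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, &inc);
        pthread_create(&b, NULL, worker, &dec);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("final balance = %ld\n", balance);   /* reliably 0 */
        return 0;
    }

This has the same properties the slide lists: it provides mutual exclusion, but each thread busy-waits while the lock is held, and no bound on waiting is guaranteed.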

Semaphores without Busy Waiting

The idea:
Ø Once a process/thread needs to wait, remove it from the CPU
Ø The process/thread goes into a special queue of waiters for that semaphore (like an I/O waiting queue)
Ø The OS/runtime manages this queue (e.g., in FIFO order) and removes a process/thread from it when a signal occurs

A semaphore consists of an integer (S.value) and a linked list of waiting processes/threads (S.list)
Ø If the integer is 0 or negative, its magnitude is the number of processes/threads waiting for this semaphore
Ø The list starts empty, and the value is normally initialized to 0

Implementing Semaphores

    typedef struct {
        int value;
        struct process *list;
    } semaphore;

    wait(semaphore *s)
    {
        s->value--;
        if (s->value < 0) {
            enlist(s->list);     /* add the caller to s's waiting queue */
            block();
        }
    }

    signal(semaphore *s)
    {
        s->value++;
        if (s->value <= 0) {
            delist(p, s->list);  /* remove one waiting process p */
            wakeup(p);
        }
    }

Two operations are provided by system calls:
Ø block: place the current process in the waiting queue
Ø wakeup: remove one of the processes from the waiting queue and place it in the ready queue

Is this implementation free of busy waiting?

Semaphore Usage

Counting semaphore
Ø Can be used to control access to a given resource with a finite number of instances
Ø Initialize S = number of resources

    while (1) {
        wait(S);
        /* use one of the S resources */
        signal(S);
    }

Binary semaphore
Ø The integer value can range only between 0 and 1; also known as a mutex lock
Ø Initialize mutex = 1

    while (1) {
        wait(mutex);
        /* critical section */
        signal(mutex);
    }

Attacking the CS Problem with Semaphores

Shared data:
Ø semaphore mutex = 1;   /* initially mutex = 1 */

For any process/thread:

    do {
        wait(mutex);
            critical section
        signal(mutex);
            remainder section
    } while (1);

Revisit the Balance Update Problem

Shared data:

    int Balance;
    semaphore mutex;         /* initially mutex = 1 */

    Process A:                         Process B:
    ...                                ...
    wait(mutex);                       wait(mutex);
    Balance = Balance + 100;           Balance = Balance - 200;
    signal(mutex);                     signal(mutex);
    ...                                ...
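For a runnable counterpart (not the slides' code), POSIX exposes counting semaphores through <semaphore.h>; initialized to 1, sem_wait/sem_post behave as the binary semaphore (mutex) above. The file name and iteration counts are assumptions, and both threads move 100 per iteration (rather than the slide's 100 and 200) simply so the expected final balance is zero.

    /* balance_sem.c -- the Balance update protected by a POSIX semaphore
     * used as a binary semaphore (mutex).
     * Build: gcc balance_sem.c -o balance_sem -pthread */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    long balance = 0;
    sem_t mutex;                      /* binary semaphore, initialized to 1 */

    void *thread_a(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);         /* wait(mutex) */
            balance = balance + 100;  /* critical section */
            sem_post(&mutex);         /* signal(mutex) */
        }
        return NULL;
    }

    void *thread_b(void *arg)
    {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&mutex);
            balance = balance - 100;
            sem_post(&mutex);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        sem_init(&mutex, 0, 1);       /* initial value 1: at most one holder */
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("final balance = %ld\n", balance);   /* deterministic: 0 */
        sem_destroy(&mutex);
        return 0;
    }

With the semaphore in place, the interleaving of Scenario 2 can no longer occur, so the final balance is deterministic.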

Semaphores for General Synchronization

Execute code B in Pj only after code A has executed in Pi. Use a semaphore flag: what should its initial value be?

    Pi:                Pj:
    ...                ...
    A                  wait(flag);
    signal(flag);      B
    ...                ...

What about 2 threads waiting for 1 thread? Or 1 thread waiting for 2 threads? (See the sketch at the end of this page.)

Classical Synchronization Problems

Producer-Consumer Problem
Ø Shared bounded buffer
Ø The producer puts items into the buffer and waits if the buffer is full
Ø The consumer takes items from the buffer and waits if it is empty

Readers-Writers Problem
Ø Multiple readers can access the data concurrently
Ø Writers are mutually exclusive with other writers and with readers

Dining-Philosophers Problem
Ø Multiple resources, acquired one at a time

Producer-Consumer with a Bounded Buffer: What Semaphores Are Needed?

Need to make sure that
Ø The producer and the consumer do not access the buffer area and related variables at the same time
Ø No item is given to the consumer if all the buffer slots are empty
Ø No slot is given to the producer if all the buffer slots are full

    semaphore mutex, full, empty;

What are the initial values?
Ø full = 0    /* the number of full buffer slots */
Ø empty = n   /* the number of empty buffer slots */
Ø mutex = 1   /* controls mutually exclusive access to the buffer pool */

Producer-Consumer Code

    Producer:
    do {
        /* produce an item in nextp */
        wait(empty);
        wait(mutex);
        /* add nextp to the buffer */
        signal(mutex);
        signal(full);
    } while (1);

    Consumer:
    do {
        wait(full);
        wait(mutex);
        /* remove an item from the buffer to nextc */
        signal(mutex);
        signal(empty);
        /* consume the item in nextc */
    } while (1);

What will happen if we change the order of these operations?
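The same pseudocode translates directly to POSIX semaphores. The sketch below is a hedged, runnable version of the producer-consumer code above; the file name, buffer size, and item counts are assumptions made here.

    /* bounded_buffer_sem.c -- bounded buffer with POSIX semaphores.
     * Build: gcc bounded_buffer_sem.c -o bb_sem -pthread */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    #define N 10                     /* number of buffer slots */

    int buffer[N];
    int in = 0, out = 0;
    sem_t empty, full, mutex;        /* empty = N, full = 0, mutex = 1 */

    void *producer(void *arg)
    {
        for (int nextp = 0; nextp < 100; nextp++) {
            sem_wait(&empty);        /* wait for a free slot */
            sem_wait(&mutex);        /* lock the buffer */
            buffer[in] = nextp;
            in = (in + 1) % N;
            sem_post(&mutex);
            sem_post(&full);         /* one more full slot */
        }
        return NULL;
    }

    void *consumer(void *arg)
    {
        for (int k = 0; k < 100; k++) {
            sem_wait(&full);         /* wait for a full slot */
            sem_wait(&mutex);
            int nextc = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);        /* one more empty slot */
            printf("consumed %d\n", nextc);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        sem_init(&empty, 0, N);      /* initial values matter: empty = n */
        sem_init(&full, 0, 0);       /* full = 0 */
        sem_init(&mutex, 0, 1);      /* mutex = 1 */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

The order of operations matters, which speaks to the question above: if the producer did wait(mutex) before wait(empty), it could hold mutex while the buffer is full and then block on empty; the consumer would block on mutex and never execute signal(empty), leaving both threads deadlocked.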

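Returning to the ordering question posed at the top of this page (run B in Pj only after A has run in Pi, plus the two-waits-for-one variants), the sketch below is one common way to express it with POSIX semaphores; it is an illustration, not the slides' code. The key point is the initial value: flag starts at 0, so wait(flag) blocks until the signal arrives.

    /* ordering_sem.c -- a semaphore used purely for ordering: B after A.
     * Build: gcc ordering_sem.c -o ordering -pthread */
    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t flag;                       /* initialized to 0: nothing signaled yet */

    void *pi_thread(void *arg)        /* runs code A */
    {
        printf("A\n");
        sem_post(&flag);              /* signal(flag): A is done */
        return NULL;
    }

    void *pj_thread(void *arg)        /* runs code B, only after A */
    {
        sem_wait(&flag);              /* wait(flag): blocks until A has run */
        printf("B\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&flag, 0, 0);        /* the initial value 0 is what enforces the order */
        pthread_create(&t2, NULL, pj_thread, NULL);
        pthread_create(&t1, NULL, pi_thread, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }

For two threads waiting on one, the finishing thread signals the semaphore twice (or each waiter gets its own semaphore); for one thread waiting on two, each of the two threads signals once and the waiter calls wait twice.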
Summary

Ø Synchronization for coordinated processes/threads
Ø The producer/consumer and bounded-buffer problem
Ø Critical sections and the requirements on their solutions
Ø Peterson's solution for two processes/threads
Ø Synchronization hardware: atomic instructions
Ø Semaphores: the two operations, wait and signal, and their standard usages
Ø The bounded-buffer problem with semaphores: initial values and order of operations are crucial