Operating Systems CMPSC 473. Synchronization. February 21, 2008, Lecture 11. Instructor: Trent Jaeger

Last class: CPU Scheduling
Today: a little more scheduling, then start synchronization

Little's Law

Evaluating Scheduling Algorithms. Suppose you have developed a new scheduling algorithm. How do you compare its performance to others? What workloads should you use?

Workload Estimation. We can estimate:
- Arrival rate of requests: how frequently a new request arrives
- Service rate of requests: how long a process uses a service (its CPU burst)

Little's Law relates:
- the arrival rate (lambda)
- the average waiting time (W)
- the average queue length (N), e.g., the number of ready processes
N = lambda x W

Using Little's Law. We can estimate the arrival rate (the rate at which processes become ready) and the average waiting time (from the CPU burst and the scheduling algorithm). So if I give you a scheduling algorithm and an arrival rate, you can use Little's Law to compute the average length of the ready queue.

The Utility of Little's Law. Not practical for complex systems: the arrival rate can be estimated, but the average waiting time is more complex, since it depends on the scheduling algorithm's behavior. Alternative: simulation. Build a program that emulates your system, run it under some load, and see what happens.

Synchronization

Synchronization Processes (threads) share resources. How do processes share resources? How do threads share resources? It is important to coordinate their activities on these resources to ensure proper usage.

Resources. There are different kinds of resources shared between processes: physical (terminal, disk, network, ) and logical (files, sockets, memory, ). For the purposes of this discussion, let us take memory to be the shared resource, i.e., processes can all read and write memory (variables) that are shared.

Problems Due to Sharing. Consider a shared printer queue, spool_queue[n]. Two processes each want to enqueue an element to this queue; tail points to the current end of the queue. Each process needs to do:
  tail = tail + 1;
  spool_queue[tail] = element;

What we are trying to do: Process 1 executes tail = tail + 1; spool_queue[tail] = X, and Process 2 executes tail = tail + 1; spool_queue[tail] = Y, so that after both run, tail has advanced by 2 and the queue holds both X and Y.

What is the problem? tail = tail + 1 is NOT one machine instruction. It can translate as follows:
  Load  tail, R1
  Add   R1, 1, R2
  Store R2, tail
These 3 machine instructions may NOT be executed atomically.

Interleaving. If each process executes this set of 3 instructions, a context switch can happen at any time. Suppose we get the following sequence of executed instructions:
  P1: Load  tail, R1
  P1: Add   R1, 1, R2
  P2: Load  tail, R1
  P2: Add   R1, 1, R2
  P1: Store R2, tail
  P2: Store R2, tail
(Each process has its own saved copy of the registers, but both Loads read the same old value of tail.)

Leading to: both processes computed tail + 1 from the same old value, so both Stores write the same index. tail advances by only 1, and Process 2's Y overwrites Process 1's X; one element is lost.

Race Conditions. Situations like this, which can lead to erroneous execution, are called race conditions: the outcome of the execution depends on the particular interleaving of instructions. Debugging race conditions can be "fun", since the errors can be non-repeatable.

Avoiding Race Conditions. If we had a way of making those 3 instructions atomic (i.e., while one process is executing them, another process cannot execute the same instructions), then we could have avoided the race condition. These 3 instructions are said to constitute a critical section.

Requirements for a Solution
1. Mutual Exclusion: if process Pi is executing in its critical section, then no other process can be executing in its critical section.
2. Progress: if no process is executing in its critical section and some processes wish to enter theirs, then the selection of the process that will enter next cannot be postponed indefinitely.
3. Bounded Waiting: there must be a bound on the number of times other processes are allowed to enter their critical sections after a process has requested entry and before that request is granted.
Assume each process executes at a nonzero speed; no assumption is made about the relative speeds of the N processes.

Synchronization Solutions

How do we implement Critical Sections / Mutual Exclusion?
- Disable interrupts: effectively stops scheduling of other processes
- Busy-wait/spinlock solutions: pure software solutions; integrated hardware-software solutions
- Blocking solutions

Disabling Interrupts
Advantages: simple to implement.
Disadvantages:
- We do not want to give such power to user processes
- Does not work on a multiprocessor
- Disables multiprogramming even when no other process is interested in the critical section

Software Solutions with Busy-Waiting. Overall philosophy: keep checking some state (variables) until it indicates that no other process is in the critical section. However, this is a non-trivial problem.

locked = FALSE;

P1 {
  while (locked == TRUE)
    ;
  locked = TRUE;
  /* critical section */
  locked = FALSE;
}

P2 {
  while (locked == TRUE)
    ;
  locked = TRUE;
  /* critical section */
  locked = FALSE;
}

We have a race condition again, since there is a gap between detecting that locked is FALSE and setting locked to TRUE.

How do we implement Critical Sections / Mutual Exclusion?
- Disable interrupts: effectively stops scheduling of other processes
- Busy-wait/spinlock solutions: pure software solutions; integrated hardware-software solutions
- Blocking solutions

1. Strict Alternation

turn = 0;

P0 {
  while (turn != 0)
    ;
  /* critical section */
  turn = 1;
}

P1 {
  while (turn != 1)
    ;
  /* critical section */
  turn = 0;
}

It works! Problems: it requires the processes to alternate entering the critical section, so it does NOT meet the Progress requirement.

Fixing the Progress Requirement

bool flag[2]; /* initialized to FALSE */

P0 {
  flag[0] = TRUE;
  while (flag[1] == TRUE)
    ;
  /* critical section */
  flag[0] = FALSE;
}

P1 {
  flag[1] = TRUE;
  while (flag[0] == TRUE)
    ;
  /* critical section */
  flag[1] = FALSE;
}

Problem: both can set their flags to TRUE and then wait indefinitely for each other.

Peterson's Solution. A two-process solution. Assume that the LOAD and STORE instructions are atomic, i.e., cannot be interrupted. The two processes share two variables: int turn; boolean flag[2]. The variable turn indicates whose turn it is to enter the critical section. The flag array indicates whether a process is ready to enter: flag[i] = true implies that process Pi is ready.

2. Peterson's Algorithm

int turn;
int interested[2]; /* both set to FALSE initially */

enter_cs(int myid) {  /* parameter is 0 or 1, for P0 or P1 */
  int otherid = 1 - myid;  /* id of the other process */
  interested[myid] = TRUE;
  turn = otherid;
  while (turn == otherid && interested[otherid] == TRUE)
    ;
  /* proceed if turn == myid or interested[otherid] == FALSE */
}

leave_cs(int myid) {
  interested[myid] = FALSE;
}

Intuitively, this works because a process can enter the CS either because the other process is not interested in the critical section at all, or, if both are interested, because it executed its turn = otherid assignment first.

Prove that it is correct (achieves mutual exclusion). If both are interested, then interested[otherid] is TRUE from each one's point of view, so each can enter only if its turn == otherid test is false. But turn holds a single value (0 or 1), so turn == otherid cannot be false for both processes at once; only one can enter. Otherwise, only one process is interested, and it gets in.

Prove that there is progress. If a process is waiting in the loop, the other process has to be interested (otherwise interested[otherid] would be FALSE and the loop would exit). And when both are interested, turn selects exactly one of them, so one of the two will definitely get in.

Prove that there is bounded waiting. When only one process is interested, it gets through. When two processes are interested, the one that executed the turn = otherid statement first goes through. When the current process is done with the CS and requests it again, it sets turn = otherid, so it will get in only after any other process already waiting at the loop.

We have looked at only 2-process solutions. How do we extend to multiple processes?

Multi-process solution. Analogy: serving different customers in some serial fashion. Make them pick a number/ticket on arrival, and serve them in increasing ticket order. A tie-breaker is needed in case the same ticket number is picked (e.g., the smaller process id wins, matching the (number, id) comparison below).

3. Bakery Algorithm

Notation: (a,b) < (c,d) if a < c, or a == c and b < d.
Every process Pi has a unique integer id.

bool choosing[0..n-1];  /* all FALSE initially */
int  number[0..n-1];    /* all 0 initially */

enter_cs(int myid) {
  choosing[myid] = TRUE;
  number[myid] = max(number[0], number[1], ..., number[n-1]) + 1;
  choosing[myid] = FALSE;
  for (j = 0; j < n; j++) {
    while (choosing[j])
      ;
    while (number[j] != 0 && (number[j], j) < (number[myid], myid))
      ;
  }
}

leave_cs(int myid) {
  number[myid] = 0;
}

Exercise: show that the bakery algorithm meets the mutual exclusion, progress, and bounded waiting requirements.

Where are we?
- Disable interrupts: effectively stops scheduling of other processes
- Busy-wait/spinlock solutions: pure software solutions; integrated hardware-software solutions
- Blocking solutions

Complications arose because we had atomicity only at the granularity of a single machine instruction, and what a machine instruction could do was limited. Can we provide specialized instructions in hardware that offer additional functionality, with each instruction still being atomic?

Specialized Instructions: bool Test&Set(bool) and Swap(bool, bool). Note that these are machine/assembly instructions, and are thus atomic.

Test&Set

atomic bool Test&Set(bool x) {
  temp = x;
  x = TRUE;
  return temp;
}

Note that temp = x and x = TRUE would each have required at least one machine instruction without this specialized instruction.

Using Test&Set()

bool lock;  /* FALSE initially */

Enter_CS() {
  while (Test&Set(lock))
    ;
}

Exit_CS() {
  lock = FALSE;
}

NOTE: this solution does not guarantee bounded waiting.
EXERCISE: enhance this solution to provide bounded waiting.

Swap()

atomic Swap(bool a, bool b) {
  temp = a;
  a = b;
  b = temp;
}

Again, all of this is done atomically!

Using Swap()

bool lock;  /* FALSE initially */

Enter_cs() {
  key = TRUE;  /* local variable */
  while (key == TRUE)
    swap(key, lock);
}

Exit_cs() {
  lock = FALSE;
}

Where are we?
- Disable interrupts: effectively stops scheduling of other processes
- Busy-wait/spinlock solutions: pure software solutions; integrated hardware-software solutions
- Blocking solutions

Spinning vs. Blocking. In the previous solutions, we busy-waited for some condition to change; that change must be effected by some other process, and we are presuming this other process will eventually get the CPU (i.e., some kind of preemptive scheduler). This can be inefficient because:
- You waste the rest of your time quantum busy-waiting
- Sometimes your programs may not work at all (if the OS scheduler is not preemptive)

In blocking solutions, you relinquish the CPU at the point you cannot proceed, i.e., you are put on the blocked queue. It is the job of the process changing the condition to wake you up (i.e., move you from the blocked queue back to the ready queue). This way you do not unnecessarily occupy CPU cycles.

Example Blocking Implementation

Enter_CS(L) {
  Disable interrupts
  Check if anyone is using L
  If not {
    Set L to being used
  } else {
    Move this PCB to the blocked queue for L
    Select another process to run from the ready queue
    Context switch to that process
  }
  Enable interrupts
}

Exit_CS(L) {
  Disable interrupts
  Check if the blocked queue for L is empty
  If so {
    Set L to free
  } else {
    Move the PCB at the head of L's blocked queue to the ready queue
  }
  Enable interrupts
}

NOTE: these are OS system calls!

Until now: exclusion synchronization/constraints. The typical construct is the mutual exclusion lock: Mutex_lock(m), Mutex_unlock(m). Do a man on pthread_mutex_lock() on your Solaris/Linux machine for further syntactic/semantic information.

Summary
- Synchronization: exclusive access to critical sections
- Requirements: mutual exclusion, progress, bounded waiting
- Approaches: disable interrupts, software-only, hardware-enabled
- Spinning vs. blocking

Next time: Synchronization