Chapter 5 Concurrency (I)

The central themes of OS design are all concerned with the management of processes and threads: multiprogramming, multiprocessing, and distributed processing. The concept of concurrency is fundamental to all these areas. It leads to a whole collection of design issues, including communication among processes; sharing of, and competition for, resources; synchronization of the activities of multiple processes; and allocation of processor time to processes.

They are all the same... In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. The lack of certainty about their relative timing opens a can of worms. In a multiprocessing environment, we can also overlap processes to achieve real parallel processing, where concurrency and resource sharing are everywhere. This phenomenon also occurs in our lives, e.g., when we have to use the bathroom in Memorial...

More specifically,... Interleaving and overlapping are both examples of concurrent processing, and both suffer from the same issues: the relative speed of process execution is unpredictable, which leads to uncertainty. Even in the single-processor case, the sharing of global resources is a big one. Database applications provide a gold mine of examples in this regard. For example, if two processes read and write the same (global) variable, then the order in which the various reads and writes are done is critical. As mentioned earlier, deposit and withdrawal on a shared account is also a good example.

Other issues. It is also tough to manage the allocation of resources. For example, one process may request use of, and be granted access to, a particular I/O device, and then be suspended before using it. It may be problematic to simply lock this device to prevent its use by other processes, since that may lead to deadlock. The pepper-and-salt scenario is a good example here. Finally, it becomes very difficult to locate a programming error in such an environment, since results are typically neither deterministic nor reproducible. How do you do a trace in this case? All of these difficulties are present in a multiprocessing system as well, which must additionally deal with the problems caused by the truly simultaneous execution of multiple processes, where synchronization and coordination become the key.

A simple example. Consider the following simplified code for the echo program:

    char chin, chout;

    void echo() {
        chin = getchar();
        chout = chin;
        putchar(chout);
    }

Any program can call this procedure to accept the user's input and echo it back. Assume a uniprocessor multiprogramming system supporting a single user, who runs multiple applications, many of which call the above procedure, using the same input and output device. It makes sense to share a single copy of the code among these applications to save space.

What could go wrong? Such code sharing can lead to problems, e.g., when the following sequence occurs:

1. P1 calls echo and is interrupted before putchar(chout) is done. Assume at this point the most recently entered character is x.
2. P2 is activated and calls the echo procedure, which runs all the way to completion, inputting and then outputting y on the screen.
3. P1 resumes. But at this point, the value x stored in chin has been overwritten with y. Hence, what P1 outputs is another y.

The essence of this problem is that the global variables chin and chout, as well as the code itself, are shared and accessed by multiple processes without any coordination.

A solution. One way to address this issue is to still allow multiple processes to access the code, but ensure that only one process is inside echo at a time:

1. P1 calls echo and is interrupted right after getchar() completes. At this point, chin holds x.
2. P2 is activated and calls echo as well. However, since P1 is still inside echo, although suspended for the moment, P2 must be kept out. Thus, P2 is blocked, waiting for the availability of echo.
3. At some point, P1 comes back, goes all the way through, and prints out x.
4. Now echo is available, so P2 can be resumed; it calls echo, gets, and sends out, y.

Multiprocessor case. Now both P1 and P2 execute on two separate processors, but call the same copy of echo. Out of the twenty possible interleavings of the six lines, some, e.g., the following one, will get us into trouble:

    P1                  time    P2
    chin = getchar();   t1
                        t2      chin = getchar();
                                chout = chin;
    chout = chin;       t3
    putchar(chout);
                        t4      putchar(chout);

Again, we have the problem that the input to P1 gets lost before it is displayed.

The same solution. We can again enforce that only one process executes echo at a time. Thus:

1. Both P1 and P2 are executing, each on a separate processor. P1 calls echo first.
2. While P1 is inside echo, P2 tries to call echo as well, but it has to be blocked, waiting for the availability of echo.
3. At a later time, P1 completes the execution of echo and makes the procedure available again. P2 then resumes and starts to execute echo.

Note that we lock up the code, instead of the variables, as it is easier to do...

What is in common? In the uniprocessor case, the problem is that an interrupt can stop execution at any time; in the multiprocessor case, two processes execute simultaneously and both try to access the same global variable. The solution is the same in both cases: control access to the shared resource, which could be either data, e.g., the variable chin, or an actual program segment, e.g., the echo() procedure. Homework: Problem 5.1.

Race condition. Two processes P3 and P4 share b and c, with initial values 1 and 2, respectively. At some point, P3 might do b = b + c; later, P4 might do c = b + c. Although they update different variables, the final values of both variables depend on the relative order of these two operations. Again, we have no control whatsoever over which one is picked up and run first...

Yet another example. Let's look at the following segments of code, run by two processes, assuming x = 0 initially:

    Process 1:      Process 2:
    r1 = x;         r2 = x;
    r1++;           r2++;
    x = r1;         x = r2;

Question: What is the value of x at the end?

It depends... on the interleaving pattern of the code. Again, out of the twenty possible interleavings, some work and some do not. The following pattern works:

                x    r1    r2
    x = 0;      0    0     0
    r1 = x;     0    0     0
    r1++;       0    1     0
    x = r1;     1    1     0
    r2 = x;     1    1     1
    r2++;       1    1     2
    x = r2;     2    1     2

At the end, x = 2.

On the other hand, for the following pattern,

                x    r1    r2
    x = 0;      0    0     0
    r1 = x;     0    0     0
    r2 = x;     0    0     0
    r1++;       0    1     0
    r2++;       0    1     1
    x = r1;     1    1     1
    x = r2;     1    1     1

we don't get x = 2, as the first update is lost along the way. In general, it is tough to know which pattern will occur; thus, when multiple processes/threads are involved, it is tough to know what the result will be at the end... Homework: Complete Problem 5.3(a), and think about part 5.3(b).

Operating system concerns.

1. The OS must be able to keep track of the state and status of the various processes, using PCBs.
2. The OS must allocate and deallocate resources to the various processes, such as processor time, memory, files, and I/O devices.
3. It must protect data and physical resources from unintended interference.
4. The results of a process must be independent of its execution speed relative to the speeds of the other concurrent processes. This is referred to as speed independence.

To understand this issue better, let's consider the ways in which processes can interact with each other.

Process interaction.

1. When processes are completely unaware of each other, they are not intended to work together. Nonetheless, they may compete for the same resources, e.g., the same file, or the same printer. The OS must regulate these accesses.
2. Processes might not interact directly, e.g., by knowing each other's IDs, but they may share access to the same object, e.g., the same I/O buffer. Such processes cooperate with each other.
3. Finally, processes might communicate with each other, since they are designed to work jointly. These processes also exhibit cooperation.

Competing processes. To manage competing processes, three control problems have to be dealt with. The first is mutual exclusion. Assume two or more processes require access to a non-sharable resource, such as a printer. We refer to such a resource as a critical resource, and the portion of the program that uses it as a critical section. It is important that only one process at a time be allowed into such a critical section. For example, we want any individual process to have total control of the printer while it prints its entire output. The enforcement of mutual exclusion may lead to other issues. One of them is deadlock: holding R1, P1 may request R2; while P2, holding R2 exclusively, may want R1. P1 and P2 are then deadlocked. (Recall the pepper-and-salt problem.)

Another problem is starvation. Assume that P1, P2, and P3 all want resource R, and P1 currently has it, so both P2 and P3 are delayed. When P1 exits its critical section for R, assume the OS gives R to P3. Further assume that P1 asks for R again before P3 exits its critical section, and the OS decides to give it back to P1. If this situation continues, P2 will never get R; it is starved. Solutions to these problems involve both the OS and the processes: the OS is fundamental in allocating resources, while the processes have to be able to lock up their resources with the locking mechanism provided by the OS.

A general framework:

    const int n = /* number of processes */;

    void P(int i) {
        while (true) {
            entercritical(i);
            /* critical section */;
            exitcritical(i);
            /* remainder */;
        }
    }

    void main() {
        /* some stuff */
        parbegin(P(R1), ..., P(Rn));
        /* other stuff */
    }

What is going on? In the program, the parbegin construct suspends the execution of the main process, initiates the concurrent execution of P with the respective arguments R1, ..., Rn, and, once they are all done, resumes main. Each process includes a critical section and a remainder. Each process takes the identifier of the required resource, an integer, as its argument. Any process that attempts to enter its critical section while another process is in its critical section for the same resource is made to wait, i.e., blocked. We will discuss the implementation of the two locking functions entercritical() and exitcritical() later.

The toilet case. When a bunch of people need to use the toilet on the third floor of Memorial, they are competing for the only facility there, which only one person can use at a time. That is why we box in the toilet and add a door with a bolt: this turns the toilet into a critical section. When a person pi gets into the bathroom, before getting into the critical section, i.e., the box, he has to execute entercritical(i) to check the bolt status. If it is not locked, he gets in, locks it, and starts to do his business; otherwise, he has to line up. Once he is done, he executes exitcritical(i) to unlock the bolt. How about the urinals?

Sharing and cooperating. Multiple processes may have access to shared data, and may use and update it without direct reference to the other processes, while knowing of their existence. We discussed related problems in the area of database applications back in CS3600. All the file management processes also fall into this category, as they have to share the same pile of files. Such processes must cooperate with each other to ensure that the shared data are properly managed. Again, since these data are held in shared resources, the problems of mutual exclusion, deadlock, and starvation can occur, together with a new problem: data coherence.

An example. Assume that two pieces of data, a and b, have to be maintained such that a = b always holds. Now consider the following two processes:

    P1:             P2:
    a = a + 1;      b = 2 * b;
    b = b + 1;      a = 2 * a;

If the state is initially consistent and each process is executed separately, the resulting state is also consistent. On the other hand, the following concurrent execution of the two processes leaves the state inconsistent afterwards:

    a = a + 1;
    b = 2 * b;
    b = b + 1;
    a = 2 * a;

A solution. It is clear that this problem can be avoided by making the whole sequence that uses the shared data a and b a critical section. Thus, the concept of a critical section is also essential in the cooperating-process case.

Communicating processes. When processes cooperate by communicating with each other, they participate in a joint effort that links all of them. The communication itself provides a way to synchronize, i.e., coordinate, the activities involved. Communication is usually carried out by passing messages, where the primitives for sending and receiving messages could be either part of the programming language, or provided by the OS kernel. Since nothing is shared between the processes in this category, mutual exclusion is not needed. But the problems of deadlock and starvation persist; for example, two processes might each be waiting for a message from the other.

The mutual exclusion requirements. Any facility that is to support mutual exclusion must meet the following requirements:

1. Only one process at a time is allowed into a critical section.
2. A process that halts in its non-critical section must not interfere with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely (no starvation).
4. When no process is in a critical section, any process that requests entry should be permitted to enter without delay (no deadlock).
5. No assumptions should be made about relative process execution speeds (speed independence).
6. A process remains in its critical section only for a finite amount of time (no deadlock).

Hardware approaches. On a uniprocessor machine, concurrent processes cannot be overlapped, only interleaved. A process, once started, will run until it requests an OS service or is interrupted. Hence, to guarantee mutual exclusion, it suffices to prevent the running process from being interrupted. This can be done with primitives, provided by the kernel, for enabling and disabling interrupts. The basic configuration is then as follows:

    while (true) {
        /* disable interrupts */;
        /* critical section */;
        /* enable interrupts */;
        /* remainder */;
    }

Remember the Do not disturb sign in a hotel?

Not an ideal solution... Since the critical section cannot be interrupted, or rather, since the running process cannot be interrupted while executing the critical section, no other process has a chance to get into the critical section at the same time. Hence, mutual exclusion is upheld. However, execution efficiency is degraded, since this approach limits the processor's ability to interleave programs. A second problem is that it does not work in a multiprocessor scenario, because disabling interrupts affects only one processor. It is thus still possible for a process, dispatched to a different processor, to enter the critical section for the same resource.

Special instructions. Building on the mutual exclusion already provided at the memory-cell level, i.e., only one process accesses a given memory cell at a time, a few approaches have been suggested at the instruction level. These instructions carry out two actions, such as reading and writing, or reading and testing, in a single machine cycle (remember firmware?), and thus are not subject to interference from other instructions. We will discuss two approaches based on such instructions, and you will have the chance to play with a third.

Test and set. This instruction can be defined as follows:

    boolean testset(int& i) {
        if (i == 0) {
            i = 1;
            return true;
        }
        else
            return false;
    }

The idea is that this entire procedure is implemented in hardware as a single, atomic instruction. Note that int& i is a call-by-reference parameter, which associates the formal parameter with the actual one: after int& i = r;, both r and i refer to the same cell.

An application. Below is a mutual exclusion protocol based on the test-and-set instruction:

    const int n = /* number of processes */;
    int bolt;

    void P(int i) {
        while (true) {
            while (!testset(bolt))    /* int& i = bolt; */
                /* do nothing */;
            /* critical section */;
            bolt = 0;
            /* remainder */;
        }
    }

    void main() {
        bolt = 0;
        parbegin(P(1), P(2), ..., P(n));
    }

How does it work? When the first process tries to get in, the value of the shared variable bolt is 0; thus this process gets into the critical section, after setting bolt to 1. For all the other processes, testset finds bolt equal to 1 and returns false, so they are kept out of the critical section. Hence, the other processes stay in the while loop, waiting for the lucky process to exit the critical section and reset bolt to 0, at which point one of the waiting processes will be allowed into the critical section.

Compare and swap. A variant can be defined as follows:

    int compare_swap(int *word, int testval, int newval) {
        int oldval = *word;            /* e.g., int *word = &bolt; */
        if (oldval == testval)
            *word = newval;
        return oldval;
    }

If the value of the cell pointed to by word equals testval, the cell gets newval; otherwise, its content stays the same. The instruction always returns the original content of the cell pointed to by word.

An application. Below is a mutual exclusion protocol based on the compare-and-swap instruction:

    const int n = /* number of processes */;
    int bolt;

    void P(int i) {
        while (true) {
            while (compare_swap(&bolt, 0, 1) == 1)
                /* do nothing */;
            /* critical section */;
            bolt = 0;
            /* remainder */;
        }
    }

    void main() {
        bolt = 0;
        parbegin(P(1), P(2), ..., P(n));
    }

How does it work? When the first process tries to get in, the value of the shared variable bolt, i.e., oldval, is 0, which equals testval; thus this process sets bolt to newval = 1, and the instruction returns oldval, i.e., 0, so the process gets into the critical section. For all the other processes, the instruction always returns 1 without changing bolt, so they are kept out of the critical section. This changes when the process leaving the critical section resets bolt to 0. Hence, all the other processes stay in the while loop, waiting for the lucky process to exit the critical section and reset bolt to 0, at which point one of the waiting processes will be allowed into the critical section.

Good news,... Besides being simple, and thus easy to verify, this hardware approach is applicable to any number of processes, on either a single-processor machine or a multiprocessor machine with shared memory, since the lock is just a memory cell, bolt. It can also support multiple critical sections, each associated with its own bolt variable. Homework: Study the remaining approach, namely, the exchange instruction (cf. Fig. 5.2(b)), and explain how it accomplishes mutual exclusion.

... and bad news. However, while a process is waiting for access in that loop, it still consumes processor time (busy waiting). Also, when the critical section becomes available again, the selection among the waiting processes is arbitrary; thus, some process may never get in (starvation). Moreover, deadlock is also possible. For example, suppose P1 is interrupted after entering a critical section and gives up the processor to a higher-priority process P2 that needs the same critical section. P2 cannot get in, and thus spins; on the other hand, P1 cannot exit the section, since P2 holds the processor. They then wait for each other (deadlock). Homework: Problem 5.5.