Concurrency(I) Chapter 5


1 Chapter 5 Concurrency (I) The central themes of OS design are all concerned with the management of processes and threads: multiprogramming, multiprocessing, and distributed processing. The concept of concurrency is fundamental to all these areas. It covers a whole collection of design issues, including communication among processes, sharing of and competing for resources, synchronization of the activities of multiple processes, and allocation of processor time to processes. In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. In a multiprocessing environment, we can also overlap the processes to achieve real parallel processing.

2 Interleaving and overlapping are both examples of concurrent processing, and they suffer from the same problems. The fundamental issue is that the relative speed of process execution is unpredictable. In the single-processor case, the sharing of global resources is a major source of trouble. For example, if two processes read and write the same (global) variable, then the order in which the various reads and writes are done is critical. Deposit and withdrawal on a shared account is a good example. It is also tough to manage the allocation of resources. For example, one process may request use of, and be granted access to, a particular I/O device, and then be suspended before using it. It may be problematic to simply lock this device to prevent its use by other processes, since that may lead to deadlock. The pepper-and-salt scenario, where each of two processes holds one of the two resources the other needs, is a good example here.

3 Finally, it becomes very difficult to locate a programming error in such an environment, since results are typically neither deterministic nor reproducible. All of these difficulties are present in a multiprocessing system as well, which must additionally deal with the problems caused by the truly simultaneous execution of multiple processes.

4 A simple example Consider the following code:

    char chin, chout;
    void echo(){
        chin = getchar();
        chout = chin;
        putchar(chout);
    }

Any program can call this procedure to accept a user's input and echo it back. Assume we have a single-processor multiprogramming system supporting one user, who can jump from application to application, each calling the same procedure and using the same input and output devices. It makes sense to share a single copy of the code among these applications to save space.

5 A problem Such code sharing can lead to problems, e.g., when the following sequence occurs: 1. P1 calls echo and is interrupted as soon as getchar() is done. Assume at this point, the most recently entered character is x. 2. P2 is activated and calls the echo procedure, which runs all the way to completion, inputting and then outputting y on the screen. 3. P1 resumes. But at this point, the value x stored in chin has been overwritten with y. Hence, what will be output by P1 is another y. The essence of this problem is that the global variable chin is shared, and accessed by multiple processes.

6 A solution On the other hand, if we allow only one process to access chin at a time, then we will have the following: 1. P1 calls echo and is interrupted right after getchar() is completed. At this point, chin holds x. 2. P2 is activated and calls echo as well. However, since P1 is still inside echo, although suspended for the moment, P2 has to be blocked from entering echo. Thus, P2 is actually suspended, waiting for the availability of echo. 3. At some point, P1 is resumed, goes all the way through, and prints out x. 4. Now, echo is available, so P2 can be resumed; it can now call echo, and gets, and sends out, y. Homework: Problem

7 Multiprocessing case Consider the following situation, where P1 and P2 each execute on a separate processor, and both call echo:

    time        P1                    P2
    t1:  chin = getchar();
    t2:                        chin = getchar();
    t3:  chout = chin;         chout = chin;
    t4:  putchar(chout);
    t5:                        putchar(chout);

Again, we have the problem that the character input to P1 is lost before it is displayed: both processes end up displaying the character read by P2.

8 The same solution We can again add the ability to enforce that only one process can be executing echo at a time. Thus, 1. Both P1 and P2 are executing, each on a separate processor. P1 calls echo first. 2. While P1 is inside echo, P2 tries to call echo as well, but it has to be blocked. Therefore, it is suspended, waiting for the availability of echo. 3. At a later time, P1 completes the execution of echo and makes the procedure available again. Thus, P2 will resume its execution, and start to execute echo.

9 What is in common? In the uniprocessor case, the problem is that an interrupt can stop execution at any time; in the multiprocessor case, two processes execute simultaneously and both try to access the same global variable. The solution is the same: control access to the shared resource, which could be either data space, e.g., the variable chin, or an actual program segment, e.g., the echo() procedure.

10 Race condition A race condition occurs when multiple processes or threads read and write data items in such a way that the final result depends on the execution order. For example, assume two processes, P1 and P2, share a global variable a. At some point, P1 updates a to 1, and at some point, P2 updates a to 2. Thus, the two tasks are in a race to write into a. In this case, the loser of the race, i.e., the process that writes last, determines the final value of a. As another example, P3 and P4 share b and c, with their initial values being 1 and 2, respectively. At some point, P3 might do b=b+c; later, P4 might do c=b+c; Although they update different variables, the final values of the variables depend on the relative order of these two operations.
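To make the order dependence concrete, the following single-threaded sketch (an illustration only; the function names are invented for this example) runs the two statements of P3 and P4 in both possible orders, starting from b = 1 and c = 2:

```c
/* Execute P3's statement (b = b + c) and P4's statement (c = b + c)
 * in the two possible orders, starting from b = 1, c = 2, to show
 * that the final values depend on which statement runs first. */
void order_p3_first(int *b, int *c) {
    *b = *b + *c;   /* P3 runs first */
    *c = *b + *c;   /* then P4 */
}

void order_p4_first(int *b, int *c) {
    *c = *b + *c;   /* P4 runs first */
    *b = *b + *c;   /* then P3 */
}
```

With P3 first, the result is b = 3, c = 5; with P4 first, it is b = 4, c = 3. Whichever interleaving the scheduler happens to produce decides the outcome.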

11 Operating system concerns 1. The OS must be able to keep track of the various processes, using PCBs. 2. The OS must allocate and deallocate resources to the various processes, such as processor time, memory, files, and I/O devices. 3. It must protect data and physical resources from unintended interference. 4. The results of a process must be independent of its execution speed, relative to the speed of the other concurrent processes. This is referred to as speed independence. To understand this issue better, let's consider the many ways processes can interact with each other.

12 Process interaction 1. When processes are completely independent of each other, they are not intended to work together. On the other hand, they may compete with each other for the same resources, e.g., the same file, or the same printer. The OS must regulate these accesses. 2. Processes might not know each other directly, e.g., by their respective IDs, but they may share access to the same object, e.g., the same I/O buffer. Such processes cooperate with each other by sharing. 3. Finally, processes might communicate with each other, since they are designed to work jointly. These processes also exhibit cooperation.

13 Competing processes To manage competing processes, three control problems have to be dealt with. One is mutual exclusion. Assume two or more processes require access to a nonsharable resource, such as a printer. We will refer to such a resource as a critical resource, and the portion of the program that uses a critical resource as a critical section. It is important that only one process be allowed, at a time, into such a critical section. For example, we want any individual process to have total control of the printer while it prints its entire output. The enforcement of mutual exclusion may itself lead to problems. One of them is deadlock: possessing R1, P1 may request R2; while P2, possessing R2 exclusively, may want to have R1. P1 and P2 are deadlocked. (Recall the pepper and salt problem.)

14 Another problem could be starvation. Assume that P1, P2, and P3 all want to have resource R, and P1 now has R; thus both P2 and P3 are delayed. When P1 exits its critical section for R, assume that the OS gives R to P3. Further assume that P1 again asks for R before P3 exits its critical section, and the OS decides to give it back to P1. If this situation continues, then P2 will never get R, and is thus starved. Solutions to these problems have to involve both the OS and the processes: the OS is fundamental in allocating resources, while the processes have to be able to lock up their resources with the locking mechanism provided by the OS.

15 A general framework In the following program, the parbegin construct suspends the execution of main, initiates the concurrent execution of P1, ..., Pn, and, once all of them are done, resumes main. Each process includes a critical section and a remainder. Each function takes the name of the required resource, an integer, as its argument. Any process that attempts to enter its critical section while another process is in its critical section for the same resource is made to wait, or blocked.

16 The code

    const int n = /* number of processes */;
    void P(int i){
        while(true){
            entercritical(i);
            /* critical section */;
            exitcritical(i);
            /* remainder */;
        }
    }
    void main(){
        parbegin(P(R1), ..., P(Rn));
    }

We will discuss the implementation of the two locking functions, entercritical() and exitcritical(), later.

17 Sharing and cooperating Multiple processes may have access to shared data, and may use and update the data without reference to other processes, while knowing that those other processes may access the same data. Thus, the processes must cooperate with each other to ensure that the shared data are properly managed. Again, since all these data are stored in resources, the problems of mutual exclusion, deadlock, and starvation might occur, together with a new problem: data coherence. Homework: Problem

18 An example Assume that two pieces of data, a and b, have to be maintained such that the relation a = b always holds. Now consider the following two processes:

    P1:           P2:
    a = a + 1;    b = 2 * b;
    b = b + 1;    a = 2 * a;

If the state is initially consistent, and each process is executed separately, the resulting data will also be consistent. On the other hand, the following concurrent execution of the above two processes will leave the state inconsistent afterwards: a=a+1; b=2*b; b=b+1; a=2*a;
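To check this claim, the following single-threaded sketch (for illustration; the function and type names are invented here) replays both the bad interleaving above and a serial execution, starting from the consistent state a = b = 1:

```c
/* Replay the interleaving from the slide starting from the consistent
 * state a = b = 1, and also run each process to completion in turn,
 * so the two final states can be compared. */
typedef struct { int a, b; } state;

state run_interleaved(void) {
    state s = {1, 1};
    s.a = s.a + 1;   /* P1: a = a + 1 */
    s.b = 2 * s.b;   /* P2: b = 2 * b */
    s.b = s.b + 1;   /* P1: b = b + 1 */
    s.a = 2 * s.a;   /* P2: a = 2 * a */
    return s;        /* ends with a = 4, b = 3 */
}

state run_serial(void) {
    state s = {1, 1};
    s.a = s.a + 1; s.b = s.b + 1;   /* P1 runs to completion */
    s.b = 2 * s.b; s.a = 2 * s.a;   /* then P2 */
    return s;                       /* ends with a = 4, b = 4 */
}
```

The interleaved run ends with a = 4 and b = 3, breaking the invariant a = b, while the serial run keeps the state consistent.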

19 A solution Clearly, the above problem can be avoided if we make the whole sequence of statements using the shared a and b a critical section in each process. Thus, the concept of a critical section is also essential in the cooperating-process case.

20 Communicating processes When processes cooperate by communicating with each other, they participate in an effort that links all of them. The communication itself provides a way to synchronize, i.e., coordinate, the activities involved. Communication is usually carried out by passing messages. The corresponding primitives for sending and receiving messages could be either part of the programming language, or provided by the OS kernel. Since nothing is shared between processes in this category, mutual exclusion is not a needed mechanism. But the problems of deadlock and starvation persist. For example, two processes might each be waiting for a message from the other.

21 The mutual exclusion requirements Any facility that is to support mutual exclusion must meet the following requirements: 1. Only one process at a time is allowed to enter a critical section for a given resource. 2. A process that halts in its non-critical section must not interfere with other processes. 3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely. (no starvation) 4. When no process is in a critical section, any process that requests entry should be permitted to enter without delay. (no deadlock) 5. No assumptions should be made about relative process execution speeds. (speed independence) 6. A process remains in its critical section only for a finite amount of time. (no deadlock)

22 Hardware approaches On a uniprocessor machine, concurrent processes cannot be overlapped, only interleaved. A process, once started, will continue to run until it requests an OS service or is interrupted. Hence, to guarantee mutual exclusion, it suffices to prevent a running process from being interrupted. This can be done in the form of primitives provided by the kernel for enabling and disabling interrupts. Then the basic configuration is as follows:

    while(true){
        /* disable interrupts */;
        /* critical section */;
        /* enable interrupts */;
        /* remainder */;
    }

23 Since the critical section cannot be interrupted, or rather, since the running process cannot be interrupted while the critical section is being executed, no other process has a chance to get into the critical section at the same time. Hence, mutual exclusion is upheld. However, execution efficiency is degraded, since this approach limits the processor's ability to interleave programs. A second problem is that this will not work in a multiprocessor scenario, where it is still possible for a process, running on a different processor, to enter the critical section for the same resource.

24 Special instructions In a multiprocessor system, multiple processors work together, and independently, in a peer relationship. There is no interrupt mechanism between processors that can be used to enforce mutual exclusion. Instead, building on the mutual exclusion provided at the memory-location level, a few approaches have been suggested at the instruction level. These instructions carry out two actions, such as reading and writing, or reading and testing, of a single memory location in a single machine cycle, and are thus not subject to interference by other instructions. We will discuss two approaches based on such instructions.

25 Test and set This instruction can be defined as follows:

    boolean testset(int i){
        if(i == 0){
            i = 1;
            return true;
        }
        else
            return false;
    }

The idea is that this entire procedure is implemented in hardware as a single, atomic, instruction operating on the shared memory word i.
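The semantics can be sketched in ordinary C (illustration only: plain C like this is NOT atomic; the whole read-test-write below is what the hardware performs as one indivisible instruction):

```c
#include <stdbool.h>

/* Single-threaded sketch of test-and-set semantics, operating on a
 * lock word through a pointer. Returns true exactly when the word
 * was 0, i.e., when the caller has claimed the lock. */
bool testset(int *i) {
    if (*i == 0) {
        *i = 1;       /* claim the lock word */
        return true;  /* caller may enter its critical section */
    }
    return false;     /* word already set; caller must retry */
}
```

The first call on a zeroed word succeeds and sets it to 1; every later call fails until the word is reset to 0.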

26 An application Below is a mutual exclusion protocol based on the above test-and-set instruction.

    const int n = /* number of processes */;
    int bolt;
    void P(int i){
        while(true){
            while(!testset(bolt))
                /* do nothing */;
            /* critical section */;
            bolt = 0;
            /* remainder */;
        }
    }
    void main(){
        bolt = 0;
        parbegin(P(1), P(2), ..., P(n));
    }

27 How come? When the first process tries to get in, the value of the shared variable bolt is 0; thus this process gets into the critical section, after flipping bolt to 1. Hence, all the other processes, perhaps organized as a queue, will stay in the while loop, waiting for the lucky process to exit the critical section and reset bolt to 0, at which point the process at the front of the queue will be allowed to get into the critical section. Homework: Study the other approach, namely, the exchange instruction, and answer the following questions: 1) How does it accomplish mutual exclusion? 2) Why does the equation bolt + Σᵢ keyᵢ = n hold?

28 The exchange instruction This atomic instruction can be defined as follows:

    void exchange(int register, int memory){
        int temp;
        temp = memory;
        memory = register;
        register = temp;
    }

It exchanges the contents of a register and a memory location. During its execution, that memory location is blocked from access by any other instruction.

29 An application

    int const n = /* number of processes */;
    int bolt;
    void P(int i){
        int keyi;
        while(true){
            keyi = 1;
            do exchange(keyi, bolt)
            while(keyi != 0);
            /* critical section */;
            exchange(keyi, bolt);
            /* remainder */;
        }
    }
    void main(){
        bolt = 0;
        parbegin(P(1), P(2), ..., P(n));
    }

30 What happens? To implement the mutual exclusion mechanism, a shared variable, bolt, is initialized to 0. Each process uses a local variable keyi, initialized to 1. The only process that is allowed to enter its critical section is the one that finds bolt equal to 0. This process then excludes all other processes by setting bolt to 1, the value of its local variable keyi. When a process exits its critical section, it resets bolt back to 0, which is then picked up by the next looping process. At any moment, the following invariant holds: bolt + Σᵢ keyᵢ = n. If bolt is 0, no process is in its critical section. If it is 1, then exactly one process is in, i.e., the one whose key value is 0.
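The invariant can be checked mechanically. The sketch below (single-threaded, for illustration only; the exchange here is plain C, not the atomic instruction, and invariant_holds is a helper invented for this example) simulates acquire and release steps for a few processes and verifies that bolt + Σᵢ keyᵢ = n after every exchange:

```c
#define NPROC 4   /* number of simulated processes */

/* Plain-C version of the exchange operation (not atomic). */
void exchange(int *reg, int *mem) {
    int temp = *mem;
    *mem = *reg;
    *reg = temp;
}

/* Check the invariant bolt + key[0] + ... + key[n-1] == n. */
int invariant_holds(int bolt, const int key[], int n) {
    int sum = bolt;
    for (int i = 0; i < n; i++) sum += key[i];
    return sum == n;
}
```

An exchange can only swap the 0 and the 1 around; it never creates or destroys them, which is exactly why the sum stays constant.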

31 Properties Besides being simple, and thus easy to verify, this approach is applicable to any number of processes, on either a single-processor machine or a multiprocessor machine with shared memory. Finally, it can be used to support multiple critical sections, each associated with its own bolt variable. However, while a process is waiting for access, it keeps consuming processor time (busy waiting). Also, when the critical section becomes available again, the selection among the waiting processes is arbitrary; thus, some process may never get in (starvation). Finally, deadlock is also possible. For example, on a uniprocessor, P1 may be interrupted after entering a critical section, giving the processor to a higher-priority process P2. If P2 then tries to enter the same critical section, it busy-waits forever, while P1 can never exit the section since it has to wait for the higher-priority P2 to finish first. Homework: Problem

32 A software approach Mutual exclusion can also be implemented by a pure software approach, for concurrent processes that execute on a single-processor machine or on a multiprocessor machine with shared main memory. It is usually assumed that mutual exclusion holds at the memory-access level, i.e., simultaneous accesses to the same location in main memory are serialized by some mechanism, although the order of access is not specified. Other than that, no support from the hardware, OS, or programming language is assumed.

33 Semaphores The semaphore is one of the mechanisms provided by operating systems and programming languages to support concurrency. The basic idea is that two or more processes can cooperate by means of simple signals, such that a process can be forced to stop at a specific place until it has received a specific signal. For signaling, special variables called semaphores are used. To send a signal via a semaphore s, a process executes the primitive semsignal(s); to receive such a signal, it executes semwait(s). If the expected signal has not yet been transmitted, the process is suspended until the transmission happens.

34 A few requirements To achieve such effects, a semaphore is defined as a variable with an integer value, together with the following operations: 1. A semaphore may be initialized to a nonnegative value. 2. The semwait operation decrements the value. If the value becomes negative, the process that executes semwait is blocked. 3. The semsignal operation increments the value. If the value is less than or equal to 0 after the increment, then a process blocked by a semwait operation is released from the blocked state. There are no other ways to inspect or manipulate a semaphore. Homework: Problem

35 Semaphore code

    struct semaphore{
        int count;
        queuetype queue;
    };
    void semwait(semaphore s){
        s.count--;
        if(s.count < 0){
            /* place this process in s.queue */;
            /* block this process */;
        }
    }
    void semsignal(semaphore s){
        s.count++;
        if(s.count <= 0){
            /* remove a process from s.queue */;
            /* place it on the ready list */;
        }
    }

36 Binary semaphore The following binary semaphore is easier to implement, but has the same power as the general one.

    struct binary_semaphore{
        enum {zero, one} value;
        queuetype queue;
    };
    void semwaitb(binary_semaphore s){
        if(s.value == one)
            s.value = zero;
        else {
            /* place this process in s.queue */;
            /* block this process */;
        }
    }

37

    void semsignalb(binary_semaphore s){
        if(s.queue.is_empty())
            s.value = one;
        else {
            /* remove a process from s.queue */;
            /* place it on the ready list */;
        }
    }

Homework: Problem

38 Strong and weak semaphores For semaphores, a queue is used to hold the processes waiting on the semaphore. A question arises when we have to decide the order in which the waiting processes are removed once the semaphore becomes available again. The fairest policy is first-in, first-out: the process that has been blocked the longest is released first. Such a semaphore is called a strong semaphore; otherwise, it is a weak semaphore. Both can be used to implement mutual exclusion algorithms. However, while a strong semaphore prevents starvation, a weak semaphore cannot. Hence, we will assume the strong version.

39 An example Below is shown an execution of a strong semaphore, where processes A, B, and C depend on a result produced by process D.

40 Supporting mutual exclusion Below is a straightforward implementation of mutual exclusion.

    int const n = /* number of processes */;
    semaphore s = 1;
    void P(int i){
        while(true){
            semwait(s);
            /* critical section */;
            semsignal(s);
            /* remainder */;
        }
    }
    void main(){
        parbegin(P(1), P(2), ..., P(n));
    }

41 How does it work? Each process executes a semwait operation before it enters its critical section. If the value of s becomes negative, the process is suspended. If the value is 1, it is decremented to 0, and the process enters its critical section immediately. Because s is now 0, no further process can enter the critical section, since any such process has to execute a semwait operation first, which leaves a negative value in s. The semaphore is initialized to 1. Hence, the first process is able to enter the critical section immediately, setting s to 0. Any number of further processes will keep on decrementing s, and be put into the queue.

42 When the process that initially entered the critical section completes and leaves, it executes a semsignal operation which increments s by 1. As a result, one of the blocked processes will be dequeued, and put into the Ready state. Thus, when it is scheduled next time, it may enter the critical section.

43 The program for semaphore-based mutual exclusion can also handle the case where more than one process is allowed to be in its critical section at a time. This can be done simply by initializing s to the desired number. Thus, at any time, the value of s.count can be interpreted as follows: 1. When s.count is nonnegative, it is the number of processes that can execute semwait(s) without suspension. 2. Otherwise, the magnitude of s.count is the number of processes suspended in s.queue.
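This bookkeeping can be sketched on its own (an illustration only: no real blocking happens, and the helper names can_proceed and num_suspended are invented here; they just decode the meaning of count as described above):

```c
/* Bookkeeping-only model of a counting semaphore: semwait decrements,
 * semsignal increments, and a negative count's magnitude is the number
 * of processes that would be sitting in the queue. */
typedef struct { int count; } sem;

void semwait(sem *s)   { s->count--; }
void semsignal(sem *s) { s->count++; }

/* After a semwait, the caller proceeds iff the count stayed nonnegative. */
int can_proceed(const sem *s)   { return s->count >= 0; }
int num_suspended(const sem *s) { return s->count < 0 ? -s->count : 0; }
```

Starting from count = 2, two semwait calls succeed, a third drives the count to -1 (one suspended process), and one semsignal brings it back to 0.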

44 The producer/consumer problem This is one of the most common problems in concurrent processing: one or more producers generate data items and put them into a buffer; a single consumer takes items out of the buffer one at a time. The requirement is that only one agent, either a producer or the consumer, may access the buffer at a time, since we want to prevent an overlap of buffer operations. We will look at a few solutions to illustrate both the power and the pitfalls of semaphores.

45 To begin with Assume that the buffer is infinite and consists of a linear array of elements. Then the two functions can be defined as follows:

    producer:
    while(true){
        /* produce item v */;
        b[in++] = v;
    }

    consumer:
    while(true){
        while(in <= out)
            /* do nothing */;
        w = b[out++];
        /* consume w */;
    }

46 Below is the structure of the buffer b. The producer generates items and stores them in b at its own speed; whenever it puts something into the buffer, the index in is incremented. The consumer proceeds in the same fashion, but must make sure that it does not try to read from an empty buffer.

    b[1]  b[2]  b[3]  b[4]  b[5]  ...
           out               in

Now let's try to implement a solution, using binary semaphores.

47 The first attempt Instead of using both in and out, we can keep track of n, the number of items in the buffer, i.e., the difference between in and out. We also use a binary semaphore s to enforce mutual exclusion, and another one, delay, to force the consumer to wait when the buffer is empty. The producer is free to add to the buffer at any time. It performs semwaitb(s) before adding and semsignalb(s) afterwards, to make sure that the consumer does not try to take something out while it is adding an item. The producer also increments n. If n becomes 1, the buffer was empty before this addition, so the producer also executes semsignalb(delay) to alert the consumer.

48 Now let's look at the code.

    int n;
    binary_semaphore s = 1;
    binary_semaphore delay = 0;
    void producer(){
        while(true){
            produce();
            semwaitb(s);
            append();
            n++;
            if(n == 1) semsignalb(delay);
            semsignalb(s);
        }
    }
    void consumer(){
        semwaitb(delay);
        while(true){
            semwaitb(s);
            take();
            n--;
            semsignalb(s);
            consume();
            if(n == 0) semwaitb(delay);
        }
    }
    void main(){
        n = 0;
        parbegin(producer, consumer);
    }

49 The consumer begins by waiting for the first item to be produced, using semwaitb(delay). It then takes an item and decrements n in its critical section. If the producer is able to produce enough items, then the consumer will rarely block on delay, because n is usually positive; hence both of them will run smoothly. Otherwise, when the consumer exhausts the buffer, it has to reset delay and is forced to wait until the producer generates more items. But in some cases the above code produces an incorrect result. Homework: Figure out the error by checking through Table 5.3.

50 The above problem cannot be easily fixed by simply moving the test of n into the consumer's critical section: the consumer might then wait on delay while still holding s, so the producer could never enter its own critical section to signal delay, and we would have a deadlock. A correct fix is to use an extra variable, set inside the consumer's critical section, to carry the needed value of n out of the critical section, as shown below.

    void consumer(){
        int m;
        semwaitb(delay);
        while(true){
            semwaitb(s);
            take();
            n--;
            m = n;
            semsignalb(s);
            consume();
            if(m == 0) semwaitb(delay);
        }
    }

Homework: Draw a figure similar to Table 5.3 to show that, with the revised code, the problem is solved; but that if we move the test of n==0 into the critical section, a deadlock will occur.

51 Yet another solution Using general semaphores, we can have a cleaner solution, as shown below.

    semaphore n = 0;
    semaphore s = 1;
    void producer(){
        while(true){
            produce();
            semwait(s);
            append();
            semsignal(s);
            semsignal(n);
        }
    }
    void consumer(){
        while(true){
            semwait(n);
            semwait(s);
            take();
            semsignal(s);
            consume();
        }
    }

52 What about... 1. Reversing semsignal(s) and semsignal(n)? Then semsignal(n) would be included in the critical section of the producer. This does not matter as far as the consumer is concerned, since it needs to go through both semaphores before taking anything out. 2. Reversing semwait(n) and semwait(s)? This would be a serious problem. Assume the consumer gets into its critical section when the buffer is empty; then it gets stuck there, without releasing the s semaphore. As a result, no producer can get into its critical section to add another item. This leads to a deadlock.

53 Get real In reality, the buffer is certainly finite. It is treated like a queue, i.e., as a circular list, which requires modular arithmetic on the indices. For example:

    producer:
    while(true){
        /* produce item v */;
        while((in + 1) % n == out)
            /* do nothing */;
        b[in] = v;
        in = (in + 1) % n;
    }
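The index arithmetic can be exercised on its own, without any concurrency. The following single-threaded sketch (illustration only; the ring type and the ring_* helper names are invented here) uses the same full/empty tests: with a buffer of size N, at most N - 1 items are stored, and the buffer is full exactly when (in + 1) % N == out:

```c
#define N 4   /* buffer capacity; holds at most N - 1 items */

typedef struct { int b[N]; int in, out; } ring;

int ring_full(const ring *r)  { return (r->in + 1) % N == r->out; }
int ring_empty(const ring *r) { return r->in == r->out; }

/* Producer's step: returns 0 when full (where the slide busy-waits). */
int ring_put(ring *r, int v) {
    if (ring_full(r)) return 0;
    r->b[r->in] = v;
    r->in = (r->in + 1) % N;
    return 1;
}

/* Consumer's step: returns 0 when empty. */
int ring_get(ring *r, int *v) {
    if (ring_empty(r)) return 0;
    *v = r->b[r->out];
    r->out = (r->out + 1) % N;
    return 1;
}
```

Keeping one slot unused is what lets in == out mean "empty" unambiguously; a buffer of size 4 therefore fills up after 3 puts.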

54 A solution In the code below, semaphore e is used to keep track of the number of empty slots.

    const int sizeofbuffer = /* buffer size */;
    semaphore n = 0;
    semaphore s = 1;
    semaphore e = sizeofbuffer;
    void producer(){
        while(true){
            produce();
            semwait(e);
            semwait(s);
            append();
            semsignal(s);
            semsignal(n);
        }
    }
    void consumer(){
        while(true){
            semwait(n);
            semwait(s);
            take();
            semsignal(s);
            semsignal(e);
            consume();
        }
    }

55 Semaphore implementation The key is to implement both semwait and semsignal as atomic operations: only one process at any time may manipulate a semaphore with either a semwait or a semsignal operation. Any of the software mutual exclusion schemes would certainly work, but leads to large overhead. The alternative is to use one of the hardware schemes. For example, we can use the test-and-set instruction to implement a semaphore.

56 One implementation

    semwait(s){
        while(!testset(s.flag))
            /* do nothing */;
        s.count--;
        if(s.count < 0){
            /* place this process in s.queue */;
            /* block this process (and set s.flag to 0) */;
        }
        else
            s.flag = 0;
    }
    semsignal(s){
        while(!testset(s.flag))
            /* do nothing */;
        s.count++;
        if(s.count <= 0){
            /* remove a process from s.queue */;
            /* place it on the Ready list */;
        }
        s.flag = 0;
    }

Homework: Problems 5.8 and
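On modern hardware, the testset on s.flag corresponds to a real atomic primitive. A minimal sketch using C11's atomic_flag (not part of the slides; atomic_flag_test_and_set returns the PREVIOUS value, so the flag was free exactly when it returns false):

```c
#include <stdatomic.h>

/* A test-and-set spinlock built on C11 atomics. The flag plays the
 * role of s.flag in the slide's implementation. */
atomic_flag lock_word = ATOMIC_FLAG_INIT;

void spin_lock(atomic_flag *f) {
    while (atomic_flag_test_and_set(f))
        ;   /* busy wait: flag was already set by someone else */
}

void spin_unlock(atomic_flag *f) {
    atomic_flag_clear(f);
}
```

The busy waiting here guards only the few instructions that manipulate count and queue, so the spin is expected to be short, unlike spinning for a whole critical section.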

57 Other mechanisms Semaphores provide a primitive, but powerful and flexible, way to enforce mutual exclusion and to coordinate processes. But they are not very easy to use, since the wait and signal operations may be scattered all over a program, and their overall effect is hard to see. There are other mechanisms as well. The monitor is a programming-language construct that provides functionality equivalent to that of semaphores, but is easier to use. It is essentially a software module consisting of a few procedures, an initialization sequence, and local data. The monitor concept has been implemented in a few languages, including Java.

58 Monitors all share the following properties: 1. The local variables are accessible only by the monitor's procedures, and not by any external procedure. 2. A process enters the monitor by invoking one of its procedures. 3. Only one process may be executing in the monitor at any time; any other process that has invoked the monitor is suspended, waiting for the monitor to become available. Obviously, the third property enforces exactly the mutual exclusion we are expecting.

59 An example Let's look at a solution of the bounded-buffer producer/consumer problem in terms of a monitor. Here, we define two condition variables, notfull and notempty. They resemble semaphores, except that a csignal on a condition with no waiting process is simply lost.

    monitor boundedbuffer;
    char buffer[n];
    int nextin, nextout;
    int count;
    cond notfull, notempty;

    void append(char x){
        if(count == n) cwait(notfull);
        buffer[nextin] = x;
        nextin = (nextin + 1) % n;
        count++;
        csignal(notempty);
    }

60 The take method checks whether there is anything to take. If not, it waits on the notempty condition. Otherwise, it takes an item and decrements the counter.

    void take(char x){
        if(count == 0) cwait(notempty);
        x = buffer[nextout];
        nextout = (nextout + 1) % n;
        count--;
        csignal(notfull);
    }
    /* monitor body: initialization code */
    {
        nextin = nextout = count = 0;
    }

61 Both producer and consumer can only use the defined append and take procedures to add items to, and remove items from, the buffer.

    void producer(){
        char x;
        while(true){
            produce(x);
            append(x);
        }
    }
    void consumer(){
        char x;
        while(true){
            take(x);
            consume(x);
        }
    }
    void main(){
        parbegin(producer, consumer);
    }
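The monitor above can be approximated with POSIX threads: a mutex plays the role of the monitor lock, and pthread condition variables stand in for notfull and notempty. This is a sketch, not the slides' pseudocode: the buffer size, item count, and run_demo driver are invented for the example, and the cwait of the monitor becomes a while loop because pthreads allows spurious wakeups.

```c
#include <pthread.h>

#define BN 8        /* buffer capacity (assumed for this demo) */
#define ITEMS 100   /* items to transfer in the demo */

static char buffer[BN];
static int nextin, nextout, count;
static pthread_mutex_t mon = PTHREAD_MUTEX_INITIALIZER;   /* monitor lock */
static pthread_cond_t notfull  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t notempty = PTHREAD_COND_INITIALIZER;

void append(char x) {
    pthread_mutex_lock(&mon);
    while (count == BN) pthread_cond_wait(&notfull, &mon);
    buffer[nextin] = x;
    nextin = (nextin + 1) % BN;
    count++;
    pthread_cond_signal(&notempty);
    pthread_mutex_unlock(&mon);
}

char take(void) {
    pthread_mutex_lock(&mon);
    while (count == 0) pthread_cond_wait(&notempty, &mon);
    char x = buffer[nextout];
    nextout = (nextout + 1) % BN;
    count--;
    pthread_cond_signal(&notfull);
    pthread_mutex_unlock(&mon);
    return x;
}

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITEMS; i++) append((char)(i % 26 + 'a'));
    return NULL;
}

/* Run one producer thread against an in-line consumer;
 * returns the sum of all consumed characters. */
long run_demo(void) {
    pthread_t p;
    long sum = 0;
    pthread_create(&p, NULL, producer, NULL);
    for (int i = 0; i < ITEMS; i++) sum += take();
    pthread_join(p, NULL);
    return sum;
}
```

Locking the mutex on entry to append and take is what gives the monitor's one-process-at-a-time property; the condition variables reproduce the cwait/csignal coordination.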

62 Message passing When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. The former serves the purpose of enforcing mutual exclusion, and the latter is needed for processes to cooperate. One way to meet both requirements is to pass messages between processes. This mechanism has an additional advantage: besides working on uniprocessor systems and shared-memory multiprocessors, it can also be used in distributed systems. Homework: Self-study 5.5, and answer the following questions: 1) How does message passing accomplish synchronization? 2) How is mutual exclusion achieved with message passing?


More information

Dr. D. M. Akbar Hussain DE5 Department of Electronic Systems

Dr. D. M. Akbar Hussain DE5 Department of Electronic Systems Concurrency 1 Concurrency Execution of multiple processes. Multi-programming: Management of multiple processes within a uni- processor system, every system has this support, whether big, small or complex.

More information

Semaphores. To avoid busy waiting: when a process has to wait, it will be put in a blocked queue of processes waiting for the same event

Semaphores. To avoid busy waiting: when a process has to wait, it will be put in a blocked queue of processes waiting for the same event Semaphores Synchronization tool (provided by the OS) that do not require busy waiting A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually

More information

Semaphores. Semaphores. Semaphore s operations. Semaphores: observations

Semaphores. Semaphores. Semaphore s operations. Semaphores: observations Semaphores Synchronization tool (provided by the OS) that do not require busy waiting A semaphore S is an integer variable that, apart from initialization, can only be accessed through 2 atomic and mutually

More information

Mutual Exclusion and Synchronization

Mutual Exclusion and Synchronization Mutual Exclusion and Synchronization Concurrency Defined Single processor multiprogramming system Interleaving of processes Multiprocessor systems Processes run in parallel on different processors Interleaving

More information

semsignal (s) & semwait (s):

semsignal (s) & semwait (s): Semaphores Two or more processes can cooperate through signals A semaphore is a special variable used for signaling semsignal (s) & semwait (s): primitive used to transmit a signal or to wait for a signal

More information

Process Management And Synchronization

Process Management And Synchronization Process Management And Synchronization In a single processor multiprogramming system the processor switches between the various jobs until to finish the execution of all jobs. These jobs will share the

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information

CS420: Operating Systems. Process Synchronization

CS420: Operating Systems. Process Synchronization Process Synchronization James Moscola Department of Engineering & Computer Science York College of Pennsylvania Based on Operating System Concepts, 9th Edition by Silberschatz, Galvin, Gagne Background

More information

Concurrency: Mutual Exclusion and Synchronization - Part 2

Concurrency: Mutual Exclusion and Synchronization - Part 2 CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu October 20, 2003 Concurrency: Mutual Exclusion and Synchronization - Part 2 To avoid all kinds of problems in either software approaches or

More information

Dept. of CSE, York Univ. 1

Dept. of CSE, York Univ. 1 EECS 3221.3 Operating System Fundamentals No.5 Process Synchronization(1) Prof. Hui Jiang Dept of Electrical Engineering and Computer Science, York University Background: cooperating processes with shared

More information

Chapter 6: Process Synchronization. Operating System Concepts 8 th Edition,

Chapter 6: Process Synchronization. Operating System Concepts 8 th Edition, Chapter 6: Process Synchronization, Silberschatz, Galvin and Gagne 2009 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores

More information

Introduction to OS Synchronization MOS 2.3

Introduction to OS Synchronization MOS 2.3 Introduction to OS Synchronization MOS 2.3 Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Introduction to OS 1 Challenge How can we help processes synchronize with each other? E.g., how

More information

Interprocess Communication By: Kaushik Vaghani

Interprocess Communication By: Kaushik Vaghani Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the

More information

Principles of Operating Systems CS 446/646

Principles of Operating Systems CS 446/646 Principles of Operating Systems CS 446/646 2. Processes a. Process Description & Control b. Threads c. Concurrency Types of process interaction Mutual exclusion by busy waiting Mutual exclusion & synchronization

More information

MS Windows Concurrency Mechanisms Prepared By SUFIAN MUSSQAA AL-MAJMAIE

MS Windows Concurrency Mechanisms Prepared By SUFIAN MUSSQAA AL-MAJMAIE MS Windows Concurrency Mechanisms Prepared By SUFIAN MUSSQAA AL-MAJMAIE 163103058 April 2017 Basic of Concurrency In multiple processor system, it is possible not only to interleave processes/threads but

More information

Chapter 7: Process Synchronization!

Chapter 7: Process Synchronization! Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Monitors 7.1 Background Concurrent access to shared

More information

Process Synchronization

Process Synchronization Process Synchronization Concurrent access to shared data may result in data inconsistency Multiple threads in a single process Maintaining data consistency requires mechanisms to ensure the orderly execution

More information

Chapter 6: Process Synchronization

Chapter 6: Process Synchronization Chapter 6: Process Synchronization Chapter 6: Synchronization 6.1 Background 6.2 The Critical-Section Problem 6.3 Peterson s Solution 6.4 Synchronization Hardware 6.5 Mutex Locks 6.6 Semaphores 6.7 Classic

More information

CHAPTER 6: PROCESS SYNCHRONIZATION

CHAPTER 6: PROCESS SYNCHRONIZATION CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background

More information

Chapter 6: Synchronization. Operating System Concepts 8 th Edition,

Chapter 6: Synchronization. Operating System Concepts 8 th Edition, Chapter 6: Synchronization, Silberschatz, Galvin and Gagne 2009 Outline Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization

More information

Classic Problems of Synchronization

Classic Problems of Synchronization Classic Problems of Synchronization Bounded-Buffer Problem s-s Problem Dining Philosophers Problem Monitors 2/21/12 CSE325 - Synchronization 1 s-s Problem s s 2/21/12 CSE325 - Synchronization 2 Problem

More information

Lecture Topics. Announcements. Today: Concurrency (Stallings, chapter , 5.7) Next: Exam #1. Self-Study Exercise #5. Project #3 (due 9/28)

Lecture Topics. Announcements. Today: Concurrency (Stallings, chapter , 5.7) Next: Exam #1. Self-Study Exercise #5. Project #3 (due 9/28) Lecture Topics Today: Concurrency (Stallings, chapter 5.1-5.4, 5.7) Next: Exam #1 1 Announcements Self-Study Exercise #5 Project #3 (due 9/28) Project #4 (due 10/12) 2 Exam #1 Tuesday, 10/3 during lecture

More information

Process Synchronization - I

Process Synchronization - I CSE 421/521 - Operating Systems Fall 2013 Lecture - VIII Process Synchronization - I Tevfik Koşar University at uffalo September 26th, 2013 1 Roadmap Process Synchronization Race Conditions Critical-Section

More information

Process Synchronization: Semaphores. CSSE 332 Operating Systems Rose-Hulman Institute of Technology

Process Synchronization: Semaphores. CSSE 332 Operating Systems Rose-Hulman Institute of Technology Process Synchronization: Semaphores CSSE 332 Operating Systems Rose-Hulman Institute of Technology Critical-section problem solution 1. Mutual Exclusion - If process Pi is executing in its critical section,

More information

Chapter 6: Synchronization. Chapter 6: Synchronization. 6.1 Background. Part Three - Process Coordination. Consumer. Producer. 6.

Chapter 6: Synchronization. Chapter 6: Synchronization. 6.1 Background. Part Three - Process Coordination. Consumer. Producer. 6. Part Three - Process Coordination Chapter 6: Synchronization 6.1 Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure

More information

Chapter 6: Process Synchronization

Chapter 6: Process Synchronization Chapter 6: Process Synchronization Objectives Introduce Concept of Critical-Section Problem Hardware and Software Solutions of Critical-Section Problem Concept of Atomic Transaction Operating Systems CS

More information

1 Process Coordination

1 Process Coordination COMP 730 (242) Class Notes Section 5: Process Coordination 1 Process Coordination Process coordination consists of synchronization and mutual exclusion, which were discussed earlier. We will now study

More information

COP 4225 Advanced Unix Programming. Synchronization. Chi Zhang

COP 4225 Advanced Unix Programming. Synchronization. Chi Zhang COP 4225 Advanced Unix Programming Synchronization Chi Zhang czhang@cs.fiu.edu 1 Cooperating Processes Independent process cannot affect or be affected by the execution of another process. Cooperating

More information

IV. Process Synchronisation

IV. Process Synchronisation IV. Process Synchronisation Operating Systems Stefan Klinger Database & Information Systems Group University of Konstanz Summer Term 2009 Background Multiprogramming Multiple processes are executed asynchronously.

More information

PESIT Bangalore South Campus Hosur road, 1km before Electronic City, Bengaluru -100 Department of MCA

PESIT Bangalore South Campus Hosur road, 1km before Electronic City, Bengaluru -100 Department of MCA SOLUTION SET- TEST 2 Subject & Code : Operating System (16MCA24) Name of faculty : Ms. Richa Sharma Max Marks: 40 1 Explain in detail Symmetric multiprocessing 8 2 Explain in detail principles of concurrency

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

Concurrency Principles

Concurrency Principles Concurrency Concurrency Principles Where is the problem Interleaving: Multi-programming: Management of multiple processes within a uniprocessor system, every system has this support, whether big, small

More information

Concurrent Processes Rab Nawaz Jadoon

Concurrent Processes Rab Nawaz Jadoon Concurrent Processes Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS Lahore Pakistan Operating System Concepts Concurrent Processes If more than one threads

More information

Process Synchronization

Process Synchronization Chapter 7 Process Synchronization 1 Chapter s Content Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors 2 Background

More information

Lecture 8: September 30

Lecture 8: September 30 CMPSCI 377 Operating Systems Fall 2013 Lecture 8: September 30 Lecturer: Prashant Shenoy Scribe: Armand Halbert 8.1 Semaphores A semaphore is a more generalized form of a lock that can be used to regulate

More information

Synchronization Principles

Synchronization Principles Synchronization Principles Gordon College Stephen Brinton The Problem with Concurrency Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms

More information

Chapter 7: Process Synchronization. Background. Illustration

Chapter 7: Process Synchronization. Background. Illustration Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris

More information

Chapter 5: Process Synchronization. Operating System Concepts Essentials 2 nd Edition

Chapter 5: Process Synchronization. Operating System Concepts Essentials 2 nd Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, Review

CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, Review CSc33200: Operating Systems, CS-CCNY, Fall 2003 Jinzhong Niu December 10, 2003 Review 1 Overview 1.1 The definition, objectives and evolution of operating system An operating system exploits and manages

More information

Lecture Topics. Announcements. Today: Concurrency: Mutual Exclusion (Stallings, chapter , 5.7)

Lecture Topics. Announcements. Today: Concurrency: Mutual Exclusion (Stallings, chapter , 5.7) Lecture Topics Today: Concurrency: Mutual Exclusion (Stallings, chapter 5.1-5.4, 5.7) Next: Concurrency: Deadlock and Starvation (Stallings, chapter 6.1, 6.6-6.8) 1 Announcements Self-Study Exercise #5

More information

Chapter 7: Process Synchronization. Background

Chapter 7: Process Synchronization. Background Chapter 7: Process Synchronization Background The Critical-Section Problem Synchronization Hardware Semaphores Classical Problems of Synchronization Critical Regions Monitors Synchronization in Solaris

More information

Dealing with Issues for Interprocess Communication

Dealing with Issues for Interprocess Communication Dealing with Issues for Interprocess Communication Ref Section 2.3 Tanenbaum 7.1 Overview Processes frequently need to communicate with other processes. In a shell pipe the o/p of one process is passed

More information

CSC 1600: Chapter 6. Synchronizing Threads. Semaphores " Review: Multi-Threaded Processes"

CSC 1600: Chapter 6. Synchronizing Threads. Semaphores  Review: Multi-Threaded Processes CSC 1600: Chapter 6 Synchronizing Threads with Semaphores " Review: Multi-Threaded Processes" 1 badcnt.c: An Incorrect Program" #define NITERS 1000000 unsigned int cnt = 0; /* shared */ int main() pthread_t

More information

Concurrency. On multiprocessors, several threads can execute simultaneously, one on each processor.

Concurrency. On multiprocessors, several threads can execute simultaneously, one on each processor. Synchronization 1 Concurrency On multiprocessors, several threads can execute simultaneously, one on each processor. On uniprocessors, only one thread executes at a time. However, because of preemption

More information

Module 1. Introduction:

Module 1. Introduction: Module 1 Introduction: Operating system is the most fundamental of all the system programs. It is a layer of software on top of the hardware which constitutes the system and manages all parts of the system.

More information

Midterm on next week Tuesday May 4. CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9

Midterm on next week Tuesday May 4. CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9 CS 361 Concurrent programming Drexel University Fall 2004 Lecture 9 Bruce Char and Vera Zaychik. All rights reserved by the author. Permission is given to students enrolled in CS361 Fall 2004 to reproduce

More information

Process Synchronization

Process Synchronization Process Synchronization Concurrent access to shared data in the data section of a multi-thread process, in the shared memory of multiple processes, or in a shared file Although every example in this chapter

More information

CS3502 OPERATING SYSTEMS

CS3502 OPERATING SYSTEMS CS3502 OPERATING SYSTEMS Spring 2018 Synchronization Chapter 6 Synchronization The coordination of the activities of the processes Processes interfere with each other Processes compete for resources Processes

More information

Concurrency. On multiprocessors, several threads can execute simultaneously, one on each processor.

Concurrency. On multiprocessors, several threads can execute simultaneously, one on each processor. Synchronization 1 Concurrency On multiprocessors, several threads can execute simultaneously, one on each processor. On uniprocessors, only one thread executes at a time. However, because of preemption

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

Process Synchronization

Process Synchronization Process Synchronization Chapter 6 2015 Prof. Amr El-Kadi Background Concurrent access to shared data may result in data inconsistency Maintaining data consistency requires mechanisms to ensure the orderly

More information

CSC501 Operating Systems Principles. Process Synchronization

CSC501 Operating Systems Principles. Process Synchronization CSC501 Operating Systems Principles Process Synchronization 1 Last Lecture q Process Scheduling Question I: Within one second, how many times the timer interrupt will occur? Question II: Within one second,

More information

Module 6: Process Synchronization. Operating System Concepts with Java 8 th Edition

Module 6: Process Synchronization. Operating System Concepts with Java 8 th Edition Module 6: Process Synchronization 6.1 Silberschatz, Galvin and Gagne 2009 Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores

More information

Process Coordination

Process Coordination Process Coordination Why is it needed? Processes may need to share data More than one process reading/writing the same data (a shared file, a database record, ) Output of one process being used by another

More information

Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering)

Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) A. Multiple Choice Questions (60 questions) Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) Unit-I 1. What is operating system? a) collection of programs that manages hardware

More information

Chapters 5 and 6 Concurrency

Chapters 5 and 6 Concurrency Operating Systems: Internals and Design Principles, 6/E William Stallings Chapters 5 and 6 Concurrency Patricia Roy Manatee Community College, Venice, FL 2008, Prentice Hall Concurrency When several processes/threads

More information

Chapter 5 Asynchronous Concurrent Execution

Chapter 5 Asynchronous Concurrent Execution Chapter 5 Asynchronous Concurrent Execution Outline 5.1 Introduction 5.2 Mutual Exclusion 5.2.1 Java Multithreading Case Study 5.2.2 Critical Sections 5.2.3 Mutual Exclusion Primitives 5.3 Implementing

More information

Process Synchronization

Process Synchronization TDDI04 Concurrent Programming, Operating Systems, and Real-time Operating Systems Process Synchronization [SGG7] Chapter 6 Copyright Notice: The lecture notes are mainly based on Silberschatz s, Galvin

More information

Process/Thread Synchronization

Process/Thread Synchronization CSE325 Principles of Operating Systems Process/Thread Synchronization David Duggan dduggan@sandia.gov February 14, 2013 Reading Assignment 7 Chapter 7 Deadlocks, due 2/21 2/14/13 CSE325: Synchronization

More information

Process Synchronization

Process Synchronization CSC 4103 - Operating Systems Spring 2007 Lecture - VI Process Synchronization Tevfik Koşar Louisiana State University February 6 th, 2007 1 Roadmap Process Synchronization The Critical-Section Problem

More information

Lesson 6: Process Synchronization

Lesson 6: Process Synchronization Lesson 6: Process Synchronization Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks Semaphores Classic Problems of Synchronization

More information

Concurrency Control. Synchronization. Brief Preview of Scheduling. Motivating Example. Motivating Example (Cont d) Interleaved Schedules

Concurrency Control. Synchronization. Brief Preview of Scheduling. Motivating Example. Motivating Example (Cont d) Interleaved Schedules Brief Preview of Scheduling Concurrency Control Nan Niu (nn@cs.toronto.edu) CSC309 -- Summer 2008 Multiple threads ready to run Some mechanism for switching between them Context switches Some policy for

More information

Chapter 6: Process Synchronization. Module 6: Process Synchronization

Chapter 6: Process Synchronization. Module 6: Process Synchronization Chapter 6: Process Synchronization Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization

More information

PESIT Bangalore South Campus

PESIT Bangalore South Campus INTERNAL ASSESSMENT TEST II Date: 04/04/2018 Max Marks: 40 Subject & Code: Operating Systems 15CS64 Semester: VI (A & B) Name of the faculty: Mrs.Sharmila Banu.A Time: 8.30 am 10.00 am Answer any FIVE

More information

1. Motivation (Race Condition)

1. Motivation (Race Condition) COSC4740-01 Operating Systems Design, Fall 2004, Byunggu Yu Chapter 6 Process Synchronization (textbook chapter 7) Concurrent access to shared data in the data section of a multi-thread process, in the

More information

Synchronization I. Jo, Heeseung

Synchronization I. Jo, Heeseung Synchronization I Jo, Heeseung Today's Topics Synchronization problem Locks 2 Synchronization Threads cooperate in multithreaded programs To share resources, access shared data structures Also, to coordinate

More information

Models of concurrency & synchronization algorithms

Models of concurrency & synchronization algorithms Models of concurrency & synchronization algorithms Lecture 3 of TDA383/DIT390 (Concurrent Programming) Carlo A. Furia Chalmers University of Technology University of Gothenburg SP3 2016/2017 Today s menu

More information

General Objectives: To understand the process management in operating system. Specific Objectives: At the end of the unit you should be able to:

General Objectives: To understand the process management in operating system. Specific Objectives: At the end of the unit you should be able to: F2007/Unit5/1 UNIT 5 OBJECTIVES General Objectives: To understand the process management in operating system Specific Objectives: At the end of the unit you should be able to: define program, process and

More information

Achieving Synchronization or How to Build a Semaphore

Achieving Synchronization or How to Build a Semaphore Achieving Synchronization or How to Build a Semaphore CS 241 March 12, 2012 Copyright University of Illinois CS 241 Staff 1 Announcements MP5 due tomorrow Jelly beans... Today Building a Semaphore If time:

More information

G52CON: Concepts of Concurrency

G52CON: Concepts of Concurrency G52CON: Concepts of Concurrency Lecture 11: Semaphores I" Brian Logan School of Computer Science bsl@cs.nott.ac.uk Outline of this lecture" problems with Peterson s algorithm semaphores implementing semaphores

More information

Interprocess Communication and Synchronization

Interprocess Communication and Synchronization Chapter 2 (Second Part) Interprocess Communication and Synchronization Slide Credits: Jonathan Walpole Andrew Tanenbaum 1 Outline Race Conditions Mutual Exclusion and Critical Regions Mutex s Test-And-Set

More information

PROCESS SYNCHRONIZATION

PROCESS SYNCHRONIZATION PROCESS SYNCHRONIZATION Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization

More information

Module 6: Process Synchronization

Module 6: Process Synchronization Module 6: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization Examples Atomic

More information

Process Synchronization. CISC3595, Spring 2015 Dr. Zhang

Process Synchronization. CISC3595, Spring 2015 Dr. Zhang Process Synchronization CISC3595, Spring 2015 Dr. Zhang 1 Concurrency OS supports multi-programming In single-processor system, processes are interleaved in time In multiple-process system, processes execution

More information

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition

Chapter 5: Process Synchronization. Operating System Concepts 9 th Edition Chapter 5: Process Synchronization Silberschatz, Galvin and Gagne 2013 Chapter 5: Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Mutex Locks

More information

Process Synchronization

Process Synchronization Process Synchronization Reading: Silberschatz chapter 6 Additional Reading: Stallings chapter 5 EEL 358 1 Outline Concurrency Competing and Cooperating Processes The Critical-Section Problem Fundamental

More information

Multitasking / Multithreading system Supports multiple tasks

Multitasking / Multithreading system Supports multiple tasks Tasks and Intertask Communication Introduction Multitasking / Multithreading system Supports multiple tasks As we ve noted Important job in multitasking system Exchanging data between tasks Synchronizing

More information

CS3733: Operating Systems

CS3733: Operating Systems Outline CS3733: Operating Systems Topics: Synchronization, Critical Sections and Semaphores (SGG Chapter 6) Instructor: Dr. Tongping Liu 1 Memory Model of Multithreaded Programs Synchronization for coordinated

More information

UNIX Input/Output Buffering

UNIX Input/Output Buffering UNIX Input/Output Buffering When a C/C++ program begins execution, the operating system environment is responsible for opening three files and providing file pointers to them: stdout standard output stderr

More information

Process Synchronisation (contd.) Operating Systems. Autumn CS4023

Process Synchronisation (contd.) Operating Systems. Autumn CS4023 Operating Systems Autumn 2017-2018 Outline Process Synchronisation (contd.) 1 Process Synchronisation (contd.) Synchronization Hardware 6.4 (SGG) Many systems provide hardware support for critical section

More information

Concurrent & Distributed Systems Supervision Exercises

Concurrent & Distributed Systems Supervision Exercises Concurrent & Distributed Systems Supervision Exercises Stephen Kell Stephen.Kell@cl.cam.ac.uk November 9, 2009 These exercises are intended to cover all the main points of understanding in the lecture

More information

Synchronization. Race Condition. The Critical-Section Problem Solution. The Synchronization Problem. Typical Process P i. Peterson s Solution

Synchronization. Race Condition. The Critical-Section Problem Solution. The Synchronization Problem. Typical Process P i. Peterson s Solution Race Condition Synchronization CSCI 315 Operating Systems Design Department of Computer Science A race occurs when the correctness of a program depends on one thread reaching point x in its control flow

More information

Synchronization. CS 475, Spring 2018 Concurrent & Distributed Systems

Synchronization. CS 475, Spring 2018 Concurrent & Distributed Systems Synchronization CS 475, Spring 2018 Concurrent & Distributed Systems Review: Threads: Memory View code heap data files code heap data files stack stack stack stack m1 m1 a1 b1 m2 m2 a2 b2 m3 m3 a3 m4 m4

More information

2.c Concurrency Mutual exclusion & synchronization mutexes. Unbounded buffer, 1 producer, N consumers

2.c Concurrency Mutual exclusion & synchronization mutexes. Unbounded buffer, 1 producer, N consumers Mutual exclusion & synchronization mutexes Unbounded buffer, 1 producer, N consumers out shared by all consumers mutex among consumers producer not concerned: can still add items to buffer at any time

More information

Concurrency. Glossary

Concurrency. Glossary Glossary atomic Executing as a single unit or block of computation. An atomic section of code is said to have transactional semantics. No intermediate state for the code unit is visible outside of the

More information

Chapter 6 Concurrency: Deadlock and Starvation

Chapter 6 Concurrency: Deadlock and Starvation Operating Systems: Internals and Design Principles Chapter 6 Concurrency: Deadlock and Starvation Seventh Edition By William Stallings Operating Systems: Internals and Design Principles When two trains

More information

What is the Race Condition? And what is its solution? What is a critical section? And what is the critical section problem?

What is the Race Condition? And what is its solution? What is a critical section? And what is the critical section problem? What is the Race Condition? And what is its solution? Race Condition: Where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular

More information

Roadmap. Shared Variables: count=0, buffer[] Producer: Background. Consumer: while (1) { Race Condition. Race Condition.

Roadmap. Shared Variables: count=0, buffer[] Producer: Background. Consumer: while (1) { Race Condition. Race Condition. CSE / - Operating Systems Fall 0 Lecture - VIII Process Synchronization - I Tevfik Koşar Roadmap Process Synchronization s Critical-Section Problem Solutions to Critical Section Different Implementations

More information

Synchronization. CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han

Synchronization. CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han Synchronization CSCI 3753 Operating Systems Spring 2005 Prof. Rick Han Announcements HW #3 is coming, due Friday Feb. 25, a week+ from now PA #2 is coming, assigned about next Tuesday Midterm is tentatively

More information

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling.

Background. The Critical-Section Problem Synchronisation Hardware Inefficient Spinning Semaphores Semaphore Examples Scheduling. Background The Critical-Section Problem Background Race Conditions Solution Criteria to Critical-Section Problem Peterson s (Software) Solution Concurrent access to shared data may result in data inconsistency

More information