Interprocess Communication By: Kaushik Vaghani


Background Race Condition: A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.

Background Critical Section: The part of a program (the code segment of a process) in which a shared resource is accessed is called the critical section. Mutual Exclusion: a way of making sure that if one process is using a shared variable or file, the other processes will be excluded (stopped) from doing the same thing.

IPC Processes executing concurrently in the OS may be either independent processes or cooperating processes. Cooperating processes require an IPC mechanism that allows them to exchange data and information. There are two models of IPC: 1. Shared Memory 2. Message Passing

[Figure: the two models of IPC: (1) shared memory, (2) message passing]

Peterson's Solution A classic software-based solution to the critical-section problem is known as Peterson's solution. Because of the way modern computer architectures perform basic machine-language instructions, such as load and store, there is no guarantee that Peterson's solution will work correctly on such architectures. However, the solution provides a good algorithmic description of solving the critical-section problem and addresses the requirements of mutual exclusion, progress, and bounded waiting.

Peterson's solution is restricted to two processes. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j = 1 - i. Peterson's solution requires the two processes to share two variables: int turn; boolean flag[2];

The variable turn indicates whose turn it is to enter its critical section. That is, if turn == i, then process Pi is allowed to execute in its critical section. The flag array is used to indicate whether a process is interested in entering its critical section. That is, if flag[i] is true, process Pi is ready to enter its critical section. The shared variables flag[0] and flag[1] are initialized to FALSE because neither process is yet interested in the critical section. The shared variable turn is set to either 0 or 1 arbitrarily (or it can simply always be initialized to 0).

Solution:

do {
   flag[i] = TRUE;
   turn = j;
   while (flag[j] && turn == j)
      ;  // busy wait

   // critical section

   flag[i] = FALSE;

   // remainder section
} while (TRUE);

Explanation: P0 : i = 0, j = 1; P1 : i = 1, j = 0;

The eventual value of turn determines which of the two processes is allowed to enter its critical section first. We now prove that this solution is correct. We need to show that: 1. Mutual exclusion is preserved. 2. The progress requirement is satisfied. 3. The bounded-waiting requirement is met.

Mutual exclusion: We note that each Pi enters its critical section only if either flag[j] == false or turn == i. Note also that if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.

Progress Progress is defined as follows: if no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which process will enter its critical section next, and this selection cannot be postponed indefinitely. A process cannot immediately re-enter the critical section if the other process has set its flag to say that it would like to enter its critical section.

Bounded waiting Bounded waiting, or bounded bypass, means that the number of times a process is bypassed by another process after it has indicated its desire to enter the critical section is bounded by a function of the number of processes in the system. In Peterson's algorithm, a process will never wait longer than one turn for entrance to the critical section: after giving priority to the other process, that process will run to completion and clear its flag, thereby allowing the waiting process to enter the critical section.

Synchronization Hardware (Hardware-based solutions to the critical-section problem) Software-based solutions to the critical-section problem, such as Peterson's, are not guaranteed to work on modern computer architectures. Instead, we can generally state that any solution to the critical-section problem requires a simple tool: a lock. Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.

do {
   acquire lock

   // critical section

   release lock

   // remainder section
} while (TRUE);

Solution to the critical-section problem using locks

Many modern computer systems provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit. We can use these special instructions to solve the critical-section problem in a relatively simple manner. Examples of such hardware-based solutions are the TestAndSet() and Swap() instructions.

TestAndSet() The definition of the TestAndSet() instruction and a mutual-exclusion implementation with TestAndSet() are sketched in the example below.

The important characteristic of this instruction is that it is executed atomically. Thus, if two TestAndSet () instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order. If the machine supports the TestAndSet () instruction, then we can implement mutual exclusion by declaring a Boolean variable lock, initialized to false.

Example:
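A minimal C-style sketch, following the classic textbook formulation (lock is the shared Boolean variable initialized to false):

boolean TestAndSet(boolean *target) {
   boolean rv = *target;   // read the old value
   *target = TRUE;         // and set the word to TRUE, atomically
   return rv;
}

// mutual exclusion with TestAndSet()
do {
   while (TestAndSet(&lock))
      ;  // spin until the old value was FALSE

   // critical section

   lock = FALSE;

   // remainder section
} while (TRUE);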

Swap() The Swap() instruction, in contrast to the TestAndSet () instruction, operates on the contents of two words. Like the TestAndSet() instruction, it is executed atomically. If the machine supports the Swap() instruction, then mutual exclusion can be provided as follows. A global Boolean variable lock is declared and is initialized to false. Each process has a local Boolean variable key.

The definition of the Swap() instruction and a mutual-exclusion implementation with Swap() are sketched below.
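A minimal C-style sketch in the same spirit (lock is the shared Boolean, key is each process's local Boolean):

void Swap(boolean *a, boolean *b) {
   boolean temp = *a;
   *a = *b;
   *b = temp;
}

// mutual exclusion with Swap()
do {
   key = TRUE;
   while (key == TRUE)
      Swap(&lock, &key);   // spin until lock was FALSE

   // critical section

   lock = FALSE;

   // remainder section
} while (TRUE);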

Semaphore (Software-Based Solution) The hardware-based solutions to the critical-section problem are complicated for application programmers to use. To overcome this difficulty, we can use a synchronization tool called a semaphore. Definition: A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: 1. wait (or acquire) 2. signal (or release)

The definition of wait() is as follows:

wait(S) {
   while (S <= 0)
      ;  // no-op
   S--;
}

The definition of signal() is as follows:

signal(S) {
   S++;
}

All modifications to the integer value of the semaphore in the wait () and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. A resource such as a shared data structure is protected by a semaphore. You must acquire the semaphore before using the resource and release the semaphore when you are done with the shared resource.

Operating systems often distinguish between counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion. We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1.

Implementation:

do {
   wait(mutex);

   // critical section

   signal(mutex);

   // remainder section
} while (TRUE);

Mutual-exclusion implementation with semaphores (process Pi)

Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.
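For example, a pool of identical resource instances can be guarded as follows (the count 5 and the name resources are illustrative):

semaphore resources = 5;   // initialized to the number of available instances

wait(resources);           // acquire one instance; blocks when the count reaches 0
// ... use the resource instance ...
signal(resources);         // release the instance, waking a waiting process if any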

The main disadvantage of the semaphore definition given here is that it requires busy waiting. While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code. This continual looping is clearly a problem in a real multiprogramming system, where a single CPU is shared among many processes. Busy waiting wastes CPU cycles that some other process might be able to use productively. This type of semaphore is also called a spinlock because the process "spins" while waiting for the lock.

To overcome the need for busy waiting, we can modify the definition of the wait() and signal() semaphore operations. When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. However, rather than engaging in busy waiting, the process can block itself. The block operation places a process into a waiting queue associated with the semaphore, and the state of the process is switched to the waiting state. Then control is transferred to the CPU scheduler, which selects another process to execute.

A process that is blocked, waiting on a semaphore S, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue. To implement semaphores under this definition, we define a semaphore as a C struct:
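Following the classic textbook formulation, the struct pairs the integer value with a queue of blocked processes:

typedef struct {
   int value;               // the semaphore value; may become negative
   struct process *list;    // queue of processes blocked on this semaphore
} semaphore;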

The wait() and signal() semaphore operations can now be defined as sketched below.
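A sketch in the textbook's C-style pseudocode (block() and wakeup() are the operations described next):

wait(semaphore *S) {
   S->value--;
   if (S->value < 0) {
      // add this process to S->list
      block();
   }
}

signal(semaphore *S) {
   S->value++;
   if (S->value <= 0) {
      // remove a process P from S->list
      wakeup(P);
   }
}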

The block() operation suspends the process that invokes it. The wakeup(P) operation resumes the execution of a blocked process P. These two operations are provided by the operating system as basic system calls. Note that in this implementation, semaphore values may be negative, whereas semaphore values are never negative under the classical definition of semaphores with busy waiting.

Priority Inversion Problem A scheduling challenge arises when a higher-priority process needs to read or modify kernel data that are currently being accessed by a lower-priority process, or by a chain of lower-priority processes. Since kernel data are typically protected with a lock, the higher-priority process will have to wait for a lower-priority one to finish with the resource. The situation becomes more complicated if the lower-priority process is preempted in favor of another process with a higher priority.

Example: assume we have three processes, L, M, and H, whose priorities follow the order L < M < H. Assume that process H requires resource R, which is currently being accessed by process L. Ordinarily, process H would wait for L to finish using resource R. However, now suppose that process M becomes runnable, thereby preempting process L. Indirectly, a process with a lower priority, process M, has affected how long process H must wait for L to relinquish resource R. This problem is known as priority inversion.

It occurs only in systems with more than two priorities, so one solution is to have only two priorities. That is insufficient for most general-purpose operating systems, however. Typically these systems solve the problem by implementing a priority-inheritance protocol. According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values.

In the example above, a priority-inheritance protocol would allow process L to temporarily inherit the priority of process H, thereby preventing process M from preempting its execution. When process L had finished using resource R, it would relinquish its inherited priority from H and assume its original priority. Because resource R would now be available, process H, not M, would run next.

Classic Problems of Synchronization Concurrency-control problems: 1. The Bounded-Buffer Problem (producer-consumer problem) 2. The Readers-Writers Problem 3. The Dining-Philosophers Problem In our solutions to these problems, we use semaphores for synchronization.

Bounded Buffer (Producer-Consumer) Problem The producer-consumer problem is a common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process. For example, in the client-server paradigm we generally think of a server as a producer and a client as a consumer. A Web server produces (that is, provides) HTML files and images, which are consumed (that is, read) by the client Web browser requesting the resource.

One solution to the producer-consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffers can be used to solve this. The unbounded buffer places no practical limit on the size of the buffer. The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full. The following variables reside in a region of memory shared by the producer and consumer processes:
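A minimal sketch of those shared declarations, following the classic textbook formulation (BUFFER_SIZE and item are illustrative names):

#define BUFFER_SIZE 10

typedef struct {
   // ... fields of one buffer item ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;    // next free position
int out = 0;   // first full position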

The shared buffer is implemented as a circular array with two logical pointers: in and out. in points to the next free position in the buffer; out points to the first full position in the buffer. When in == out, the buffer is empty. When ((in + 1) % BUFFER_SIZE) == out, the buffer is full.

The producer process and the consumer process (shared-memory solution) are sketched below.
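A sketch under these definitions (next_produced and next_consumed are illustrative local variables of type item):

// producer
while (TRUE) {
   // produce an item in next_produced
   while (((in + 1) % BUFFER_SIZE) == out)
      ;  // buffer full: do nothing
   buffer[in] = next_produced;
   in = (in + 1) % BUFFER_SIZE;
}

// consumer
while (TRUE) {
   while (in == out)
      ;  // buffer empty: do nothing
   next_consumed = buffer[out];
   out = (out + 1) % BUFFER_SIZE;
   // consume the item in next_consumed
}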

Bounded Buffer Problem (Semaphore solution) We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers. The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0. We can interpret this code as the producer producing full buffers for the consumer, or as the consumer producing empty buffers for the producer.

The producer process and the consumer process (semaphore solution) are sketched below.
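A sketch following the classic semaphore formulation (empty = n, full = 0, mutex = 1, as described above):

// producer
do {
   // produce an item in next_produced
   wait(empty);     // wait for a free slot
   wait(mutex);     // enter critical section
   // add next_produced to the buffer
   signal(mutex);   // leave critical section
   signal(full);    // one more full slot
} while (TRUE);

// consumer
do {
   wait(full);      // wait for a full slot
   wait(mutex);
   // remove an item from the buffer into next_consumed
   signal(mutex);
   signal(empty);   // one more empty slot
   // consume the item in next_consumed
} while (TRUE);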

The Readers-Writers Problem Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. Obviously, if two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, confusion may arise.

To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database while writing to it. This synchronization problem is referred to as the readers-writers problem. The readers-writers problem has several variations, all involving priorities. First readers-writers problem: it requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. In other words, no reader should wait for other readers to finish simply because a writer is waiting.

Second readers-writers problem: it requires that, once a writer is ready, that writer performs its write as soon as possible. In other words, if a writer is waiting to access the object, no new readers may start reading. A solution to either problem may result in starvation: in the first case, writers may starve; in the second case, readers may starve. Here, we present a solution to the first readers-writers problem.

In the solution to the first readers-writers problem, the reader processes share the following data structures: The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is shared between reader and writer processes. The mutex semaphore is shared between readers and used to ensure mutual exclusion when the variable readcount is updated.
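In code, those shared declarations look as follows:

semaphore mutex = 1;   // protects updates to readcount
semaphore wrt = 1;     // writer exclusion
int readcount = 0;     // number of processes currently reading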

The readcount variable keeps track of how many processes are currently reading the object. The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first or last reader that enters or exits the critical section. It is not used by readers that enter or exit while other readers are in their critical sections.

The structure of a writer process and the structure of a reader process are sketched below.
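A sketch following the classic solution:

// writer
do {
   wait(wrt);
   // writing is performed
   signal(wrt);
} while (TRUE);

// reader
do {
   wait(mutex);
   readcount++;
   if (readcount == 1)
      wait(wrt);        // first reader locks out writers
   signal(mutex);

   // reading is performed

   wait(mutex);
   readcount--;
   if (readcount == 0)
      signal(wrt);      // last reader lets writers back in
   signal(mutex);
} while (TRUE);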

The readers-writers problem and its solutions have been generalized to provide reader-writer locks on some systems. Acquiring a reader-writer lock requires specifying the mode of the lock: either read or write access. When a process wishes only to read shared data, it requests the reader-writer lock in read mode; a process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode, but only one process may acquire the lock for writing, as exclusive access is required for writers.

Reader-writer locks are most useful in the following situations: 1. In applications where it is easy to identify which processes only read shared data and which processes only write shared data. 2. In applications that have more readers than writers. This is because reader-writer locks generally require more overhead to establish than semaphores or mutual-exclusion locks, and the increased concurrency of allowing multiple readers compensates for the overhead involved in setting up the reader-writer lock.

The Dining-Philosophers Problem Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher. In the centre of the table is a bowl of rice, and the table is laid with five single chopsticks.

The situation of the dining philosophers

When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbour.

When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks. When she is finished eating, she puts down both of her chopsticks and starts thinking again. The dining-philosophers problem is considered a classic synchronization problem because it is an example of a large class of concurrency-control problems. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait () operation on that semaphore; she releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are semaphore chopstick[5]; where all the elements of chopstick are initialized to 1.

The structure of philosopher i is sketched below.
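A sketch of philosopher i's loop under the semaphore representation above:

do {
   wait(chopstick[i]);              // pick up left chopstick
   wait(chopstick[(i + 1) % 5]);    // pick up right chopstick

   // eat

   signal(chopstick[i]);            // put down left chopstick
   signal(chopstick[(i + 1) % 5]);  // put down right chopstick

   // think
} while (TRUE);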

Although this solution guarantees that no two neighbours are eating simultaneously, it could create a deadlock. Suppose that all five philosophers become hungry simultaneously and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever.

Several possible solutions to the deadlock problem are listed below: 1. Allow at most four philosophers to be sitting simultaneously at the table. 2. Allow a philosopher to pick up her chopsticks only if both chopsticks are available. 3. Use an asymmetric solution; that is, an odd philosopher picks up her left chopstick first and then her right chopstick, whereas an even philosopher picks up her right chopstick first and then her left chopstick.

Monitors Semaphores provide a convenient and effective mechanism for process synchronization, but using them incorrectly can result in timing errors that are difficult to detect. Such errors may be caused by an honest programming mistake or an uncooperative programmer. For example, if a process interchanges the order of the wait(mutex) and signal(mutex) operations, several processes may be executing in their critical sections simultaneously, violating the mutual-exclusion requirement; if a process replaces signal(mutex) with wait(mutex), a deadlock will occur.

These examples illustrate that various types of errors can be generated easily when programmers use semaphores incorrectly to solve the critical-section problem. Similar problems may arise in the other synchronization models we have seen. To deal with such errors, researchers have developed a high-level synchronization primitive: the monitor type. A monitor is a collection of procedures, variables, and data structures that are all grouped together in a special kind of module or package.

Processes may call the procedures in a monitor whenever they want to, but they cannot directly access the monitor's internal data structures from procedures declared outside the monitor. A monitor type is an abstract data type (ADT) that presents a set of programmer-defined operations that are provided with mutual exclusion within the monitor.

The monitor type also contains: 1. The declaration of variables whose values define the state of an instance of that type. 2. The bodies of procedures or functions that operate on those variables.

Example (Pascal-style code):

monitor example
   integer i;
   condition c;

   procedure producer();
   ...
   end;

   procedure consumer();
   ...
   end;
end monitor;

[Figure: schematic view of a monitor]

The representation of a monitor type cannot be used directly by the various processes. Thus, a procedure defined within a monitor can access only those variables declared locally within the monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed by only the local procedures. Monitors have an important property that makes them useful for achieving mutual exclusion: only one process can be active in a monitor at any instant.

Monitors are a programming-language construct, so the compiler knows they are special and can handle calls to monitor procedures differently from other procedure calls. Typically, when a process calls a monitor procedure, the first few instructions of the procedure will check whether any other process is currently active within the monitor. If so, the calling process will be suspended until the other process has left the monitor. If no other process is using the monitor, the calling process may enter. The common way to implement mutual exclusion on monitor entries is to use a mutex or a binary semaphore. Because the compiler, not the programmer, is arranging for the mutual exclusion, it is much less likely that something will go wrong.

Although monitors provide an easy way to achieve mutual exclusion, that is not enough. We also need a way for processes to block when they cannot proceed. The solution lies in the introduction of condition variables, along with two operations on them: wait and signal. When a monitor procedure discovers that it cannot continue (e.g., the producer finds the buffer full), it does a wait on some condition variable, say full. This action causes the calling process to block. The other process (the consumer) can then wake up its sleeping partner by doing a signal on the condition variable that its partner is waiting on.

Producer-Consumer Problem with monitors: a sketch follows.
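A sketch in the same Pascal-style pseudocode as the monitor example above, following the classic formulation (N is the buffer size; insert_item, remove_item, produce_item, and consume_item are illustrative helpers):

monitor ProducerConsumer
   condition full, empty;
   integer count;

   procedure insert(item: integer);
   begin
      if count = N then wait(full);      { buffer full: block the producer }
      insert_item(item);
      count := count + 1;
      if count = 1 then signal(empty)    { buffer was empty: wake a consumer }
   end;

   function remove: integer;
   begin
      if count = 0 then wait(empty);     { buffer empty: block the consumer }
      remove := remove_item;
      count := count - 1;
      if count = N - 1 then signal(full) { buffer was full: wake a producer }
   end;

   count := 0;
end monitor;

procedure producer;
begin
   while true do
   begin
      item := produce_item;
      ProducerConsumer.insert(item)
   end
end;

procedure consumer;
begin
   while true do
   begin
      item := ProducerConsumer.remove;
      consume_item(item)
   end
end;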