UNIT II PROCESS MANAGEMENT 9


Processes - Process Concept, Process Scheduling, Operations on Processes, Interprocess Communication; Threads - Overview, Multicore Programming, Multithreading Models; Windows 7 - Thread and SMP Management. Process Synchronization - Critical Section Problem, Mutex Locks, Semaphores, Monitors; CPU Scheduling and Deadlocks.

2.1 Process Concept

Definition: A process is a program in execution. A process is different from a program. A program is a passive entity stored on disk and does not compete for resources. When a user wants to execute a program, it must be loaded into main memory; as soon as the program is loaded into main memory, it becomes a process.

A process includes:
- The program code, also called the text section.
- Current activity, including the program counter and processor registers.
- A stack containing temporary data, for example function parameters, return addresses, and local variables.
- A data section containing global variables, and a heap containing memory dynamically allocated at run time.

Process in Memory

Process States

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:

New: The process is being created.
Ready: The process is waiting to be assigned to a processor.
Running: Instructions are being executed.
Waiting: The process is waiting for some event to occur, such as I/O completion or reception of a signal.
Terminated: The process has finished execution.
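As a small illustration (not part of the original notes), the five states above map naturally onto a C enumeration; an operating system might track the state of each process with a field of this type. The names are purely illustrative.

/* Hypothetical sketch: the five process states as a C enum. */
enum process_state {
    STATE_NEW,        /* the process is being created */
    STATE_READY,      /* waiting to be assigned a processor */
    STATE_RUNNING,    /* instructions are being executed */
    STATE_WAITING,    /* waiting for an event such as I/O completion */
    STATE_TERMINATED  /* finished execution */
};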

Process State Diagram

The state transitions are as follows:

New → Ready: The OS creates the process and prepares it for execution, then moves it to the ready queue.
Ready → Running: When it is time to select a process to run, the operating system selects one of the jobs from the ready queue and moves it from the ready state to the running state.
Running → Terminated: When the execution of a process has completed, the operating system terminates the process from the running state.
Running → Ready: When the time slice of the processor expires, or if the processor receives an interrupt signal, the operating system shifts the running process to the ready state.
Running → Waiting: A process is put into the waiting state if it needs an event to occur or requires an I/O device. If the operating system cannot provide the I/O or the event immediately, it moves the process to the waiting state.
Waiting → Ready: A process in the waiting (blocked) state is moved to the ready state when the event for which it has been waiting occurs.

Process Control Block

Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process. The PCB simply serves as the repository for any information that may vary from process to process.

Process Control Block

Process State: The state may be new, ready, running, waiting, halted, and so on.
Program Counter: The counter indicates the address of the next instruction to be executed by the process.
CPU Registers: These include accumulators, index registers, stack pointers, and general-purpose registers, plus condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.
CPU Scheduling Information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
Memory-Management Information: This information may hold the values of the base and limit registers and the page table or the segment table, depending on the memory system used by the operating system.
Accounting Information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
I/O Status Information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.

CPU Switch From Process to Process

2.2 Process Scheduling

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among

processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process (possibly from a set of several available processes) for program execution on the CPU.

Scheduling Queues

As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue.

The system also includes other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.

Ready Queue and Various I/O Device Queues

Queueing Diagram

Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system. A new process is initially put in the ready queue. It waits there until it is selected for execution, or dispatched. Once the process is allocated the CPU and is executing, one of several events could occur:
- The process could issue an I/O request and then be placed in an I/O queue.
- The process could create a new subprocess and wait for the subprocess's termination.
- The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Schedulers

A process migrates among these queues throughout its lifetime. The selection of processes from these queues is done by schedulers.

Long-term scheduler: Initially, all the user programs are stored in secondary memory in the job pool. The long-term scheduler selects programs from the job pool and loads them into main memory. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory). If the degree of multiprogramming is stable, then the average rate of process creation must equal the average departure rate of processes leaving the system. It is important that the long-term scheduler make a careful selection. In general, processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, in contrast, generates I/O requests infrequently, using more of its time doing computations. It is important that the long-term scheduler select a good mix of I/O-bound and CPU-bound processes.

Short-term scheduler: The short-term scheduler selects a program from main memory and allocates the CPU to it. The short-term scheduler executes much more frequently than the long-term scheduler, because a process may execute for only a few milliseconds before waiting for an event to occur.

Medium-term scheduler: It is sometimes necessary to swap processes out of memory to manage the degree of multiprogramming. Swapping means removing a process from memory temporarily and later bringing it back into main memory for continued execution. The medium-term scheduler does the job of swapping out and swapping in.

2.2.3 Context Switch

When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process. This task is known as context switching. The context of a process is represented in its PCB; it includes the values of the CPU registers, the process state, and memory-management information. When a context switch occurs, the kernel saves the context of the old process in its PCB and loads the saved context of the new process scheduled to run.

2.3 Operations on Processes

The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination. Generally, a process is identified and managed via a process identifier (pid).

Process Creation

A process may create several new processes, via a create-process system call, during the course of execution. The creating process is called a parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes.

Resource sharing between the parent and child processes takes one of three forms:
1. Parent and children share all resources.
2. Children share a subset of the parent's resources.
3. Parent and child share no resources.

When a process creates a new process, two possibilities exist in terms of execution:
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:
1. The child process is a duplicate of the parent process (it has the same program and data as the parent).
2. The child process has a new program loaded into it.

UNIX examples:
1. The fork() system call creates a new process.

2. The exec() system call is used after a fork() to replace the process's memory space with a new program. (A combined sketch of these calls appears at the end of this discussion.)

Process Termination

A process terminates when it finishes executing its final statement and asks the operating system to delete it by using the exit() system call. The process may return a status value to its parent process (via the wait() system call). All the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.

A parent may terminate the execution of one of its children (via abort()) for a variety of reasons:
- The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
- The task assigned to the child is no longer required.
- The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

Some operating systems do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.

2.4 Interprocess Communication

Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:
- Information sharing. Since several users may be interested in the same piece of information, we must provide an environment to allow concurrent access to such information.
- Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will execute in parallel with the others.
- Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
- Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
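Returning to the UNIX examples above, here is a minimal, hedged sketch (not from the original notes) of the pattern just described: the parent calls fork(), the child overlays itself with a new program via exec(), and the parent blocks in wait() until the child exits. The ls command is an arbitrary illustrative choice.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {            /* child: replace its memory image */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        exit(1);
    } else {                          /* parent: wait for the child */
        int status;
        wait(&status);
        printf("child %d terminated\n", (int)pid);
    }
    return 0;
}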

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication: (1) shared memory and (2) message passing. In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Message passing is also easier to implement than shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, as it can be done at memory speeds within a computer. Shared memory is faster than message passing, because message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.

Communication Models: (a) message passing, (b) shared memory

Shared-Memory Systems

Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space. Normally, the operating system tries to prevent one process from accessing another process's memory; shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
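As a concrete, hedged illustration of establishing such a region on a POSIX system (an addition to these notes, not part of the syllabus text), the sketch below creates a named shared-memory object with shm_open(), sizes it with ftruncate(), and attaches it with mmap(); a second process would attach by calling shm_open() and mmap() with the same name. The object name /demo_shm is an arbitrary choice.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/demo_shm";    /* illustrative object name */
    const size_t size = 4096;

    /* create (or open) the shared-memory object and set its size */
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    /* attach the region to this process's address space */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* write a message that another attached process could read */
    strcpy(region, "hello from the producer");

    munmap(region, size);
    close(fd);
    /* shm_unlink(name) would remove the object when no longer needed */
    return 0;
}

On Linux this typically needs -lrt at link time; a reader process would repeat shm_open()/mmap() with the same name and read the region.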

Consider the producer-consumer problem, which is a common paradigm for cooperating processes. A producer process produces information that is consumed by a consumer process. One solution to the producer-consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.

Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size. The consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.

The following variables reside in a region of memory shared by the producer and consumer processes:

#define BUFFER_SIZE 10

typedef struct {
    /* ... */
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out. The producer process has a local variable nextproduced in which the new item to be produced is stored. The consumer process has a local variable nextconsumed in which the item to be consumed is stored. This scheme allows at most BUFFER_SIZE - 1 items in the buffer at the same time.

The producer process:

item nextproduced;
while (true) {
    /* produce an item in nextproduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing */
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
}

The consumer process:

item nextconsumed;
while (true) {

    while (in == out)
        ; /* do nothing */
    nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextconsumed */
}

Message Passing

Message passing provides a mechanism for processes to communicate and to synchronize their actions. In a message system, processes communicate with each other without resorting to shared variables. The IPC facility provides two operations:
- send(message) - the message size may be fixed or variable
- receive(message)

If P and Q wish to communicate, they need to:
- establish a communication link between them
- exchange messages via send/receive

Implementation of the communication link:
- physical (e.g., shared memory, hardware bus)
- logical (e.g., logical properties)

Naming

Processes that want to communicate must have a way to refer to each other. They can use either direct or indirect communication.

Direct Communication

Processes must name each other explicitly:
- send(P, message) - send a message to process P
- receive(Q, message) - receive a message from process Q

Properties of the communication link:
- Links are established automatically.
- A link is associated with exactly one pair of communicating processes.
- Between each pair there exists exactly one link.
- The link may be unidirectional, but is usually bidirectional.

This scheme exhibits symmetry in addressing; that is, both the sender process and the receiver process must name the other to communicate. A variant of this scheme employs asymmetry in addressing. Here, only the sender names the recipient; the recipient is not required to name the sender. In this scheme, the send() and receive() primitives are defined as follows:
- send(P, message) - send a message to process P.
- receive(id, message) - receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

The disadvantage in both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions.
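On UNIX systems, an ordinary pipe is one of the simplest concrete realizations of a one-directional communication link between a parent and child. The hedged sketch below (an illustration added to these notes) stands in for the abstract send()/receive() primitives, with write() acting as send and read() as receive.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                      /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                 /* child acts as the receiver */
        char buf[64];
        close(fd[1]);               /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);  /* "receive" */
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(fd[0]);
    } else {                        /* parent acts as the sender */
        const char *msg = "hello";
        close(fd[0]);               /* close the unused read end */
        write(fd[1], msg, strlen(msg));                 /* "send" */
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}

Note that read() here blocks until data is available, which matches the blocking receive described under Synchronization below.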

Indirect Communication

Messages are sent to and received from mailboxes (also referred to as ports).
- Each mailbox has a unique id.
- Processes can communicate only if they share a mailbox.

Properties of the communication link:
- A link is established only if processes share a common mailbox.
- A link may be associated with many processes.
- Each pair of processes may share several communication links.
- A link may be unidirectional or bidirectional.

Operations:
- create a new mailbox
- send and receive messages through the mailbox
- destroy a mailbox

Primitives are defined as:
- send(A, message) - send a message to mailbox A
- receive(A, message) - receive a message from mailbox A

Mailbox sharing:
- P1, P2, and P3 share mailbox A.
- P1 sends; P2 and P3 receive.
- Who gets the message?

Solutions:
- Allow a link to be associated with at most two processes.
- Allow only one process at a time to execute a receive operation.
- Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.

Synchronization

Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.

Blocking is considered synchronous:
- A blocking send has the sender block until the message is received.
- A blocking receive has the receiver block until a message is available.

Non-blocking is considered asynchronous:
- A non-blocking send has the sender send the message and continue.
- A non-blocking receive has the receiver receive a valid message or null.

Buffering

Messages exchanged by communicating processes reside in a queue attached to the link, implemented in one of three ways:

- Zero capacity. The queue has a maximum length of zero; thus, the link cannot have any messages waiting in it. In this case, the sender must block until the recipient receives the message.
- Bounded capacity. The queue has finite length n; thus, at most n messages can reside in it. If the queue is not full when a new message is sent, the message is placed in the queue and the sender can continue execution without waiting. The link's capacity is finite, however; if the link is full, the sender must block until space is available in the queue.

- Unbounded capacity. The queue's length is potentially infinite; thus, any number of messages can wait in it. The sender never blocks.

2.5 Threads

A thread is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time. For example, a web browser might have one thread display images or text while another thread retrieves data from the network.

Benefits:
1. Responsiveness. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
2. Resource sharing. By default, threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space.
3. Economy. Allocating memory and resources for process creation is costly. Because threads share resources of the process to which they belong, it is more economical to create and context-switch threads.
4. Utilization of multiprocessor architectures. The benefits of multithreading can be greatly increased in a multiprocessor architecture, where threads may be running in parallel on different processors.

User Threads and Kernel Threads

User threads:
- Supported above the kernel and implemented by a thread library at the user level.
- Thread creation, management, and scheduling are done in user space.
- Fast to create and manage.

- When a user thread performs a blocking system call, it causes the entire process to block, even if other threads are available to run within the application.
- Examples: POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.

Kernel threads:
- Supported directly by the OS.
- Thread creation, management, and scheduling are done in kernel space.
- Slow to create and manage.
- When a kernel thread performs a blocking system call, the kernel can schedule another thread in the application for execution.
- Examples: Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX support kernel threads.

2.7 Multithreading Models

Threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads. User threads are supported above the kernel and are managed without kernel support, whereas kernel threads are supported and managed directly by the operating system.

Many-to-One Model

The many-to-one model maps many user-level threads to one kernel thread. Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.

One-to-One Model

The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The drawback to this model is that creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.
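Since POSIX Pthreads is cited above as a standard thread API, here is a minimal, hedged sketch (added for illustration) of creating and joining threads with it; the worker function and its argument are illustrative only.

#include <pthread.h>
#include <stdio.h>

/* illustrative worker: each thread prints its argument */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tids[4];

    /* create four threads sharing this process's address space */
    for (long i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, worker, (void *)i);

    /* wait for all of them to terminate */
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);

    return 0;
}

Compile with -pthread. On Linux, each pthread_create() call produces a kernel-schedulable thread, an instance of the one-to-one model described above.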

Many-to-Many Model

The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.

Threading Issues

- Semantics of the fork() and exec() system calls
- Thread cancellation
- Signal handling
- Thread pools
- Thread-specific data

1. fork() and exec() system calls

A fork() system call may duplicate all threads or duplicate only the thread that invoked fork(). If a thread invokes the exec() system call, the program specified in the parameter to exec() will replace the entire process.

2. Thread cancellation

Thread cancellation is the task of terminating a thread before it has completed. A thread that is to be cancelled is called a target thread. There are two types of cancellation:
1. Asynchronous cancellation - one thread immediately terminates the target thread.
2. Deferred cancellation - the target thread periodically checks whether it should terminate, and if so terminates itself in an orderly fashion.

3. Signal handling

A signal is used to notify a process that a particular event has occurred.

A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signaled. All signals, whether synchronous or asynchronous, follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. A generated signal is delivered to a process.
3. Once delivered, the signal must be handled.

Synchronous signals are delivered to the same process that performed the operation that caused the signal. When a signal is generated by an event external to a running process, that process receives the signal asynchronously.

In a multithreaded process, a generated signal may be delivered in one of the following ways:
a. Deliver the signal to the thread to which the signal applies.
b. Deliver the signal to every thread in the process.
c. Deliver the signal to certain threads in the process.
d. Assign a specific thread to receive all signals for the process.

Once delivered, the signal must be handled, either by:
i. a default signal handler, or
ii. a user-defined signal handler.

4. Thread pools

Unlimited thread creation could exhaust system resources such as CPU time or memory. Hence we use a thread pool. In a thread pool, a number of threads are created at process startup and placed in the pool. When there is a need for a thread, the process picks a thread from the pool and assigns it a task. After completing the task, the thread is returned to the pool. Thread pools offer these benefits:
1. Servicing a request with an existing thread is usually faster than waiting to create a thread.
2. A thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.

5. Thread-specific data

Threads belonging to a process share the data of the process. However, each thread might need its own copy of certain data, known as thread-specific data. For example, in a transaction-processing system, we might service each transaction in a separate thread, and each transaction might be assigned a unique identifier.

2.9 Process Synchronization

Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. Concurrent access to shared data may result in data inconsistency.

For example, consider the producer-consumer problem again, where a bounded buffer is used to enable processes to share memory:
- Bounded-buffer problem. The earlier solution allows at most BUFFER_SIZE - 1 items in the buffer at the same time.
- To remedy this, add an integer variable counter, initialized to 0. counter is incremented every time we add a new item to the buffer and decremented every time we remove one.

The code for the producer process:

while (true) {
    /* produce an item in nextproduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextproduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

The code for the consumer process:

while (true) {
    while (counter == 0)
        ; /* do nothing */
    nextconsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextconsumed */
}

Although both the producer and consumer routines are correct separately, they may not function correctly when executed concurrently, because both processes are allowed to manipulate the variable counter concurrently. The reason is that counter++ and counter-- are each typically compiled into several machine instructions (load counter into a register, adjust the register, store it back); if the producer's and consumer's instruction sequences interleave, one update can overwrite the other, leaving counter with an incorrect value.

A race condition is a situation where two or more processes access shared data concurrently and the final value of the shared data depends on timing (a race to access and modify the data). To guard against the race condition above, we need to ensure that only one process at a time can manipulate the variable counter (process synchronization).

The Critical-Section Problem

Consider a system consisting of n processes {P1, P2, ..., Pn}. Each process has a segment of code, called a critical section (CS), in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its CS, no other process is to be allowed to execute in its CS.

Each process must request permission to enter its CS. The section of code implementing this request is the entry section. The CS may be followed by an exit section. The remaining code is the remainder section.

A solution to the CS problem must satisfy the following requirements:
1. Mutual exclusion. If process Pi is executing in its CS, then no other processes can be executing in their CSs.
2. Progress. If no process is executing in its CS and some processes wish to enter their CSs, then only those processes that are not executing in their remainder sections can participate in the decision on which will enter its CS next, and this selection cannot be postponed indefinitely. (No process should have to wait forever to enter its CS.)
3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their CSs after a process has made a request to enter its CS and before that request is granted.

Two-Process Solutions

Algorithm I

Shared variables:
- int turn; initially turn = i.
- turn == i means Pi can enter its critical section.

Process Pi:

do {
    while (turn != i)
        ;
    /* critical section */
    turn = j;
    /* remainder section */
} while (1);

Satisfies mutual exclusion, but not progress.

Algorithm II

Shared variables:
- boolean flag[2]; initially flag[i] = flag[j] = false.

- flag[i] = true means Pi is ready to enter its critical section.

Process Pi:

do {
    flag[i] = true;
    while (flag[j])
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

Satisfies mutual exclusion, but not the progress requirement.

Algorithm III (Peterson's algorithm)

This algorithm combines the shared variables of Algorithms I and II.

Process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (1);

It meets all three requirements and thus solves the critical-section problem for two processes.

Bakery Algorithm

The bakery algorithm solves the critical-section problem for n processes. Before entering its critical section, a process receives a number; the holder of the smallest number enters the critical section. If processes Pi and Pj receive the same number, then if i < j, Pi is served first; otherwise Pj is served first.

Notation:
- < denotes lexicographical order on (ticket #, process id #) pairs: (a, b) < (c, d) if a < c, or if a == c and b < d.
- max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1.

Shared data:

boolean choosing[n];
int number[n];

These data structures are initialized to false and 0, respectively.

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;

    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j])
            ;
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i)))  /* lexicographic compare */
            ;
    }
    /* critical section */
    number[i] = 0;
    /* remainder section */
} while (1);

Synchronization Hardware

Many systems provide hardware support for critical-section code.
- Uniprocessors could disable interrupts, so that the currently running code would execute without preemption.
- This is generally too inefficient on multiprocessor systems; operating systems that rely on it are not broadly scalable.
- Modern machines instead provide special atomic hardware instructions (atomic = non-interruptible).

The two instructions that are used to provide synchronization in hardware are:
1. TestAndSet
2. Swap

The TestAndSet instruction:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

Mutual exclusion with TestAndSet: shared boolean variable lock, initialized to FALSE.

do {
    while (TestAndSet(lock))
        ; /* do nothing */
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

The Swap instruction:

void Swap(boolean &a, boolean &b) {

    boolean temp = a;
    a = b;
    b = temp;
}

Mutual exclusion with Swap: shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.

do {
    key = TRUE;
    while (key == TRUE)
        Swap(lock, key);
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

Bounded-Waiting Mutual Exclusion with TestAndSet()

The common data structures are:

boolean waiting[n];
boolean lock;

These data structures are initialized to false. To prove that the mutual-exclusion requirement is met, note that process Pi can enter its critical section only if either waiting[i] == false or key == false. The value of key can become false only if the TestAndSet() is executed. The first process to execute the TestAndSet() will find key == false; all others must wait. The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement.

do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    /* critical section */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = FALSE;
    else
        waiting[j] = FALSE;
    /* remainder section */
} while (TRUE);
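For comparison with the pseudocode above, modern C exposes an atomic test-and-set directly through <stdatomic.h>. The hedged sketch below (an addition to these notes, requiring a C11 compiler) builds a simple spinlock from atomic_flag_test_and_set(), which returns the old value of the flag while atomically setting it, exactly as TestAndSet() does.

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear (FALSE) */

void acquire(void)
{
    /* spin until the old value we observe is "clear" */
    while (atomic_flag_test_and_set(&lock))
        ; /* busy wait */
}

void release(void)
{
    atomic_flag_clear(&lock);                 /* lock = FALSE */
}

This is the simple (non-bounded-waiting) variant; enforcing bounded waiting would still require the waiting[] array shown above.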

2.12 Semaphores

A semaphore is a synchronization tool. Semaphores are variables used to signal the status of shared resources to processes. A semaphore is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().

The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ; /* no-op */
    S--;
}

The definition of signal() is as follows:

signal(S) {
    S++;
}

All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In addition, in the case of wait(), the testing of the integer value (S <= 0) and its possible modification (S--) must also be executed without interruption.

Usage

There are counting and binary semaphores. The value of a counting semaphore can range over an unrestricted domain. The value of a binary semaphore can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion.

We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as shown below:

semaphore mutex; /* initialized to 1 */

do {
    wait(mutex);
    /* critical section */

    signal(mutex);
    /* remainder section */
} while (1);

Counting semaphores can be used to control access to a given resource consisting of a finite number of instances. The semaphore is initialized to the number of resources available. Each process that wishes to use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a process releases a resource, it performs a signal() operation (incrementing the count). When the count for the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will block until the count becomes greater than 0.

Implementation

The main disadvantage of the semaphore definition given above is that it requires busy waiting.
- While a process is in its CS, any other process that tries to enter its CS must loop continuously in the entry code.
- Busy waiting wastes CPU cycles that some other process might be able to use productively.
- This type of semaphore is also called a spinlock, because the process "spins" while waiting for the lock.

To overcome the need for busy waiting, we can modify the definition of the wait() and signal() semaphore operations. When a process executes the wait() operation and finds that the semaphore value is not positive, it must wait. Rather than engaging in busy waiting, the process can block itself. A process that is blocked, waiting on a semaphore, should be restarted when some other process executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process from the waiting state to the ready state. The process is then placed in the ready queue.

The critical aspect of semaphores is that they be executed atomically. We must guarantee that no two processes can execute wait() and signal() operations on the same semaphore at the same time.

To implement the semaphore:

typedef struct {
    int value;
    struct process *L;   /* list of waiting processes */
} semaphore;

The semaphore operations are now defined as:

wait(S) {
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block();
    }
}

signal(S) {
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }
}

Deadlocks and Starvation

The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. The event in question is the execution of a signal() operation. When such a state is reached, these processes are said to be deadlocked.

To illustrate this, consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1:

P0:            P1:
wait(S);       wait(Q);
wait(Q);       wait(S);
...            ...
signal(S);     signal(Q);
signal(Q);     signal(S);

- Suppose that P0 executes wait(S) and then P1 executes wait(Q).
- When P0 executes wait(Q), it must wait until P1 executes signal(Q).
- Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S).
- Since these signal() operations cannot be executed, P0 and P1 are deadlocked.

Another problem related to deadlocks is indefinite blocking, or starvation, a situation in which processes wait indefinitely within the semaphore.

Classic Problems of Synchronization

The Bounded-Buffer Problem

Assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers:
- The semaphore empty is initialized to the value n.
- The semaphore full is initialized to the value 0.

The structures of the producer and consumer processes are shown below.

The structure of the producer process:

do {
    /* produce an item in nextp */
    wait(empty);
    wait(mutex);
    /* add the item to the buffer */
    signal(mutex);
    signal(full);
} while (1);

The structure of the consumer process:

do {
    wait(full);
    wait(mutex);
    /* remove an item from the buffer to nextc */
    signal(mutex);
    signal(empty);
    /* consume the item in nextc */
} while (1);

The Readers-Writers Problem

A database is to be shared among several concurrent processes. Some of these processes may want only to read the database (readers), whereas others may want to update (that is, to read and write) the database (writers). If two readers access the shared data simultaneously, no adverse effects will result. However, if a writer and some other process (either a reader or a writer) access the database simultaneously, synchronization problems can arise. To ensure that these difficulties do not arise, we require that the writers have exclusive access to the shared database. This synchronization problem is referred to as the readers-writers problem.

The readers-writers problem has several variations, all involving priorities:
- The first readers-writers problem requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object.
- The second readers-writers problem requires that, once a writer is ready, that writer perform its write as soon as possible.

A solution to either problem may result in starvation:
- In the first case, writers may starve.
- In the second case, readers may starve.

In the solution to the first readers-writers problem, the reader processes share the following data structures:

semaphore mutex, wrt;
int readcount;

The semaphores mutex and wrt are initialized to 1; readcount is initialized to 0. The semaphore wrt is common to both reader and writer processes. The mutex semaphore is used to ensure mutual exclusion when the variable readcount is updated. The readcount variable keeps track of how many processes are currently reading the object. The semaphore wrt functions as a mutual-exclusion semaphore for the writers. It is also used by the first reader that enters the CS and the last reader that exits it; it is not used by readers that enter or exit while other readers are in their CSs.

The code for a writer process:

do {
    wait(wrt);
    /* writing is performed */
    signal(wrt);
} while (1);

The code for a reader process:

do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);
    signal(mutex);
    /* reading is performed */
    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);
    signal(mutex);
} while (1);

Note that, if a writer is in the CS and n readers are waiting, then one reader is queued on wrt, and n-1 readers are queued on mutex. Also observe that, when a writer executes signal(wrt), we may resume the execution of either the waiting readers or a single waiting writer. The selection is made by the scheduler.

The Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.
- When a philosopher thinks, she does not interact with her colleagues.

- From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her. A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbour.
- When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks.
- When she is finished eating, she puts down both of her chopsticks and starts thinking again.

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore; she releases her chopsticks by executing the signal() operation on the appropriate semaphores. The shared data are:

semaphore chopstick[5];

where all the elements of chopstick are initialized to 1.

The structure of philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    /* think */
} while (1);

Although this solution guarantees that no two neighbours are eating simultaneously, it must nevertheless be rejected because it could create a deadlock: suppose that all five philosophers become hungry simultaneously and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever.

Several possible remedies to the dining-philosophers problem ensure freedom from deadlock:
- Allow at most four philosophers to be sitting simultaneously at the table.
- Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this she must pick them up in a critical section).
- Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and then her right chopstick, whereas an even philosopher picks up her right chopstick and then her left chopstick.

Critical Region

The problems with semaphores are timing errors:

Incorrect use of semaphore operations:
1. signal(mutex) ... wait(mutex): several processes may execute in their critical sections simultaneously, violating the mutual-exclusion requirement.
2. wait(mutex) ... wait(mutex): a deadlock will occur.
3. Omitting wait(mutex) or signal(mutex) (or both): either mutual exclusion is violated or a deadlock will occur.

Hence we use a high-level synchronization construct called the critical region. A shared variable v of type T is declared as:

v: shared T

Variable v is accessed only inside the statement

region v when B do S

where B is a boolean expression. While statement S is being executed, no other process can access variable v. Regions referring to the same shared variable exclude each other in time. When a process tries to execute the region statement, the boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B becomes true and no other process is in the region associated with v.

Solving the Bounded-Buffer Problem Using Critical Regions

The buffer space and its pointers are encapsulated in:

struct buffer {
    item pool[n];
    int count, in, out;
};

The producer:

region buffer when (count < n) {
    pool[in] = itemp;
    in = (in + 1) % n;
    count++;
}

The consumer:

region buffer when (count > 0) {
    itemc = pool[out];
    out = (out + 1) % n;
    count--;
}

2.12 Monitors

A monitor is a high-level abstraction that provides a convenient and effective mechanism for process synchronization:
- It is an abstract data type whose internal variables are accessible only by code within its procedures.
- Only one process may be active within the monitor at a time.
- By itself, however, it is not powerful enough to model some synchronization schemes.

The syntax of a monitor is:

monitor monitor-name {
    // shared variable declarations
    procedure P1 (...) { ... }
    ...
    procedure Pn (...) { ... }
    initialization code (...) { ... }
}

Schematic View of a Monitor

Condition Variables

A programmer who needs to write a synchronization scheme can define one or more variables of type condition:

condition x, y;

Two operations are allowed on a condition variable:
- x.wait() - the process that invokes the operation is suspended until another process invokes x.signal().
- x.signal() - resumes one of the processes (if any) that invoked x.wait(). If no process is suspended on x.wait(), it has no effect on the variable.

Monitor with Condition Variables
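C itself has no monitor construct, but the mutex-plus-condition-variable pattern in POSIX threads approximates one. The hedged sketch below (an illustration added to these notes) mirrors x.wait() and x.signal() for a single condition, "an item is available"; the function names and the items counter are assumptions for the example. Note that pthread_cond_signal() follows the "signal and continue" choice discussed next, so the waiter re-checks its condition in a while loop.

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER; /* the monitor lock */
static pthread_cond_t  x = PTHREAD_COND_INITIALIZER;  /* condition variable */
static int items = 0;                                 /* monitor state */

void monitor_put(void)
{
    pthread_mutex_lock(&m);        /* enter the monitor */
    items++;
    pthread_cond_signal(&x);       /* like x.signal() */
    pthread_mutex_unlock(&m);      /* leave the monitor */
}

void monitor_get(void)
{
    pthread_mutex_lock(&m);
    while (items == 0)             /* re-check: signal-and-continue semantics */
        pthread_cond_wait(&x, &m); /* like x.wait(): releases m while waiting */
    items--;
    pthread_mutex_unlock(&m);
}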

Condition Variable Choices

If process P invokes x.signal() while a process Q is suspended in x.wait(), what should happen next? If Q is resumed, then P must wait. The options are:
- Signal and wait: P waits until Q leaves the monitor or waits for another condition.
- Signal and continue: Q waits until P leaves the monitor or waits for another condition.

Solution to the Dining-Philosophers Problem Using Monitors

To distinguish among the three states in which we may find a philosopher, we use the following data structure:

enum {THINKING, HUNGRY, EATING} state[5];

Philosopher i can set the variable state[i] = EATING only if her two neighbours are not eating: (state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).

We also need to declare

condition self[5];

where philosopher i can delay herself when she is hungry but unable to obtain the chopsticks she needs.

monitor dp {
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbours
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;

            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

Each philosopher i invokes the operations pickup() and putdown() in the following sequence:

dp.pickup(i);
/* eat */
dp.putdown(i);

It is easy to show that this solution ensures that no two neighbours are eating simultaneously and that no deadlocks will occur. However, it is possible for a philosopher to starve to death.

CPU Scheduling

CPU scheduling is the basis of multiprogrammed operating systems. The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. Scheduling is a fundamental operating-system function; almost all computer resources are scheduled before use.

CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait, and processes alternate between these two states. Process execution begins with a CPU burst. That is followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst.

2.1.2 CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler). The ready queue is not necessarily a first-in, first-out (FIFO) queue. It may be a FIFO queue, a priority queue, a tree, or simply an unordered linked list.

Preemptive Scheduling

CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state
2. When a process switches from the running state to the ready state
3. When a process switches from the waiting state to the ready state
4. When a process terminates

Under circumstances 1 and 4, the scheduling scheme is non-preemptive; otherwise, the scheduling scheme is preemptive.

Non-Preemptive Scheduling

In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This scheduling method was used by early Microsoft Windows environments (Windows 3.x); later versions of Windows use preemptive scheduling.

2.1.5 Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Its function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart that program

2.2 Scheduling Criteria

1. CPU utilization: The CPU should be kept as busy as possible. CPU utilization may range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).
2. Throughput: The number of processes completed per time unit. For long processes, this rate may be one process per hour; for short transactions, throughput might be 10 processes per second.
3. Turnaround time: The interval from the time of submission of a process to the time of completion. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: The sum of the periods spent waiting in the ready queue.
5. Response time: The amount of time it takes to start responding, not the time it takes to output the response.

2.3 CPU Scheduling Algorithms

1. First-Come, First-Served Scheduling
2. Shortest-Job-First Scheduling
3. Priority Scheduling
4. Round-Robin Scheduling

First-Come, First-Served Scheduling

The process that requests the CPU first is allocated the CPU first. It is a non-preemptive scheduling technique. The implementation of the FCFS policy is easily managed with a FIFO queue.

Example:

Process   Burst Time
P1        24
P2        3
P3        3

If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

Gantt chart: | P1 (0-24) | P2 (24-27) | P3 (27-30) |

Average waiting time = (0 + 24 + 27) / 3 = 17 ms
Average turnaround time = (24 + 27 + 30) / 3 = 27 ms

The FCFS algorithm is particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals. It also suffers from the convoy effect: short processes wait behind a long process, e.g., one CPU-bound process followed by many I/O-bound processes.

Shortest-Job-First Scheduling

The CPU is assigned to the process that has the smallest next CPU burst. If two processes have the same length next CPU burst, FCFS scheduling is used to break the tie.

Example:

Process   Burst Time
P1        6
P2        8
P3        7
P4        3

Gantt chart: | P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 ms
Average turnaround time = (9 + 24 + 16 + 3) / 4 = 13 ms

SJF is optimal: it gives the minimum average waiting time for a given set of processes. The difficulty is knowing the length of the next CPU request. SJF may be used with either preemptive or non-preemptive scheduling.

Example:

Process   Arrival Time   Burst Time

P1        0              8
P2        1              4
P3        2              9
P4        3              5

Preemptive scheduling:

Gantt chart: | P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |

Waiting times:
P1: 10 - 1 = 9
P2: 1 - 1 = 0
P3: 17 - 2 = 15
P4: 5 - 3 = 2
Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms

Preemptive SJF is known as shortest-remaining-time-first scheduling.

Non-preemptive scheduling:

Gantt chart: | P1 (0-8) | P2 (8-12) | P4 (12-17) | P3 (17-26) |

Average waiting time = (0 + (8 - 1) + (17 - 2) + (12 - 3)) / 4 = 7.75 ms

Priority Scheduling

The SJF algorithm is a special case of the general priority-scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority.

Example:

Process   Burst Time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Gantt chart: | P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |

Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

SJF is priority scheduling where the priority is the predicted next CPU burst time. Priority scheduling can be preemptive or non-preemptive.

Drawback: indefinite blocking (starvation) - low-priority processes may never execute.

Solution: aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.

Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum (or time slice), is defined, and the ready queue is treated as a circular queue.

Example:

Process   Burst Time
P1        24
P2        3
P3        3

Time quantum = 4 ms.

Gantt chart: | P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |

Waiting times:
P1 = 10 - 4 = 6
P2 = 4
P3 = 7

The average waiting time is (6 + 4 + 7) / 3 = 17/3 = 5.66 ms.

The performance of the RR algorithm depends heavily on the size of the time quantum. If the time quantum is very large (effectively infinite), the RR policy is the same as the FCFS policy. If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.

Multilevel Queue Scheduling

Multilevel queue scheduling partitions the ready queue into several separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. There must be scheduling between the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.

Each queue has its own scheduling algorithm, for example:
- foreground - RR
- background - FCFS

Scheduling must be done between the queues:
- Fixed-priority scheduling (i.e., serve all from foreground, then from background), with the possibility of starvation.
- Time slicing: each queue gets a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.

An example of a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues.

Multilevel Feedback Queue Scheduling

Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

Example: Consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2. The scheduler first executes all processes in queue 0. Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2. A process that arrives for queue 0 will, in turn, preempt a process in queue 1.

A multilevel feedback queue scheduler is defined by the following parameters:
1. The number of queues
2. The scheduling algorithm for each queue
3. The method used to determine when to upgrade a process to a higher-priority queue
4. The method used to determine when to demote a process to a lower-priority queue
5. The method used to determine which queue a process will enter when that process needs service

2.4 Multiple Processor Scheduling

If multiple CPUs are available, the scheduling problem is correspondingly more complex. If several identical processors are available, then load sharing can occur. It is possible to provide a separate queue for each processor. In this case, however, one processor could be idle, with an empty queue, while another processor was very busy. To prevent this situation, we can use a common ready queue: all processes go into one queue and are scheduled onto any available processor. In such a scheme, one of two scheduling approaches may be used:

1. Self-scheduling - Each processor is self-scheduling: it examines the common ready queue and selects a process to execute. We must ensure that two processors do not choose the same process and that processes are not lost from the queue.
2. Master-slave structure - This approach avoids the problem by appointing one processor as scheduler for the other processors, creating a master-slave structure.

2.5 Real-Time Scheduling

2.5 Real-Time Scheduling

Real-time computing is divided into two types:
  1. Hard real-time systems
  2. Soft real-time systems

Hard real-time systems are required to complete a critical task within a guaranteed amount of time. Generally, a process is submitted along with a statement of the amount of time in which it needs to complete or perform I/O. The scheduler then either admits the process, guaranteeing that it will complete on time, or rejects the request as impossible. This is known as resource reservation.

Soft real-time computing is less restrictive. It requires only that critical processes receive priority over less critical ones. The system must have priority scheduling, and real-time processes must have the highest priority. The priority of real-time processes must not degrade over time, even though the priority of non-real-time processes may. Dispatch latency must also be small: the smaller the latency, the faster a real-time process can start executing. A particular hazard arises when a high-priority process must wait for a lower-priority one to finish with a resource it needs; this situation is known as priority inversion.
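Priority inversion is commonly mitigated with a priority-inheritance protocol: while a low-priority thread holds a lock, it temporarily inherits the priority of the highest-priority waiter. POSIX exposes this through a mutex attribute; the following is a minimal C sketch (not from the notes), and it assumes the platform supports the _POSIX_THREAD_PRIO_INHERIT option.

    #include <pthread.h>

    /* Create a mutex that applies priority inheritance, so a real-time
       thread blocked on it boosts the current holder instead of waiting
       behind unrelated medium-priority work. Returns 0 on success. */
    int make_pi_mutex(pthread_mutex_t *m) {
        pthread_mutexattr_t attr;
        int rc = pthread_mutexattr_init(&attr);
        if (rc != 0) return rc;
        rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        if (rc == 0)
            rc = pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }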

2.6 Algorithm Evaluation

To select an algorithm, we must first define the relative importance of measures such as:
  o Maximize CPU utilization
  o Maximize throughput

Algorithm evaluation can be done using:
  1. Deterministic modeling
  2. Queueing models
  3. Simulation

Deterministic Modeling

One major class of evaluation methods is analytic evaluation; one type of analytic evaluation is deterministic modeling. This method takes a particular predetermined workload and defines the performance of each algorithm for that workload. Consider the FCFS, SJF and RR scheduling algorithms applied to one such five-process workload:
  FCFS average waiting time = 28 ms
  SJF  average waiting time = 13 ms
  RR   average waiting time = 23 ms
The SJF policy results in less than one-half the average waiting time obtained with FCFS scheduling; the RR algorithm gives an intermediate value. This model is simple and fast. (A short sketch that reproduces the FCFS and SJF figures appears at the end of this section.)

Queueing Models

The computer system is described as a network of servers, each with a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing arrival rates and service rates, we can compute utilization, average queue length, average waiting time, and so on. This area of study is called queueing-network analysis. Let n be the average queue length, W the average waiting time in the queue, and λ the average arrival rate for new processes in the queue. Then

    n = λ × W

This equation is known as Little's formula. It is particularly useful because it is valid for any scheduling algorithm and any arrival distribution. For example, if on average 7 processes arrive per second and the queue normally holds 14 processes, then the average waiting time is W = n/λ = 2 seconds per process.

Simulations

To get a more accurate evaluation of scheduling algorithms, we can use simulations. A distribution-driven simulation may be inaccurate, however, because of relationships between successive events in the real system: the frequency distribution indicates only how many of each event occur, not the order of their occurrence. To correct this problem, we can use trace tapes. A trace tape is created by monitoring the real system and recording the sequence of actual events.
The transcription lost the individual sums behind the deterministic-modeling averages. The figures 28, 13 and 23 ms are consistent with the commonly used textbook workload of CPU bursts 10, 29, 3, 7 and 12 ms (all arriving at time 0), which is assumed in this small C sketch; treat the burst values as a reconstruction, not as part of the original notes.

    #include <stdio.h>

    #define N 5

    int main(void) {
        int burst[N] = {10, 29, 3, 7, 12};   /* P1..P5, all arrive at t = 0 */

        /* FCFS: each process waits for every burst that precedes it. */
        int t = 0, fcfs = 0;
        for (int i = 0; i < N; i++) { fcfs += t; t += burst[i]; }

        /* SJF: run the shortest bursts first (sort a copy, same formula). */
        int s[N];
        for (int i = 0; i < N; i++) s[i] = burst[i];
        for (int i = 0; i < N; i++)
            for (int j = i + 1; j < N; j++)
                if (s[j] < s[i]) { int tmp = s[i]; s[i] = s[j]; s[j] = tmp; }
        int sjf = 0; t = 0;
        for (int i = 0; i < N; i++) { sjf += t; t += s[i]; }

        printf("FCFS average wait = %d ms\n", fcfs / N);  /* 140/5 = 28 */
        printf("SJF  average wait = %d ms\n", sjf / N);   /*  65/5 = 13 */
        return 0;
    }

2.16 Deadlock

A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never change state again if the resources they have requested are held by other waiting processes. This situation is called a deadlock.

A process must request a resource before using it and must release the resource after using it:
  1. Request: If the request cannot be granted immediately, the requesting process must wait until it can acquire the resource.
  2. Use: The process can operate on the resource.
  3. Release: The process releases the resource.

Deadlock Characterization

Four conditions are necessary for a deadlock (a two-thread sketch exhibiting all four follows this list):
  1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
  2. Hold and wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.
  3. No preemption: Resources cannot be preempted. A resource can be released only voluntarily by the process holding it, after that process has completed its task.
  4. Circular wait: There exists a set {P0, P1, ..., Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
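All four conditions can be exhibited by two threads taking two locks in opposite orders. A minimal C sketch, not from the notes; the thread names and the sleep()-based interleaving are illustrative, and running it will typically hang forever.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 */
    pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 */

    void *p1(void *arg) {
        pthread_mutex_lock(&r1);   /* mutual exclusion: hold R1 ...        */
        sleep(1);                  /* ... long enough for p2 to grab R2    */
        pthread_mutex_lock(&r2);   /* hold and wait: keep R1, wait for R2  */
        pthread_mutex_unlock(&r2);
        pthread_mutex_unlock(&r1);
        return NULL;
    }

    void *p2(void *arg) {
        pthread_mutex_lock(&r2);   /* hold R2 ...                          */
        sleep(1);
        pthread_mutex_lock(&r1);   /* ... wait for R1: circular wait       */
        pthread_mutex_unlock(&r1);
        pthread_mutex_unlock(&r2);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, p1, NULL);
        pthread_create(&t2, NULL, p2, NULL);
        pthread_join(t1, NULL);    /* never returns once both threads block */
        pthread_join(t2, NULL);
        printf("not reached under deadlock\n");
        return 0;
    }

Mutexes cannot be preempted from their holders, so once each thread holds one lock and waits for the other, all four necessary conditions hold simultaneously.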

Resource-Allocation Graph

Deadlocks can be described in terms of a resource-allocation graph, a directed graph with a set of vertices V and a set of edges E. V is partitioned into two types:
  o P = {P1, P2, ..., Pn}, the set of all processes in the system
  o R = {R1, R2, ..., Rm}, the set of all resource types in the system
A request edge is a directed edge Pi -> Rj; an assignment edge is a directed edge Rj -> Pi. Pi is denoted as a circle and Rj as a square; if Rj has more than one instance, each instance is represented as a dot within the square.

Example:
  P = {P1, P2, P3}
  R = {R1, R2, R3, R4}
  E = {P1 -> R1, P2 -> R3, R1 -> P2, R2 -> P1, R2 -> P2, R3 -> P3}

Resource instances: one instance of resource type R1, two instances of resource type R2, one instance of resource type R3, and three instances of resource type R4.

Process states: Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1. Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of resource type R3. Process P3 is holding an instance of R3.

Resource-Allocation Graph with a Deadlock

This graph contains two cycles, so processes P1, P2 and P3 are deadlocked:
  P1 -> R1 -> P2 -> R3 -> P3 -> R2 -> P1
  P2 -> R3 -> P3 -> R2 -> P2

Graph with a Cycle but No Deadlock

The basic facts are:
  o If the graph contains no cycles, no process is deadlocked.
  o If the graph contains a cycle and there is only one instance per resource type, then a deadlock exists (see the detection sketch below).
  o If the graph contains a cycle and there are several instances per resource type, there is only the possibility of deadlock.
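With one instance per resource type, deadlock detection therefore reduces to cycle detection on the graph. A minimal C sketch, not from the notes; the vertex numbering and the adjacency-matrix encoding of process and resource vertices are assumptions of the sketch.

    #include <stdio.h>
    #include <string.h>

    #define V 6  /* vertices: P1,P2,P3 = 0..2 and R1,R2,R3 = 3..5 */

    int edge[V][V];  /* edge[u][v] = 1: request (P->R) or assignment (R->P) */
    int state[V];    /* 0 = unvisited, 1 = on current DFS path, 2 = done    */

    /* Depth-first search; an edge back to a vertex on the current path is
       a cycle, and with one instance per resource type a cycle means
       deadlock. */
    int has_cycle(int u) {
        state[u] = 1;
        for (int v = 0; v < V; v++)
            if (edge[u][v]) {
                if (state[v] == 1) return 1;
                if (state[v] == 0 && has_cycle(v)) return 1;
            }
        state[u] = 2;
        return 0;
    }

    int main(void) {
        /* P1->R1->P2->R3->P3->R2->P1, as in the deadlocked graph above. */
        edge[0][3] = 1; edge[3][1] = 1;  /* P1 requests R1; R1 held by P2 */
        edge[1][5] = 1; edge[5][2] = 1;  /* P2 requests R3; R3 held by P3 */
        edge[2][4] = 1; edge[4][0] = 1;  /* P3 requests R2; R2 held by P1 */

        memset(state, 0, sizeof state);
        for (int u = 0; u < V; u++)
            if (state[u] == 0 && has_cycle(u)) {
                puts("deadlock: cycle found");
                return 0;
            }
        puts("no cycle, no deadlock");
        return 0;
    }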

Methods for Handling Deadlocks

There are three general ways to handle deadlocks:
  o Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
  o Allow the system to enter a deadlock state, detect it, and recover.
  o Ignore the problem altogether and pretend that deadlocks never occur in the system. Most operating systems, including Linux, take this approach.

Deadlock Prevention

Deadlock prevention ensures that the system never enters the deadlock state. It is a set of methods for ensuring that at least one of the four necessary conditions cannot hold; by preventing any one of them, we prevent the occurrence of a deadlock.

1. Mutual exclusion
The mutual-exclusion condition must hold for non-sharable resources; a printer, for example, cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not require mutually exclusive access. Read-only files are an example: if several processes attempt to open a read-only file at the same time, they can be granted simultaneous access to the file. A process never needs to wait for a sharable resource.

2. Hold and wait
We must guarantee that whenever a process requests a resource, it does not hold any other resources. One technique requires each process to request and be allocated all its resources before it begins execution. Consider a process that copies data from a DVD drive to a file on disk, sorts the file, and then prints the results to a printer. If all resources must be requested at the beginning, the process must initially request the DVD drive, disk file, and printer; it will then hold the printer for its entire execution, even though it needs the printer only at the end.

Another technique allows a process to request resources only when it holds none: before it can request any additional resources, it must release all the resources that it is currently allocated. This allows the process above to request initially only the DVD drive and disk file. It copies from the DVD drive to the disk and then releases both. The process must then request the disk file and the printer; after copying the disk file to the printer, it releases these two resources and terminates.

These techniques have two main disadvantages:
  1. Resource utilization may be low, since many of the resources may be allocated but unused for a long period.
  2. Starvation is possible: a process that needs several popular resources may have to wait indefinitely.

3. No preemption
If a process is holding some resources and requests another resource that cannot be immediately allocated to it (that is, the process must wait), then all resources it currently holds are preempted; these resources are implicitly released. The process will be restarted only when it can regain its old resources, as well as the new ones it is requesting.
Alternatively, if a process requests some resources, we first check whether they are available. If they are, we allocate them. If they are not, we check whether they are allocated to some other process that is itself waiting for additional resources; if so, we preempt the desired resources from the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait. While it is waiting, some of its resources may be preempted, but only if another process requests them. A process can be restarted only when it is allocated the new resources it is requesting and recovers any resources that were preempted while it was waiting.

4. Circular wait
Impose a total ordering of all resource types and require each process to request resources in an increasing order of enumeration. Let R = {R1, R2, ..., Rm} be the set of resource types, and assign each resource type a unique integer. For example, if the set of resource types includes tape drives, disk drives, and printers, we might define F(tape drive) = 1, F(disk drive) = 5, F(printer) = 12. Each process can then request resources only in increasing order of enumeration: a process can initially request any number of instances of a resource type Ri; after that, it can request instances of resource type Rj if and only if F(Rj) > F(Ri). A lock-ordering sketch follows.
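The resource-ordering rule translates directly into a lock-ordering discipline: if every thread acquires locks only in increasing F() order, no cycle can form. A minimal C sketch, not from the notes; the Resource wrapper and the ordering values are illustrative (they reuse the F() values above).

    #include <pthread.h>
    #include <assert.h>

    /* Global ordering F: tape drive = 1, disk drive = 5, printer = 12. */
    typedef struct { pthread_mutex_t m; int order; } Resource;

    Resource tape    = { PTHREAD_MUTEX_INITIALIZER, 1  };
    Resource disk    = { PTHREAD_MUTEX_INITIALIZER, 5  };
    Resource printer = { PTHREAD_MUTEX_INITIALIZER, 12 };

    /* Acquire two resources; the caller may pass them in any order, but
       the locks are always taken lowest-F first, breaking circular wait. */
    void acquire_pair(Resource *a, Resource *b) {
        assert(a->order != b->order);   /* F assigns unique numbers */
        if (a->order > b->order) { Resource *t = a; a = b; b = t; }
        pthread_mutex_lock(&a->m);
        pthread_mutex_lock(&b->m);
    }

    void release_pair(Resource *a, Resource *b) {
        pthread_mutex_unlock(&a->m);
        pthread_mutex_unlock(&b->m);
    }

    int main(void) {
        acquire_pair(&printer, &tape);  /* internally locks tape first */
        release_pair(&printer, &tape);
        return 0;
    }

Because every thread locks tape before disk before printer, no thread can ever hold a high-F lock while waiting for a low-F one, so the circular-wait condition can never arise.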

Deadlock Avoidance

Deadlock avoidance requires that the operating system be given, in advance, additional information concerning which resources a process will request and use during its lifetime. With this information, it can be decided for each request whether or not the process should wait. To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.

Safe State

A state is safe if the system can allocate resources to each process in some order and still avoid a deadlock. A deadlocked state is an unsafe state, but not all unsafe states are deadlocks; an unsafe state may merely lead to a deadlock.

Two algorithms are used for deadlock avoidance:
  1. Resource-allocation graph algorithm - single instance of each resource type.
  2. Banker's algorithm - several instances of each resource type (a safety-check sketch appears at the end of this section).

Resource-Allocation Graph Algorithm

A claim edge Pi -> Rj, represented by a dashed directed edge, indicates that process Pi may request resource Rj at some time. When process Pi actually requests resource Rj, the claim edge Pi -> Rj is converted to a request edge. Similarly, when resource Rj is released by Pi, the assignment edge Rj -> Pi is reconverted to a claim edge Pi -> Rj. The request can be granted only if converting the request edge Pi -> Rj to an assignment edge Rj -> Pi does not form a cycle. If no cycle exists, the allocation leaves the system in a safe state; if a cycle is found, the allocation would put the system in an unsafe state.
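The safe-state test underlying the Banker's algorithm can be sketched as follows: repeatedly find a process whose remaining need fits within the available vector, pretend it runs to completion, and reclaim its allocation; the state is safe exactly when every process can finish this way. A minimal C sketch, not from the notes; the need/alloc/avail values are illustrative.

    #include <stdio.h>

    #define P 3  /* processes      */
    #define R 2  /* resource types */

    int need[P][R]  = {{3, 2}, {1, 0}, {0, 2}};  /* Max - Allocation */
    int alloc[P][R] = {{1, 0}, {2, 1}, {1, 1}};
    int avail[R]    = {1, 1};

    /* Returns 1 if the state is safe: some order lets every process finish. */
    int is_safe(void) {
        int work[R], finish[P] = {0};
        for (int j = 0; j < R; j++) work[j] = avail[j];

        for (int done = 0; done < P; ) {
            int progress = 0;
            for (int i = 0; i < P; i++) {
                if (finish[i]) continue;
                int fits = 1;
                for (int j = 0; j < R; j++)
                    if (need[i][j] > work[j]) { fits = 0; break; }
                if (fits) {                      /* run i to completion ...  */
                    for (int j = 0; j < R; j++)
                        work[j] += alloc[i][j];  /* ... reclaim its resources */
                    finish[i] = 1; done++; progress = 1;
                }
            }
            if (!progress) return 0;             /* nobody can finish: unsafe */
        }
        return 1;
    }

    int main(void) {
        printf("state is %s\n", is_safe() ? "safe" : "unsafe");
        return 0;
    }

With these values the sequence P2, P1, P3 succeeds, so the state is reported safe; a request would be granted only if the state remained safe after pretending to allocate it.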
