UNIT II PROCESS SCHEDULING AND SYNCHRONIZATION

CPU Scheduling: Scheduling criteria - Scheduling algorithms - Multiple-processor scheduling - Real time scheduling - Algorithm evaluation. Case study: Process scheduling in Linux. Process Synchronization: The critical-section problem - Synchronization hardware - Semaphores - Classic problems of synchronization - Critical regions - Monitors. Deadlock: System model - Deadlock characterization - Methods for handling deadlocks - Deadlock prevention - Deadlock avoidance - Deadlock detection - Recovery from deadlock.

CPU Scheduling

Introduction

CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce the basic scheduling concepts and present several different CPU-scheduling algorithms. We also consider the problem of selecting an algorithm for a particular system.

Basic Concepts

The objective of multiprogramming is to have some process running at all times, in order to maximize CPU utilization. In a uniprocessor system, only one process may run at a time; any other processes must wait until the CPU is free and can be rescheduled. The idea of multiprogramming is relatively simple. A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then sit idle; all this waiting time is wasted. With multiprogramming, we try to use this time productively. Several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues.

Scheduling is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

CPU-I/O Burst Cycle

A process consists of CPU-bound and I/O-bound instructions, and process execution is a cycle of CPU execution and I/O wait: a process alternates between these two states. A CPU-bound process generates I/O requests infrequently, spending more of its time doing computation than an I/O-bound process does. An I/O-bound process spends more of its time doing I/O than doing computation. Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on. An I/O-bound program typically has many very short CPU bursts; a CPU-bound program might have a few very long CPU bursts.

Scheduling Method

Scheduling algorithms may use different criteria for selecting a process from the ready list. In general, a scheduling algorithm may be preemptive or nonpreemptive. CPU-scheduling decisions may take place under four circumstances:
1. When a process switches from the running state to the waiting state.
2. When a process switches from the running state to the ready state.
3. When a process switches from the waiting state to the ready state.
4. When a process terminates.

Preemptive scheduling takes place under circumstances 2 and 3; nonpreemptive scheduling takes place under circumstances 1 and 4. Under circumstances 1 and 4 there is no choice in scheduling: a new process (if one exists in the ready queue) must be selected. Under circumstances 2 and 3, however, there is a choice, and preemption is possible.

In preemptive scheduling, a running process may be replaced by a higher-priority process at any time. Preemptive strategies are sometimes used to ensure quick response to high-priority processes. Preemptive scheduling incurs a cost: it is more responsive, but it imposes higher overhead, since each process rescheduling entails a complete process switch.

In nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. This method can be used even on hardware platforms that lack the timer support required for preemption. Microsoft Windows 3.1 and the Apple Macintosh operating system used this type of scheduling method. Nonpreemptive scheduling is attractive due to its simplicity.

Dispatcher

The dispatcher is also called the short-term scheduler. It allocates the CPU to a process that is loaded into main memory and ready to run, for a fixed maximum amount of time. The functions of the dispatcher involve:
a. Switching context
b. Switching to user mode
c. Jumping to the proper location in the user program to restart that program.
The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

Scheduling Criteria

The scheduler may use several criteria in attempting to maximize system performance; the scheduling policy determines the importance of each criterion. Some commonly used criteria are:
1. CPU utilization
2. Throughput
3. Waiting time
4. Turnaround time
5. Response time
6. Priority

7. Balanced utilization
8. Fairness

1. CPU utilization: CPU utilization is the average fraction of time during which the processor is busy. The load on the system affects the level of utilization that can be achieved. CPU utilization may range from 0% to 100%. On large, expensive systems such as time-shared systems, CPU utilization may be the primary consideration.
2. Throughput: Throughput refers to the amount of work completed in a unit of time, that is, the number of processes the system can execute in a period of time. The higher the number, the more work is done by the system.
3. Waiting time: The average period of time a process spends waiting. Waiting time may be expressed as turnaround time less the actual execution time.
4. Turnaround time: The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
5. Response time: Response time is the time from the submission of a request until the first response is produced.
6. Priority: Give preferential treatment to processes with higher priorities.
7. Balanced utilization: Utilization of memory, I/O devices, and other system resources is also considered, not only CPU utilization.

8. Fairness: Avoid starvation; all processes must be given an equal opportunity to execute.

Scheduling Algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. In this section, we describe several of the many CPU-scheduling algorithms that exist. The types of scheduling algorithms are given below:
1. First come first served (FCFS)
2. Shortest job first (SJF)
3. Priority
4. Round-Robin (RR)
5. Multilevel Feedback Queue (MFQ)
6. Multilevel Queue

First Come First Served (FCFS)

FCFS is the simplest scheduling algorithm: the CPU is allocated to processes in their order of arrival. FCFS is a nonpreemptive scheduling algorithm. Implementation of the FCFS policy is easily managed with a FIFO queue: when a process enters the ready queue, its process control block (PCB) is linked onto the tail of the queue. While the FCFS algorithm is easy to implement, it generally does not perform well, so it is not often used on its own.

Let us consider four processes that arrive at time 0, with the CPU-burst time given in milliseconds.

Process   Burst time
P1        3
P2        6
P3        4
P4        2

i) Gantt chart:

| P1 | P2  | P3   | P4    |
0    3    9     13      15

ii) Waiting time:

Process   Waiting time
P1        0
P2        3
P3        9
P4        13

iii) Average waiting time: the sum of all the process waiting times divided by the number of processes.

Average waiting time = (0 + 3 + 9 + 13) / 4 = 25/4 = 6.25

iv) Turnaround time: It is computed by subtracting the time the process entered the system from the time it terminated. The entry time is 0 for all processes, so each turnaround time equals burst time plus waiting time.

Process   Turnaround time (Burst time + Waiting time)
P1        3 + 0 = 3
P2        6 + 3 = 9
P3        4 + 9 = 13
P4        2 + 13 = 15

v) Average turnaround time = (3 + 9 + 13 + 15) / 4 = 40/4 = 10

The average waiting time under FCFS is generally not minimal and may vary substantially if the processes' CPU-burst times vary greatly. FCFS has relatively low throughput for heavy workloads, and the FCFS algorithm is particularly troublesome for time-sharing systems.
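The arithmetic above is easy to mechanize. The following is a minimal C sketch (an illustration, not part of the original notes) that computes the FCFS waiting and turnaround times for the four-process example; the burst values simply mirror the table above.

#include <stdio.h>

int main(void) {
    int burst[] = {3, 6, 4, 2};             /* burst times of P1..P4, arrival at 0 */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];          /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat += tat;
        wait += burst[i];                   /* next process waits until this one ends */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

Running it prints an average waiting time of 6.25 and an average turnaround time of 10, matching the worked example.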

Shortest Job First Scheduling (SJF)

This algorithm associates with each process the length of that process's next CPU burst. When the CPU is free, it is assigned to the process in the ready queue which has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie. The SJF algorithm is used frequently in long-term scheduling. SJF may be either preemptive or nonpreemptive: a preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst.

Let us consider the same set of processes, with burst times in milliseconds. The arrival time of every process is 0, and the processes arrive in the order P1, P2, P3, P4.

Process   Burst time
P1        3
P2        6
P3        4
P4        2

The Gantt chart, waiting times and turnaround times are given below.

i) Gantt chart:

| P4 | P1  | P3   | P2    |
0    2    5     9       15

ii) Waiting time:

Process   Waiting time
P1        2
P2        9
P3        5
P4        0

iii) Average waiting time:

Average waiting time = (2 + 9 + 5 + 0) / 4 = 16/4 = 4

iv) Turnaround time: It is the sum of the burst time plus the waiting time of each process.

Process   Turnaround time (Burst time + Waiting time)
P1        3 + 2 = 5
P2        6 + 9 = 15
P3        4 + 5 = 9
P4        2 + 0 = 2

v) Average turnaround time = (5 + 15 + 9 + 2) / 4 = 31/4 = 7.75

The SJF algorithm is optimal: it gives the minimum average waiting time for a given set of processes. However, the SJF algorithm cannot be implemented at the level of short-term CPU scheduling, because there is no way to know the length of the next CPU burst.

Priority Scheduling

The CPU is allocated to the highest-priority process in the ready queue. Each process has a priority number. If two or more processes have the same priority, then the FCFS algorithm is applied to break the tie. In our examples, low numbers denote higher priority. Priority scheduling can be preemptive or nonpreemptive.

The priority of a process can be defined either internally or externally. Internally defined priorities consider time limits, the number of open files, memory use, and I/O-device use. External priorities are set by criteria outside the process, such as the importance of the process, the cost of the process, and so on.

When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A nonpreemptive priority algorithm will simply put the new process at the head of the ready queue; the currently executing process does not change state. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process; the currently executing process then changes state from running to ready.

Let us consider the following set of processes, with burst times in milliseconds. The arrival time of each process is 0, and the processes arrive in the order P1, P2, P3, P4.

Process   Burst time   Priority
P1        3            2
P2        6            4
P3        4            1
P4        2            3

The Gantt chart, waiting times and turnaround times for the priority scheduling algorithm are given below.

i) Gantt chart:

| P3 | P1  | P4   | P2    |
0    4    7     9       15

ii) Waiting time:

Process   Waiting time
P1        4
P2        9
P3        0
P4        7

iii) Average waiting time = (4 + 9 + 0 + 7) / 4 = 20/4 = 5

iv) Turnaround time:

Process   Turnaround time (Burst time + Waiting time)
P1        3 + 4 = 7
P2        6 + 9 = 15
P3        4 + 0 = 4
P4        2 + 7 = 9

v) Average turnaround time = (7 + 15 + 4 + 9) / 4 = 35/4 = 8.75

A priority scheduling algorithm can leave some low-priority processes waiting indefinitely for the CPU. This problem is called starvation. The starvation problem is solved by using the aging technique: in aging, the priority of a process that has been waiting a long time in the ready queue is gradually increased.

Round-Robin Scheduling

Time-sharing systems use the round-robin algorithm. Use of a small time quantum allows round robin to provide good response time. The RR scheduling algorithm is preemptive. To implement RR scheduling, the ready queue is maintained as a FIFO (First In First Out) queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after one time quantum, and dispatches the process.

With the RR algorithm, the principal design issue is the length of the time quantum or slice to be used. If the time quantum is very short, then short processes will move through the system relatively quickly, but the processing overhead involved in handling the clock interrupt and performing the scheduling and dispatch functions increases.

Thus, a very short time quantum should be avoided. Let us consider the same set of processes, with burst times in milliseconds. All processes arrive at time 0, and the time quantum is 2 milliseconds.

Process   Burst time
P1        3
P2        6
P3        4
P4        2

i) Gantt chart:

| P1 | P2 | P3 | P4 | P1 | P2  | P3  | P2  |
0    2    4    6    8    9    11    13    15

ii) Waiting time:

Process   Waiting time
P1        0 + 6 = 6
P2        2 + 5 + 2 = 9
P3        4 + 5 = 9
P4        6

iii) Average waiting time = (6 + 9 + 9 + 6) / 4 = 30/4 = 7.5

iv) Turnaround time:

Process   Turnaround time (Burst time + Waiting time)
P1        3 + 6 = 9
P2        6 + 9 = 15
P3        4 + 9 = 13
P4        2 + 6 = 8

v) Average turnaround time = (9 + 15 + 13 + 8) / 4 = 45/4 = 11.25
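As a cross-check, here is a small C sketch (an illustration, not part of the original notes) that simulates round robin with a 2 ms quantum for the same four processes and derives each waiting time from the completion time.

#include <stdio.h>

int main(void) {
    int burst[]  = {3, 6, 4, 2};                 /* bursts of P1..P4, all arrive at 0 */
    int remain[] = {3, 6, 4, 2};
    int queue[64], head = 0, tail = 0;
    int quantum = 2, clock = 0, finish[4];

    for (int i = 0; i < 4; i++) queue[tail++] = i;
    while (head < tail) {
        int p = queue[head++];
        int run = remain[p] < quantum ? remain[p] : quantum;
        clock += run;
        remain[p] -= run;
        if (remain[p] > 0) queue[tail++] = p;    /* preempted: back of the queue */
        else finish[p] = clock;                  /* done: record completion time */
    }
    for (int i = 0; i < 4; i++)                  /* waiting = turnaround - burst */
        printf("P%d: waiting=%d turnaround=%d\n",
               i + 1, finish[i] - burst[i], finish[i]);
    return 0;
}

The output reproduces the waiting times 6, 9, 9 and 6 computed above.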

Multilevel Queue Scheduling

Multilevel queues are an extension of priority scheduling whereby all processes of the same priority are placed in a single queue. For example, time-sharing systems often support the idea of foreground and background processes. Foreground processes service an interactive user, while background processes are intended to run whenever no foreground process requires the CPU. These two types of processes have different response-time requirements, so they may need different scheduling algorithms, and the foreground processes may have priority over the background processes.

A multilevel queue algorithm divides the ready queue into a number of separate queues. The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm: one queue may be scheduled by FCFS and another by the RR method. Once assigned, processes cannot change queues; that is, processes do not move from one queue to the other, since they do not change their foreground or background nature.

All processes are arranged in the ready list according to priority, so processes in a lower-priority queue get a chance to execute only when all higher-priority queues are empty; unless that situation is common, they face starvation.

Let us look at an example of a multilevel queue-scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty. If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted. Solaris 2 uses a form of this algorithm.

Another possibility is to time slice between the queues. Each queue gets a certain portion of the CPU time, which it can then schedule among the various processes in its queue. For instance, in the foreground-background queue example, the foreground queue can be given 80 percent of the CPU time for RR scheduling among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes in a FCFS manner.

Multilevel Feedback Queue Scheduling

The multilevel feedback queue (MFQ) scheduling algorithm overcomes the rigidity of multilevel queue scheduling: MFQ allows a process to move between queues. MFQ implements two or more scheduling queues; the idea is to separate processes with different CPU-burst characteristics. If a process uses too much CPU time, it is moved to a lower-priority queue.

For example, each process may start at the top-level queue. If the process completes within a given time slice, it departs the system. Processes that need more than one time slice may be reassigned by the operating system to a lower-priority queue, which gets a lower percentage of the processor time. If the process is still not finished after having run a few times in that queue, it may be moved to yet another lower-level queue.

A multilevel feedback queue scheduler is defined by the following parameters (a sketch of such a parameter set appears after this list):
1. The number of queues.
2. The scheduling algorithm for each queue.
3. The method used to determine when to demote a process to a lower-priority queue.
4. The method used to determine when to upgrade a process to a higher-priority queue.
5. The method used to determine which queue a process will enter when that process needs service.

The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it also requires some means of selecting values for all the parameters to define the best scheduler. Although a multilevel feedback queue is the most general scheme, it is also the most complex.
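To make the parameter list concrete, here is a small C sketch of how an MFQ scheduler's configuration might be described. It is illustrative only; the names, types, and values are assumptions, not taken from the original notes or any particular operating system.

/* Hypothetical MFQ configuration: one entry per queue level. */
enum policy { POLICY_FCFS, POLICY_RR };

struct mfq_level {
    enum policy algorithm;    /* parameter 2: scheduling algorithm for this queue */
    int quantum_ms;           /* RR time quantum; ignored for FCFS */
    int demote_after;         /* parameter 3: full quanta used before demotion */
    int promote_after_ms;     /* parameter 4: waiting time before aging upward */
};

/* Parameter 1: three queues. Parameter 5: new processes enter level 0. */
struct mfq_level config[3] = {
    { POLICY_RR,   2,  1, 500  },   /* top queue: short quantum, demote quickly */
    { POLICY_RR,   8,  2, 1000 },   /* middle queue: longer quantum */
    { POLICY_FCFS, 0, -1, -1   },   /* bottom queue: run to completion */
};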

Comparison between FCFS and RR Methods

1. The FCFS scheduling decision is nonpreemptive; the RR decision is preemptive.
2. FCFS has minimum overhead; RR has low overhead, though higher than FCFS because of its time-quantum context switches.
3. Under FCFS, response time may be high; RR provides good response time for short processes.
4. FCFS is troublesome for time-sharing systems; RR is designed mainly for time-sharing systems.
5. FCFS simply processes the workload in order of arrival; RR is similar to FCFS but uses a time quantum.
6. There is no starvation in FCFS; there is no starvation in RR.

Multiple-Processor Scheduling

Our discussion thus far has focused on the problems of scheduling the CPU in a system with a single processor. If multiple CPUs are available, the scheduling problem is correspondingly more complex. Many possibilities have been tried, and, as we saw with single-processor CPU scheduling, there is no one best solution. In the following, we discuss briefly some of the issues concerning multiprocessor scheduling. (Complete coverage of multiprocessor scheduling is beyond the scope of this text; for more information, please refer to the Bibliographical Notes.)

We concentrate on systems where the processors are identical (homogeneous) in terms of their functionality; any available processor can then be used to run any process in the queue. In a heterogeneous system, by contrast, only programs compiled for a given processor's instruction set could be run on that processor. Even within a homogeneous multiprocessor, there are sometimes limitations on scheduling.

Consider a system with an I/O device attached to a private bus of one processor. Processes wishing to use that device must be scheduled to run on that processor; otherwise, the device would not be available.

If several identical processors are available, then load sharing can occur. It would be possible to provide a separate queue for each processor. In this case, however, one processor could be idle, with an empty queue, while another processor was very busy. To prevent this situation, we use a common ready queue: all processes go into one queue and are scheduled onto any available processor.

In such a scheme, one of two scheduling approaches may be used. In one approach, each processor is self-scheduling: each processor examines the common ready queue and selects a process to execute. We must ensure that two processors do not choose the same process, and that processes are not lost from the queue. The other approach avoids this problem by appointing one processor as scheduler for the other processors, thus creating a master-slave structure.

Some systems carry this structure one step further, by having all scheduling decisions, I/O processing, and other system activities handled by one single processor, the master server. The other processors only execute user code. This asymmetric multiprocessing is far simpler than symmetric multiprocessing, because only one processor accesses the system data structures, alleviating the need for data sharing. However, it is also not as efficient: I/O-bound processes may bottleneck on the one CPU that is performing all of the operations. Typically, asymmetric multiprocessing is implemented first within an operating system, and is then upgraded to symmetric multiprocessing as the system evolves.
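With self-scheduling, the common ready queue is a shared data structure, so access to it must be mutually exclusive. The following pthread-based C sketch is an illustration under assumed names, not from the original text; it shows how each processor's scheduler loop might safely dequeue work.

#include <pthread.h>
#include <stddef.h>

struct task { struct task *next; };

static struct task *ready_head;                  /* common ready queue */
static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;

/* Called by each self-scheduling processor: returns the next process
   to run, or NULL if the queue is empty. The lock guarantees that two
   processors never dequeue the same task and none is lost. */
struct task *pick_next_task(void) {
    pthread_mutex_lock(&ready_lock);
    struct task *t = ready_head;
    if (t != NULL)
        ready_head = t->next;
    pthread_mutex_unlock(&ready_lock);
    return t;
}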

Real Time Scheduling

Real-time computing is divided into two types: hard real time and soft real time. A hard real-time task must meet its deadline; otherwise it will cause undesirable damage or a fatal error to the system. A soft real-time task has an associated deadline that is desirable but not mandatory; it still makes sense to schedule and complete the task even if it has passed its deadline.

Hard real-time systems are composed of special-purpose software running on hardware dedicated to their critical process, and they lack the full functionality of modern computers and operating systems. Implementing soft real-time functionality requires careful design of the scheduler and related aspects of the operating system. The system must have priority scheduling, and real-time processes must have the highest priority. The dispatch latency must be small: the smaller the latency, the faster a real-time process can start executing once it is runnable.

Algorithm Evaluation

How do we select a CPU-scheduling algorithm for a particular system? The first problem is defining the criteria to be used. Our criteria may include several measures, such as:
1. Maximize CPU utilization under the constraint that the maximum response time is 1 second.
2. Maximize throughput such that turnaround time is (on average) linearly proportional to total execution time.
Once the selection criteria have been defined, we want to evaluate the various algorithms under consideration.

Deterministic Modeling

One major class of evaluation methods is called analytic evaluation. Analytic evaluation uses the given algorithm and the system workload to produce a formula or number that evaluates the performance of the algorithm for that workload. One type of analytic evaluation is deterministic modeling. This method takes a particular predetermined workload and defines the performance of each algorithm for that workload.

For example, assume that we have the workload shown below. All five processes arrive at time 0, in the order given, with the length of the CPU burst given in milliseconds:

Process   Burst time
P1        10
P2        29
P3        3
P4        7
P5        12

Consider the FCFS, SJF, and RR (quantum = 10 milliseconds) scheduling algorithms for this set of processes. Which algorithm would give the minimum average waiting time? For the FCFS algorithm, we would execute the processes as

| P1  | P2   | P3 | P4 | P5  |
0    10    39   42   49    61

The waiting time is 0 milliseconds for process P1, 10 milliseconds for process P2, 39 milliseconds for process P3, 42 milliseconds for process P4, and 49 milliseconds for process P5. Thus, the average waiting time is (0 + 10 + 39 + 42 + 49)/5 = 28 milliseconds.

With nonpreemptive SJF scheduling, we execute the processes as

| P3 | P4 | P1  | P5   | P2   |
0    3    10   20    32     61

The waiting time is 10 milliseconds for process P1, 32 milliseconds for process P2, 0 milliseconds for process P3, 3 milliseconds for process P4, and 20 milliseconds for process P5. Thus, the average waiting time is (10 + 32 + 0 + 3 + 20)/5 = 13 milliseconds.

With the RR algorithm, we execute the processes as

| P1  | P2   | P3 | P4 | P5  | P2   | P5 | P2  |
0    10    20   23   30    40    50   52    61

The waiting time is 0 milliseconds for process P1, 32 milliseconds for process P2, 20 milliseconds for process P3, 23 milliseconds for process P4, and 40 milliseconds for process P5. Thus, the average waiting time is (0 + 32 + 20 + 23 + 40)/5 = 23 milliseconds.

We see that, in this case, the SJF policy results in less than one-half the average waiting time obtained with FCFS scheduling; the RR algorithm gives us an intermediate value.

Deterministic modeling is simple and fast. It gives exact numbers, allowing the algorithms to be compared. However, it requires exact numbers for input, and its answers apply only to those cases. The main uses of deterministic modeling are in describing scheduling algorithms and providing examples. In cases where we may be running the same programs over and over again and can measure the programs' processing requirements exactly, we may be able to use deterministic modeling to select a scheduling algorithm. Over a set of examples, deterministic modeling may indicate trends that can then be analyzed and proved separately.
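The SJF figure above can be verified mechanically. Here is a short C sketch (illustrative, not from the original notes) that sorts the workload by burst length and accumulates the waiting times:

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* ascending burst length */
}

int main(void) {
    int burst[] = {10, 29, 3, 7, 12};            /* P1..P5, all arrive at time 0 */
    int n = sizeof burst / sizeof burst[0];

    qsort(burst, n, sizeof burst[0], cmp);       /* SJF runs shortest job first */

    int clock = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += clock;                     /* this job waited until now */
        clock += burst[i];
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);  /* 13.0 */
    return 0;
}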

For example, it can be shown that, for the environment described (all processes and their times available at time 0), the SJF policy will always result in the minimum waiting time. In general, however, deterministic modeling is too specific, and requires too much exact knowledge, to be useful.

Queuing Models

The processes that are run on many systems vary from day to day, so there is no static set of processes (and times) to use for deterministic modeling. What can be determined, however, is the distribution of CPU and I/O bursts. These distributions may be measured and then approximated or simply estimated. The result is a mathematical formula describing the probability of a particular CPU burst. Commonly, this distribution is exponential and is described by its mean. Similarly, the distribution of times when processes arrive in the system (the arrival-time distribution) must be given.

The computer system is described as a network of servers. Each server has a queue of waiting processes. The CPU is a server with its ready queue, as is the I/O system with its device queues. Knowing arrival rates and service rates, we can compute utilization, average queue length, average wait time, and so on. This area of study is called queuing-network analysis.

As an example, let n be the average queue length (excluding the process being serviced), let W be the average waiting time in the queue, and let λ be the average arrival rate for new processes in the queue (such as three processes per second). Then, we expect that during the time W that a process waits, λ × W new processes will arrive in the queue. If the system is in a steady state, then the number of processes leaving the queue must be equal to the number of processes that arrive. Thus,

    n = λ × W

This equation is known as Little's formula. Little's formula is particularly useful because it is valid for any scheduling algorithm and arrival distribution. We can use Little's formula to compute one of the three variables if we know the other two. For example, if we know that seven processes arrive every second (on average), and that there are normally 14 processes in the queue, then we can compute the average waiting time per process as W = n/λ = 14/7 = 2 seconds.

Queuing analysis can be useful in comparing scheduling algorithms, but it also has limitations. At the moment, the classes of algorithms and distributions that can be handled are fairly limited. The mathematics of complicated algorithms or distributions can be difficult to work with, so arrival and service distributions are often defined in unrealistic, but mathematically tractable, ways. It is also generally necessary to make a number of independent assumptions that may not be accurate.

As a result, so that an answer can be computed at all, queuing models are often only an approximation of a real system, and the accuracy of the computed results may be questionable.

Simulations

To get a more accurate evaluation of scheduling algorithms, we can use simulations. Simulations involve programming a model of the computer system. Software data structures represent the major components of the system. The simulator has a variable representing a clock; as this variable's value is increased, the simulator modifies the system state to reflect the activities of the devices, the processes, and the scheduler. As the simulation executes, statistics that indicate algorithm performance are gathered and printed.

The data to drive the simulation can be generated in several ways. The most common method uses a random-number generator, which is programmed to generate processes, CPU-burst times, arrivals, departures, and so on, according to probability distributions. The distributions may be defined mathematically (uniform, exponential, Poisson) or empirically. If the distribution is to be defined empirically, measurements of the actual system under study are taken. The results are used to define the actual distribution of events in the real system, and this distribution can then be used to drive the simulation.

A distribution-driven simulation may be inaccurate, however, due to relationships between successive events in the real system. The frequency distribution indicates only how many of each event occur; it does not indicate anything about the order of their occurrence. To correct this problem, we can use trace tapes. We create a trace tape by monitoring the real system and recording the sequence of actual events. This sequence is then used to drive the simulation. Trace tapes provide an excellent way to compare two algorithms on exactly the same set of real inputs. This method can produce accurate results for its inputs.

Simulations can be expensive, however, often requiring hours of computer time. A more detailed simulation provides more accurate results, but also requires more computer time. In addition, trace tapes can require large amounts of storage space. Finally, the design, coding, and debugging of the simulator can be a major task.

Implementation

Even a simulation is of limited accuracy. The only completely accurate way to evaluate a scheduling algorithm is to code it, put it in the operating system, and see how it works. This approach puts the actual algorithm in the real system for evaluation under real operating conditions. The major difficulty is the cost of this approach. The expense is incurred not only in coding the algorithm and modifying the operating system to support it (as well as its required data structures), but also in the reaction of the users to a constantly changing operating system. Most users are not interested in building a better operating system; they merely want to get their processes executed and to use their results. A constantly changing operating system does not help the users to get their work done.

A form of this method is used commonly for new computer installations. For instance, a new web facility may have simulated user loads generated against it before it "goes live", to determine any bottlenecks in the facility and to estimate how many users the system can support.

The other difficulty with any algorithm evaluation is that the environment in which the algorithm is used will change. The environment will change not only in the usual way, as new programs are written and the types of problems change, but also as a result of the performance of the scheduler. If short processes are given priority, then users may break larger processes into sets of smaller processes.

If interactive processes are given priority over noninteractive processes, then users may switch to interactive use. For example, in DEC TOPS-20, the system classified interactive and noninteractive processes automatically by looking at the amount of terminal I/O. If a process did not input or output to the terminal in a 1-minute interval, the process was classified as noninteractive and was moved to a lower-priority queue. This policy resulted in a situation where one programmer modified his programs to write an arbitrary character to the terminal at regular intervals of less than 1 minute. The system gave his programs a high priority, even though the terminal output was completely meaningless.

The most flexible scheduling algorithms can be altered by the system managers or by the users. During operating-system build time, boot time, or run time, the variables used by the schedulers can be changed to reflect the expected future use of the system. The need for flexible scheduling is another instance where the separation of mechanism from policy is useful. For instance, if paychecks need to be processed and printed immediately, but are normally done as a low-priority batch job, the batch queue could be given a higher priority temporarily. Unfortunately, few operating systems allow this type of tunable scheduling.

Process Synchronization

When a co-operating process runs on a system with a single processor, concurrency is simulated by the processes sharing the CPU. Co-operating processes may directly share a logical address space or be allowed to share data only through files. We discuss here some of the issues of process synchronization.

The Critical-Section Problem

Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section.

do {
    entry section
        critical section
    exit section
        remainder section
} while (1);

General structure of a typical process Pi

To ensure correctness, mechanisms that control access to critical sections should satisfy the following requirements:
1. Mutual exclusion
2. Progress
3. Bounded waiting

1. Mutual exclusion: Ensure mutual exclusion between processes accessing the protected shared resource. If one process is executing in its critical section, then no other process is allowed to execute in its critical section.
2. Progress: If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder section can participate in the decision on which will enter its critical section next. This selection of the process cannot be postponed indefinitely.
3. Bounded waiting:

When a process requests access to a critical section, a decision that grants it access may not be delayed indefinitely. A process may not be denied access because of starvation or deadlock.

Solutions to the critical-section problem follow.

Two Process Solutions

We consider only two processes, P0 and P1, for solving the critical-section problem. In the algorithms below, i denotes the current process and j the other one.

1) Algorithm 1: Both processes P0 and P1 share a common integer variable. We give this variable the name turn and initialize it to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section. The structure of process Pi in algorithm 1 is shown below. Algorithm 1 allows only one process at a time to enter the critical section.

do {
    while (turn != i);   /* busy wait */
        critical section
    turn = j;
        remainder section
} while (1);

Structure of algorithm 1

Algorithm 1 ensures mutual exclusion but does not satisfy the progress requirement, since it requires strict alternation of the processes. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section.

2) Algorithm 2: Algorithm 1 does not retain sufficient information about the state of each process; it records only which process may enter its critical section. To solve this problem, the variable turn is replaced with a new array variable called flag, declared as:

boolean flag[2];

The elements of the array are initialized to false. If flag[i] is true, then Pi is ready to enter the critical section. The structure for algorithm 2 is given below.

do {
    flag[i] = true;
    while (flag[j]);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Structure for algorithm 2

In this algorithm, process Pi first sets flag[i] to true, signalling that it is ready to enter its critical section. Then Pi checks that process Pj is not also ready: if Pj were ready, Pi would wait until flag[j] became false, and only then enter the critical section. Algorithm 2 ensures mutual exclusion but not progress: if both processes set their flags to true at the same time, each waits for the other forever.

3) Algorithm 3: This gives a correct solution to the critical-section problem; it satisfies all three requirements. The processes share two variables:

boolean flag[2];
int turn;

Initially flag[0] = flag[1] = false, and the initial value of turn (0 or 1) is immaterial. The structure of process Pi in algorithm 3 is shown below.

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
        critical section
    flag[i] = false;
        remainder section
} while (1);

Structure of process Pi in algorithm 3

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time, but only one of these assignments will last; the final value of turn decides which of the two processes enters its critical section first.

Multiple Process Solutions

The bakery algorithm is used in the multiple-process solution: it solves the critical-section problem for n processes. Each process requesting entry to the critical section is given a numbered token such that the number on the token is larger than the maximum number issued earlier. The algorithm was developed for a distributed environment, and it permits processes to enter the critical section in the order of their token numbers. The bakery algorithm cannot guarantee that two processes do not receive the same number; in that case, the process with the lowest name is served first: if Pi and Pj receive the same number and i < j, then Pi is served first. The structure of process Pi in the bakery algorithm is given below.

do {
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (j = 0; j < n; j++) {
        while (choosing[j]);
        while ((number[j] != 0) && ((number[j], j) < (number[i], i)));
    }
        critical section
    number[i] = 0;
        remainder section
} while (1);

Structure of process Pi in the bakery algorithm
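Algorithm 3 is widely known as Peterson's algorithm. Before moving on to hardware support, here is a minimal runnable C sketch of it (an illustration, not from the original notes); C11 atomics are used because, on modern hardware, plain variables would not give the sequentially consistent memory ordering the algorithm assumes.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_bool flag[2];
atomic_int turn;
int counter;                                     /* the shared data we protect */

void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);            /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                    /* busy wait */
        counter++;                               /* critical section */
        atomic_store(&flag[i], false);           /* exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}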

Synchronization Hardware

Hardware features can make the programming task easier and improve system efficiency. Various synchronization mechanisms are available to provide interprocess coordination and communication. The test-and-set instruction can be used for the critical-section problem: in most synchronization schemes, a physical entity must be used to represent the resource, and this instruction manipulates it atomically. The test-and-set instruction is also usable in a multiprocessor environment: if two test-and-set instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order. The instruction is defined as follows:

boolean TestAndSet(boolean &target) {
    boolean rv = target;
    target = true;
    return rv;
}

The test-and-set instruction is used in implementing mutual exclusion. With a shared boolean variable lock, initialized to false, the structure of a process is given below.

do {
    while (TestAndSet(lock));   /* busy wait */
        critical section
    lock = false;
        remainder section
} while (1);
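Modern C exposes the same idea portably through atomic_flag, whose test-and-set operation is guaranteed atomic. The sketch below (illustrative, with assumed function names) builds a tiny spin lock from it:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts clear (false) */

void acquire(void) {
    /* atomically sets the flag and returns its previous value:
       loop until we are the one who changed it from clear to set */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait (spin) */
}

void release(void) {
    atomic_flag_clear(&lock);                 /* lock = false */
}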

Semaphores

A semaphore can be used to solve the critical-section problem. A semaphore S is a variable that holds an integer value and upon which the following operations are defined:
1. A semaphore may be initialized to a non-negative value.
2. The wait operation decrements the semaphore value. If the value becomes negative, then the process executing the wait is blocked.
3. The signal operation increments the semaphore value. If the value is not positive, then a process blocked by a wait operation is unblocked.

Pseudo-code for wait:

wait(s) {
    while (s <= 0);   /* busy wait */
    s = s - 1;
}

Pseudo-code for signal:

signal(s) {
    s = s + 1;
}

Wait and signal must be executed atomically: we must guarantee that no two processes can execute wait and signal operations on the same semaphore at the same time. This requirement is itself a critical-section problem, and it can be solved in either of two ways, for example by disabling interrupts on a uniprocessor, or by using a hardware instruction such as test-and-set.

A binary semaphore is a semaphore whose integer value can range only between 0 and 1; in principle, it is easier to implement than a general counting semaphore. A queue is used to hold processes waiting on the semaphore, and the process that has been blocked the longest is released from the queue first. Semaphores are not provided by hardware, but they have several attractive properties:

1. Semaphores are machine independent.
2. Semaphores are simple to implement.
3. Correctness is easy to determine.
4. A program can have many different critical sections, each guarded by a different semaphore.
5. Semaphores can acquire many resources simultaneously.

For both counting and binary semaphores, a queue is used to hold processes waiting on the semaphore, and a First In First Out (FIFO) policy is used to remove processes from the queue. A semaphore that releases the longest-blocked process first is called a strong semaphore; a semaphore that does not specify the order in which processes are removed from the queue is called a weak semaphore.

Busy waiting: a process is waiting for an event to occur and does so by executing instructions; by contrast, a process can wait for an event in some waiting queue (e.g., I/O, semaphore) without having the CPU assigned to it. Busy waiting cannot be avoided altogether, and it wastes CPU cycles that some other process might be able to use productively. A semaphore implemented with busy waiting is also called a spin lock. Spin locks are useful in multiprocessor systems, since no context switch is required while spinning.

Drawbacks of semaphores:
1. They are essentially shared global variables.
2. Access to semaphores can come from anywhere in a program.
3. There is no control or guarantee of proper usage.
4. There is no linguistic connection between the semaphore and the data to which the semaphore controls access.
5. They serve two purposes, mutual exclusion and scheduling constraints.

Producer-Consumer Problem using Semaphores

Three semaphores are required to solve the producer-consumer problem: full, empty and mutex. Full counts the number of slots that are full; empty counts the number of empty slots; mutex ensures that the producer and the consumer do not access the buffer at the same time.

#define BUFFER_CAPACITY 200

typedef int semaphore;
semaphore full = 0;                   /* initially no slots are full */
semaphore empty = BUFFER_CAPACITY;    /* initially all slots are empty */
semaphore mutex = 1;                  /* controls access to the buffer */

void producer(void) {
    int item;
    while (true) {
        item = produce_item();
        down(&empty);                 /* wait for an empty slot */
        down(&mutex);
        insert_item(item);
        up(&mutex);
        up(&full);                    /* one more full slot */
    }
}

void consumer(void) {
    int item;
    while (true) {
        down(&full);                  /* wait for a full slot */
        down(&mutex);
        item = remove_item();
        up(&mutex);
        up(&empty);                   /* one more empty slot */
        consume_item(item);
    }
}

The three semaphores are initialized as follows: full is 0, empty is equal to the number of slots in the buffer, and mutex is 1. With these values, the producer stops producing items when the buffer is full, and the consumer stops consuming items when the buffer is empty.
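The pseudocode above maps almost directly onto POSIX semaphores. A minimal runnable sketch follows (illustrative only; the buffer size and item counts are assumptions made for the demo):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                          /* assumed small buffer for the demo */
int buffer[N], in = 0, out = 0;
sem_t full, empty, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty);            /* down(&empty) */
        sem_wait(&mutex);
        buffer[in] = item; in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full);             /* up(&full) */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int k = 0; k < 20; k++) {
        sem_wait(&full);
        sem_wait(&mutex);
        int item = buffer[out]; out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&full, 0, 0);
    sem_init(&empty, 0, N);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}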

Classic Problems of Synchronization

Race conditions and the critical-section problem can be solved using various methods; some classic examples are discussed in this section.

Producer-Consumer Problem

One or more producers generate some type of data and place it in a buffer. A single consumer takes items out of the buffer one at a time. The system is to be constrained to prevent the overlap of buffer operations: only one agent (producer or consumer) may access the buffer at any one time. The structure of the buffer is shown below; out marks the next item to be consumed, and in marks the next free slot. The producer can generate items and store them in the buffer at its own pace; each time, the index in is incremented.

    b[1]  b[2]  b[3]  b[4]  b[5]  b[6]  ...
          out                     in

Infinite buffer for the producer-consumer problem

The consumer proceeds in a similar fashion, but must make sure that it does not attempt to read from an empty buffer. Given an infinite buffer, producers may run at any time without restrictions. The buffer itself may be implemented as an array, a linked list, or any other collection of data items.

Important variants:
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded buffer: In a bounded buffer, the producer may produce items only when there are empty buffer slots. A consumer may consume only produced items and must wait when no items are available. All producers must be kept waiting when the buffer is full; when the buffer is empty, consumers must wait, for they can never get ahead of the producers.

    b[1]  ...  b[n]      (with indices in and out)

In practice, buffers are usually implemented in a circular fashion: in points to the next slot available for a produced item, and out to the place from which the next item is to be consumed.

In real life, people watch the bin, and if it is empty or too full the problem is recognized and quickly resolved. In a computer system, however, such resolution is not so easy. Consider the CPU: it can generate output data much faster than a line printer can print it. Therefore, since this involves a producer and a consumer of two different speeds, we need a buffer where the producer can temporarily store data that can be retrieved by the consumer at a more appropriate speed. The buffer passes through three typical states: full, partially empty, and empty.

A solution to the producer-consumer problem satisfies the following conditions:
1. A producer must not overwrite a full buffer.
2. A consumer must not consume an empty buffer.
3. Producers and consumers must access buffers in a mutually exclusive manner.

Readers and Writers Problem

The reader-writer problem is a good example of process synchronization and concurrency mechanisms. It is defined as follows. There is a data area shared among a number of processes; the data area could be a file, a block of main memory, etc. There are a number of processes that only read the data area (readers) and a number of processes that only write to the data area (writers). The following conditions must be satisfied:
1. Any number of readers may simultaneously read the file.
2. Only one writer at a time may write to the file.
3. If a writer is writing to the file, no reader may read it.

The structure of a reader process is given below; mutex and wrt are semaphores initialized to 1, and readcount is initialized to 0.

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);           /* first reader locks out writers */
signal(mutex);
    ... reading is performed ...

wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);         /* last reader lets writers in */
signal(mutex);

The structure of a writer process is as follows:

wait(wrt);
    ... writing is performed ...
signal(wrt);

The readers-writers problem has several variations, all involving priorities: the readers may have the highest priority, or the writers may. Is the producer-consumer problem simply a special case of the readers-writers problem, with a single writer (the producer) and a single reader (the consumer)? The answer is no. The producer is not just a writer: it must read the queue pointers to determine where to write the next item, and it must determine whether the buffer is full. Similarly, the consumer is not just a reader, because it must adjust the queue pointers to show that it has removed a unit from the buffer.

The Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing her chopsticks.

When she is finished eating, she puts down both of her chopsticks and starts thinking again. The dining-philosophers problem is considered a classic synchronization problem, neither because of its practical importance nor because computer scientists dislike philosophers, but because it is an example of a large class of concurrency-control problems. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.

One simple solution is to represent each chopstick by a semaphore. A philosopher tries to grab a chopstick by executing a wait operation on that semaphore; she releases her chopsticks by executing the signal operation on the appropriate semaphores. Thus, the shared data are

semaphore chopstick[5];

where all the elements of chopstick are initialized to 1. Although this solution guarantees that no two neighbors are eating simultaneously, it nevertheless must be rejected because it has the possibility of creating a deadlock. Suppose that all five philosophers become hungry simultaneously, and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever.

Several remedies ensure freedom from deadlock:
1. Allow at most four philosophers to be sitting simultaneously at the table.
2. Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this she must pick them up in a critical section).
3. Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and then her right chopstick, whereas an even philosopher picks up her right chopstick and then her left chopstick.

Finally, any satisfactory solution to the dining-philosophers problem must guard against the possibility that one of the philosophers will starve to death: a deadlock-free solution does not necessarily eliminate the possibility of starvation.
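The asymmetric remedy (item 3) is easy to express with POSIX semaphores. The sketch below is illustrative, not from the original notes; the think/eat bodies are left as stubs, and the philosophers run forever, so this demo never terminates.

#include <pthread.h>
#include <semaphore.h>

#define N 5
sem_t chopstick[N];                  /* one semaphore per chopstick, each init 1 */

void *philosopher(void *arg) {
    int i = (int)(long)arg;
    int left = i, right = (i + 1) % N;
    /* odd philosophers take the left chopstick first; even take the right
       first, which breaks the circular wait that causes deadlock */
    int first  = (i % 2) ? left : right;
    int second = (i % 2) ? right : left;
    for (;;) {
        /* think(); */
        sem_wait(&chopstick[first]);
        sem_wait(&chopstick[second]);
        /* eat(); */
        sem_post(&chopstick[second]);
        sem_post(&chopstick[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}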

Critical Regions

Critical regions are small and infrequent, so system throughput is largely unaffected by their existence. A critical region is a control structure for implementing mutual exclusion over a shared variable. The declaration of the shared variable is:

var mutex: shared T;

The variable mutex of type T is to be shared among many processes. The variable mutex can be accessed only inside a region statement of the following form:

region mutex when B do S;

While statement S is being executed, no other process can access the variable mutex. B is the boolean expression that governs access to the critical region. Critical regions enforce restricted usage of shared variables and prevent potential errors resulting from improper use of ordinary semaphores. The critical region is very convenient for mutual exclusion; however, it is less versatile than a semaphore.

Conditional Critical Regions

Conditional critical regions allow us to specify synchronization as well as mutual exclusion. A conditional critical region is similar to a critical region, and the shared variable is declared in the same way. Conditional critical regions provide the following features:
1. They provide mutual exclusion.
2. They permit a process executing a conditional critical region to block itself until an arbitrary boolean condition becomes true.

The following code gives the idea of conditional critical regions:

var X: shared T;
begin
    repeat
        region X do
        begin
            await condition;

        end;
    ...

Variable X is called the conditional critical region variable. The code above allows a process waiting on a condition within a critical region to be suspended in a special queue, pending satisfaction of the related condition.

Monitors

Monitors are based on abstract data types. A monitor is a programming-language construct that provides functionality equivalent to that of semaphores but is easier to control. A monitor consists of procedures, the shared object and administrative data. The characteristics of a monitor are as follows:
1. Only one process can be active within the monitor at a time.
2. The local data variables are accessible only by the monitor's procedures and not by any external procedure.
3. A process enters the monitor by invoking one of its procedures.

The monitor provides a high level of synchronization. Synchronization of processes is accomplished via two special operations, wait and signal, which are executed within the monitor's procedures. Monitors are a high-level data-abstraction tool combining three features:
1. Shared data
2. Operations on the data
3. Synchronization and scheduling

A monitor is characterized by a set of programmer-defined operators. Monitors were devised to simplify the complexity of synchronization problems; every synchronization problem that can be solved with monitors can also be solved with semaphores, and vice versa. A monitor is an abstract data type for which only one process may be executing a procedure at any given time. Processes desiring to enter the monitor when it is already in use must wait; this waiting is automatically managed by the monitor.

A monitor is a software module consisting of one or more procedures, an initialization sequence and local data.

monitor monitor-name
{
    declarations of shared variables

    P1() { procedure body }
    P2() { procedure body }
    ...
    Pn() { procedure body }

    { initialization code }
}

Monitor syntax

The monitor construct has been implemented in a number of programming languages. Since monitors are a language feature, they are implemented with the help of a compiler: in response to the keywords monitor, condition, signal, wait and notify, the compiler inserts little bits of code into the program. The data variables in the monitor can be accessed by only one process at a time, so a shared data structure can be protected by placing it in a monitor. The data inside the monitor may be either global to all procedures within the monitor or local to a specific procedure.

A monitor supports synchronization by the use of condition variables that are contained within the monitor and accessible only within it. The two operations on a condition variable x are:
1. x.wait(): Suspend execution of the calling process on condition x. The monitor is now available for use by another process.
2. x.signal(): Resume execution of some process suspended after an x.wait on the same condition. This operation resumes exactly one suspended process.

A condition variable is like a semaphore, with two differences:
1. A semaphore counts the number of excess up operations, but a signal operation on a condition variable has no effect unless some process is waiting. A wait on a condition variable always blocks the calling process.
2. A wait on a condition variable automatically does an up on the monitor mutex and blocks the caller.

A condition variable can be pictured as the following interface:

interface condition {
    public void signal();
    public void wait();
}

Bounded-Buffer problem using monitors:

monitor BoundedBuffer {
    private Buffer b = new Buffer(20);
    private int count = 0;
    private condition nonfull, nonempty;

    public void add(Object item) {
        if (count == 20)
            nonfull.wait();       /* buffer full: wait until a slot frees up */
        b.add(item);
        count++;
        nonempty.signal();        /* wake a consumer waiting for an item */
    }

    public Object remove() {

        if (count == 0)
            nonempty.wait();      /* buffer empty: wait until an item arrives */
        Object result = b.remove();
        count--;
        nonfull.signal();         /* wake a producer waiting for a slot */
        return result;
    }
}

Each condition variable is associated with some logical condition on the state of the monitor. Consider what happens when a consumer is blocked on the nonempty condition variable and a producer calls add:
1. The producer adds the item to the buffer and calls nonempty.signal().
2. The producer is immediately blocked and the consumer is allowed to continue.
3. The consumer removes the item from the buffer and leaves the monitor.
4. The producer wakes up and, since the signal operation was the last statement in add, leaves the monitor.

Monitors are a higher-level concept than P and V: they are easier and safer to use, but less flexible. Many languages do not support monitors, though Java has made monitor-like constructs much more popular and well known.

Solve the reader-writer problem using a monitor, with reader priority.

Ans.:

readers-writers: monitor;
begin
    integer readercount;
    condition okread, okwrite;
    boolean busy;

    procedure startread;
    begin
        if busy then okread.wait;
        readercount := readercount + 1;
        okread.signal;    (* cascade: wake the next waiting reader *)
    end startread;

    procedure endread;
    begin
        readercount := readercount - 1;
        if readercount = 0 then okwrite.signal;
    end endread;

    procedure startwrite;
    begin
        if busy OR readercount <> 0 then okwrite.wait;
        busy := true;
    end startwrite;

    procedure endwrite;
    begin
        busy := false;
        if okread.queue then okread.signal
        else okwrite.signal;
    end endwrite;

begin (* initialization *)
    readercount := 0;
    busy := false;
end;
end readers-writers;

Solve the producer-consumer problem with monitors.

Ans.:

monitor producer_consumer
    condition full, empty;
    integer count;

Solve the producer-consumer problem with monitors.
Ans:

monitor producer-consumer
    condition full, empty;
    integer count;

    procedure insert(item : integer);
    begin
        if count = N then wait(full);
        insert_item(item);
        count := count + 1;
        if count = 1 then signal(empty)
    end;

    function remove : integer;
    begin
        if count = 0 then wait(empty);
        remove := remove_item;
        count := count - 1;
        if count = N - 1 then signal(full)
    end;

    count := 0;
end monitor;

procedure producer;
begin
    while true do
    begin
        item := produce_item;
        producer-consumer.insert(item)
    end
end;

procedure consumer;
begin
    while true do
    begin
        item := producer-consumer.remove;
        consume_item(item)
    end
end;
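The producer and consumer loops above map directly onto plain Java threads. As a small usage sketch, the driver below assumes the BoundedBuffer class from the earlier example:

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BoundedBuffer buffer = new BoundedBuffer();

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; ; i++)          // while true do: produce and insert
                    buffer.add(i);
            } catch (InterruptedException e) { }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true)                    // while true do: remove and consume
                    System.out.println(buffer.remove());
            } catch (InterruptedException e) { }
        });

        producer.start();
        consumer.start();
    }
}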

By making the mutual exclusion of critical regions automatic, monitors make parallel programming much less error prone than semaphores.
Drawbacks of monitors
1. The major weakness of monitors is the absence of concurrency when a monitor encapsulates a resource, since only one process can be active within a monitor at a time.
2. There is the possibility of deadlock in the case of nested monitor calls.
3. The monitor concept lacks implementation in most commonly used programming languages.
4. Monitors cannot easily be added if they are not natively supported by the language.
DEADLOCK
Introduction
Deadlock is a significant problem that can arise in a community of co-operating or competing processes. A deadlock is a situation where a group of processes is permanently blocked as a result of each process having acquired a subset of the resources needed for its completion and waiting for release of the remaining resources held by others in the same group, thus making it impossible for any of the processes to proceed. Resource managers and other operating-system processes can themselves be involved in a deadlock situation.
System Model
A finite number of resources is available in the system. These resources are distributed among a number of competing processes. Resources fall into two general categories:
1. Reusable resources.
2. Consumable resources.
A reusable resource is one that can be safely used by only one process at a time and is not depleted by that use. Processes obtain resource units that they later release for reuse by other processes. Examples of reusable resources include processors, I/O channels, I/O devices, primary and secondary memory, files, databases, semaphores etc. A consumable resource is one that can be created and destroyed; there is no limit on the number of consumable resources such as interrupts, signals, messages and information in I/O buffers.
A process must request a resource before using it, and must release the resource after using it. The number of resources requested may not exceed the total number of resources available in the system: if the system has 4 printers, then any request for printers must be for 4 or fewer. A process may utilize a resource only in the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
2. Use: The process operates on the resource.
3. Release: The process releases the resource.
Three processes might put the system in the state shown below by executing as follows:

Process 1               Process 2               Process 3
request(resource 1);    request(resource 2);    request(resource 3);
/* holding res 1 */     /* holding res 2 */     /* holding res 3 */
request(resource 2);    request(resource 3);    request(resource 1);

Process 1 is holding resource 1 and requesting resource 2; Process 2 is holding resource 2 and requesting resource 3; Process 3 is holding resource 3 and requesting resource 1. None of the processes can proceed, because each is waiting for a resource held by another blocked process. Unless one of the processes detects the situation and is able to withdraw its request for a resource and release the resource allocated to it, none of the processes will ever be able to run.
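This hold-and-request pattern is easy to reproduce. The sketch below shows the two-resource version in Java (the lock names are illustrative, not from the text): each thread holds one lock and then requests the other, so with unlucky timing both block forever.

public class DeadlockDemo {
    private static final Object resource1 = new Object();
    private static final Object resource2 = new Object();

    public static void main(String[] args) {
        Thread p1 = new Thread(() -> {
            synchronized (resource1) {          // holding res 1
                pause();                        // give p2 time to grab res 2
                synchronized (resource2) {      // requesting res 2: blocks forever
                    System.out.println("p1 done");
                }
            }
        });
        Thread p2 = new Thread(() -> {
            synchronized (resource2) {          // holding res 2
                pause();
                synchronized (resource1) {      // requesting res 1: blocks forever
                    System.out.println("p2 done");
                }
            }
        });
        p1.start();
        p2.start();                             // typically neither thread finishes
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}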

Deadlock Characterization
In a deadlock, processes never finish executing and system resources are tied up, preventing other jobs from starting.
Necessary Conditions
A deadlock can arise only if the following conditions hold regarding the way processes use resources:
1. Mutual exclusion
2. Hold and wait
3. No pre-emption
4. Circular wait
1. Mutual exclusion: Only one process may use a resource at a time. Once a process has been allocated a particular resource, it has exclusive use of the resource; no other process can use a resource while it is allocated to a process.
2. Hold and wait: A process may hold allocated resources while waiting for the assignment of others; for example, process P1 holds resource R1 while it requests another one.
3. No pre-emption: No resource can be forcibly removed from a process holding it. Resources can be released only by the explicit action of the process, rather than by the action of an external authority.
4. Circular wait: A situation can arise in which process P1 holds resource R1 while it requests resource R2, and process P2 holds R2 while it requests resource R1. Each process holds at least one resource needed by the next process in the chain; there may be more than two processes involved in a circular wait.
A deadlock is possible only if all four of these conditions hold simultaneously in the community of processes; these conditions are necessary for a deadlock to exist.
Resource Allocation Graph
A resource-allocation graph is used to describe deadlocks. It is also called the system resource-allocation graph. The graph consists of a set of vertices (V) and a set of edges (E). The set of all active processes in the system is denoted by P = {P1, P2, ..., Pn} and the set of all resource types in the system is denoted by R = {R1, R2, ..., Rm}. A request edge is an edge from a process to a resource, denoted Pi → Rj. An assignment edge is an edge from a resource to a process, denoted Rj → Pi; the holding of a resource by a process is shown by an assignment edge. In the resource-allocation graph each process is represented by a circle and each resource type by a square; a dot within the square represents an instance of the resource.
Fig. 6.2 shows a resource-allocation graph. The system consists of three processes P1, P2 and P3, and four resources R1, R2, R3 and R4. Resources R1 and R3 have one instance each, R2 has two instances and R4 has three instances.
1) The sets P, R and E consist of:
* P = {P1, P2, P3}
* R = {R1, R2, R3, R4}
* E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}
2) Resource instances:
* Resource R1 - one instance
* Resource R2 - two instances
* Resource R3 - one instance
* Resource R4 - three instances
3) Process states:

* Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
* Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3.
* Process P3 is holding an instance of R3.
If the graph contains no cycle, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist. Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently available, a request edge P3 → R2 is added to the graph. Two cycles now exist:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2 and P3 are deadlocked: process P2 is waiting for resource R3, which is held by process P3, and process P3 is waiting for either process P1 or process P2 to release resource R2.
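A deadlock detector can look for such cycles directly. The sketch below is an assumed illustration (the class name and graph representation are not from the text): it stores the resource-allocation graph as adjacency lists over string-labelled vertices such as "P1" and "R1", and uses a depth-first search to report whether any cycle exists.

import java.util.*;

public class RagCycleCheck {
    // Adjacency lists: request edges (Pi -> Rj) and assignment edges (Rj -> Pi).
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addEdge(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // DFS cycle test: an edge back to a vertex on the current path means a cycle.
    public boolean hasCycle() {
        Set<String> visited = new HashSet<>(), onPath = new HashSet<>();
        for (String v : edges.keySet())
            if (dfs(v, visited, onPath)) return true;
        return false;
    }

    private boolean dfs(String v, Set<String> visited, Set<String> onPath) {
        if (onPath.contains(v)) return true;
        if (!visited.add(v)) return false;
        onPath.add(v);
        for (String w : edges.getOrDefault(v, List.of()))
            if (dfs(w, visited, onPath)) return true;
        onPath.remove(v);
        return false;
    }

    public static void main(String[] args) {
        RagCycleCheck g = new RagCycleCheck();
        // The graph of Fig. 6.2 after P3 requests R2:
        g.addEdge("P1", "R1"); g.addEdge("R1", "P2");
        g.addEdge("P2", "R3"); g.addEdge("R3", "P3");
        g.addEdge("P3", "R2"); g.addEdge("R2", "P1"); g.addEdge("R2", "P2");
        System.out.println(g.hasCycle());   // prints true
    }
}

Remember that when resource types have multiple instances, a cycle is necessary but not sufficient for deadlock, as the next example shows.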

Resource-Allocation Graph with a Deadlock
In the deadlocked graph above, process P1 is also waiting for process P2 to release resource R1.
Resource-Allocation Graph with a Cycle but No Deadlock
Consider instead the cycle
P1 → R1 → P3 → R2 → P1
Here there is a cycle but no deadlock, because process P4 may release its instance of resource type R2; that resource can then be allocated to P3, breaking the cycle.
Example: Given the process resource usage and availability, draw the resource-allocation graph.

(Table: for each of processes P1-P4, its usage and availability of resources R1, R2 and R3; the numeric entries were lost in transcription.)
Consider the traffic deadlock shown in the figure. Show that the four necessary conditions for deadlock hold in this example.

Ans:
1. Mutual exclusion: Only one car may occupy a particular spot on the road at any instant.
2. Hold and wait: Each car holds its spot while waiting to advance; no car ever backs up.
3. No pre-emption: No car is permitted to push another car out of the way.
4. Circular wait: Each corner of the city block contains vehicles whose movement depends on the vehicles blocking the next intersection.
Methods for Handling Deadlocks
The deadlock problem is handled in one of three ways:
1. Protocol: Use a protocol to prevent or avoid deadlocks, taking care that the system will never enter a deadlock state.
2. Detect and recover: Allow the system to enter a deadlock state, detect it, and recover from the deadlock.
3. Ignore the problem: Pretend that deadlocks never occur in the system; this is the approach taken by most operating systems.
Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made. Deadlock avoidance requires that the operating system be given, in advance, additional information concerning which resources a process will request and use during its lifetime. If a system employs neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a deadlock situation may occur. If a system does not ensure that a deadlock will never occur, and also does not provide a mechanism for deadlock detection and recovery, then the system may reach a deadlock state without any way of recognizing or recovering from it.
Deadlock Prevention
Methods for preventing deadlock fall into two classes: indirect methods and direct methods. An indirect method prevents the occurrence of one of the first three necessary conditions, i.e. mutual exclusion, hold and wait, or no pre-emption. A direct method prevents the occurrence of a circular wait.

Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources: if access to a resource requires mutual exclusion, then mutual exclusion must be supported by the operating system. Some resources, such as files, may allow multiple concurrent accesses for reads but only exclusive access for writes; in this case deadlock can occur if more than one process requires write permission.
Hold and Wait
The hold-and-wait condition can be eliminated either by requiring a process to request all of its resources before it begins executing, or by forcing a process to release all resources held by it whenever it requests a resource that is not available. For example, consider a process that copies data from a floppy disk to a hard disk, sorts a disk file, and then prints the results on a printer. If all the resources must be requested at the beginning of the process, then the process must initially request the floppy disk, the hard disk and the printer; it will then hold the printer for its entire execution, even though it needs the printer only at the end. Both methods have disadvantages: resource utilization is low with the first method, and the second method is prone to starvation. A sketch of the release-and-retry idea follows the No Pre-emption section below.
No Pre-emption
Like mutual exclusion, this condition is often dictated by the nature of the resource, but it can be prevented in several ways. If a process holding certain resources is denied a further request, that process must release its original resources and, if necessary, request them again together with the additional resource. Alternatively, if a process requests a resource that is currently held by another process, the operating system may preempt the second process and require it to release its resources. In general, sequential I/O devices cannot be preempted; pre-emption is possible only for certain types of resources, such as the CPU and main memory.
Circular Wait
One way to prevent the circular-wait condition is a linear ordering of the different types of system resources. In this scheme, system resources are divided into different classes, and if a process has been allocated resources of class R, then it may subsequently request only resources of classes following R in the ordering.
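The release-and-retry protocol for hold and wait can be expressed with Java's ReentrantLock, whose tryLock( ) method fails instead of blocking. This is a sketch of the idea under assumed lock names: a process acquires all of its resources or none of them, releasing whatever it already holds whenever any request fails.

import java.util.concurrent.locks.ReentrantLock;

public class AcquireAllOrNone {
    // If any lock is unavailable, release everything acquired so far and report
    // failure; the caller retries later, so no process holds resources while waiting.
    public static boolean acquireAll(ReentrantLock... locks) {
        for (int i = 0; i < locks.length; i++) {
            if (!locks[i].tryLock()) {
                for (int j = 0; j < i; j++)
                    locks[j].unlock();
                return false;
            }
        }
        return true;
    }

    public static void releaseAll(ReentrantLock... locks) {
        for (ReentrantLock l : locks)
            l.unlock();
    }

    public static void main(String[] args) {
        ReentrantLock floppy = new ReentrantLock(), disk = new ReentrantLock(),
                      printer = new ReentrantLock();
        while (!acquireAll(floppy, disk, printer)) {
            Thread.yield();   // back off and retry; starvation is possible, as noted above
        }
        try {
            // ... copy, sort and print while holding all three resources ...
        } finally {
            releaseAll(floppy, disk, printer);
        }
    }
}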

For example, if a process holds a resource of class Ci, then it may thereafter request only resources of class Ci+1 or higher. Linear ordering of resource classes eliminates the possibility of circular waiting, since a process Pi holding a resource in class Ci cannot possibly wait for any process that is itself waiting for a resource in class Ci or lower. As with hold-and-wait prevention, circular-wait prevention may be inefficient, slowing down processes and denying resource access unnecessarily.
Deadlock Avoidance
Deadlock avoidance allows the first three necessary conditions but makes judicious choices to ensure that the deadlock point is never reached; it therefore allows more concurrency than prevention does. Deadlock avoidance requires additional information about how resources are to be requested: a decision is made dynamically whether the current resource-allocation request could, if granted, potentially lead to a deadlock. A figure in the original shows the relationship between the safe, unsafe and deadlock states. Two approaches are used to avoid deadlock:
1) Do not start a process if its demands might lead to deadlock.
2) Do not grant an incremental resource request to a process if this allocation might lead to deadlock.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
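The best-known avoidance scheme is the banker's algorithm, whose core is a safety test over exactly this state: available resources, current allocation, and maximum demand. The sketch below is an assumed illustration, not code from the text; it returns true when some ordering lets every process run to completion, i.e. the state is safe.

public class SafetyCheck {
    // available[j]: free units of resource j
    // alloc[i][j]:  units of resource j held by process i
    // max[i][j]:    maximum demand of process i for resource j
    public static boolean isSafe(int[] available, int[][] alloc, int[][] max) {
        int n = alloc.length, m = available.length;
        int[] work = available.clone();
        boolean[] finished = new boolean[n];
        int done = 0;
        while (done < n) {
            boolean progress = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && canFinish(i, work, alloc, max)) {
                    for (int j = 0; j < m; j++)     // process i runs to completion
                        work[j] += alloc[i][j];     // and releases what it holds
                    finished[i] = true;
                    done++;
                    progress = true;
                }
            }
            if (!progress) return false;            // no process can finish: unsafe
        }
        return true;
    }

    // Process i can finish if its remaining need (max - alloc) fits in work.
    private static boolean canFinish(int i, int[] work, int[][] alloc, int[][] max) {
        for (int j = 0; j < work.length; j++)
            if (max[i][j] - alloc[i][j] > work[j]) return false;
        return true;
    }
}

Avoidance then grants an incremental request only if the state that would result still passes this safety test.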
