Topic 4 Scheduling. The objective of multi-programming is to have some process running at all times, to maximize CPU utilization.


Topic 4 Scheduling

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. On a uniprocessor, only one process is running at any instant. A process migrates between various scheduling queues throughout its lifetime, and the job of selecting processes from among these queues is carried out by a scheduler. The aim of processor scheduling is to assign processes to be executed by the processor. Scheduling affects the performance of the system, because it determines which process will wait and which will progress.

CPU Scheduling

CPU scheduling is the mechanism that allows one process to use the CPU while the execution of another process is on hold (in the waiting state), for example due to the unavailability of a resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.

BASIC CONCEPTS

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. In a uniprocessor system, there will never be more than one running process. Scheduling is a fundamental operating system function. The idea of multiprogramming is to execute a process until it must wait, typically for the completion of some I/O request. The CPU is one of the primary computer resources.

CPU scheduling is central to operating system design.

Scheduling Criteria

Scheduling criteria are also called the scheduling methodology. The key to multiprogramming is scheduling. Different CPU scheduling algorithms have different properties, and the criteria used for comparing these algorithms include the following:

CPU utilization: Keep the CPU as busy as possible. In theory it can range from 0 to 100%; in practice it ranges from about 40% to 90%.

Throughput: The rate at which processes are completed per unit of time.

Turnaround time: How long it takes to execute a process, calculated as the time gap between the submission of a process and its completion.

Waiting time: The sum of the periods a process spends waiting in the ready queue.

Response time: The time it takes to start responding after submission, calculated as the amount of time from when a request is submitted until the first response is produced.

Fairness: Each process should get a fair share of the CPU.
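As a small illustration of how these criteria relate, turnaround and waiting time can be computed from a process's arrival, burst and completion times. The numbers below are made up for illustration, not taken from the notes.

```python
# Sketch: turnaround time = completion - arrival (submission to finish),
# waiting time = turnaround - burst (time spent not executing).
# All values here are assumed example data.

def criteria(arrival, burst, completion):
    """Return (turnaround, waiting) for one process."""
    turnaround = completion - arrival   # submission -> completion
    waiting = turnaround - burst        # time spent in the ready queue
    return turnaround, waiting

if __name__ == "__main__":
    # A process arriving at t=2 with a 5-unit burst that completes at
    # t=12 has turnaround 10 and waiting time 5.
    t, w = criteria(arrival=2, burst=5, completion=12)
    print(t, w)  # 10 5
```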

CPU-I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states. Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, then another I/O burst, and so on. Eventually, the last CPU burst ends with a system request to terminate execution, rather than with another I/O burst.

Non-preemptive Scheduling: In non-preemptive mode, once a process enters the running state it continues to execute until it terminates, or until it blocks itself to wait for I/O or to request some operating system service.

Preemptive Scheduling:

In preemptive mode, the currently running process may be interrupted and moved to the ready state by the operating system, for example when a new process arrives or when an interrupt occurs. Preemptive policies may incur greater overhead than non-preemptive ones, but they can provide better service. It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time and response time.

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state, such as for an I/O request or invocation of the wait( ) system call.
2. When a process switches from the running state to the ready state, for example in response to an interrupt.
3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait( ).
4. When a process terminates.

A process scheduler assigns processes to the CPU based on particular scheduling algorithms. There are six popular process scheduling algorithms, which we discuss in this chapter:

First-Come, First-Served (FCFS) Scheduling
Shortest-Job-Next (SJN) Scheduling
Priority Scheduling
Shortest Remaining Time
Round Robin (RR) Scheduling
Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state it cannot be preempted until it completes, whereas preemptive scheduling is priority-based: the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)

Jobs are executed on a first come, first served basis. It is a non-preemptive scheduling algorithm. It is easy to understand and implement, and its implementation is based on a FIFO queue. It is poor in performance, as the average wait time is high. The wait time of each process is as follows (Wait Time = Service Time - Arrival Time):

P0: 0 - 0 = 0
P1: 5 - 1 = 4
P2: 8 - 2 = 6
P3: 16 - 3 = 13

Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
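The FCFS wait-time table above can be reproduced with a short sketch. The arrival times 0, 1, 2, 3 come from the table; the burst times [5, 3, 8, 6] are assumed values chosen so that the service (start) times come out as 0, 5, 8 and 16.

```python
# Sketch of the FCFS wait-time calculation. Arrival times are from the
# table above; burst times [5, 3, 8, 6] are assumed (P3's burst does
# not affect its wait time under FCFS).

def fcfs_wait_times(arrival, burst):
    """Return per-process waiting times under FCFS (no preemption)."""
    time, waits = 0, []
    for a, b in zip(arrival, burst):   # processes in arrival order
        time = max(time, a)            # CPU may sit idle until arrival
        waits.append(time - a)         # wait = service time - arrival
        time += b                      # run the whole burst
    return waits

waits = fcfs_wait_times([0, 1, 2, 3], [5, 3, 8, 6])
print(waits, sum(waits) / len(waits))  # [0, 4, 6, 13] 5.75
```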

Shortest Job Next (SJN)

Also known as shortest job first (SJF), this is a non-preemptive scheduling algorithm. It is the best approach to minimize waiting time. It is easy to implement in batch systems, where the required CPU time is known in advance, but impossible to implement in interactive systems, where it is not: the processor would need to know in advance how much time each process will take. The wait time of each process is as follows (Wait Time = Service Time - Arrival Time):

P0: 3 - 0 = 3
P1: 0 - 0 = 0
P2: 16 - 2 = 14
P3: 8 - 3 = 5

Average Wait Time: (3 + 0 + 14 + 5) / 4 = 5.50

Priority Based Scheduling

Priority scheduling is a non-preemptive algorithm and one of the most common scheduling algorithms in batch systems. Each process is assigned a priority; the process with the highest priority is executed first, and so on. Processes with the same priority are executed on a first come, first served basis. Priority can be decided based on memory requirements, time requirements or any other resource requirement. The wait time of each process is as follows (Wait Time = Service Time - Arrival Time):

P0: 9 - 0 = 9
P1: 6 - 1 = 5

P2: 14 - 2 = 12
P3: 0 - 0 = 0

Average Wait Time: (9 + 5 + 12 + 0) / 4 = 6.5

Shortest Remaining Time

Shortest remaining time (SRT) is the preemptive version of the SJN algorithm. The processor is allocated to the job closest to completion, but it can be preempted by a newly ready job with a shorter time to completion. It is impossible to implement in interactive systems, where the required CPU time is not known, and it is often used in batch environments where short jobs need to be given preference.

Round Robin (RR) Scheduling

Round Robin is a preemptive process scheduling algorithm. Each process is given a fixed time to execute, called a quantum. Once a process has executed for the given time period it is preempted, and another process executes for its time period. Context switching is used to save the states of preempted processes. The wait time of each process is as follows:

P0: (0 - 0) + (12 - 3) = 9
P1: (3 - 1) = 2
P2: (6 - 2) + (14 - 9) + (20 - 17) = 12
P3: (9 - 3) + (17 - 12) = 11

Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5

Multiple-Level Queues Scheduling

Multiple-level queues are not an independent scheduling algorithm; they make use of other existing algorithms to group and schedule jobs with common characteristics. Multiple queues are maintained for processes with common characteristics, each queue can have its own scheduling algorithm, and priorities are assigned to the queues. For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another. The process scheduler then alternately selects jobs from each queue and assigns them to the CPU based on the algorithm assigned to that queue.
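The round-robin trace behind the wait times listed above can be reproduced with a short simulation. The arrival times 0, 1, 2, 3 are from the examples; the quantum of 3 and the burst times [5, 3, 8, 6] are assumed values chosen so that the preemption points match the wait-time table.

```python
from collections import deque

# Sketch of round-robin scheduling with quantum 3. Arrivals [0,1,2,3]
# are from the notes; bursts [5,3,8,6] are assumed. This sketch assumes
# the ready queue never empties while processes are still arriving
# (true for this data), so CPU idle time is not handled.

def rr_wait_times(arrival, burst, quantum):
    """Return per-process waiting times (turnaround - burst)."""
    n = len(arrival)
    remaining = list(burst)
    completion = [0] * n
    ready = deque([0])                 # first process arrives at t=0
    arrived = 1                        # index of the next arrival
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        # enqueue everything that arrived during this slice, before
        # re-queueing the preempted process
        while arrived < n and arrival[arrived] <= t:
            ready.append(arrived)
            arrived += 1
        if remaining[i] > 0:
            ready.append(i)            # quantum expired: back of queue
        else:
            completion[i] = t          # process finished
    return [completion[i] - arrival[i] - burst[i] for i in range(n)]

waits = rr_wait_times([0, 1, 2, 3], [5, 3, 8, 6], quantum=3)
print(waits, sum(waits) / len(waits))  # [9, 2, 12, 11] 8.5
```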

Deadlock

What is a Deadlock?

A deadlock is a set of blocked processes, each holding a resource and waiting to acquire a resource held by another process in the set.

How to Avoid Deadlocks

Deadlocks can be avoided by preventing at least one of the four necessary conditions, because all four conditions must hold simultaneously for a deadlock to occur.

1. Mutual Exclusion: Resources shared read-only, such as read-only files, do not lead to deadlocks, but resources such as printers and tape drives require exclusive access by a single process.

2. Hold and Wait: Processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.

3. No Preemption: Preemption of process resource allocations can avoid the condition of deadlock, wherever possible.

4. Circular Wait: Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.

Necessary Conditions for Deadlock

There are four necessary conditions for a deadlock to occur: mutual exclusion, hold and wait, no preemption and circular wait.

Mutual Exclusion: There must be at least one non-shareable resource.

Hold and Wait: A deadlock can occur only if a process can hold a resource and then wait for another. If every process could only wait for a resource without holding any, there would be no deadlock, since no process would be holding on to any resource.

No Preemption: If resources allocated to a process cannot be preempted, there is a high chance of deadlock. If resources allocated to a process can be preempted, that is, taken away and given to a waiting process, deadlock cannot persist.

Circular Wait: If each process holds a resource and waits for another resource held by another waiting process, and this finally forms a circular wait sequence, a deadlock occurs. For instance, P1 (holding R1) waits for R2 (held by P2), P2 (holding R2) waits for R3 (held by P3), and P3 (holding R3) waits for R1 (held by P1): everybody keeps holding and waiting in a loop. Resource-allocation diagrams help in spotting such a circular wait sequence.

Handling Deadlock

The points above focus on preventing deadlocks. But what should be done once a deadlock has occurred? The following three strategies can be used to remove a deadlock after it occurs.

1. Preemption

We can take a resource from one process and give it to another. This resolves the deadlock, but it can cause problems of its own.

2. Rollback

In situations where deadlock is a real possibility, the system can periodically record the state of each process; when a deadlock occurs, it rolls everything back to the last checkpoint and restarts, allocating resources differently so that the deadlock does not recur.

3. Kill one or more processes

This is the simplest way, but it works.

One problem that arises in multiprogrammed systems is deadlock. A process or thread is in a state of deadlock (or is deadlocked) if it is waiting for a particular event that will not occur. In a system deadlock, one or more processes are deadlocked. Most deadlocks develop because of the normal contention for dedicated resources (i.e., resources that may be used by only one user at a time). Circular wait is characteristic of deadlocked systems. One example of a system that is prone to deadlock is a spooling system; a common solution is to restrain the input spoolers so that, when the spooling files begin to reach some saturation threshold, they do not read in more print jobs.

Characteristics of Deadlock

The four necessary conditions for deadlock are:

a) A resource may be acquired exclusively by only one process at a time (mutual-exclusion condition);
b) A process that has acquired an exclusive resource may hold it while waiting to obtain other resources (wait-for condition, also called the hold-and-wait condition);
c) Once a process has obtained a resource, the system cannot remove it from the process's control until the process has finished using it (no-preemption condition);
d) Two or more processes are locked in a "circular chain" in which each process in the chain is waiting for one or more resources that the next process in the chain is holding (circular-wait condition).

Because these are necessary conditions for a deadlock to exist, the existence of a deadlock implies that each of them must be in effect. Taken together, all four conditions are necessary and sufficient for deadlock to exist (i.e., if all these conditions are in place, the system is deadlocked). The four major areas of interest in deadlock research are deadlock prevention, deadlock avoidance, deadlock detection, and deadlock recovery.

Deadlock Prevention

Deadlock prevention algorithms ensure that at least one of the necessary conditions (mutual exclusion, hold and wait, no preemption, circular wait) does not hold. However, most prevention algorithms have poor resource utilization, and hence result in reduced throughput.

Mutual Exclusion: It is not always possible to prevent deadlock by preventing mutual exclusion (making all resources shareable), as certain resources cannot be shared safely.

Hold and Wait: There are two approaches, but both have disadvantages. First, a process can acquire all required resources before it starts execution. This avoids deadlock, but reduces throughput, since resources are held by processes even when they are not needed and could have been used by other processes during that time. The second approach is to allow a process to request a resource only when it is not holding any other resource. This may result in starvation, as all required resources might not be freely available at any one time.

No Preemption: There are two approaches here as well. If a process requests a resource that is held by another waiting process, the resource may be preempted from that waiting process. In the second approach, if a process requests resources that are not readily available, all the resources it currently holds are preempted. The challenge is that resources can be preempted only if the current state of the process can be saved, so that the process can be restarted later from the saved state.

Circular wait

To avoid circular wait, resources may be numbered, and we can require that each process request resources only in increasing order of these numbers. The ordering scheme may itself add complexity and may also lead to poor resource utilization.

Deadlock Avoidance

As noted above, most prevention algorithms have poor resource utilization and hence result in reduced throughput. Instead, we can try to avoid deadlocks by making use of prior knowledge about how processes will use resources: resources available, resources allocated, future requests and future releases. Most deadlock avoidance algorithms need every process to declare in advance the maximum number of resources of each type that it may need. Based on this information, we can decide whether a process should wait for a resource or not, and thus avoid any chance of circular wait.

If a system is in a safe state, we can try to stay away from unsafe states and avoid deadlock; deadlock cannot be ruled out in an unsafe state. A system is in a safe state if it is not deadlocked and can allocate resources to every process, up to its declared maximum, in some order; such an order of processes and allocations is a safe sequence. Deadlock avoidance algorithms refuse to allocate resources to a process if the allocation would put the system in an unsafe state. Since resource allocation is not always done right away, deadlock avoidance algorithms also suffer from the low-resource-utilization problem.

A resource-allocation graph is generally used to avoid deadlocks. If there are no cycles in the resource-allocation graph, then there is no deadlock. If there are cycles, there may be a deadlock; if there is only one instance of every resource, then a cycle implies a deadlock. The vertices of the resource-allocation graph are resources and processes, and the graph has request edges and assignment edges.
An edge from a process to a resource is a request edge, and an edge from a resource to a process is an assignment edge. A claim edge denotes that a request may be made in the future, and is represented as a dashed line. Based on claim edges we can see whether there is a chance of a cycle, and grant a request only if the system will still be in a safe state. Consider a graph with claim edges as described below:

If R2 is allocated to P2 and P1 then requests R2, there will be a deadlock.

The resource-allocation graph is not of much use when there are multiple instances of a resource. In that case we can use the Banker's algorithm. In this algorithm, every process must declare up front the maximum number of resources of each type it may need, subject to the maximum available instances of each type. Resources are allocated only if the allocation leaves the system in a safe state; otherwise the process must wait. The Banker's algorithm can be divided into two parts: the safety algorithm, which checks whether the system is in a safe state, and the resource-request algorithm, which tentatively makes an allocation and checks whether the system will still be in a safe state. If the new state is unsafe, the resources are not allocated and the data structures are restored to their previous state; in this case the process must wait for the resource. You can refer to any operating system textbook for the details of these algorithms.

Deadlock Detection

If deadlock prevention and avoidance are not done properly, a deadlock may occur, and the only thing left to do is to detect and recover from it.

If every resource type has only a single instance, then we can use a graph called the wait-for graph, a variant of the resource-allocation graph. Here, vertices represent processes, and a directed edge from P1 to P2 indicates that P1 is waiting for a resource held by P2. As in the resource-allocation graph, a cycle in a wait-for graph indicates a deadlock. So the system can maintain a wait-for graph and check it for cycles periodically to detect any deadlock.

The wait-for graph is not of much use when there are multiple instances of a resource, as a cycle may then not imply a deadlock. In that case, we can use an algorithm similar to the Banker's algorithm to detect deadlock, checking whether further allocations can be made based on the current allocations. You can refer to any operating system textbook for the details of these algorithms.

Deadlock Recovery

Once a deadlock is detected, you will have to break it. This can be done in different ways, including aborting one or more processes to break the circular wait condition causing the deadlock, and preempting resources from one or more of the deadlocked processes.
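The periodic cycle check on a wait-for graph described above can be sketched with a standard depth-first search. The graph below is a made-up example, not from the notes: each process maps to the processes it is waiting for.

```python
# Sketch of deadlock detection on a wait-for graph (single instance
# per resource type): a cycle in the graph means a deadlock exists.

def has_deadlock(wait_for):
    """DFS cycle detection: True if the wait-for graph has a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY               # p is on the current DFS path
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:
                return True           # back edge -> cycle -> deadlock
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK              # fully explored, no cycle via p
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: circular wait, so deadlock.
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_deadlock({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```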

Banker's Algorithm

The Banker's algorithm is a deadlock avoidance algorithm. It is so named because the same reasoning would let a bank decide whether a loan can be granted: consider n account holders in a bank; the bank grants a loan only if, even after handing out the cash, it could still satisfy the needs of every customer, in some order, up to each customer's maximum withdrawal. The Banker's algorithm works in a similar way in computers. Whenever a new process is created, it must specify exactly the maximum number of instances of each resource type that it will ever need.

Let us assume there are n processes and m resource types. The following data structures are used to implement the Banker's algorithm:

Available: An array of length m representing the number of available resources of each type. If Available[j] = k, then there are k instances of resource type Rj available.

Max: An n x m matrix representing the maximum number of instances of each resource that a process may request. If Max[i][j] = k, then process Pi can request at most k instances of resource type Rj.

Allocation: An n x m matrix representing the number of resources of each type currently allocated to each process. If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource type Rj.

Need: An n x m matrix indicating the remaining resource needs of each process. If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to complete its task. Need[i][j] = Max[i][j] - Allocation[i][j].

Resource-Request Algorithm: This describes the behavior of the system when a process Pi makes a resource request in the form of a request vector Requesti. The steps are:

1. If the number of requested instances of each resource type is no greater than the corresponding entry of Needi (declared previously by the process), go to step 2.
2. If the number of requested instances of each resource type is no greater than the available resources of that type, go to step 3. If not, the process has to wait, because sufficient resources are not available yet.
3. Now tentatively assume that the resources have been allocated. Accordingly, do:

Available = Available - Requesti
Allocationi = Allocationi + Requesti
Needi = Needi - Requesti

This step is done because the system assumes the resources have been allocated: fewer resources are available, the number of allocated instances increases, and the remaining need of the process decreases. That is what the three operations above represent. After completing these three steps, check whether the system is in a safe state by applying the safety algorithm. If it is, proceed to allocate the requested resources; otherwise, the process has to wait longer.

Safety Algorithm:

1. Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available and Finish[i] = false for i = 0, 1, ..., n - 1. This means that initially no process has finished, and the number of available resources is given by the Available array.

2. Find an index i such that both Finish[i] == false and Needi <= Work. In other words, find an unfinished process whose need can be satisfied by the currently available resources. If no such i exists, go to step 4.

3. Set Work = Work + Allocationi and Finish[i] = true, then go to step 2. The unfinished process found in step 2 is assumed to run to completion and release its resources, and the loop repeats for the remaining processes.

4. If Finish[i] == true for all i, then the system is in a safe state: if all processes can finish, the system is safe.
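The safety algorithm above can be sketched as follows. The matrices at the bottom are assumed example data in the style of common textbook exercises, not taken from the notes.

```python
# Sketch of the Banker's safety check. Available is a vector of length
# m; Allocation and Need are n x m matrices (Need = Max - Allocation).

def is_safe(available, allocation, need):
    """Return True if some safe sequence of all processes exists."""
    work = list(available)            # Work = Available
    n = len(allocation)
    finish = [False] * n              # Finish[i] = false for all i
    while True:
        progressed = False
        for i in range(n):
            # step 2: an unfinished process whose need fits in work
            if not finish[i] and all(need[i][j] <= work[j]
                                     for j in range(len(work))):
                # step 3: P_i runs to completion, releasing resources
                work = [work[j] + allocation[i][j]
                        for j in range(len(work))]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)        # step 4: safe iff all finished

# Assumed example data: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[max_need[i][j] - allocation[i][j] for j in range(3)]
        for i in range(5)]
print(is_safe(available, allocation, need))  # True
```

The resource-request algorithm can reuse this function: apply the three tentative updates, call is_safe on the resulting state, and roll the updates back if it returns False.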

MSBTE QUESTION ANSWERS

1. State and explain four scheduling criteria. (4M)

ANS:

CPU utilization: In multiprogramming the main objective is to keep the CPU as busy as possible. CPU utilization can range from 0 to 100 percent.

Throughput: The number of processes completed per unit time; a measure of the work done in the system. When the CPU is busy executing processes, work is being done. Throughput depends on the execution time required by each process: for long processes, throughput may be one process per unit time, whereas for short processes it may be ten processes per unit time.

Turnaround time: The time interval from the submission of a process to its completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O operations, and it indicates how long the process exists in the system.

Waiting time: The sum of the periods a process spends in the ready queue. When a process is selected from the job pool, it is loaded into main memory (the ready queue), where it waits until the CPU is allocated to it. Once the CPU is allocated, the process starts executing and may request resources; when a resource is not available, the process enters the waiting state, and when its I/O request completes it goes back to the ready queue, where it again waits for CPU allocation.

Response time: The period from the submission of a request until the first response is produced. It measures when the system starts responding to the request, not the completion of the process: a process can produce some output fairly early and continue computing new results while previous results are being output to the user.

2. State and describe the necessary conditions for deadlock. (4M)

ANS:

1. Mutual Exclusion: The resources involved are non-shareable. At least one resource must be held in a non-shareable mode, that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

2. Hold and Wait: The requesting process already holds resources while waiting for the requested resources. There must exist a process that is holding a resource already allocated to it while waiting for additional resources currently held by other processes.

3. No Preemption: Resources already allocated to a process cannot be preempted; resources are released only when the process has used them to completion, or voluntarily by the process holding them.

4. Circular Wait: The processes in the system form a circular list or chain in which each process is waiting for a resource held by the next process in the list.

2. With a neat diagram, explain multilevel queue scheduling. 4M
ANS: [Any relevant diagram shall be considered.]
Multilevel queue scheduling classifies processes into different groups. It partitions the ready queue into several separate queues, and processes are permanently assigned to one queue based on properties such as memory size, priority, or process type. Each queue has its own scheduling algorithm. For instance, a system with foreground and background processes can divide them into two queues, one for each: the foreground queue can be scheduled with the Round Robin algorithm, whereas the background queue can be scheduled with First Come First Served. Scheduling is done both among the queues and among the processes inside each queue.
Example: Consider the processes in the system divided into five groups, each with its own queue: system, interactive, interactive editing, batch, and student processes. The CPU is scheduled among the queues based on, for example, priority, total burst time, or process type; there are many different ways to schedule the queues.
1. On the basis of priority: suppose the system process queue has the highest priority; then processes from the other queues can execute only when the system queue is empty. If a process from the batch queue is executing and a new process arrives in the system queue, the batch process is preempted and the system process executes.
2. Round Robin among queues: a time quantum is defined for CPU allocation, and each queue executes its own processes for the specified time. For example, with a time quantum of 40 milliseconds, the CPU is first assigned to the system queue for 40 milliseconds.
Processes from the system queue execute one by one for 40 ms. Once the time quantum expires, the current process is preempted and the CPU is assigned to the interactive queue for a time quantum of 40 ms. In this way each

queue executes in turn in circular order, i.e. starting with the system queue, then the interactive queue, then the batch queue, then the student queue, and back to the system queue, and so on.

3. Write the steps of the Banker's algorithm to avoid deadlock. Also give one example showing the working of the Banker's algorithm. 8M
ANS: Banker's Algorithm: This algorithm calculates the resources allocated, required, and available before allocating resources to any process, in order to avoid deadlock. It maintains two matrices on a dynamic basis: Matrix A contains the resources allocated to the different processes at a given time, and Matrix B contains the resources still required by the different processes at the same time. F is the vector of free resources.
Step 1: When a process requests a resource, the OS allocates it on a trial basis.
Step 2: After the trial allocation, the OS updates all the matrices and vectors. This updating can be done by the OS in a separate work area in memory.
Step 3: It compares the F vector with each row of Matrix B on a vector-to-vector basis.
Step 4: If F is smaller than every row of Matrix B, i.e. even if all free resources were allocated, not a single process could complete its task, then the OS concludes that the system is in an unsafe state.
Step 5: If F is greater than or equal to some row of Matrix B, the OS allocates all required resources to that process on a trial basis, assuming that after completion the process will release all the resources allocated to it; these resources can then be added to the free vector.
Step 6: After the (trial) execution of a process, its row is removed from both matrices.
Step 7: The algorithm repeats from Step 3 for each remaining process and checks whether all processes can complete execution without entering an unsafe state. For each resource request by a process, the OS goes through all these trials of imaginary allocation and updating.
If, after these trials, the system remains in a safe state, the changes are made in the actual matrices.
Example: 5 processes P0 through P4; 3 resource types: A (10 instances), B (5 instances), and C (7 instances). Snapshot at time T0 (standard values consistent with the safe sequence stated below):

         Allocation    Max        Available
         A  B  C       A  B  C    A  B  C
    P0   0  1  0       7  5  3    3  3  2
    P1   2  0  0       3  2  2
    P2   3  0  2       9  0  2
    P3   2  1  1       2  2  2
    P4   0  0  2       4  3  3
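The safety check at the heart of the Banker's algorithm can be sketched in a few lines of Python. This is a minimal sketch, using the Allocation, Max, and Available values assumed for the snapshot above; `safe_sequence` is a hypothetical helper name:

```python
# Allocation, Max and Available for processes P0..P4, resource types A, B, C
# (assumed values for the snapshot above).
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
available  = [3, 3, 2]

def safe_sequence(allocation, maximum, available):
    n = len(allocation)
    # Need = Max - Allocation, per process and resource type.
    need = [[m - a for m, a in zip(maximum[i], allocation[i])] for i in range(n)]
    work = list(available)          # free resources (the F vector)
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            # A process can finish if its remaining need fits in free resources.
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # Assume it runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None             # no process can proceed: unsafe state
    return order

print(safe_sequence(allocation, maximum, available))   # -> [1, 3, 4, 0, 2]
```

Note that this greedy check may report a different safe ordering than the sequence quoted below; the existence of any such sequence is enough to prove the state safe.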

The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.

4. State the necessary conditions for deadlock. (Four conditions - 1 Mark each)
Ans:
1. Mutual Exclusion: At least one resource must be held in a non-shareable mode, that is, only one process at a time can have exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and Wait: There must exist a process that is holding at least one resource already allocated to it while waiting for additional resources that are currently held by other processes.
3. No Preemption: Resources already allocated to a process cannot be preempted; a resource is released only voluntarily by the process holding it, after that process has finished using it.
4. Circular Wait: The processes in the system form a circular list or chain in which each process is waiting for a resource held by the next process in the list.
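The circular-wait condition above can be checked mechanically: build a wait-for graph with an edge from each waiting process to the process holding the resource it wants, and search for a cycle. A minimal sketch, with a hypothetical three-process graph:

```python
# Detect a cycle in a wait-for graph; edges[p] lists the processes p waits on.
def has_cycle(edges):
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current DFS path / finished
    colour = {p: WHITE for p in edges}

    def dfs(p):
        colour[p] = GREY
        for q in edges.get(p, []):
            if colour.get(q, WHITE) == GREY:    # back edge: cycle found
                return True
            if colour.get(q, WHITE) == WHITE and dfs(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and dfs(p) for p in edges)

# Hypothetical example: P1 waits on P2, P2 on P3, P3 on P1 -> circular wait.
print(has_cycle({1: [2], 2: [3], 3: [1]}))   # True (deadlock possible)
print(has_cycle({1: [2], 2: [3], 3: []}))    # False (no cycle)
```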

5. State and describe the types of schedulers. Describe how each of them schedules jobs. (State types of scheduler - 1 Mark, Description of three types - 1 Mark each) [**Note: Any relevant description of schedulers shall be considered]
Ans: Schedulers are of three types:
o Long Term Scheduler
o Short Term Scheduler
o Medium Term Scheduler
Long Term Scheduler: Also called the job scheduler. The long term scheduler determines which programs are admitted to the system for processing: it selects processes from the job queue and loads them into memory, making them available for CPU scheduling. Its primary objective is to provide a balanced mix of jobs, such as I/O-bound and processor-bound, and it also controls the degree of multiprogramming. If the degree of multiprogramming is stable, the average rate of process creation must equal the average rate at which processes leave the system. On some systems the long term scheduler may be absent or minimal; time-sharing operating systems have no long term scheduler. The long term scheduler is used when a process changes state from new to ready.
Short Term Scheduler: Also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It handles the change of a process from the ready state to the running state: the CPU scheduler selects one process from among those that are ready to execute and allocates the CPU to it. The short term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short term scheduler is faster than the long term scheduler.
Medium Term Scheduler: Medium term scheduling is part of swapping. It removes processes from memory and thereby reduces the degree of multiprogramming. The medium term scheduler is in charge of handling the swapped-out processes. A running process may become suspended if it makes an I/O request.
Suspended processes cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

6. State and explain the criteria in CPU scheduling. (List any four criteria - 2 marks; Explanation - 2 marks)
Ans: 1. CPU utilization 2. Throughput 3. Turnaround time 4. Waiting time 5. Response time
Explanation of the criteria for CPU scheduling:
1. CPU utilization: Keep the CPU as busy as possible.
2. Throughput: The number of processes that complete their execution per time unit.
3. Turnaround time: The amount of time to execute a particular process; the interval from the time of submission of a process to the time of its completion.
4. Waiting time: The amount of time a process has been waiting in the ready queue.
5. Response time: The amount of time from when a request was submitted until the first response is produced, not the final output (relevant in time-sharing environments).

7. What is the FCFS algorithm? Describe it with an example. (Explanation - 2 marks; Example - 2 marks)
Ans: First-Come, First-Served (FCFS) Scheduling: FCFS is a non-preemptive algorithm. Once the CPU is allocated to a process, the process keeps the CPU until it releases it, either by terminating or by requesting I/O. In this algorithm, the process that requests the CPU first is allocated the CPU first. FCFS scheduling is implemented with a FIFO queue: when a process enters the ready queue, its PCB is linked to the tail of the queue; when the CPU is free, it is allocated to the process at the head of the queue, and that process is removed from the queue. The process releases the CPU on its own.
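FCFS, and the waiting and turnaround criteria from Q6, can be illustrated with a small calculation. The process names and burst times below are made up for illustration (all processes are assumed to arrive at time 0, in the order given):

```python
# FCFS: processes run to completion in arrival order.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # hypothetical CPU burst times

clock = 0
waiting, turnaround = {}, {}
for name, burst in bursts.items():       # FIFO queue order
    waiting[name] = clock                # time spent in the ready queue
    clock += burst
    turnaround[name] = clock             # from submission (t=0) to completion

print(waiting)      # {'P1': 0, 'P2': 24, 'P3': 27}
print(turnaround)   # {'P1': 24, 'P2': 27, 'P3': 30}
print(sum(waiting.values()) / len(waiting))   # average waiting time: 17.0
```

Note how the long first process makes the short ones wait (the "convoy effect"): if the same processes arrived in the order P2, P3, P1, the average waiting time would drop sharply.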

8. Differentiate between the long term scheduler and the short term scheduler on the basis of: i) Selection of job ii) Frequency of execution iii) Speed iv) Part of the system accessed (Difference with respect to the four criteria - 1 mark each)
Ans:
i) Selection of job: The long term scheduler selects jobs from the job pool and decides which processes are admitted to the system; the short term scheduler selects one process from among the processes in the ready queue.
ii) Frequency of execution: The long term scheduler executes infrequently, only when a new process is admitted; the short term scheduler executes very frequently, on every CPU scheduling decision.
iii) Speed: The long term scheduler is slower; the short term scheduler must be very fast, since it runs so often.
iv) Part of the system accessed: The long term scheduler accesses the job pool on secondary storage and loads processes into main memory (the ready queue); the short term scheduler accesses the ready queue in main memory and allocates the CPU.

9. Explain how the priority scheduling algorithm works with a suitable example; also list its advantages and disadvantages.

(Description - 3 marks; any relevant example - 3 marks; any one advantage - 1 mark; any one disadvantage - 1 mark)
Ans: Priority scheduling algorithm: In the priority scheduling algorithm, a number (integer) indicating priority is associated with each process, and the CPU is allocated to the process with the highest priority. A preemptive priority algorithm will preempt the CPU if the priority of a newly arrived process is higher than the priority of the currently running process. A major problem with priority scheduling is indefinite blocking, or starvation. A solution to the indefinite blocking of low-priority processes is aging: a technique of gradually increasing the priority of processes that wait in the system for a long period of time.
Advantages of priority scheduling: simplicity; reasonable support for priorities; suitable for applications with varying time and resource requirements.
Disadvantage of priority scheduling: indefinite blocking or starvation; priority scheduling can leave some low-priority processes waiting indefinitely for the CPU.
Example:
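A minimal non-preemptive sketch can serve as the example. The processes, burst times, and priorities below are hypothetical, and the convention that a smaller number means higher priority is an assumption (both conventions exist):

```python
# Non-preemptive priority scheduling: run the highest-priority ready process
# to completion. Smaller number = higher priority (assumed convention).
# (name, burst, priority); all processes assumed to arrive at time 0.
processes = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4), ("P4", 1, 5), ("P5", 5, 2)]

clock = 0
for name, burst, priority in sorted(processes, key=lambda p: p[2]):
    print(f"{name} runs from {clock} to {clock + burst} (priority {priority})")
    clock += burst
# Execution order: P2, P5, P1, P3, P4; all processes done at time 19.
```

With these values, P4 (the lowest priority) waits 18 time units before it ever runs, which illustrates how a steady stream of higher-priority arrivals could starve it indefinitely.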