Operating System Study Notes
Department of Computer Science and Engineering
Prepared by TKG, SM and MS


Chapter 1: Introduction to Operating Systems

An operating system acts as an intermediary between the user of a computer and the computer hardware. An Operating System (OS) is software that manages the computer hardware.

Hardware: Provides the basic computing resources of the system. It consists of the CPU, memory and the input/output (I/O) devices.

Application Programs: Define the ways in which these resources are used to solve the users' computing problems, e.g., word processors, spreadsheets, compilers and web browsers.

Components of a Computer System

Process Management: The operating system manages many kinds of activities, ranging from user programs to system programs such as the printer spooler, name servers and file servers.

Main-Memory Management: Primary memory (main memory) is a large array of words or bytes, each with its own address. Main memory provides storage that can be accessed directly by the CPU; for a program to be executed, it must be in main memory.

File Management: A file is a collection of related information defined by its creator. The computer can store files on disk (secondary storage), which provides long-term storage. Examples of storage media are magnetic tape, magnetic disk and optical disk; each has its own properties such as speed, capacity, data transfer rate and access method.

I/O System Management: The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.

Secondary-Storage Management: Secondary storage consists of tapes, disks and other media designed to hold information that will eventually be accessed in primary storage.

Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.

Protection System: Protection refers to the mechanisms for controlling the access of programs, processes or users to the resources defined by the computer system.

Networking: Generalizes network access.

Command-Interpreter System: The interface between the user and the OS.

Functions of Operating System

The operating system provides many functions for ensuring the efficient operation of the system itself. Some of them are listed below.

Resource Allocation: Allocation of resources to the various processes is managed by the operating system.

Accounting: The operating system may also keep track of the various computer resources and of how much of each resource is being used, and by which users.

Protection: Protection ensures that all access to system resources is controlled. When several processes execute concurrently, it should not be possible for one process to interfere with another or with the operating system itself.

Operating System Services

Many services are provided by the OS to user programs.

Program Execution: The operating system loads a program into memory and runs it.

I/O Operations: A running program may request I/O operations; for efficiency and protection, users cannot control I/O devices directly. Thus, the operating system must provide some means to perform I/O on their behalf.

File System Manipulation: Programs need to read and write files, and files may also be created and deleted by name or by programs. The operating system is responsible for this file management.

Communications: Often one process needs to exchange information with another. The exchange can take place between processes executing on the same computer, or between processes executing on different computer systems tied together by a computer network. All of this is taken care of by the operating system.

Error Detection: The operating system must be aware of possible errors and take appropriate action to ensure correct and consistent computing.

Types of Operating Systems

An operating system can perform a single operation or multiple operations at a time, so operating systems are classified by how they work.

1. Serial Processing: A serial-processing operating system executes all instructions in sequence; the jobs given by the user are executed in FIFO (first in, first out) order. Punch cards were typically used: all jobs were first prepared and stored on cards, the cards were fed into the system, and the instructions were executed one by one. The main drawback is that the user cannot interact with the system while a job is running, so no input can be supplied during execution.

2. Batch Processing: Batch processing is similar to serial processing, except that jobs of a similar type are prepared together, stored as a batch, and the batch is submitted to the system for processing. The main limitation is that the jobs in a batch must be of the same type, and a job that requires interactive input cannot be handled. The batch contains the jobs, and all of them are executed without user intervention.

3. Multi-Programming: Multiple programs are kept in the system at a time, so the CPU never sits idle: while one program is being worked on, a second program can be submitted for running.

The CPU then executes the second program after the completion of the first. In multiprogramming the user can also specify input, i.e., interact with the system.

4. Real-Time System: Here the response time is fixed in advance; the time within which results must be produced after processing is fixed. Real-time systems are used where a fast and timely response is required.
   Hard Real-Time System: The timing constraints are fixed and cannot be relaxed at any moment of processing; the CPU must process the data as soon as it is entered.
   Soft Real-Time System: Some timing constraints can occasionally be relaxed; after a command is given, the CPU performs the operation within a short delay (on the order of microseconds).

5. Distributed Operating System: Distributed means that data is stored and processed at multiple locations, on multiple computers placed at different sites and connected to each other over a network. If we want to take some data from another computer, we use distributed processing, and we can also move data from one location to another. Data is shared between many users, and the input and output devices can be accessed by multiple users.

6. Multiprocessing: In multiprocessing there are two or more CPUs under a single operating system; if one CPU fails, another CPU can take over its work. With multiprocessing we can execute many jobs at a time, dividing the work among the CPUs; if the first CPU completes its work before the second, the remaining work of the second CPU can be shared between them.

7. Parallel Operating Systems: These are used to interface multiple networked computers so that they complete tasks in parallel. A parallel operating system uses software to manage all the resources of the computers running in parallel, such as memory, caches, storage space and processing power. It works by dividing sets of calculations into smaller parts and distributing them between the machines on the network.

Question and answer section:

1) To schedule the processes, they are considered ______.
a) infinitely long b) periodic c) heavy weight d) light weight
Answer: b

2) If the period of a process is p, then the rate of the task is:
a) p^2 b) 2*p c) 1/p d) p
Answer: c

3) The scheduler admits a process using:
a) two-phase locking protocol b) admission control algorithm c) busy-wait polling d) none of these
Answer: b

4) The ______ scheduling algorithm schedules periodic tasks using a static priority policy with preemption.
a) earliest deadline first b) rate monotonic c) first come first served d) priority
Answer: b

5) Rate monotonic scheduling assumes that the:
a) processing time of a periodic process is the same for each CPU burst
b) processing time of a periodic process is different for each CPU burst

c) periods of all processes are the same
d) none of these
Answer: a

6) In rate monotonic scheduling, a process with a shorter period is assigned:
a) a higher priority b) a lower priority c) none of these
Answer: a

7) There are two processes P1 and P2, whose periods are 50 and 100 respectively. P1 is assigned higher priority than P2. The processing times are t1 = 20 for P1 and t2 = 35 for P2. Is it possible to schedule these tasks so that each meets its deadline using rate monotonic scheduling?
a) Yes b) No c) maybe d) none of these
Answer: a

8) If a set of processes cannot be scheduled by the rate monotonic scheduling algorithm, then:
a) they can be scheduled by the EDF algorithm
b) they cannot be scheduled by the EDF algorithm
c) they cannot be scheduled by any other algorithm
d) none of these
Answer: c

9) A process P1 has a period of 50 and a CPU burst of t1 = 25; P2 has a period of 80 and a CPU burst of 35. The total CPU utilization is:
a) 0.90 b) 0.74 c) 0.94

d) 0.80
Answer: c

10) Can the processes in the previous question be scheduled without missing the deadlines?
a) Yes b) No c) maybe
Answer: b

Chapter 2: CPU Scheduling

All of the processes that are ready to execute are placed in main memory; the selection of one of those processes is known as scheduling, and after selection that process gets control of the CPU.

Scheduling Criteria: The criteria for comparing CPU scheduling algorithms include the following.
CPU Utilization: Keeping the CPU as busy as possible.
Throughput: A measure of work done, i.e., the number of processes that are completed per time unit.
Turnaround Time: The interval from the time of submission of a process to the time of its completion. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
Waiting Time: The sum of the periods spent waiting in the ready queue.
Response Time: The time from the submission of a request until the first response is produced.

First Come First Served (FCFS) Scheduling

With this scheme, the process that requests the CPU first is allocated the CPU first. The FCFS policy is easily implemented with a FIFO queue: when a process enters the ready queue, its PCB (Process Control Block) is linked onto the tail of the queue.

When the CPU is free, it is allocated to the process at the head of the queue, and the running process is removed from the queue. FCFS scheduling is non-preemptive and very simple, and can be implemented with a FIFO queue; it is not a good choice when burst times vary widely (a mix of CPU-bound and I/O-bound processes), and its main drawback is that it causes short processes to wait for longer ones.

Shortest Job First (SJF) Scheduling

When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have next CPU bursts of the same length, FCFS scheduling is used to break the tie.
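For a concrete feel of how the ordering affects waiting time, the following C sketch computes the average waiting time of a non-preemptive schedule for a given execution order; the burst times are hypothetical and all processes are assumed to arrive at time 0.

    #include <stdio.h>

    /* Average waiting time for a non-preemptive schedule executed in the
       given order; all processes are assumed to arrive at time 0. */
    static double avg_waiting_time(const int burst[], const int order[], int n) {
        int time = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += time;          /* process order[i] waits until 'time' */
            time += burst[order[i]];     /* then runs for its full burst */
        }
        return (double)total_wait / n;
    }

    int main(void) {
        int burst[] = {24, 3, 3};        /* hypothetical CPU bursts */
        int fcfs[]  = {0, 1, 2};         /* arrival order */
        int sjf[]   = {1, 2, 0};         /* shortest burst first */
        printf("FCFS average waiting time: %.2f\n", avg_waiting_time(burst, fcfs, 3));
        printf("SJF  average waiting time: %.2f\n", avg_waiting_time(burst, sjf, 3));
        return 0;
    }

With these bursts, FCFS gives an average waiting time of 17 while SJF gives 3, which is exactly the short-processes-wait-for-long-ones effect mentioned above.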

A more appropriate term for this scheduling method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process rather than its total length. SJF works on burst times (not total job time); this information is difficult to obtain, so it must be approximated for short-term scheduling, and SJF is therefore used more often for long-term scheduling. It can be non-preemptive (SJF) or preemptive (SRTF), and it is a special case of priority scheduling.

Preemptive and Non-Preemptive Algorithm

The SJF algorithm can be either preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing: the next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first (SRTF) scheduling.
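The worked example that follows can be checked with a small simulation. Below is a minimal SRTF sketch in C; the arrival and burst times match the example below and are otherwise only illustrative.

    #include <stdio.h>

    int main(void) {
        /* Arrival and burst times of the four processes in the example below. */
        int arrival[] = {0, 1, 2, 3};
        int burst[]   = {8, 4, 9, 5};
        int remaining[4], finish[4], n = 4, done = 0;
        for (int i = 0; i < n; i++) remaining[i] = burst[i];

        for (int t = 0; done < n; t++) {
            int pick = -1;
            /* Pick the arrived process with the shortest remaining time. */
            for (int i = 0; i < n; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (pick < 0 || remaining[i] < remaining[pick]))
                    pick = i;
            if (pick < 0) continue;      /* CPU idle */
            if (--remaining[pick] == 0) { finish[pick] = t + 1; done++; }
        }
        for (int i = 0; i < n; i++)
            printf("P%d waiting time = %d\n",
                   i + 1, finish[i] - arrival[i] - burst[i]);
        return 0;
    }

Running this prints the waiting times 9, 0, 15 and 2 used in the example below.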

Process Table (preemptive SJF / SRTF example):
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5

The resulting schedule is P1 (0-1), P2 (1-5), P4 (5-10), P1 (10-17), P3 (17-26).
Waiting time for P1 = 10 - 1 = 9
Waiting time for P2 = 1 - 1 = 0
Waiting time for P3 = 17 - 2 = 15
Waiting time for P4 = 5 - 3 = 2
Average waiting time = (9 + 0 + 15 + 2) / 4 = 6.5 ms

Priority Scheduling

A priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS order. Either low numbers may represent high priority or low numbers may represent low priority; the convention must be taken from the question. Priority scheduling can be either preemptive or non-preemptive. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process; a non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue. A major problem with the priority scheduling algorithm is indefinite blocking, or starvation.
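A common remedy for starvation is aging: the priority of a process that has waited a long time is gradually raised. Before the worked priority-scheduling example below, here is a minimal aging sketch in C; the structure fields and the boost interval are illustrative assumptions, not part of the original notes.

    #include <stdio.h>

    #define AGING_INTERVAL 15   /* illustrative: boost after 15 ticks of waiting */

    struct proc { int pid; int priority; int waiting; };  /* smaller number = higher priority */

    /* Called once per scheduling tick for every process still in the ready queue. */
    static void age(struct proc *p) {
        p->waiting++;
        if (p->waiting % AGING_INTERVAL == 0 && p->priority > 0)
            p->priority--;               /* raise priority of a long-waiting process */
    }

    int main(void) {
        struct proc p = {1, 5, 0};
        for (int t = 0; t < 60; t++) age(&p);
        printf("P%d priority after waiting 60 ticks: %d\n", p.pid, p.priority);
        return 0;
    }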

Process Table (priority scheduling example, lower number = higher priority):
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2

The resulting schedule is P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19).
Waiting time for P1 = 6
Waiting time for P2 = 0
Waiting time for P3 = 16
Waiting time for P4 = 18
Waiting time for P5 = 1
Average waiting time = (6 + 0 + 16 + 18 + 1) / 5 = 8.2 ms

Round Robin (RR) Scheduling

The RR scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined. If the time quantum is too large, RR degenerates into FCFS; since context switching costs time, a rule of thumb is that 80% of CPU bursts should be shorter than the time quantum.

Process Table (RR example):
Process  Burst Time
P1       24
P2       3
P3       3

Let us take the time quantum = 4 ms. Then the resulting RR schedule is as follows:

P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-22), P1 (22-26), P1 (26-30).
P1 waits for 6 ms (10 - 4), P2 waits for 4 ms and P3 waits for 7 ms. Thus, the average waiting time = (6 + 4 + 7) / 3 = 5.67 ms (approx.).

Multilevel Queue Scheduling: This scheduling algorithm was created for situations in which processes are easily classified into different groups. A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. Processes are permanently assigned to one queue, generally based on some property of the process such as memory size, process priority or process type.

Multilevel Feedback Queue Scheduling: This scheduling algorithm allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it is moved to a lower-priority queue; similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.

CPU Overhead:
First In First Out: Low

Shortest Job First: Medium
Priority-based Scheduling: Medium
Multilevel Queue Scheduling: High

Throughput:
First In First Out: Low
Shortest Job First: High
Priority-based Scheduling: Low
Multilevel Queue Scheduling: Medium

Turnaround Time:
First In First Out: High
Shortest Job First: Medium
Priority-based Scheduling: Medium
Multilevel Queue Scheduling: Medium

Response Time:
First In First Out: Low
Shortest Job First: Medium
Priority-based Scheduling: High
Multilevel Queue Scheduling: Medium

Question answer section:

1) Which of the following does not belong to the queues for processes?
a) Job queue b) PCB queue c) Device queue d) Ready queue
Answer: b

2) When a process issues an I/O request:
a) it is placed in an I/O queue b) it is placed in a waiting queue c) it is placed in the ready queue d) it is placed in the job queue
Answer: a

3) When a process terminates: (choose two)
a) it is removed from all queues b) it is removed from all but the job queue c) its process control block is de-allocated d) its process control block is never de-allocated
Answer: a and c

4) What is a long-term scheduler?
a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates the CPU
c) It selects which process to remove from memory by swapping
d) None of these
Answer: a

5) If all processes are I/O bound, the ready queue will almost always be ______, and the short-term scheduler will have ______ to do.
a) full, little b) full, a lot c) empty, little d) empty, a lot
Answer: c

6) What is a medium-term scheduler?
a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates the CPU

c) It selects which process to remove from memory by swapping
d) None of these
Answer: c

7) What is a short-term scheduler?
a) It selects which process has to be brought into the ready queue
b) It selects which process has to be executed next and allocates the CPU
c) It selects which process to remove from memory by swapping
d) None of these
Answer: b

8) The primary distinction between the short-term scheduler and the long-term scheduler is:
a) the length of their queues b) the type of processes they schedule c) the frequency of their execution d) none of these
Answer: c

9) The only state transition that is initiated by the user process itself is:
a) block b) wakeup c) dispatch d) none of these
Answer: a

10) In a time-sharing operating system, when the time slot given to a process is completed, the process goes from the running state to the:
a) Blocked state b) Ready state c) Suspended state d) Terminated state

Answer: b

11) In a multiprogramming environment:
a) the processor executes more than one process at a time
b) the programs are developed by more than one person
c) more than one process resides in the memory
d) a single user can execute many programs at the same time
Answer: c

12) Suppose that a process is in the Blocked state, waiting for some I/O service. When the service is completed, it goes to the:
a) Running state b) Ready state c) Suspended state d) Terminated state
Answer: b

13) The context of a process in its PCB does not contain:
a) the value of the CPU registers b) the process state c) memory-management information d) context switch time
Answer: d

14) Which of the following need not necessarily be saved on a context switch between processes? (GATE CS 2000)
a) General-purpose registers b) Translation look-aside buffer c) Program counter d) All of these
Answer: b

15) Which of the following does not interrupt a running process? (GATE CS 2001)
a) A device

b) Timer c) Scheduler process d) Power failure
Answer: c

16) When several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, this is called a(n):
a) shared memory segment b) entry section c) race condition d) process synchronization
Answer: c

17) Which of the following state transitions is not possible?
a) blocked to running b) ready to running c) blocked to ready d) running to blocked
Answer: a

Chapter 3: Processes

Process: A process can be defined in any of the following ways:
A process is a program in execution.
It is an asynchronous activity.
It is the entity to which processors are assigned.
It is the dispatchable unit.
It is the unit of work in a system.

A process is more than the program code. It also includes the current activity, as represented by the following:
Current value of the Program Counter (PC)
Contents of the processor's registers
Values of the variables
The process stack, which contains temporary data such as subroutine parameters, return addresses and temporary variables
A data section that contains global variables

Process in Memory: Each process is represented in the operating system by a Process Control Block (PCB), also called a task control block.

PCB: A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:
The current state of the process, i.e., whether it is ready, running, waiting, etc.
A unique identification of the process, used to keep track of which process is which.
A pointer to the parent process and, similarly, a pointer to the child process (if it exists).
The priority of the process (part of the CPU scheduling information).
Pointers to locate the memory of the process.
A register save area.

The processor it is running on.

Process State Model

Process state: The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. It consists of at least the following:
Code for the program
The program's static data
The program's dynamic data
The program's procedure call stack
Contents of the general-purpose registers
Contents of the program counter (PC)
Contents of the program status word (PSW)
Operating system resources in use

A process goes through a series of discrete process states.
New State: The process is being created.
Running State: A process is said to be running if it has the CPU, that is, if it is actually using the CPU at that particular instant.
Blocked (or Waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
Ready State: A process is said to be ready if it could use a CPU if one were available. A ready process is able to run but has temporarily stopped running to let another process run.

Terminated State: The process has finished execution.

Schedulers: A process migrates among various scheduling queues throughout its lifetime. For scheduling purposes, the OS must select processes from those queues in some fashion. The selection is carried out by the appropriate scheduler.

Long-Term Scheduler: A long-term scheduler, or job scheduler, selects processes from the job pool (a mass storage device where processes are kept for later execution) and loads them into memory for execution. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory).

Short-Term Scheduler: A short-term scheduler, or CPU scheduler, selects from among the processes in main memory that are ready to execute and allocates the CPU to one of them.

Medium-Term Scheduler: The medium-term scheduler, present in some systems (typically time-sharing systems), is responsible for the swap-in and swap-out operations: loading a process into main memory from secondary memory (swap in), and taking a process out of main memory and storing it in secondary memory (swap out).

Dispatcher: The module that gives control of the CPU to the process selected by the short-term scheduler.
Functions of the dispatcher: switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

Question answer section:

1. Which process can be affected by other processes executing in the system?
a) cooperating process b) child process c) parent process

d) init process
Answer: a

2. When several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, this is called a:
a) dynamic condition b) race condition c) essential condition d) critical condition
Answer: b

3. If a process is executing in its critical section, then no other process can be executing in its critical section. This condition is called:
a) mutual exclusion b) critical exclusion c) synchronous exclusion d) asynchronous exclusion
Answer: a

4. Which one of the following is a synchronization tool?
a) thread b) pipe c) semaphore d) socket
Answer: c

5. A semaphore is a shared integer variable:
a) that cannot drop below zero b) that cannot be more than zero c) that cannot drop below one d) that cannot be more than one
Answer: a

6. Mutual exclusion can be provided by:
a) mutex locks b) binary semaphores c) both (a) and (b) d) none of the mentioned
Answer: c
Explanation: Binary semaphores are also known as mutex locks.

7. When a high-priority task is indirectly preempted by a medium-priority task, effectively inverting the relative priority of the two tasks, the scenario is called:
a) priority inversion b) priority removal c) priority exchange d) priority modification
Answer: a

8. Process synchronization can be done at the:
a) hardware level b) software level c) both (a) and (b) d) none of the mentioned
Answer: c

9. A monitor is a module that encapsulates:
a) shared data structures b) procedures that operate on shared data structures c) synchronization between concurrent procedure invocations d) all of the mentioned
Answer: d

10. To enable a process to wait within the monitor:
a) a condition variable must be declared as condition
b) condition variables must be used as Boolean objects
c) a semaphore must be used

d) all of the mentioned
Answer: a

Chapter 4: Inter-Process Communication

Processes executing concurrently in the operating system may be either independent or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that shares data with other processes is a cooperating process. The advantages of process cooperation are information sharing, computation speed-up, modularity, and the convenience of working on many tasks at the same time. Cooperating processes require an Inter-Process Communication (IPC) mechanism that allows them to exchange data and information. There are two fundamental models of IPC:
o Shared memory: A region of memory that is shared by the cooperating processes is established. The processes can then exchange information by reading and writing data in the shared region.
o Message passing: Communication takes place by means of messages exchanged between the cooperating processes.

Question and Answer section:

1) Inter-process communication:
a) allows processes to communicate and synchronize their actions when using the same address space
b) allows processes to communicate and synchronize their actions without using the same address space

c) allows the processes only to synchronize their actions without communication
d) none of these
Answer: b

2) A message-passing system allows processes to:
a) communicate with one another without resorting to shared data
b) communicate with one another by resorting to shared data
c) share data
d) name the recipient or sender of the message
Answer: a

3) An IPC facility provides at least two operations: (choose two)
a) write message b) delete message c) send message d) receive message
Answer: c and d

4) Messages sent by a process:
a) have to be of a fixed size b) have to be of a variable size c) can be fixed or variable sized d) none of these
Answer: c

5) The link between two processes P and Q used to send and receive messages is called a:
a) communication link b) message-passing link c) synchronization link d) all of these
Answer: a

6) Which of the following are TRUE for direct communication? (choose two)
a) A communication link can be associated with N processes (N = maximum number of processes supported by the system)
b) A communication link can be associated with exactly two processes
c) Exactly N/2 links exist between each pair of processes (N = maximum number of processes supported by the system)
d) Exactly one link exists between each pair of processes
Answer: b and d

7) In indirect communication between processes P and Q:
a) there is another process R to handle and pass on the messages between P and Q
b) there is another machine between the two processes to help communication
c) there is a mailbox to help communication between P and Q
d) none of these
Answer: c

8) In a non-blocking send:
a) the sending process keeps sending until the message is received
b) the sending process sends the message and resumes operation
c) the sending process keeps sending until it receives a message
d) none of these
Answer: b

9) In a zero-capacity queue: (choose two)
a) the queue has zero capacity
b) the sender blocks until the receiver receives the message
c) the sender keeps sending and the messages don't wait in the queue
d) the queue can store at least one message
Answer: a and b

10) The zero-capacity queue:
a) is referred to as a message system with buffering

b) is referred to as a message system with no buffering
c) is referred to as a link
d) none of these
Answer: b

11) Bounded-capacity and unbounded-capacity queues are referred to as:
a) programmed buffering b) automatic buffering c) user-defined buffering d) no buffering
Answer: b

Chapter 5: Concurrency, Process Synchronization & Deadlock

Process synchronization means sharing system resources among processes in such a way that concurrent access to shared data is handled properly, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure the synchronized execution of cooperating processes. Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently. Some of these problems are discussed below.

Critical Section Problem

A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. This means that in a group of cooperating processes, at a given point in time, only one process may be executing its critical section; if any other process also wants to execute its critical section, it must wait until the first one finishes.
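As a rough illustration, the following C sketch shows a shared counter whose update is the critical section, protected here with a POSIX mutex (the counter, iteration count and thread layout are illustrative choices); compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                        /* shared data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* entry section */
            counter++;                              /* critical section */
            pthread_mutex_unlock(&lock);            /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);         /* always 200000 with the lock */
        return 0;
    }

Without the lock around counter++, the two threads race on the shared variable and the final value can be smaller than 200000, which is exactly the inconsistency the critical section is meant to prevent.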

Solution to the Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion: Out of a group of cooperating processes, only one process can be in its critical section at a given point in time.
2. Progress: If no process is in its critical section and one or more processes want to execute their critical sections, then one of them must be allowed to enter its critical section.
3. Bounded Waiting: After a process makes a request to enter its critical section, there is a limit on how many other processes can enter their critical sections before this process's request is

granted. Once the limit is reached, the system must grant the process permission to enter its critical section.

Synchronization Hardware

Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could disallow interrupts while a shared variable or resource is being modified; in this manner, we could be sure that the current sequence of instructions would execute in order without preemption. Unfortunately, this solution is not feasible in a multiprocessor environment: disabling interrupts on a multiprocessor can be time consuming because the message must be passed to all the processors. This message transmission lag delays entry of threads into their critical sections, and system efficiency decreases.

Mutex Locks

As the synchronization hardware solution is not easy for everyone to implement, a stricter software approach called mutex locks was introduced. In this approach, in the entry section of code a LOCK is acquired over the critical resources modified and used inside the critical section, and in the exit section that LOCK is released. Because the resource is locked while a process executes its critical section, no other process can access it.

Semaphores

In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes by using the value of a simple integer variable to synchronize the progress of interacting processes. This integer variable is called a semaphore. It is basically a

synchronizing tool and is accessed only through two standard atomic operations, wait and signal, designated by P() and V() respectively. The classical definitions of wait and signal are:
Wait: decrement the value of its argument S as soon as the result would be non-negative (otherwise the caller waits).
Signal: increment the value of its argument S as an indivisible operation.

Properties of Semaphores
1. Simple.
2. Works with many processes.
3. There can be many different critical sections guarded by different semaphores.
4. Each critical section has its own unique access semaphore.
5. Can permit multiple processes into the critical section at once, if desirable.

Types of Semaphores
Semaphores are mainly of two types:
1. Binary Semaphore: A special form of semaphore used for implementing mutual exclusion, hence often called a mutex. A binary semaphore is initialized to 1 and takes only the values 0 and 1 during execution of a program.
2. Counting Semaphore: Used to implement bounded concurrency.
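As an illustration of bounded concurrency, the following C sketch uses a POSIX counting semaphore initialized to 2 so that at most two threads are inside the guarded region at once; the thread count, the sleep and the initial value are illustrative. Compile with -pthread.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>
    #include <unistd.h>

    static sem_t slots;                         /* counting semaphore, initial value 2 */

    static void *worker(void *arg) {
        long id = (long)arg;
        sem_wait(&slots);                       /* P(): blocks if both slots are taken */
        printf("thread %ld entered the bounded region\n", id);
        sleep(1);                               /* simulate work inside the region */
        printf("thread %ld leaving\n", id);
        sem_post(&slots);                       /* V(): free a slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        sem_init(&slots, 0, 2);                 /* at most 2 concurrent holders */
        for (long i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }

Initializing the semaphore to 1 instead of 2 would turn it into the binary-semaphore (mutex) case described above.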

Limitations of Semaphores
1. Priority inversion is a big limitation of semaphores.
2. Their use is not enforced, but is by convention only.
3. With improper use, a process may block indefinitely. Such a situation is called a deadlock; deadlocks are studied in detail in the next chapter.

Chapter 6: Deadlock

What is a Deadlock?
A deadlock is a situation in which a set of blocked processes each hold a resource and wait to acquire a resource held by another process in the set.
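A minimal C sketch of how such a situation can arise (the mutex names and thread bodies are illustrative): two threads acquire the same two mutexes in opposite orders, so each can end up holding one lock while waiting forever for the other. Compile with -pthread; the program may hang, which is the point.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {                 /* locks A, then B */
        (void)arg;
        pthread_mutex_lock(&A);
        pthread_mutex_lock(&B);                  /* may wait forever if t2 holds B */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {                 /* locks B, then A: opposite order */
        (void)arg;
        pthread_mutex_lock(&B);
        pthread_mutex_lock(&A);                  /* may wait forever if t1 holds A */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);                   /* never returns if the threads deadlock */
        pthread_join(y, NULL);
        printf("finished without deadlock this time\n");
        return 0;
    }

Requesting the locks in one fixed order in both threads removes the circular wait, which is exactly the resource-ordering rule described under Circular Wait below.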

How to Avoid Deadlocks

Deadlocks can be avoided by preventing at least one of the four necessary conditions, because all four conditions must hold simultaneously for a deadlock to occur.
1. Mutual Exclusion: Shared resources such as read-only files do not lead to deadlocks, but resources such as printers and tape drives require exclusive access by a single process.
2. Hold and Wait: Processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
3. No Preemption: Preempting process resource allocations, wherever possible, can avoid this deadlock condition.
4. Circular Wait: Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.

Handling Deadlock

The points above focus on preventing deadlocks. But what should be done once a deadlock has occurred? The following three strategies can be used to remove a deadlock after it occurs.
1. Preemption:

We can take a resource from one process and give it to another. This will resolve the deadlock situation, but it can sometimes cause problems of its own.
2. Rollback: In situations where deadlock is a real possibility, the system can periodically make a record of the state of each process; when a deadlock occurs, everything is rolled back to the last checkpoint and restarted, but with resources allocated differently so that the deadlock does not recur.
3. Kill one or more processes.

Previous Year Solved Questions with Explanation:

GATE: A critical section is a program segment
(a) which should run in a certain specified amount of time
(b) which avoids deadlocks
(c) where shared resources are accessed
(d) which must be enclosed by a pair of semaphore operations, P and V
Ans: option (c)

GATE: A solution to the Dining Philosophers Problem which avoids deadlock is to
(a) ensure that all philosophers pick up the left fork before the right fork
(b) ensure that all philosophers pick up the right fork before the left fork
(c) ensure that one particular philosopher picks up the left fork before the right fork, and that all other philosophers pick up the right fork before the left fork
(d) none of the above
Ans: option (c)

GATE: Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below. The initial values of the shared boolean variables S1 and S2 are randomly assigned.

Method used by P1:
    while (S1 == S2);
    /* Critical Section */
    S1 = S2;

Method used by P2:
    while (S1 != S2);
    /* Critical Section */
    S2 = not(S1);

Which one of the following statements describes the properties achieved?
(a) Mutual exclusion but not progress
(b) Progress but not mutual exclusion
(c) Neither mutual exclusion nor progress
(d) Both mutual exclusion and progress
Ans: option (a)
Explanation: Principle of mutual exclusion: no two processes may be present in the critical section at the same time; if one process is in the critical section, the other must not be allowed in. P1 can enter its critical section only if S1 is not equal to S2, and P2 can enter its critical section only if S1 is equal to S2. Therefore mutual exclusion is satisfied.
Progress: no process running outside the critical section should block another interested process from entering the critical section when it is free. Suppose P1, after executing its critical section, wants to execute it again while P2 does not want to enter its critical section; in that case P1 has to wait unnecessarily for P2. Hence progress is not satisfied.

GATE: A counting semaphore was initialized to 10. Then 6 P (wait) operations and 4 V (signal) operations were completed on this semaphore. The resulting value of the semaphore is

(a) 0 (b) 8 (c) 10 (d) 12
Ans: option (b)
Explanation: 6 P operations decrement the semaphore 6 times, so the value becomes 4. 4 V operations increment the semaphore 4 times, so the value becomes 8.
Note: A positive value of a counting semaphore indicates how many further down (P) operations can be carried out successfully. A negative value indicates the number of blocked processes.

GATE: Let m[0] ... m[4] be mutexes (binary semaphores) and P[0] ... P[4] be processes. Suppose each process P[i] executes the following:
    wait (m[i]); wait (m[(i+1) mod 4]);
    ...
    release (m[i]); release (m[(i+1) mod 4]);
This could cause
(a) Thrashing
(b) Deadlock
(c) Starvation, but not deadlock
(d) None of the above
Ans: option (b)
Explanation: Deadlock occurs if each process gets preempted after executing wait(m[i]).

GATE: The enter_cs() and leave_cs() functions to implement the critical section of a process are realized using a test-and-set instruction as follows:
    void enter_cs(X) {
        while (test-and-set(X));
    }
    void leave_cs(X) {

        X = 0;
    }
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the following statements:
I. The above solution to the CS problem is deadlock-free.
II. The solution is starvation-free.
III. The processes enter the CS in FIFO order.
IV. More than one process can enter the CS at the same time.
Which of the above statements is TRUE?
(a) I only (b) I and II (c) II and III (d) IV only
Ans: option (a)
Explanation: The test-and-set instruction writes to a memory location and returns its old value as a single atomic (i.e., non-interruptible) operation. Since it is an atomic instruction, it guarantees mutual exclusion.

GATE: The following program consists of 3 concurrent processes and 3 binary semaphores. The semaphores are initialized as S0 = 1, S1 = 0, S2 = 0.
    Process P0:                 Process P1:         Process P2:
    while (true) {              wait (S1);          wait (S2);
        wait (S0);              release (S0);       release (S0);
        print (0);
        release (S1);
        release (S2);
    }
How many times will process P0 print '0'?
(a) At least twice (b) Exactly twice (c) Exactly thrice (d) Exactly once
Ans: option (a)
Explanation:

P0 will execute first because only S0 = 1. Hence it will print 0 (for the first time). P0 also releases S1 and S2. Since S1 = 1 and S2 = 1, either P1 or P2 can now execute. Assume that P1 executes and releases S0 (so S0 = 1 again); note that process P1 is now complete. Now S0 = 1 and S2 = 1, hence either P0 or P2 can execute. Let us check both cases:
1. Assume that P2 executes, releases S0 and completes its execution. Now P0 executes, makes S0 = 0 and prints 0 (the second 0), and then releases S1 and S2. But P1 and P2 have already finished their execution, so if P0 tries to execute again it goes to sleep because S0 = 0. Therefore, the minimum number of times '0' gets printed is 2.
2. Now assume instead that P0 executes. Then S0 = 0 (due to wait(S0)), and it prints 0 (the second 0) and releases S1 and S2. Now only P2 can execute, because P1 has already completed its execution and P0 cannot execute since S0 = 0. P2 executes, releases S0 (so S0 = 1) and finishes its execution. P0 now runs again, prints 0 (the third 0) and releases S1 and S2 (note that S0 = 0 now). P1 and P2 have already completed, so when P0 next takes its turn it goes to sleep because S0 = 0, and the processes P1 and P2 that could wake P0 have already finished. Therefore, the maximum number of times '0' gets printed is 3.
Hence '0' is printed at least twice.

GATE: Fetch_And_Add(X, i) is an atomic read-modify-write instruction that reads the value of memory location X, increments it by the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an unsigned integer shared variable initialized to 0. The value 0 corresponds to the lock being available, while any non-zero value corresponds to the lock being unavailable.
    AcquireLock(L) {
        while (Fetch_And_Add(L, 1))
            L = 1;
    }
    ReleaseLock(L) {

        L = 0;
    }
This implementation
(a) fails as L can overflow
(b) fails as L can take on a non-zero value when the lock is actually available
(c) works correctly but may starve some processes
(d) works correctly without starvation
Ans: option (b)
Explanation: Assume that L = 0, i.e., the lock is available. A process P1 wants to acquire the lock by executing AcquireLock(). The while loop condition fails because Fetch_And_Add returns the previous value of L, which was 0 (note that after the atomic instruction executes, L is now 1). Since the returned value was 0, P1 comes out of the while loop and acquires the lock. Now the scheduler schedules another process P2 and a context switch takes place. P2 tries to acquire the lock by executing AcquireLock(), but it keeps executing the while loop (the atomic instruction now always returns a non-zero value, so the loop condition is always true). After some time the scheduler again schedules P1; assume P2 was stopped just after Fetch_And_Add() returned its value, so L has some non-zero value. Now P1 releases the lock, making L = 0. Assume another context switch takes place and P2 resumes. Since the value returned to P2 was non-zero, the loop condition is true and the statement L = 1 is executed; hence L = 1 even though the lock is free. The while loop runs again, Fetch_And_Add returns 1, and the loop once more enters

an infinite loop. Now no process is able to acquire the lock; therefore the above implementation fails.

GATE: Two processes, P1 and P2, need to access a critical section of code. Consider the following synchronization construct used by the processes:
    /* P1 */                           /* P2 */
    while (true) {                     while (true) {
        wants1 = true;                     wants2 = true;
        while (wants2 == true);            while (wants1 == true);
        /* Critical Section */             /* Critical Section */
        wants1 = false;                    wants2 = false;
    }                                  }
    /* Remainder section */            /* Remainder section */
Here, wants1 and wants2 are shared variables, which are initialized to false. Which one of the following statements is TRUE about the above construct?
(a) It does not ensure mutual exclusion.
(b) It does not ensure bounded waiting.
(c) It requires that processes enter the critical section in strict alternation.
(d) It does not prevent deadlocks, but ensures mutual exclusion.
Ans: option (d)
Explanation: The code ensures mutual exclusion. Assume P1 starts and sets wants1 = true. Since wants2 = false, P1 exits its while loop and enters its critical section. Now suppose a context switch takes place and P2 executes. It sets wants2 = true and then spins in its while loop, because wants1 = true (as set by P1), until P1 comes out of the critical section and sets wants1 = false. So the mutual exclusion condition is satisfied.

The code does not prevent deadlock. Assume that P1 starts its execution, sets wants1 = true and then gets preempted. Now P2 starts, sets wants2 = true and is also preempted. P1 resumes, enters its while loop, finds wants2 = true and spins there. P1 gets preempted again; P2 resumes, enters its while loop, finds wants1 = true and spins as well. Hence both P1 and P2 remain busy forever.

GATE: The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x into y without allowing any intervening access to the memory location x. Consider the following implementation of the P and V functions on a binary semaphore.
    void P (binary_semaphore *s) {
        unsigned y;
        unsigned *x = &(s->value);
        do {
            fetch-and-set x, y;
        } while (y);
    }
    void V (binary_semaphore *s) {
        s->value = 0;
    }
Which one of the following is true?
(a) The implementation may not work if context switching is disabled in P.
(b) Instead of using fetch-and-set, a pair of normal load/store can be used.
(c) The implementation of V is wrong.
(d) The code does not implement a binary semaphore.

Ans: option (a)
Explanation: Assume that process P1 enters the P function. If the value of the semaphore is initially 1, then by the definition of the fetch-and-set instruction the value of y will be 1 when the instruction executes. Hence the while loop keeps executing until some other process executes the V function. But if context switching is disabled in P, the while loop will keep executing and other processes are never allowed to run.

GATE-2006, Linked Answer Questions 11 & 12: A barrier is a synchronization construct where a set of processes synchronizes globally, i.e., each process in the set arrives at the barrier and waits for all others to arrive, and then all processes leave the barrier. Let the number of processes in the set be three and S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier, with line numbers shown on the left.
    void barrier (void) {
     1:    P(S);
     2:    process_arrived++;
     3:    V(S);
     4:    while (process_arrived != 3);
     5:    P(S);
     6:    process_left++;
     7:    if (process_left == 3) {
     8:        process_arrived = 0;
     9:        process_left = 0;
    10:    }
    11:    V(S);
    }

The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program, all three processes call the barrier function when they need to synchronize globally.

11. The above implementation of the barrier is incorrect. Which one of the following is true?
(a) The barrier implementation is wrong due to the use of the binary semaphore S.
(b) The barrier implementation may lead to a deadlock if two barrier invocations are used in immediate succession.
(c) Lines 6 to 10 need not be inside a critical section.
(d) The barrier implementation is correct if there are only two processes instead of three.
Ans: option (b)
Explanation: There are 3 processes; call them P1, P2 and P3. Every process arrives at the barrier, i.e., completes the first section of the code, so process_arrived = 3. Now assume that P1 continues its execution, leaves the barrier, and makes process_left = 1. Assume that P1 then immediately invokes the barrier function again; at this stage process_arrived = 4, and P1 waits. Soon P2 leaves the barrier, making process_left = 2, and then P3 leaves the barrier, making process_left = 3. The "if" condition is now true, and process_arrived and process_left are reset to 0. P1 is still waiting. At this stage assume that P2 invokes the barrier function, so process_arrived = 1, and it waits; then P3 invokes the barrier function, so process_arrived = 2. All processes have now reached the barrier, but since process_arrived = 2 (not 3), all processes keep waiting. Hence deadlock.

12. Which one of the following rectifies the problem in the implementation?
(a) Lines 6 to 10 are simply replaced by process_arrived--.
(b) At the beginning of the barrier, the first process to enter the barrier waits until process_arrived becomes zero before proceeding to execute P(S).

(c) Context switching is disabled at the beginning of the barrier and re-enabled at the end.
(d) The variable process_left is made private instead of shared.
Ans: option (b)

Additional Questions

1. Which process can be affected by other processes executing in the system?
a) cooperating process b) child process c) parent process d) init process
Answer: a

2. When several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, this is called a:
a) dynamic condition b) race condition c) essential condition d) critical condition
Answer: b

3. If a process is executing in its critical section, then no other process can be executing in its critical section. This condition is called:
a) mutual exclusion b) critical exclusion c) synchronous exclusion d) asynchronous exclusion
Answer: a

4. Which one of the following is a synchronization tool?
a) thread b) pipe c) semaphore d) socket
Answer: c

5. A semaphore is a shared integer variable:
a) that cannot drop below zero b) that cannot be more than zero c) that cannot drop below one d) that cannot be more than one
Answer: a

6. Mutual exclusion can be provided by:
a) mutex locks b) binary semaphores c) both (a) and (b) d) none of the mentioned
Answer: c
Explanation: Binary semaphores are also known as mutex locks.

7. When a high-priority task is indirectly preempted by a medium-priority task, effectively inverting the relative priority of the two tasks, the scenario is called:
a) priority inversion b) priority removal c) priority exchange d) priority modification
Answer: a

8. Process synchronization can be done at the:
a) hardware level b) software level

c) both (a) and (b) d) none of the mentioned
Answer: c

9. A monitor is a module that encapsulates:
a) shared data structures b) procedures that operate on shared data structures c) synchronization between concurrent procedure invocations d) all of the mentioned
Answer: d

10. To enable a process to wait within the monitor:
a) a condition variable must be declared as condition
b) condition variables must be used as Boolean objects
c) a semaphore must be used
d) all of the mentioned
Answer: a

11. In order to allow only one process to enter its critical section, a binary semaphore is initialized to:
A. 0 B. 1 C. 2 D. 3
Ans: B

12. A deadlock in an operating system is:
A. a desirable process B. an undesirable process C. a definite waiting process D. all of the above
Ans: B

13. Three concurrent processes X, Y and Z execute three different code segments that access and update certain shared variables. Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c and d; process Z executes the P operation on semaphores c, d and a before entering the respective code segments. After completing the execution of its code segment, each process invokes the V operation (i.e.,

signal) on its three semaphores. All semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P operations by the processes?
A. X: P(a) P(b) P(c)   Y: P(b) P(c) P(d)   Z: P(c) P(d) P(a)
B. X: P(b) P(a) P(c)   Y: P(b) P(c) P(d)   Z: P(a) P(c) P(d)
C. X: P(b) P(a) P(c)   Y: P(c) P(b) P(d)   Z: P(a) P(c) P(d)
D. X: P(a) P(b) P(c)   Y: P(c) P(b) P(d)   Z: P(c) P(d) P(a)
Ans: B

14. A semaphore count of negative n (s = -n) means that the queue contains how many waiting processes?
A. n+1 B. n C. n-1 D. 0
Ans: B

15. Which of the following approaches do not require knowledge of the system state?
A. deadlock detection B. deadlock prevention C. deadlock avoidance D. none of the above
Ans: D

Chapter 7: Threads

A thread is an execution unit that has its own program counter, stack and set of registers. Threads are also known as lightweight processes. Threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. As each thread has its own independent resources for execution, multiple tasks can be executed in parallel by increasing the number of threads.
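As a small illustration, the following C sketch creates two POSIX threads that run concurrently within one process and then waits for both; the function name and thread arguments are illustrative. Compile with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Each thread runs this function with its own stack and registers,
       while sharing the process's address space. */
    static void *say_hello(void *arg) {
        long id = (long)arg;
        printf("hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, say_hello, (void *)1L);
        pthread_create(&t2, NULL, say_hello, (void *)2L);
        pthread_join(t1, NULL);   /* wait for both threads to finish */
        pthread_join(t2, NULL);
        printf("both threads done\n");
        return 0;
    }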

Types of Threads

There are two types of threads:
User Threads
Kernel Threads

User threads exist above the kernel and are managed without kernel support; these are the threads that application programmers use in their programs. Kernel threads are supported within the kernel of the OS itself. All modern OSes support kernel-level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel system calls simultaneously.

Multithreading Models

User threads must be mapped to kernel threads by one of the following strategies:
Many-To-One Model
One-To-One Model
Many-To-Many Model


Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) A. Multiple Choice Questions (60 questions) Subject: Operating System (BTCOC403) Class: S.Y.B.Tech. (Computer Engineering) Unit-I 1. What is operating system? a) collection of programs that manages hardware

More information

(MCQZ-CS604 Operating Systems)

(MCQZ-CS604 Operating Systems) command to resume the execution of a suspended job in the foreground fg (Page 68) bg jobs kill commands in Linux is used to copy file is cp (Page 30) mv mkdir The process id returned to the child process

More information

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition

Chapter 6: CPU Scheduling. Operating System Concepts 9 th Edition Chapter 6: CPU Scheduling Silberschatz, Galvin and Gagne 2013 Chapter 6: CPU Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms Thread Scheduling Multiple-Processor Scheduling Real-Time

More information

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems Processes CS 475, Spring 2018 Concurrent & Distributed Systems Review: Abstractions 2 Review: Concurrency & Parallelism 4 different things: T1 T2 T3 T4 Concurrency: (1 processor) Time T1 T2 T3 T4 T1 T1

More information

GATE SOLVED PAPER - CS

GATE SOLVED PAPER - CS YEAR 2001 Q. 1 Which of the following statements is false? (A) Virtual memory implements the translation of a program s address space into physical memory address space. (B) Virtual memory allows each

More information

Maximum CPU utilization obtained with multiprogramming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait

Maximum CPU utilization obtained with multiprogramming. CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time Scheduling Thread Scheduling Operating Systems Examples Java Thread Scheduling Algorithm Evaluation CPU

More information

Lecture 2 Process Management

Lecture 2 Process Management Lecture 2 Process Management Process Concept An operating system executes a variety of programs: Batch system jobs Time-shared systems user programs or tasks The terms job and process may be interchangeable

More information

CS370 Operating Systems Midterm Review

CS370 Operating Systems Midterm Review CS370 Operating Systems Midterm Review Yashwant K Malaiya Fall 2015 Slides based on Text by Silberschatz, Galvin, Gagne 1 1 What is an Operating System? An OS is a program that acts an intermediary between

More information

OPERATING SYSTEMS. UNIT II Sections A, B & D. An operating system executes a variety of programs:

OPERATING SYSTEMS. UNIT II Sections A, B & D. An operating system executes a variety of programs: OPERATING SYSTEMS UNIT II Sections A, B & D PREPARED BY ANIL KUMAR PRATHIPATI, ASST. PROF., DEPARTMENT OF CSE. PROCESS CONCEPT An operating system executes a variety of programs: Batch system jobs Time-shared

More information

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy

Operating Systems. Designed and Presented by Dr. Ayman Elshenawy Elsefy Operating Systems Designed and Presented by Dr. Ayman Elshenawy Elsefy Dept. of Systems & Computer Eng.. AL-AZHAR University Website : eaymanelshenawy.wordpress.com Email : eaymanelshenawy@yahoo.com Reference

More information

MARUTHI SCHOOL OF BANKING (MSB)

MARUTHI SCHOOL OF BANKING (MSB) MARUTHI SCHOOL OF BANKING (MSB) SO IT - OPERATING SYSTEM(2017) 1. is mainly responsible for allocating the resources as per process requirement? 1.RAM 2.Compiler 3.Operating Systems 4.Software 2.Which

More information

PROCESS SYNCHRONIZATION

PROCESS SYNCHRONIZATION PROCESS SYNCHRONIZATION Process Synchronization Background The Critical-Section Problem Peterson s Solution Synchronization Hardware Semaphores Classic Problems of Synchronization Monitors Synchronization

More information

CPU scheduling. Alternating sequence of CPU and I/O bursts. P a g e 31

CPU scheduling. Alternating sequence of CPU and I/O bursts. P a g e 31 CPU scheduling CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In a single-processor

More information

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed)

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) Announcements Program #1 Due 2/15 at 5:00 pm Reading Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) 1 Scheduling criteria Per processor, or system oriented CPU utilization

More information

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling

Review. Preview. Three Level Scheduler. Scheduler. Process behavior. Effective CPU Scheduler is essential. Process Scheduling Review Preview Mutual Exclusion Solutions with Busy Waiting Test and Set Lock Priority Inversion problem with busy waiting Mutual Exclusion with Sleep and Wakeup The Producer-Consumer Problem Race Condition

More information

Operating Systems Unit 3

Operating Systems Unit 3 Unit 3 CPU Scheduling Algorithms Structure 3.1 Introduction Objectives 3.2 Basic Concepts of Scheduling. CPU-I/O Burst Cycle. CPU Scheduler. Preemptive/non preemptive scheduling. Dispatcher Scheduling

More information

Dr. Rafiq Zakaria Campus. Maulana Azad College of Arts, Science & Commerce, Aurangabad. Department of Computer Science. Academic Year

Dr. Rafiq Zakaria Campus. Maulana Azad College of Arts, Science & Commerce, Aurangabad. Department of Computer Science. Academic Year Dr. Rafiq Zakaria Campus Maulana Azad College of Arts, Science & Commerce, Aurangabad Department of Computer Science Academic Year 2015-16 MCQs on Operating System Sem.-II 1.What is operating system? a)

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 10 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 Chapter 6: CPU Scheduling Basic Concepts

More information

Process Management And Synchronization

Process Management And Synchronization Process Management And Synchronization In a single processor multiprogramming system the processor switches between the various jobs until to finish the execution of all jobs. These jobs will share the

More information

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms

Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts. Scheduling Criteria Scheduling Algorithms Operating System Lecture 5 / Chapter 6 (CPU Scheduling) Basic Concepts Scheduling Criteria Scheduling Algorithms OS Process Review Multicore Programming Multithreading Models Thread Libraries Implicit

More information

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017

CS 471 Operating Systems. Yue Cheng. George Mason University Fall 2017 CS 471 Operating Systems Yue Cheng George Mason University Fall 2017 Outline o Process concept o Process creation o Process states and scheduling o Preemption and context switch o Inter-process communication

More information

Mid Term from Feb-2005 to Nov 2012 CS604- Operating System

Mid Term from Feb-2005 to Nov 2012 CS604- Operating System Mid Term from Feb-2005 to Nov 2012 CS604- Operating System Latest Solved from Mid term Papers Resource Person Hina 1-The problem with priority scheduling algorithm is. Deadlock Starvation (Page# 84) Aging

More information

Operating Systems (Classroom Practice Booklet Solutions)

Operating Systems (Classroom Practice Booklet Solutions) Operating Systems (Classroom Practice Booklet Solutions) 1. Process Management I 1. Ans: (c) 2. Ans: (c) 3. Ans: (a) Sol: Software Interrupt is generated as a result of execution of a privileged instruction.

More information

Module 1. Introduction:

Module 1. Introduction: Module 1 Introduction: Operating system is the most fundamental of all the system programs. It is a layer of software on top of the hardware which constitutes the system and manages all parts of the system.

More information

OPERATING SYSTEMS. After A.S.Tanenbaum, Modern Operating Systems, 3rd edition. Uses content with permission from Assoc. Prof. Florin Fortis, PhD

OPERATING SYSTEMS. After A.S.Tanenbaum, Modern Operating Systems, 3rd edition. Uses content with permission from Assoc. Prof. Florin Fortis, PhD OPERATING SYSTEMS #5 After A.S.Tanenbaum, Modern Operating Systems, 3rd edition Uses content with permission from Assoc. Prof. Florin Fortis, PhD General information GENERAL INFORMATION Cooperating processes

More information

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University

CS 571 Operating Systems. Midterm Review. Angelos Stavrou, George Mason University CS 571 Operating Systems Midterm Review Angelos Stavrou, George Mason University Class Midterm: Grading 2 Grading Midterm: 25% Theory Part 60% (1h 30m) Programming Part 40% (1h) Theory Part (Closed Books):

More information

SMD149 - Operating Systems

SMD149 - Operating Systems SMD149 - Operating Systems Roland Parviainen November 3, 2005 1 / 45 Outline Overview 2 / 45 Process (tasks) are necessary for concurrency Instance of a program in execution Next invocation of the program

More information

TDIU25: Operating Systems II. Processes, Threads and Scheduling

TDIU25: Operating Systems II. Processes, Threads and Scheduling TDIU25: Operating Systems II. Processes, Threads and Scheduling SGG9: 3.1-3.3, 4.1-4.3, 5.1-5.4 o Process concept: context switch, scheduling queues, creation o Multithreaded programming o Process scheduling

More information

What s An OS? Cyclic Executive. Interrupts. Advantages Simple implementation Low overhead Very predictable

What s An OS? Cyclic Executive. Interrupts. Advantages Simple implementation Low overhead Very predictable What s An OS? Provides environment for executing programs Process abstraction for multitasking/concurrency scheduling Hardware abstraction layer (device drivers) File systems Communication Do we need an

More information

Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307

Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307 Department of CSIT ( G G University, Bilaspur ) Model Answer 2013 (Even Semester) - AR-7307 Class: MCA Semester: II Year:2013 Paper Title: Principles of Operating Systems Max Marks: 60 Section A: (All

More information

Multitasking / Multithreading system Supports multiple tasks

Multitasking / Multithreading system Supports multiple tasks Tasks and Intertask Communication Introduction Multitasking / Multithreading system Supports multiple tasks As we ve noted Important job in multitasking system Exchanging data between tasks Synchronizing

More information

CPU Scheduling. Schedulers. CPSC 313: Intro to Computer Systems. Intro to Scheduling. Schedulers in the OS

CPU Scheduling. Schedulers. CPSC 313: Intro to Computer Systems. Intro to Scheduling. Schedulers in the OS Schedulers in the OS Scheduling Structure of a Scheduler Scheduling = Selection + Dispatching Criteria for scheduling Scheduling Algorithms FIFO/FCFS SPF / SRTF Priority - Based Schedulers start long-term

More information

CHAPTER 6: PROCESS SYNCHRONIZATION

CHAPTER 6: PROCESS SYNCHRONIZATION CHAPTER 6: PROCESS SYNCHRONIZATION The slides do not contain all the information and cannot be treated as a study material for Operating System. Please refer the text book for exams. TOPICS Background

More information

CPU Scheduling. Operating Systems (Fall/Winter 2018) Yajin Zhou ( Zhejiang University

CPU Scheduling. Operating Systems (Fall/Winter 2018) Yajin Zhou (  Zhejiang University Operating Systems (Fall/Winter 2018) CPU Scheduling Yajin Zhou (http://yajin.org) Zhejiang University Acknowledgement: some pages are based on the slides from Zhi Wang(fsu). Review Motivation to use threads

More information

What is the Race Condition? And what is its solution? What is a critical section? And what is the critical section problem?

What is the Race Condition? And what is its solution? What is a critical section? And what is the critical section problem? What is the Race Condition? And what is its solution? Race Condition: Where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular

More information

CSE 120. Fall Lecture 8: Scheduling and Deadlock. Keith Marzullo

CSE 120. Fall Lecture 8: Scheduling and Deadlock. Keith Marzullo CSE 120 Principles of Operating Systems Fall 2007 Lecture 8: Scheduling and Deadlock Keith Marzullo Aministrivia Homework 2 due now Next lecture: midterm review Next Tuesday: midterm 2 Scheduling Overview

More information

Chapter 2 Processes and Threads. Interprocess Communication Race Conditions

Chapter 2 Processes and Threads. Interprocess Communication Race Conditions Chapter 2 Processes and Threads [ ] 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling 85 Interprocess Communication Race Conditions Two processes want to access shared memory at

More information

Announcements. Reading. Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) CMSC 412 S14 (lect 5)

Announcements. Reading. Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) CMSC 412 S14 (lect 5) Announcements Reading Project #1 due in 1 week at 5:00 pm Scheduling Chapter 6 (6 th ed) or Chapter 5 (8 th ed) 1 Relationship between Kernel mod and User Mode User Process Kernel System Calls User Process

More information

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections )

CPU Scheduling. CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections ) CPU Scheduling CSE 2431: Introduction to Operating Systems Reading: Chapter 6, [OSC] (except Sections 6.7.2 6.8) 1 Contents Why Scheduling? Basic Concepts of Scheduling Scheduling Criteria A Basic Scheduling

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 1018 L10 Synchronization Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Development project: You

More information

Operating Systems Antonio Vivace revision 4 Licensed under GPLv3

Operating Systems Antonio Vivace revision 4 Licensed under GPLv3 Operating Systems Antonio Vivace - 2016 revision 4 Licensed under GPLv3 Process Synchronization Background A cooperating process can share directly a logical address space (code, data) or share data through

More information

Main Points of the Computer Organization and System Software Module

Main Points of the Computer Organization and System Software Module Main Points of the Computer Organization and System Software Module You can find below the topics we have covered during the COSS module. Reading the relevant parts of the textbooks is essential for a

More information

Mid-Semester Examination, September 2016

Mid-Semester Examination, September 2016 I N D I A N I N S T I T U T E O F I N F O R M AT I O N T E C H N O LO GY, A L L A H A B A D Mid-Semester Examination, September 2016 Date of Examination (Meeting) : 30.09.2016 (1st Meeting) Program Code

More information

CPU Scheduling. Rab Nawaz Jadoon. Assistant Professor DCS. Pakistan. COMSATS, Lahore. Department of Computer Science

CPU Scheduling. Rab Nawaz Jadoon. Assistant Professor DCS. Pakistan. COMSATS, Lahore. Department of Computer Science CPU Scheduling Rab Nawaz Jadoon DCS COMSATS Institute of Information Technology Assistant Professor COMSATS, Lahore Pakistan Operating System Concepts Objectives To introduce CPU scheduling, which is the

More information

Course: Operating Systems Instructor: M Umair. M Umair

Course: Operating Systems Instructor: M Umair. M Umair Course: Operating Systems Instructor: M Umair Process The Process A process is a program in execution. A program is a passive entity, such as a file containing a list of instructions stored on disk (often

More information

Scheduling of processes

Scheduling of processes Scheduling of processes Processor scheduling Schedule processes on the processor to meet system objectives System objectives: Assigned processes to be executed by the processor Response time Throughput

More information

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation

CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation CS604 - Operating System Solved Subjective Midterm Papers For Midterm Exam Preparation The given code is as following; boolean flag[2]; int turn; do { flag[i]=true; turn=j; while(flag[j] && turn==j); critical

More information

Announcements. Program #1. Program #0. Reading. Is due at 9:00 AM on Thursday. Re-grade requests are due by Monday at 11:59:59 PM.

Announcements. Program #1. Program #0. Reading. Is due at 9:00 AM on Thursday. Re-grade requests are due by Monday at 11:59:59 PM. Program #1 Announcements Is due at 9:00 AM on Thursday Program #0 Re-grade requests are due by Monday at 11:59:59 PM Reading Chapter 6 1 CPU Scheduling Manage CPU to achieve several objectives: maximize

More information

SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T A N D S P R I N G 2018

SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T A N D S P R I N G 2018 SYNCHRONIZATION M O D E R N O P E R A T I N G S Y S T E M S R E A D 2. 3 E X C E P T 2. 3. 8 A N D 2. 3. 1 0 S P R I N G 2018 INTER-PROCESS COMMUNICATION 1. How a process pass information to another process

More information

Properties of Processes

Properties of Processes CPU Scheduling Properties of Processes CPU I/O Burst Cycle Process execution consists of a cycle of CPU execution and I/O wait. CPU burst distribution: CPU Scheduler Selects from among the processes that

More information

FCM 710: Architecture of Secure Operating Systems

FCM 710: Architecture of Secure Operating Systems FCM 710: Architecture of Secure Operating Systems Practice Exam, Spring 2010 Email your answer to ssengupta@jjay.cuny.edu March 16, 2010 Instructor: Shamik Sengupta Multiple-Choice 1. operating systems

More information

Operating System Concepts Ch. 5: Scheduling

Operating System Concepts Ch. 5: Scheduling Operating System Concepts Ch. 5: Scheduling Silberschatz, Galvin & Gagne Scheduling In a multi-programmed system, multiple processes may be loaded into memory at the same time. We need a procedure, or

More information

Following are a few basic questions that cover the essentials of OS:

Following are a few basic questions that cover the essentials of OS: Operating Systems Following are a few basic questions that cover the essentials of OS: 1. Explain the concept of Reentrancy. It is a useful, memory-saving technique for multiprogrammed timesharing systems.

More information

Concurrency: Deadlock and Starvation. Chapter 6

Concurrency: Deadlock and Starvation. Chapter 6 Concurrency: Deadlock and Starvation Chapter 6 Deadlock Permanent blocking of a set of processes that either compete for system resources or communicate with each other Involve conflicting needs for resources

More information

Process Co-ordination OPERATING SYSTEMS

Process Co-ordination OPERATING SYSTEMS OPERATING SYSTEMS Prescribed Text Book Operating System Principles, Seventh Edition By Abraham Silberschatz, Peter Baer Galvin and Greg Gagne 1 PROCESS - CONCEPT Processes executing concurrently in the

More information

CPU Scheduling: Objectives

CPU Scheduling: Objectives CPU Scheduling: Objectives CPU scheduling, the basis for multiprogrammed operating systems CPU-scheduling algorithms Evaluation criteria for selecting a CPU-scheduling algorithm for a particular system

More information

Midterm Exam. October 20th, Thursday NSC

Midterm Exam. October 20th, Thursday NSC CSE 421/521 - Operating Systems Fall 2011 Lecture - XIV Midterm Review Tevfik Koşar University at Buffalo October 18 th, 2011 1 Midterm Exam October 20th, Thursday 9:30am-10:50am @215 NSC Chapters included

More information

Preview. Process Scheduler. Process Scheduling Algorithms for Batch System. Process Scheduling Algorithms for Interactive System

Preview. Process Scheduler. Process Scheduling Algorithms for Batch System. Process Scheduling Algorithms for Interactive System Preview Process Scheduler Short Term Scheduler Long Term Scheduler Process Scheduling Algorithms for Batch System First Come First Serve Shortest Job First Shortest Remaining Job First Process Scheduling

More information

* What are the different states for a task in an OS?

* What are the different states for a task in an OS? * Kernel, Services, Libraries, Application: define the 4 terms, and their roles. The kernel is a computer program that manages input/output requests from software, and translates them into data processing

More information

Subject Teacher: Prof. Sheela Bankar

Subject Teacher: Prof. Sheela Bankar Peoples Empowerment Group ISB&M SCHOOL OF TECHNOLOGY, NANDE, PUNE DEPARTMENT OF COMPUTER ENGINEERING Academic Year 2017-18 Subject: SP&OS Class: T.E. computer Subject Teacher: Prof. Sheela Bankar 1. Explain

More information

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University

CS370: System Architecture & Software [Fall 2014] Dept. Of Computer Science, Colorado State University Frequently asked questions from the previous class survey CS 370: SYSTEM ARCHITECTURE & SOFTWARE [CPU SCHEDULING] Shrideep Pallickara Computer Science Colorado State University OpenMP compiler directives

More information

LECTURE 3:CPU SCHEDULING

LECTURE 3:CPU SCHEDULING LECTURE 3:CPU SCHEDULING 1 Outline Basic Concepts Scheduling Criteria Scheduling Algorithms Multiple-Processor Scheduling Real-Time CPU Scheduling Operating Systems Examples Algorithm Evaluation 2 Objectives

More information

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006

Operating Systems Comprehensive Exam. Spring Student ID # 3/16/2006 Operating Systems Comprehensive Exam Spring 2006 Student ID # 3/16/2006 You must complete all of part I (60%) You must complete two of the three sections in part II (20% each) In Part I, circle or select

More information

CS370 Operating Systems Midterm Review. Yashwant K Malaiya Spring 2019

CS370 Operating Systems Midterm Review. Yashwant K Malaiya Spring 2019 CS370 Operating Systems Midterm Review Yashwant K Malaiya Spring 2019 1 1 Computer System Structures Computer System Operation Stack for calling functions (subroutines) I/O Structure: polling, interrupts,

More information

Processes, PCB, Context Switch

Processes, PCB, Context Switch THE HONG KONG POLYTECHNIC UNIVERSITY Department of Electronic and Information Engineering EIE 272 CAOS Operating Systems Part II Processes, PCB, Context Switch Instructor Dr. M. Sakalli enmsaka@eie.polyu.edu.hk

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 11 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Multilevel Feedback Queue: Q0, Q1,

More information

Operating Systems. Computer Science & Information Technology (CS) Rank under AIR 100

Operating Systems. Computer Science & Information Technology (CS) Rank under AIR 100 GATE- 2016-17 Postal Correspondence 1 Operating Systems Computer Science & Information Technology (CS) 20 Rank under AIR 100 Postal Correspondence Examination Oriented Theory, Practice Set Key concepts,

More information

1.1 CPU I/O Burst Cycle

1.1 CPU I/O Burst Cycle PROCESS SCHEDULING ALGORITHMS As discussed earlier, in multiprogramming systems, there are many processes in the memory simultaneously. In these systems there may be one or more processors (CPUs) but the

More information

Processes The Process Model. Chapter 2 Processes and Threads. Process Termination. Process States (1) Process Hierarchies

Processes The Process Model. Chapter 2 Processes and Threads. Process Termination. Process States (1) Process Hierarchies Chapter 2 Processes and Threads Processes The Process Model 2.1 Processes 2.2 Threads 2.3 Interprocess communication 2.4 Classical IPC problems 2.5 Scheduling Multiprogramming of four programs Conceptual

More information

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras

Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Introduction to Operating Systems Prof. Chester Rebeiro Department of Computer Science and Engineering Indian Institute of Technology, Madras Week 05 Lecture 18 CPU Scheduling Hello. In this lecture, we

More information

Operating Systems. Lecture 4 - Concurrency and Synchronization. Master of Computer Science PUF - Hồ Chí Minh 2016/2017

Operating Systems. Lecture 4 - Concurrency and Synchronization. Master of Computer Science PUF - Hồ Chí Minh 2016/2017 Operating Systems Lecture 4 - Concurrency and Synchronization Adrien Krähenbühl Master of Computer Science PUF - Hồ Chí Minh 2016/2017 Mutual exclusion Hardware solutions Semaphores IPC: Message passing

More information

COSC243 Part 2: Operating Systems

COSC243 Part 2: Operating Systems COSC243 Part 2: Operating Systems Lecture 17: CPU Scheduling Zhiyi Huang Dept. of Computer Science, University of Otago Zhiyi Huang (Otago) COSC243 Lecture 17 1 / 30 Overview Last lecture: Cooperating

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 9 Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 CPU Scheduling: Objectives CPU scheduling,

More information

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5

Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Concurrency, Mutual Exclusion and Synchronization C H A P T E R 5 Multiple Processes OS design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed processing

More information

UNIT - II PROCESS MANAGEMENT

UNIT - II PROCESS MANAGEMENT UNIT - II PROCESS MANAGEMENT Processes Process Concept A process is an instance of a program in execution. An operating system executes a variety of programs: o Batch system jobs o Time-shared systems

More information

OPERATING SYSTEMS. Deadlocks

OPERATING SYSTEMS. Deadlocks OPERATING SYSTEMS CS3502 Spring 2018 Deadlocks Chapter 7 Resource Allocation and Deallocation When a process needs resources, it will normally follow the sequence: 1. Request a number of instances of one

More information

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition,

Chapter 5: CPU Scheduling. Operating System Concepts 8 th Edition, Chapter 5: CPU Scheduling Operating System Concepts 8 th Edition, Hanbat National Univ. Computer Eng. Dept. Y.J.Kim 2009 Chapter 5: Process Scheduling Basic Concepts Scheduling Criteria Scheduling Algorithms

More information

CHAPTER NO - 1 : Introduction:

CHAPTER NO - 1 : Introduction: Sr. No L.J. Institute of Engineering & Technology Semester: IV (26) Subject Name: Operating System Subject Code:21402 Faculties: Prof. Saurin Dave CHAPTER NO - 1 : Introduction: TOPIC:1 Basics of Operating

More information

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01

UNIT 2 Basic Concepts of CPU Scheduling. UNIT -02/Lecture 01 1 UNIT 2 Basic Concepts of CPU Scheduling UNIT -02/Lecture 01 Process Concept An operating system executes a variety of programs: **Batch system jobs **Time-shared systems user programs or tasks **Textbook

More information

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013)

CPU Scheduling. Daniel Mosse. (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) CPU Scheduling Daniel Mosse (Most slides are from Sherif Khattab and Silberschatz, Galvin and Gagne 2013) Basic Concepts Maximum CPU utilization obtained with multiprogramming CPU I/O Burst Cycle Process

More information

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date:

Subject Name: OPERATING SYSTEMS. Subject Code: 10EC65. Prepared By: Kala H S and Remya R. Department: ECE. Date: Subject Name: OPERATING SYSTEMS Subject Code: 10EC65 Prepared By: Kala H S and Remya R Department: ECE Date: Unit 7 SCHEDULING TOPICS TO BE COVERED Preliminaries Non-preemptive scheduling policies Preemptive

More information