Scheduling


Contents

1 Processes
  1.1 Metrics
  1.2 Process Types
  1.3 Running Environments
2 Constraints
3 Limited Direct Execution
  3.1 Restricted Operations
  3.2 System Calls
  3.3 Switching Between Processes
4 Before Interrupts
  4.1 First in, first out (FIFO)
  4.2 Shortest job first (SJF)
  4.3 Shortest job next (SJN)
5 After Interrupts
  5.1 Shortest time to completion first (STCF)
  5.2 Allowing I/O
  5.3 Round Robin (RR)
  5.4 Priority
  5.5 Multilevel feedback queue (MLFQ)
Glossary

List of Figures

1 Direct Execution
2 Shortest Job First
3 Shortest Job First with Interrupts
4 Shortest Time to Completion
5 XV6 Process State Model
6 CPU Utilization with I/O Enabled
7 Round Robin
8 Multi-level Feedback Queue

1 Processes

It is important that we distinguish the scheduling process from scheduling a process. Process scheduling is done by the scheduler, which in xv6 is wholly contained in the function scheduler() in proc.c.

The evolution of schedulers parallels important advances in computer hardware. We will discuss only a few of the schedulers that have existed, and even then only discuss the algorithms in their most basic form. This will allow us to clearly see the strengths and weaknesses of the different approaches. Although discussed in an historical context, all the algorithms we will discuss have application in today's diverse computing environments; after all, an Internet-aware toaster[1] hardly requires a multi-core processor and the Linux operating system, complete with GUI.

The scheduler answers a very basic question: which process gets the CPU next? This question can be surprisingly difficult to answer. One thing to keep in mind: running the scheduler takes time. Any time spent running the scheduler is time that the CPU cannot devote to running a user process. Scheduler latency is a large concern. If the scheduler is going to become more complex, and thus have more latency, there needs to be an offsetting benefit. Otherwise, the scheduler isn't of much practical use. Computer systems exist to run user programs, not operating systems. Even the overall operating system has to have some benefit that offsets the amount of overhead (CPU, memory, storage, etc.) it imposes on the computing environment.

1.1 Metrics

We will use two simple metrics to help us decide the suitability of the different approaches (a small code sketch at the end of this section makes both concrete).

Turnaround time. T_turnaround = T_completion - T_arrival
- Suited to long running jobs with no user interaction
- Simple, logical
- Can be used with deadline scheduling
- Simple and easy to calculate

Response time. T_response = T_firstrun - T_arrival
- Suited to interactive jobs
- Not really intuitive, but useful
- Simple and easy to calculate

1.2 Process Types

CPU-bound. Long running with little or no I/O.

Interactive. Frequent use of I/O, such as a process that prints to the display and requests user input.

[1] See Talkie Toaster from Red Dwarf.
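As promised above, here is a minimal Python sketch (not part of the original notes; the Job fields and sample times are illustrative) that computes average turnaround and response time for a set of finished jobs.

from dataclasses import dataclass

@dataclass
class Job:
    arrival: float      # time the job entered the system
    first_run: float    # time the job first got the CPU
    completion: float   # time the job finished

    def turnaround(self) -> float:
        # T_turnaround = T_completion - T_arrival
        return self.completion - self.arrival

    def response(self) -> float:
        # T_response = T_firstrun - T_arrival
        return self.first_run - self.arrival

jobs = [Job(0, 0, 10), Job(0, 10, 20), Job(0, 20, 120)]
print(sum(j.turnaround() for j in jobs) / len(jobs))  # average turnaround
print(sum(j.response() for j in jobs) / len(jobs))    # average response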

1.3 Running Environments

- Batch
- Multiprogramming
- Multiprocessor and multi-core

2 Constraints

We will initially put constraints on our operating environment. These constraints reflect the computing environments of the time, or even specialized environments (e.g., embedded systems) of today.

1. Each job finishes within the same time period. This means that each job takes exactly the same amount of time.
   - Jobs are allocated a fixed time period, historically based on cost per unit time: you paid for a fixed amount of time to run your job. If the job ended early, you still had to pay for the entire period.
   - Job terminated if not complete at the end of the time period.
   - A difficult optimization problem: what if there is more work to do than last week?

2. All jobs arrive in the system at the same time. This means that all the jobs that can run in the next interval are present at the start of the interval. No late arrivals are allowed.
   - This reflects the scheduling approach of classic batch processing and matches the batch processing systems that were popular up until the late 70s.
   - Card decks or tape (paper or Mylar); floppies represented a technical step forward.
   - The operator runs each job to completion, then returns the card deck(s) and any output.
   - The programmer has to wait to see if the code had any bugs; overnight was typical. Manually checking cards for typos is tedious.

3. Each job runs to completion once scheduled. This means that there are no interrupts that cause a scheduler to run. Once a job starts, the job has exclusive use of the computer until it is done, no matter what!
   - Better than one size fits all; cost effective for users: short jobs pay less than long jobs.
   - Issue: how do we know how much time a job will take before it is run? What happens if the guess is wrong?

4. No I/O. The jobs have no input and no output. This is impossible in practice but will help make it easier to reason about the algorithms. We call these types of jobs CPU bound.

5. Run time for each job is known a priori. This means that when any job arrives to be scheduled, the scheduler knows how much CPU time the job needs to complete. This may look like a duplicate of constraint 1, but it really isn't, as we will see.

We will relax these constraints one by one to model certain changes in the evolution of the computer.

3 Limited Direct Execution

Main goals for our operating system:

- Performance. How can we implement virtualization without adding excessive overhead to the system?
- Control. How can we run processes efficiently while retaining control over the CPU?

Control is particularly important to the OS, as it is in charge of resources; without control, a process could simply run forever and take over the machine, or access information that it should not be allowed to access. Obtaining high performance while maintaining control is a key challenge in designing an operating system.

Example:

Figure 1: Direct Execution

This gives rise to two concerns:

1. If we just run a program, how can the OS make sure the program doesn't do anything that we don't want it to do, while still running it efficiently? That is, we need some degree of protection from rogue processes.

2. When we are running a process, how does the operating system stop it from running and switch to another process, thus implementing the time sharing we require to virtualize the CPU? And, of course, later return to the same process at the same execution point?

3.1 Restricted Operations

There are many CPU operations that we do not want a user process to execute. As we have already seen, the processor can be designed with restricted, or privileged, operations. The CPU is configured at boot time to prevent any process in user mode from invoking a privileged operation. When a user mode process makes such an attempt, an interrupt is posted and the OS gains control. This provides basic protection by keeping one process from being able to modify the OS or to directly impact another process.

Kernel Mode

User Mode

To make this work, we will give each process its own virtual CPU. This will give the process a very simplified execution environment, because all the operations that require privilege will not be directly executable by the virtual CPU. This means that hardware management, device drivers, etc. are managed by the OS instead of the user program. The principal advantage of a virtual CPU, then, is that it simplifies the programming model: the programmer does not have to worry about those aspects of the computing environment that the OS is already managing. However, user mode will need some form of access to resources that require kernel mode. Enter the system call.

3.2 System Calls

3.3 Switching Between Processes

The context switch now becomes the storing of the CPU context of the process leaving the CPU, determining the next context to be loaded, and then restoring the new context. When a process is not running in a CPU, the CPU context is stored in the process control block.

- Cooperative. Synchronous. Quite polite. Jobs know when they yield. They also know that there are other processes using the computing environment. Voluntary yield.
- Interrupts. Asynchronous. Processes do not know when they yield. Processes do not need to know about any other process in the computing environment. Involuntary yield.

The combination of virtual CPUs and interrupts now gives us limited direct execution. This approach also gives the application programmer a simpler programming model where the complexities of CPU, hardware, and protected operations are managed by the operating system, to the benefit of all processes.

4 Before Interrupts

Strange as it may seem, CPUs did not always have interrupts. Systems in this category are primarily batch systems.

4.1 First in, first out (FIFO)

Also called first come, first served (FCFS). This algorithm is optimal for average turnaround time when all 5 constraints hold. Since all the jobs are available to the scheduling algorithm before we run even the first job, the scheduler can establish a total ordering on the set of jobs a priori. The schedule can be set outside of the process of running specific jobs. Job selection is very simple: any job will do. All jobs are equal, so even random selection is optimal. This is optimal for the average turnaround case; obviously, individual jobs will have differing individual turnaround times. In addition to being optimal, FIFO is very easy to implement with very low overhead. Indeed, no computer is required for scheduling.

What happens if we allow for jobs that do not all take the same amount of time? That is, relax constraint 1.
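Before moving on, a minimal Python sketch of FIFO under the five constraints (not from the notes; job lengths are illustrative): every job arrives at time 0 and runs to completion in queue order, so with identical jobs any order gives the same average turnaround.

def fifo_average_turnaround(run_times):
    # All jobs arrive at time 0 and are served strictly in queue order.
    clock = 0
    turnarounds = []
    for rt in run_times:
        clock += rt                # the job runs to completion
        turnarounds.append(clock)  # turnaround = completion - arrival (arrival is 0)
    return sum(turnarounds) / len(turnarounds)

# With identical jobs (constraint 1), every ordering is optimal.
print(fifo_average_turnaround([10, 10, 10]))   # 20.0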

4.2 Shortest job first (SJF)

We still require that all jobs arrive a priori, but now the jobs can have different run times. It is important to note that constraint 5 still holds, so we know how long each job will run when it enters the system for scheduling. Constraint 1 no longer holds.

Example: Job A will take 100 time units. Jobs B and C will each take 10 time units. Let's look at two possible schedules.

Figure 2: Shortest Job First

The average turnaround time differs dramatically. The order in which we set the schedule now matters. Sorting the jobs so that the shortest job is always selected next results in another optimal schedule. However, now we have the overhead of sorting. This is low, however, because all jobs arrive before any jobs are scheduled, so the sort is only done once.
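A hedged Python sketch of that example (the simulation itself is illustrative, not code from the notes): running A first versus sorting shortest-first changes the average turnaround from 110 to 50 time units.

def average_turnaround(run_times):
    # All jobs arrive at time 0; jobs run to completion in list order.
    clock, total = 0, 0
    for rt in run_times:
        clock += rt
        total += clock             # turnaround = completion - arrival (arrival is 0)
    return total / len(run_times)

print(average_turnaround([100, 10, 10]))           # A first: 110.0
print(average_turnaround(sorted([100, 10, 10])))   # shortest job first: 50.0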

4.3 Shortest job next (SJN)

What happens if we allow jobs to arrive at any time? If we remove constraint 2, scheduling becomes much harder. If we allow late arrivals, we cannot establish an optimal total ordering a priori. Instead, we potentially have to sort the list of jobs whenever we want to schedule the next process. After all, a new job, if shorter, should move to the front of the line, right? This approach is called shortest job next.[2]

Example: Job A will take 100 time units. Jobs B and C will each take 10 time units. We now allow late jobs and have interrupts.

Figure 3: Shortest Job First with Interrupts

What a mess! Now we have the overhead (latency) of running the scheduler after each job finishes. This is a lot of work. The benefits had better outweigh the costs! While we get optimal turnaround time for the shortest jobs, how about jobs that will take a long time? Sure, they will be delayed in the presence of shorter jobs, but could they also suffer from starvation? Unfortunately, the answer is an emphatic yes!

[2] See Shortest job next at Wikipedia.
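The following is a minimal sketch of non-preemptive SJN with late arrivals (an illustration under assumed arrival times, not code from the notes): whenever the CPU goes idle, the shortest job that has already arrived is chosen, and once started a job still runs to completion.

def sjn(jobs):
    # jobs: list of (arrival_time, run_time) tuples
    pending = sorted(jobs)                     # ordered by arrival time
    clock, order = 0, []
    while pending:
        ready = [j for j in pending if j[0] <= clock]
        if not ready:                          # nothing has arrived yet: idle
            clock = pending[0][0]
            continue
        job = min(ready, key=lambda j: j[1])   # shortest ready job goes next
        pending.remove(job)
        clock += job[1]                        # non-preemptive: run to completion
        order.append(job)
    return order

# A (100 units) arrives first, so B and C (10 units each) must wait behind it.
print(sjn([(0, 100), (5, 10), (5, 10)]))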

5.1 Shortest time to completion first (STCF)

Also called shortest remaining time[3] and shortest time remaining. For this approach, each time a process is removed from the CPU, we set

T_remaining = T_remaining - T_CPU

where T_CPU is the amount of time the process used during its current time slice.

Figure 4: Shortest Time to Completion with Interrupts

What about long running jobs? You guessed it: starvation.

5.2 Allowing I/O

When I/O is allowed, more issues need to be handled:

- CPU utilization becomes important, requiring a more elaborate concept of process state.
- A blocked state is now necessary.

Figure 5: XV6 Process State Model

[3] See Shortest remaining time at Wikipedia.
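A hedged Python sketch of STCF (the one-unit time slice and the arrival times are assumptions for illustration): at every slice boundary the scheduler picks the ready job with the least remaining time, and the remaining time is reduced by the time just used, as in the formula above.

def stcf(jobs):
    # jobs: list of (arrival_time, run_time); the list index identifies the job
    remaining = {i: rt for i, (_, rt) in enumerate(jobs)}
    clock, schedule = 0, []
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= clock]
        if not ready:                                # nothing runnable yet
            clock += 1
            continue
        i = min(ready, key=lambda k: remaining[k])   # least time to completion
        remaining[i] -= 1                            # T_remaining = T_remaining - T_CPU (slice = 1)
        schedule.append(i)
        if remaining[i] == 0:
            del remaining[i]
        clock += 1
    return schedule

# B and C preempt the long job A and finish first; A only completes once no shorter job remains.
print(stcf([(0, 100), (1, 10), (2, 10)]))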

Blocked jobs are not scheduled. We are able to get schedule compression.

Figure 6: CPU Utilization with I/O Enabled

5.3 Round Robin (RR)

Round robin uses a simple FIFO queue. This looks suspiciously like FIFO scheduling, but interrupts give this approach better applicability, especially for interactive jobs. We will now remove constraint 4. Our environment is very close to the modern environment for schedulers.

New perspective on jobs:

- Each job is comprised of many sub-jobs based on arrival, each one time slice in length
- Ordered queue, serviced in FIFO order
- At fixed intervals: remove the job from the CPU, put it at the back of the queue, take a new job from the front of the queue
- Removes the need for time to completion
- Good for interactive jobs

What does it mean for a job to arrive? With time slices, we can redefine job arrival time such that the definition for T_response now makes more sense. Now, a job arrives when it is placed on a scheduling queue and completes when it is removed from the CPU. What about jobs that take many time slices to complete? Oh look! We no longer care about constraint 5, so let's throw that one away. We are now operating without any of the original constraints. All should be perfect, right? Well, not quite.

Figure 7: Round Robin
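The queue mechanism shown in Figure 7 can be sketched as follows (a minimal Python illustration, not from the notes; the slice length and job lengths are assumed values): at each interval the running job goes to the back of the FIFO queue and the job at the front gets the CPU.

from collections import deque

def round_robin(run_times, slice_len=4):
    queue = deque(range(len(run_times)))   # all jobs queued at time 0
    remaining = list(run_times)
    schedule = []
    while queue:
        i = queue.popleft()                # take the job at the front of the queue
        used = min(slice_len, remaining[i])
        remaining[i] -= used
        schedule.append((i, used))         # record (job, time used this turn)
        if remaining[i] > 0:
            queue.append(i)                # unfinished: back of the queue
    return schedule

print(round_robin([10, 5, 8]))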

5.4 Priority

What if we add priority to RR? How about a simple high/low two-queue scheme? We'll call the high priority queue interactive and the low priority queue CPU bound. The low priority queue will only be checked when the high priority queue is empty. All jobs will arrive on the interactive queue. If a job uses its entire time slice, we will assume that it is a CPU bound job and place it on that queue, in round robin order, when the time slice expires.

How well does this approach work? Is there any chance of starvation? Sure would be nice to be able to favor short running jobs without causing problems for long running jobs.

5.5 Multilevel feedback queue (MLFQ)

In an ideal world, our scheduler would be able to learn about all the jobs and be able to favor interactive ones, for good response time, but not starve CPU-bound jobs. MLFQ to the rescue!

Some observations:

- Long running jobs don't care about response time
- Interactive jobs require good response time
- This implies a priority. How would we implement priorities in RR?

Design:

- Remember that we want to keep overhead low
- Starvation must be avoided
- Multiple queues based on priority
- Serviced in high-to-low order
- Each queue is RR

We now have the concept of priority. We want to prioritize interactive jobs without starving long-running jobs. Any algorithm that tackles this problem will need to be adaptive.

Initial Rules (flawed)

1: If Priority(A) > Priority(B), A runs (B doesn't).
2: If Priority(A) = Priority(B), A & B run in RR.
3: When a job enters the system, it is placed at the highest priority.
4a: If a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue).
4b: If a job gives up the CPU before the time slice is up, it stays at the same priority level.

The issue: Rules 4a and 4b are subject to gaming. Remove the dependence on the time slice with a budget.

New Rule 4: When the budget is used up, demote the job and provide a new budget.
- Removes gaming based on the time slice
- More fair and equitable

Our old friend starvation:

- Lowering a priority means that jobs at higher priorities are given preference. What if there is always a higher priority job?
- In a busy system with many short-lived processes, a demoted process may be shut out from using the CPU. Process starvation is now possible.

How do we fix this problem? Promotion to the rescue!

Rule 5: After some time period S, promote all jobs.

Two strategies:

- Promote all jobs to the highest priority
- Promote each job one level

Which is simplest? Which better meets the goal of giving interactive jobs priority while not starving long running jobs?

MLFQ Final Rule Set

1: If Priority(A) > Priority(B), A runs (B doesn't).
2: If Priority(A) = Priority(B), A & B run in RR.
3: When a job enters the system, it is placed at the highest priority (the topmost queue).
4: Once a job uses up its time allotment (budget) at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).
5: After some time period S, promote all jobs.

In this class, we use one-level promotion.
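The rule set above can be sketched as follows (a hedged Python illustration, not the class's reference implementation; the number of queues and the budget are assumed values, and promotion moves each job up one level, as used in this class).

from collections import deque

NUM_QUEUES, BUDGET = 3, 8                      # queue 0 is the highest priority

class MLFQ:
    def __init__(self):
        self.queues = [deque() for _ in range(NUM_QUEUES)]
        self.budget = {}                       # job -> remaining allotment at its level

    def admit(self, job):                      # Rule 3: new jobs enter at the top
        self.queues[0].append(job)
        self.budget[job] = BUDGET

    def pick(self):                            # Rules 1 and 2: highest non-empty queue, RR within it
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), level
        return None, None

    def requeue(self, job, level, used):       # called whenever the job leaves the CPU
        self.budget[job] -= used
        if self.budget[job] <= 0:
            if level < NUM_QUEUES - 1:
                level += 1                     # Rule 4: budget exhausted, demote one queue
            self.budget[job] = BUDGET          # fresh budget at the (possibly new) level
        self.queues[level].append(job)

    def promote_all(self):                     # Rule 5: every S time units, move jobs up one level
        for level in range(1, NUM_QUEUES):
            while self.queues[level]:
                job = self.queues[level].popleft()
                self.budget[job] = BUDGET
                self.queues[level - 1].append(job)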

Figure 8: MLFQ

Time to completion is no longer necessary, so we can drop it. We do not care how long a job will be in our system. We also do not care if the job is interactive, long running, or both. However, our scheduling algorithms are more complex. They take more time; that time comes out of the time slice and hence out of the user process. Question: Is the result worth the overhead?

A Note on Complexity

- FIFO and MLFQ are both O(1)
- MLFQ better tracks jobs that change back and forth between interactive and long running, so it better represents the real world. Why?
- Promotion strategies: reset vs. one-level. Would there be a perceivable difference between the two strategies in the real world? Why or why not?

Glossary

atomic  Executing as a single unit or block of computation. An atomic section of code is said to have transactional semantics. No intermediate state for the code unit is visible outside of the atomic transaction.

atomic transaction  An atomic transaction is an indivisible and irreducible series of operations such that either all occur, or nothing occurs.

batch  dummy

binary semaphore  dummy

busy waiting  Busy-waiting, busy-looping, or spinning is a technique in which a process repeatedly checks to see if a condition is true, such as whether keyboard input or a lock is available.[4]

concurrency  The ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.

concurrency control primitive  The basic concurrency control primitives in this class are the lock, condition variable, and semaphore. Concurrency control primitives are used to synchronize operations among multiple threads of control.

concurrent  Two or more operations are said to be concurrent if they can occur at the same time or appear to occur at the same time. The context switch plays a large role in concurrency.

condition variable (CV)  Condition variables allow threads to synchronize based upon the actual value of data. A condition variable is always used in conjunction with a mutex lock.[5] A condition variable represents some condition that a thread can wait on, until the condition occurs, or notify other waiting threads that the condition has occurred. A very useful primitive for signaling between threads. A condition variable indicates an event; you cannot store or retrieve a value from a CV. There are three operations on condition variables:
  wait()       Block until another thread calls signal() or broadcast() on the CV
  signal()     Wake up one thread waiting on the CV
  broadcast()  Wake up all threads waiting on the CV
Compare with semaphore.

[4] See Wikipedia
[5] See this LLNL Tutorial

context switch  The process of storing the state of a process or of a thread, so that it can be restored and execution resumed from the same point later. This allows multiple processes to share a single CPU, and is an essential feature of a multitasking operating system. The precise meaning of the phrase context switch varies significantly in usage. In a multitasking context, it refers to the process of storing the system state for one task, so that task can be paused and another task resumed. A context switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up CPU time for other tasks. Some operating systems also require a context switch to move between user mode and kernel mode tasks. The process of context switching can have a negative impact on system performance, although the size of this effect depends on the nature of the switch being performed.

CPU bound  dummy

critical section  A critical section is a piece of code that accesses a shared variable (or more generally, a shared resource) and must not be concurrently executed by more than one thread.

deadlock  A state in which each member of a group is waiting for another member, including itself, to take action, such as sending a message or, more commonly, releasing a lock. Deadlock is a common problem in multiprocessing systems, parallel computing, and distributed systems, where software and hardware locks are used to arbitrate shared resources and implement process synchronization. In an operating system, a deadlock occurs when a process or thread enters a waiting state because a requested system resource is held by another waiting process, which in turn is waiting for another resource held by another waiting process. If a process is unable to change its state indefinitely because the resources requested by it are being used by another waiting process, then the system is said to be in a deadlock.

deterministic  An algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.

exception  In general, an exception breaks the normal flow of execution and executes a preregistered exception handler. The details of how this is done depend on whether it is a hardware or software exception and how the software exception is implemented. Some exceptions, especially hardware ones, may be handled so gracefully that execution can resume where it was interrupted.[6]

general semaphore  long dummy

invariant  Invariants are properties of data structures that are maintained across operations. Typically, the correct behavior of an operation depends on the invariants being true when the operation begins. The operation may temporarily violate the invariants but must reestablish them before finishing.

involuntary yield  dummy

[6] See Wikipedia

latency  dummy

lock  A lock or mutex (from mutual exclusion) is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy.[7] See also reentrant mutex.

mutex  A mutex provides multiple threads with access to a shared resource such that a second thread that needs to acquire a mutex already acquired by another thread has to wait until the first thread releases the mutex. Care should be taken to ensure that a thread does not attempt to acquire a mutex that it already holds, as this can result in a deadlock. See also lock.

mutex lock  See mutex.

mutual exclusion  A property of concurrency control, which is instituted for the purpose of preventing race conditions; it is the requirement that one thread of execution never enters its critical section at the same time that another concurrent thread of execution enters its own critical section.

nondeterministic  In computer science, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. An improperly constructed concurrent algorithm can perform differently on different runs due to a race condition.

parallel  Occurring at the same time.

preemptive  dummy

process control block  dummy

protection  dummy

race  When multiple threads of execution enter the critical section at roughly the same time; both attempt to update the shared data structure, leading to a surprising (and perhaps undesirable) outcome. See also race condition.

race condition  A race condition occurs when two or more threads can access shared data and they try to change it at the same time. Because the thread scheduling algorithm can swap between threads at any time, you don't know the order in which the threads will attempt to access the shared data. Therefore, the result of the change in data is dependent on the thread scheduling algorithm, i.e., both threads are racing to access/change the data.[8]

reentrant mutex  A reentrant mutex (also called recursive mutex or recursive lock) is a type of mutual exclusion (mutex) device that may be locked multiple times by the same process or thread, without causing deadlock.

[7] See Wikipedia
[8] See this thread from Stack Overflow

While any attempt to perform the lock operation on an ordinary mutex (lock) would either fail or block when the mutex is already locked, on a recursive mutex this operation will succeed if and only if the locking thread is the one that already holds the lock or if the lock is not held by any thread. Typically, a recursive mutex tracks the number of times it has been locked, and requires equally many unlock operations to be performed before other threads may lock it.[9]

scheduler  dummy

scheduling  dummy

scheduling process  dummy

semaphore  A semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system such as a multitasking operating system. A semaphore is simply a variable. This variable is used to solve critical section problems and to achieve process synchronization in the multiprocessing environment. A trivial semaphore is a plain variable that is changed (for example, incremented or decremented, or toggled) depending on programmer-defined conditions. A useful way to think of a semaphore as used in a real-world system is as a record of how many units of a particular resource are available, coupled with operations to adjust that record safely (i.e., to avoid race conditions) as units are required or become free, and, if necessary, wait until a unit of the resource becomes available. Semaphores are a useful tool in the prevention of race conditions; however, their use is by no means a guarantee that a program is free from these problems. Semaphores which allow an arbitrary resource count are called counting semaphores, while semaphores which are restricted to the values 0 and 1 (or locked/unlocked, unavailable/available) are called binary semaphores and are used to implement locks.[10]
A semaphore is a shared counter with two operations:
  P() or wait() or down()   From the Dutch proberen, meaning to test. Decrement the semaphore value. If Sem.val < 0, enter the wait list.
  V() or signal() or up()   From the Dutch verhogen, meaning to increase. Atomically increment the semaphore value and wake a thread if necessary: if ++Sem.val <= 0, wake a thread.

semaphores  Plural of semaphore.

serialization  dummy

sleep lock  Another name for a binary semaphore, which can logically be considered a simple lock with a sleep queue. In Linux, this type of lock is called a mutex. Note that, in this class, we instead consider mutex as a synonym for lock.

[9] See Wikipedia
[10] See Wikipedia

spin lock  A lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released.[11] Uniprocessor architectures have the option of using uninterruptible sequences of instructions, using special instructions or instruction prefixes to disable interrupts temporarily, but this technique does not work for multiprocessor shared-memory machines. Proper support for locks in a multiprocessor environment can require quite complex hardware or software support, with substantial synchronization issues.

starvation  A process is perpetually denied necessary resources to process its work. Starvation may be caused by errors in a scheduling or mutual exclusion algorithm, but can also be caused by resource leaks, and can be intentionally caused via a denial-of-service attack such as a fork bomb.

synchronization primitive  See concurrency control primitive.

system call  dummy

time of check to time of use  In software development, time of check to time of use (TOCTOU, TOCTTOU or TOC/TOU) is a class of software bugs caused by changes in a system between the checking of a condition (such as a security credential) and the use of the results of that check.[12] This is one example of a race condition.

time slice  dummy

TOCTOU  See time of check to time of use.

TOCTTOU  See time of check to time of use.

transputer  The transputer is a series of pioneering microprocessors from the 1980s, featuring integrated memory and serial communication links, intended for parallel computing. They were designed and produced by Inmos, a semiconductor company based in Bristol, United Kingdom. For some time in the late 1980s, many considered the transputer to be the next great design for the future of computing. While Inmos and the transputer did not achieve this expectation, the transputer architecture was highly influential in provoking new ideas in computer architecture, several of which have re-emerged in different forms in modern systems.

virtual CPU  dummy

voluntary yield  dummy

wait lock  See spin lock.

[11] See Wikipedia
[12] See Wikipedia
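To make the semaphore entry's P()/V() description concrete, here is a minimal Python sketch (not from these notes; Python's threading module already supplies a Semaphore class, and this version keeps the count non-negative rather than letting a negative value count the waiters):

import threading

class CountingSemaphore:
    def __init__(self, value=1):
        self.value = value                    # units of the resource currently available
        self.cond = threading.Condition()

    def P(self):                              # wait()/down(): acquire one unit
        with self.cond:
            while self.value <= 0:            # none available: join the wait list
                self.cond.wait()
            self.value -= 1

    def V(self):                              # signal()/up(): release one unit
        with self.cond:
            self.value += 1
            self.cond.notify()                # wake one waiting thread, if any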


CMPS 111 Spring 2003 Midterm Exam May 8, Name: ID: CMPS 111 Spring 2003 Midterm Exam May 8, 2003 Name: ID: This is a closed note, closed book exam. There are 20 multiple choice questions and 5 short answer questions. Plan your time accordingly. Part I:

More information

Chapter 5 Concurrency: Mutual Exclusion and Synchronization

Chapter 5 Concurrency: Mutual Exclusion and Synchronization Operating Systems: Internals and Design Principles Chapter 5 Concurrency: Mutual Exclusion and Synchronization Seventh Edition By William Stallings Designing correct routines for controlling concurrent

More information

Scheduling Mar. 19, 2018

Scheduling Mar. 19, 2018 15-410...Everything old is new again... Scheduling Mar. 19, 2018 Dave Eckhardt Brian Railing Roger Dannenberg 1 Outline Chapter 5 (or Chapter 7): Scheduling Scheduling-people/textbook terminology note

More information

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems

Processes. CS 475, Spring 2018 Concurrent & Distributed Systems Processes CS 475, Spring 2018 Concurrent & Distributed Systems Review: Abstractions 2 Review: Concurrency & Parallelism 4 different things: T1 T2 T3 T4 Concurrency: (1 processor) Time T1 T2 T3 T4 T1 T1

More information

8th Slide Set Operating Systems

8th Slide Set Operating Systems Prof. Dr. Christian Baun 8th Slide Set Operating Systems Frankfurt University of Applied Sciences SS2016 1/56 8th Slide Set Operating Systems Prof. Dr. Christian Baun Frankfurt University of Applied Sciences

More information

1.1 CPU I/O Burst Cycle

1.1 CPU I/O Burst Cycle PROCESS SCHEDULING ALGORITHMS As discussed earlier, in multiprogramming systems, there are many processes in the memory simultaneously. In these systems there may be one or more processors (CPUs) but the

More information

Process- Concept &Process Scheduling OPERATING SYSTEMS

Process- Concept &Process Scheduling OPERATING SYSTEMS OPERATING SYSTEMS Prescribed Text Book Operating System Principles, Seventh Edition By Abraham Silberschatz, Peter Baer Galvin and Greg Gagne PROCESS MANAGEMENT Current day computer systems allow multiple

More information

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed)

Announcements. Program #1. Reading. Due 2/15 at 5:00 pm. Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) Announcements Program #1 Due 2/15 at 5:00 pm Reading Finish scheduling Process Synchronization: Chapter 6 (8 th Ed) or Chapter 7 (6 th Ed) 1 Scheduling criteria Per processor, or system oriented CPU utilization

More information

Interprocess Communication By: Kaushik Vaghani

Interprocess Communication By: Kaushik Vaghani Interprocess Communication By: Kaushik Vaghani Background Race Condition: A situation where several processes access and manipulate the same data concurrently and the outcome of execution depends on the

More information

CSL373: Lecture 6 CPU Scheduling

CSL373: Lecture 6 CPU Scheduling CSL373: Lecture 6 CPU Scheduling First come first served (FCFS or FIFO) Simplest scheduling algorithm cpu cpu 0 0 Run jobs in order that they arrive Disadvantage: wait time depends on arrival order. Unfair

More information

OPERATING SYSTEMS CS3502 Spring Processor Scheduling. Chapter 5

OPERATING SYSTEMS CS3502 Spring Processor Scheduling. Chapter 5 OPERATING SYSTEMS CS3502 Spring 2018 Processor Scheduling Chapter 5 Goals of Processor Scheduling Scheduling is the sharing of the CPU among the processes in the ready queue The critical activities are:

More information

* What are the different states for a task in an OS?

* What are the different states for a task in an OS? * Kernel, Services, Libraries, Application: define the 4 terms, and their roles. The kernel is a computer program that manages input/output requests from software, and translates them into data processing

More information

Operating Systems ECE344. Ding Yuan

Operating Systems ECE344. Ding Yuan Operating Systems ECE344 Ding Yuan Announcement & Reminder Midterm exam Will grade them this Friday Will post the solution online before next lecture Will briefly go over the common mistakes next Monday

More information

Scheduling. Scheduling 1/51

Scheduling. Scheduling 1/51 Scheduling 1/51 Scheduler Scheduling Scheduler allocates cpu(s) to threads and processes. This action is known as scheduling. The scheduler is a part of the process manager code that handles scheduling.

More information

Chapter 9. Uniprocessor Scheduling

Chapter 9. Uniprocessor Scheduling Operating System Chapter 9. Uniprocessor Scheduling Lynn Choi School of Electrical Engineering Scheduling Processor Scheduling Assign system resource (CPU time, IO device, etc.) to processes/threads to

More information