Chapter 6: CPU Scheduling. Operating System Concepts, 9th Edition

Chapter 6: CPU Scheduling (Silberschatz, Galvin and Gagne, 2013)

Chapter 6: CPU Scheduling. Topics: Basic Concepts; Scheduling Criteria; Scheduling Algorithms; Thread Scheduling; Multiple-Processor Scheduling; Real-Time CPU Scheduling; Operating Systems Examples; Algorithm Evaluation.

Objectives: To introduce CPU scheduling, which is the basis for multiprogrammed operating systems. To describe various CPU-scheduling algorithms. To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system. To examine the scheduling algorithms of several operating systems.

Review the life cycle of a process in a single-processor system. A process is executed until it must wait, typically for the completion of some I/O request.

Basic Concepts: the CPU-I/O burst cycle. Process execution consists of a cycle of CPU execution and I/O wait: a CPU burst is followed by an I/O burst. Maximum CPU utilization is obtained with multiprogramming. The CPU-burst distribution is of main concern.

Histogram of CPU-burst Times. An I/O-bound program typically has many short CPU bursts; a CPU-bound program might have a few long CPU bursts. Can you give examples of I/O-bound and CPU-bound programs?

Review: Schedulers. The short-term scheduler (CPU scheduler) selects which process should be executed next and allocates the CPU; it is sometimes the only scheduler in a system (when every process is in memory, as in Windows). It is invoked frequently (milliseconds), so it must be fast. The long-term scheduler (job scheduler) selects which processes should be brought into the ready queue; it is invoked infrequently (seconds or minutes), so it may be slow, and it controls the degree of multiprogramming. Processes can be described as either I/O bound (more time doing I/O than computation; many short CPU bursts) or CPU bound (more time doing computation; very long CPU bursts). The long-term scheduler strives for a good process mix. What happens if all selected processes are I/O bound? What happens if all selected processes are CPU bound?

Medium-Term Scheduler. A medium-term scheduler can be added if the degree of multiprogramming needs to decrease: remove a process from memory, store it on disk, and later bring it back in from disk to continue execution (swapping). Why a medium-term scheduler? It may be necessary to improve the process mix or to free memory.

CPU Scheduler. When can process scheduling take place?

CPU Scheduler (cont.). CPU-scheduling decisions may take place when a process: 1. switches from the running to the waiting state; 2. switches from the running to the ready state; 3. switches from the waiting to the ready state; 4. terminates. When scheduling occurs only under circumstances 1 and 4, the scheme is nonpreemptive (cooperative): once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state. This scheme was used by Microsoft Windows 3.x; Windows 95 introduced preemptive scheduling, and all subsequent versions of Windows have used preemptive scheduling.

CPU Scheduler (cont.). Otherwise, scheduling is preemptive. Preemptive scheduling can result in race conditions when data are shared among several processes: consider the shared-data example given in Chapter 5. Preemption also affects the design of the operating-system kernel: consider preemption while in kernel mode. Several operating systems, including most versions of UNIX, deal with this problem by waiting either for a system call to complete or for an I/O block to take place before doing a context switch; this kernel-execution model is a poor one for supporting real-time computing. Also consider interrupts occurring during crucial OS activities: the sections of code affected by interrupts must be guarded from simultaneous use, for example by disabling interrupts at entry and re-enabling them at exit. The sections of code that disable interrupts do not occur very often and typically contain few instructions.

Dispatcher. The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria. What criteria make sense? CPU utilization: keep the CPU as busy as possible. Throughput: the number of processes that complete their execution per time unit. Turnaround time: the amount of time to execute a particular process. Waiting time: the amount of time a process has been waiting in the ready queue. Response time: the amount of time from when a request was submitted until the first response is produced, not the time to output the response (relevant for time-sharing environments). For interactive systems (such as desktop systems), it is more important to minimize the variance in response time than to minimize the average response time; however, little work has been done on CPU-scheduling algorithms that minimize variance.

Scheduling Algorithm Optimization Criteria. CPU utilization. Throughput: the number of processes that are completed per time unit. Turnaround time: the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O. Waiting time: the sum of the periods spent waiting in the ready queue. Response time: the time it takes to start responding, not the time it takes to output the response. For interactive systems (such as desktop systems), it is more important to minimize the variance in response time than to minimize the average response time.

First-Come, First-Served (FCFS) Scheduling.
Process  Burst Time
P1       24
P2       3
P3       3
Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is:
| P1 (0-24) | P2 (24-27) | P3 (27-30) |
Waiting time for P1 = 0, P2 = 24, P3 = 27. Average waiting time: (0 + 24 + 27)/3 = 17.
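To make the arithmetic concrete, here is a minimal Python sketch (not from the slides) that computes FCFS waiting times for the workload above.

```python
# FCFS: processes run in arrival order; a process's waiting time is the
# sum of the bursts of all processes that ran before it.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # time already spent in the ready queue
        elapsed += burst        # this process now occupies the CPU
    return waits

bursts = [24, 3, 3]                      # P1, P2, P3 from the example
waits = fcfs_waiting_times(bursts)
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```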

FCFS Scheduling (cont.). Suppose instead that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is:
| P2 (0-3) | P3 (3-6) | P1 (6-30) |
Waiting time for P1 = 6, P2 = 0, P3 = 3. Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case. Note that the FCFS scheduling algorithm is nonpreemptive. Convoy effect: short processes stuck behind a long process. Consider one CPU-bound process and many I/O-bound processes: what will happen under FCFS scheduling, and how could it be improved?

Shortest-Job-First (SJF) Scheduling. Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time. SJF is optimal: it gives the minimum average waiting time for a given set of processes. The difficulty is knowing the length of the next CPU request (we could ask the user).

Example of SJF.
Process  Arrival Time  Burst Time
P1       0.0           6
P2       2.0           8
P3       4.0           7
P4       5.0           3
SJF scheduling chart (nonpreemptive; here all processes are treated as available at time 0):
| P4 (0-3) | P1 (3-9) | P3 (9-16) | P2 (16-24) |
Average waiting time = (3 + 16 + 9 + 0)/4 = 7.
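A similar sketch for nonpreemptive SJF, again assuming all four processes are available at time 0 as in the chart above.

```python
# Nonpreemptive SJF: repeatedly pick the shortest burst among the
# remaining processes (all assumed available at time 0).
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                    # P1..P4 from the example
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))    # [3, 16, 9, 0] 7.0
```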

Determining the Length of the Next CPU Burst. We can only estimate the length, which should be similar to the previous ones; we then pick the process with the shortest predicted next CPU burst. The estimate can be computed from the lengths of previous CPU bursts, using exponential averaging. Define: t_n = actual length of the n-th CPU burst; τ_{n+1} = predicted value for the next CPU burst; α, with 0 ≤ α ≤ 1. Then τ_{n+1} = α t_n + (1 − α) τ_n. Commonly, α is set to 1/2.
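The recurrence is easy to check numerically. The sketch below is illustrative only; the initial guess of 10 and the short burst sequence were chosen for illustration.

```python
# Exponential averaging of CPU-burst lengths:
#   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
def predict_next_burst(tau0, actual_bursts, alpha=0.5):
    tau = tau0
    for t in actual_bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend newest burst with history
    return tau

# Initial guess tau0 = 10, observed bursts 6, 4, 6, 4, alpha = 1/2.
print(predict_next_burst(10.0, [6, 4, 6, 4]))   # 5.0
```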

Prediction of the Length of the Next CPU Burst

Examples of Exponential Averaging. If α = 0, then τ_{n+1} = τ_n: recent history does not count. If α = 1, then τ_{n+1} = t_n: only the actual last CPU burst counts. If we expand the formula, we get: τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0. Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.

Shortest-Remaining-Time-First. The SJF algorithm can be either preemptive or nonpreemptive. Preemptive SJF scheduling is also called shortest-remaining-time-first.

Example of Shortest-Remaining-Time-First. Now we add the concepts of varying arrival times and preemption to the analysis.
Process  Arrival Time  Burst Time
P1       0             8
P2       1             4
P3       2             9
P4       3             5
Preemptive SJF Gantt chart:
| P1 (0-1) | P2 (1-5) | P4 (5-10) | P1 (10-17) | P3 (17-26) |
Average waiting time = [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 ms.
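The preemptive variant can be simulated one time unit at a time; this sketch (not from the slides) reproduces the 6.5 ms average above.

```python
# Shortest-remaining-time-first (preemptive SJF): at each tick, run the
# arrived process with the least remaining work.
def srtf_waiting_times(arrival, burst):
    n = len(burst)
    remaining = burst[:]
    finish = [0] * n
    t = 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n) if arrival[i] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda k: remaining[k])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = turnaround - burst = (finish - arrival) - burst
    return [finish[i] - arrival[i] - burst[i] for i in range(n)]

arrival = [0, 1, 2, 3]
burst = [8, 4, 9, 5]
waits = srtf_waiting_times(arrival, burst)
print(waits, sum(waits) / len(waits))   # [9, 0, 15, 2] 6.5
```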

Priority Scheduling. A priority number (an integer) is associated with each process; priorities can be defined either internally or externally. The CPU is allocated to the process with the highest priority (here, the smallest integer means the highest priority). Priority scheduling can be preemptive or nonpreemptive. SJF is priority scheduling where the priority is the inverse of the predicted next CPU-burst time. Problem: starvation, where low-priority processes may never execute. Solution: aging, where the priority of a process increases as time progresses.

Example of Priority Scheduling.
Process  Burst Time  Priority
P1       10          3
P2       1           1
P3       2           4
P4       1           5
P5       5           2
Priority scheduling Gantt chart:
| P2 (0-1) | P5 (1-6) | P1 (6-16) | P3 (16-18) | P4 (18-19) |
Average waiting time = 8.2 ms.
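A nonpreemptive priority scheduler is a one-line change from the SJF sketch: sort by priority instead of by burst length. The sketch below (not from the slides) reproduces the 8.2 ms average.

```python
# Nonpreemptive priority scheduling: run the ready processes in order
# of priority (smallest number = highest priority).
def priority_waiting_times(bursts, priorities):
    order = sorted(range(len(bursts)), key=lambda i: priorities[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [10, 1, 2, 1, 5]                # P1..P5 from the example
priorities = [3, 1, 4, 5, 2]
waits = priority_waiting_times(bursts, priorities)
print(waits, sum(waits) / len(waits))    # [6, 0, 16, 18, 1] 8.2
```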

Checkout Line in a Grocery Store. A single server: how would you schedule it? Preemptive or nonpreemptive?

Round Robin (RR). Each process gets a small unit of CPU time (a time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. If the process terminates before the time quantum expires, the scheduler simply proceeds to the next process in the ready queue. If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once; no process waits more than (n − 1)q time units. A timer interrupts every quantum to schedule the next process. Performance: if q is large, RR degenerates to FIFO; if q is small, it must still be large with respect to the context-switch time, otherwise the overhead is too high.

Example of RR with Time Quantum = 4.
Process  Burst Time
P1       24
P2       3
P3       3
The Gantt chart is:
| P1 (0-4) | P2 (4-7) | P3 (7-10) | P1 (10-14) | P1 (14-18) | P1 (18-22) | P1 (22-26) | P1 (26-30) |
RR typically gives higher average turnaround time than SJF, but better response. q should be large compared to the context-switch time: q is usually 10 ms to 100 ms, while a context switch takes less than 10 μs.
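RR is naturally expressed with a FIFO ready queue; the following sketch (not from the slides) reproduces the schedule above.

```python
# Round robin with a fixed time quantum, using a FIFO ready queue.
from collections import deque

def rr_waiting_times(bursts, quantum):
    n = len(bursts)
    remaining = bursts[:]
    queue = deque(range(n))            # all processes ready at time 0
    finish, t = [0] * n, 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)            # preempted, goes to the tail
        else:
            finish[i] = t
    return [finish[i] - bursts[i] for i in range(n)]

waits = rr_waiting_times([24, 3, 3], 4)   # P1, P2, P3 with q = 4
print(waits, sum(waits) / len(waits))     # [6, 4, 7] ~5.67
```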

Time Quantum and Context-Switch Time. The performance of the RR algorithm depends heavily on the size of the time quantum.

Turnaround Time Varies with the Time Quantum. As a rule of thumb, 80% of CPU bursts should be shorter than q.

Multilevel Queue. The ready queue is partitioned into separate queues, e.g., foreground (interactive) and background (batch). A process stays permanently in a given queue. Each queue has its own scheduling algorithm, e.g., RR for the foreground queue and FCFS for the background queue. Scheduling must also be done between the queues: fixed-priority scheduling (i.e., serve all from foreground, then from background) carries the possibility of starvation; alternatively, time slicing gives each queue a certain amount of CPU time which it can schedule among its processes, e.g., 80% to the foreground queue in RR and 20% to the background queue in FCFS.

Multilevel Queue Scheduling

Multilevel Feedback Queue. A process can move between the various queues; aging can be implemented this way. A multilevel-feedback-queue scheduler is defined by the following parameters: the number of queues; the scheduling algorithm for each queue; the method used to determine when to upgrade a process; the method used to determine when to demote a process; and the method used to determine which queue a process will enter when it needs service.

Example of Multilevel Feedback Queue. Three queues: Q0, RR with a time quantum of 8 milliseconds; Q1, RR with a time quantum of 16 milliseconds; Q2, FCFS. Scheduling: a new job enters queue Q0, which is served in FCFS order. When it gains the CPU, the job receives 8 milliseconds; if it does not finish in 8 milliseconds, it is moved to queue Q1. At Q1 the job is again served in FCFS order and receives 16 additional milliseconds; if it still does not complete, it is preempted and moved to queue Q2.
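The following toy model (not from the slides) illustrates the demotion rule for this three-queue configuration. It ignores arrivals over time and preemption across queues, so it is a sketch of the idea rather than a full scheduler.

```python
# A toy three-level feedback queue: Q0 is RR with an 8 ms quantum,
# Q1 is RR with a 16 ms quantum, Q2 is FCFS (modeled as an unbounded
# quantum). A job that exhausts its quantum is demoted one level.
from collections import deque

QUANTA = [8, 16, float("inf")]

def mlfq_trace(jobs):
    queues = [deque(jobs), deque(), deque()]
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty queue
        name, remaining = queues[level].popleft()
        run = min(QUANTA[level], remaining)
        trace.append((name, level, run))
        if remaining - run > 0:
            queues[min(level + 1, 2)].append((name, remaining - run))  # demote
    return trace

# A 30 ms CPU burst gets 8 ms in Q0, 16 ms in Q1, and 6 ms in Q2.
for name, level, run in mlfq_trace([("A", 30), ("B", 5)]):
    print(name, "ran", run, "ms in Q%d" % level)
```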

Multiple-Processor Scheduling. CPU scheduling is more complex when multiple CPUs are available; here we assume homogeneous processors within the multiprocessor. Asymmetric multiprocessing: only one processor accesses the system data structures, alleviating the need for data sharing. Symmetric multiprocessing (SMP): each processor is self-scheduling; all processes may be in a common ready queue, or each processor may have its own private queue of ready processes. SMP is currently the most common.

Processor Affinity. A process has affinity for the processor on which it is currently running. Why? Consider what happens to cache memory when a process has been running on a specific processor: successive memory accesses by the process are often satisfied in cache memory, and when the process migrates to another processor, the cache of the second processor must be repopulated. Soft affinity: the OS attempts to keep a process running on the same processor but does not guarantee it. Hard affinity: a process can specify that it is not to migrate to other processors. Variations exist, including processor sets. The main-memory architecture of a system can also affect processor-affinity issues (see the next slide).
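On Linux, hard affinity can be requested from user space; for example, Python exposes the sched_setaffinity system call. This is a minimal sketch assuming a Linux system; the CPU numbers are arbitrary and must exist on the machine.

```python
# Pinning the current process to specific CPUs (hard affinity) on Linux.
import os

print("eligible CPUs:", os.sched_getaffinity(0))   # 0 means "this process"

# Restrict this process to CPUs 0 and 1 (chosen for illustration).
os.sched_setaffinity(0, {0, 1})
print("after pinning:", os.sched_getaffinity(0))
```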

NUMA and CPU Scheduling. Note that memory-placement algorithms can also consider affinity. NUMA: non-uniform memory access.

Multiple-Processor Scheduling: Load Balancing. With SMP, we need to keep all CPUs loaded for efficiency. Load balancing attempts to keep the workload evenly distributed. Push migration: a specific task periodically checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs. Pull migration: an idle processor pulls a waiting task from a busy processor.

Real-Time CPU Scheduling. Real-time scheduling can present obvious challenges. Soft real-time systems give no guarantee as to when a critical real-time process will be scheduled; hard real-time systems require that a task be serviced by its deadline. Event latency example: an antilock brake system must respond within 3-5 milliseconds. Two types of latency affect performance: (1) interrupt latency, the time from the arrival of an interrupt to the start of the routine that services it (interrupts can be disabled only for very short periods of time); (2) dispatch latency, the time for the scheduling dispatcher to take the current process off the CPU and switch to another.

Real-Time CPU Scheduling (cont.). The conflict phase of dispatch latency consists of: 1. preemption of any process running in kernel mode; 2. release by low-priority processes of resources needed by high-priority processes.

Priority-Based Scheduling. For real-time scheduling, the scheduler must support preemptive, priority-based scheduling, but this guarantees only soft real-time behavior. For hard real-time, the scheduler must also provide the ability to meet deadlines.

Process Characteristics. The processes are periodic: they require the CPU at constant intervals. Each has a processing time t, a deadline d, and a period p, with 0 ≤ t ≤ d ≤ p. The rate of a periodic task is 1/p.

Rate-Monotonic Scheduling. A priority is assigned to each task based on the inverse of its period: shorter periods mean higher priority, longer periods mean lower priority. Example: P1 has period p1 = 50 and processing time t1 = 20; P2 has period p2 = 100 and processing time t2 = 35; P1 is assigned a higher priority than P2.

Missed Deadlines with Rate-Monotonic Scheduling. Example: P1 has period p1 = 50 and processing time t1 = 25; P2 has period p2 = 80 and processing time t2 = 35.
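A small fixed-priority simulator (not from the slides) can check deadlines for these parameters: under rate-monotonic priorities, P2's first job is still unfinished when its deadline at t = 80 arrives. The sketch below makes the simplifying assumption that an overrun job is dropped when the next job of the same task is released.

```python
# Rate-monotonic scheduling: fixed priorities, shorter period = higher
# priority; deadlines are assumed equal to periods. Simulated per ms.
def rm_simulate(tasks, horizon):
    """tasks: list of (name, period, burst). Returns missed deadlines."""
    tasks = sorted(tasks, key=lambda x: x[1])   # index 0 = highest priority
    remaining = [0] * len(tasks)                # work left in the current job
    deadline = [0] * len(tasks)                 # absolute deadline of that job
    missed = []
    for t in range(horizon):
        for i, (name, period, burst) in enumerate(tasks):
            if t % period == 0:                 # a new job is released
                if remaining[i] > 0:            # previous job not finished in time
                    missed.append((name, deadline[i]))
                remaining[i] = burst
                deadline[i] = t + period
        for i in range(len(tasks)):             # run highest-priority ready job
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return missed

print(rm_simulate([("P1", 50, 25), ("P2", 80, 35)], horizon=400))  # [('P2', 80)]
```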

Earliest-Deadline-First Scheduling (EDF). Priorities are assigned according to deadlines: the earlier the deadline, the higher the priority; the later the deadline, the lower the priority. Example: P1 has period p1 = 50 and processing time t1 = 25; P2 has period p2 = 80 and processing time t2 = 35.

Operating System Examples: Linux scheduling, Windows scheduling, Solaris scheduling. We skip these examples.

Algorithm Evaluation. How do we select a CPU-scheduling algorithm for an OS? Determine the criteria, then evaluate the algorithms. Deterministic modeling is a type of analytic evaluation: it takes a particular predetermined workload and defines the performance of each algorithm for that workload. Consider 5 processes arriving at time 0; the next slide evaluates three algorithms on this workload.

Deterministic Evaluation. For each algorithm, calculate the minimum average waiting time. This is simple and fast, but it requires exact numbers as input and applies only to those inputs. For the example workload: FCFS gives 28 ms, nonpreemptive SJF gives 13 ms, and RR (quantum = 10) gives 23 ms.
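The three averages can be reproduced with a short script. The burst times below (10, 29, 3, 7, 12 for P1-P5) are an assumption, chosen because they are consistent with all three reported averages.

```python
# Deterministic modeling: evaluate FCFS, nonpreemptive SJF, and RR
# (q = 10) on one fixed workload, all processes arriving at time 0.
from collections import deque

def avg_wait_in_order(bursts, order):
    waits, elapsed = [], 0
    for i in order:
        waits.append(elapsed)
        elapsed += bursts[i]
    return sum(waits) / len(bursts)

def avg_wait_rr(bursts, q):
    remaining = bursts[:]
    queue = deque(range(len(bursts)))
    finish, t = {}, 0
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i]:
            queue.append(i)
        else:
            finish[i] = t
    return sum(finish[i] - bursts[i] for i in finish) / len(bursts)

bursts = [10, 29, 3, 7, 12]   # assumed burst times for P1..P5
print("FCFS:", avg_wait_in_order(bursts, range(len(bursts))))    # 28.0
sjf_order = sorted(range(len(bursts)), key=lambda i: bursts[i])
print("SJF: ", avg_wait_in_order(bursts, sjf_order))             # 13.0
print("RR:  ", avg_wait_rr(bursts, 10))                          # 23.0
```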

Queueing Models. These describe the arrival of processes and the CPU and I/O bursts probabilistically (commonly with an exponential distribution, described by its mean), and compute average throughput, utilization, waiting time, and so on. The computer system is described as a network of servers, each with a queue of waiting processes; knowing the arrival rates and service rates, we can compute utilization, average queue length, average wait time, etc.

Little's Formula. Let n = average queue length, W = average waiting time in the queue, and λ = average arrival rate into the queue. Little's law: in steady state, the rate at which processes leave the queue must equal the rate at which they arrive, thus n = λ × W. This holds for any scheduling algorithm and arrival distribution. For example, if on average 7 processes arrive per second and there are normally 14 processes in the queue, then the average wait time per process is 2 seconds.
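Applying the formula to the example is a one-line computation.

```python
# Little's law: n = lambda * W, so W = n / lambda.
arrival_rate = 7     # processes per second (lambda)
queue_length = 14    # average number of processes in the queue (n)

avg_wait = queue_length / arrival_rate
print(avg_wait, "seconds")   # 2.0 seconds
```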

Simulations. Queueing models are limited; simulations are more accurate. A simulation is a programmed model of the computer system in which the clock is a variable, and statistics indicating algorithm performance are gathered as it runs. The data to drive the simulation can be generated by a random-number generator according to probabilities (with distributions defined mathematically or empirically) or taken from trace tapes, which record sequences of real events in real systems.

Evaluation of CPU Schedulers by Simulation

Review and Exercises. Exercise 6.3 (SJF preemptive scheduling) and Exercise 6.16.

End of Chapter 6 Silberschatz, Galvin and Gagne 2013