Operating Systems. Process scheduling. Thomas Ropars.

Operating Systems
Process scheduling
Thomas Ropars
thomas.ropars@univ-grenoble-alpes.fr
2018

References

The content of these lectures is inspired by:
- The lecture notes of Renaud Lachaize
- The lecture notes of Prof. David Mazières
- The lecture notes of Arnaud Legrand
- Operating Systems: Three Easy Pieces by R. Arpaci-Dusseau and A. Arpaci-Dusseau

Other references:
- Modern Operating Systems by A. Tanenbaum
- Operating System Concepts by A. Silberschatz et al.

In this lecture

The scheduling problem:
- Definition
- Metrics to optimize
- Dealing with I/O

Scheduling policies:
- First come, first served
- Shortest job first
- Round robin (time slicing)
- Multi-level feedback queues
- Completely fair scheduler

Agenda
- The problem
- Textbook algorithms
- Multi-level feedback queues
- CFS
- Multiprocessor scheduling

CPU scheduling

[Figure: k ready processes P1, P2, ..., Pk queued for CPUs CPU 1, CPU 2, ..., CPU n]

The scheduling problem:
- The system has k processes ready to run
- The system has n ≥ 1 CPUs that can run them
- Which process should we assign to which CPU(s)?

About threads and multiprocessors

Thread scheduling: when the operating system implements kernel threads, scheduling is applied to threads. The following slides discuss process scheduling, but the discussion also applies to kernel threads.

Multiprocessors: having multiple CPUs available to schedule processes increases the complexity of the scheduling problem. In a first step, we consider scheduling on a single CPU.

Process state

[Figure: process state diagram — new -(admitted)-> ready -(scheduler dispatch)-> running -(exit)-> terminated; running -(interrupt)-> ready; running -(I/O or event wait)-> waiting; waiting -(I/O or event completion)-> ready]

Process states (in addition to new/terminated):
- Running: currently executing (or will execute on kernel return)
- Ready: can run, but the kernel has chosen a different process to run
- Waiting: needs an external event (e.g., end of a disk operation, signal on a condition variable) to proceed

Need for a scheduling decision

Which process should the kernel run?
- If no process is runnable, run the idle task (the CPU halts until the next interrupt) [1]
- If a single process is runnable, run this one
- If more than one process is runnable, a scheduling decision must be taken

[1] https://manybutfinite.com/post/what-does-an-idle-cpu-do/

When to schedule?

When is a scheduling decision taken?
1. A process switches from running to waiting state (I/O request, synchronization)
2. A process switches from running to ready state (an interrupt occurs, call to yield())
3. A process switches from new/waiting to ready state
4. A process terminates

Note that early schedulers were non-preemptive (e.g., Windows 3.x): no scheduling decision was taken until the running process explicitly released the CPU (cases 1, 4, or 2 with yield()).

Preemption

A process can be preempted when the kernel gets control. There are several such opportunities:

A running process can transfer control to the kernel through a trap (system call (including exit), page fault, illegal instruction, etc.):
- May put the current process to wait, e.g., read from disk
- May make other processes ready to run, e.g., fork, mutex release
- May destroy the current process

Periodic timer interrupt:
- If the running process has used up its time quantum, schedule another

Device interrupt (e.g., disk request completed, packet arrived on the network):
- A previously waiting process becomes ready
- Schedule it if it has a higher priority than the currently running process

Context switching

Changing the running process implies a context switch. This operation is processor dependent, but it typically includes:
- Save/restore general registers
- Save/restore floating point or other special registers
- Switch virtual address translations (e.g., pointer to the root of the paging structure), in case we are switching between processes (address space switch)
- Save/restore the program counter

A context switch has a non-negligible cost: in addition to saving/restoring registers, it may induce TLB flushes/misses, cache misses, etc. (different working set).

Context switch cost: cache misses

[Figure: animation showing the contents of the CPU cache as the scheduler switches from P1 to P2 and back to P1]

Scheduling criteria

Main performance metrics:
- Throughput: number of processes that complete per time unit (higher is better). Global performance of the system.
- Turnaround time: time for each process to complete (lower is better). Important from the point of view of one process.
- Response time: time from request to first response, e.g., from key press to character echo (lower is better). More meaningful than turnaround time for interactive jobs.

Secondary goals:
- CPU utilization: fraction of time that the CPU spends doing productive work (i.e., not idle) (to be maximized)
- Waiting time: time that each process spends waiting in the ready queue (to be minimized)
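To make these definitions concrete, here is a minimal sketch (not from the lecture; the Job fields and function names are my own) that computes the three main metrics from a completed schedule:

from dataclasses import dataclass

@dataclass
class Job:
    arrival: float      # time the job became ready
    first_run: float    # time it first got the CPU
    completion: float   # time it finished

def metrics(jobs, makespan):
    n = len(jobs)
    throughput = n / makespan                                      # jobs per time unit
    turnaround = sum(j.completion - j.arrival for j in jobs) / n   # average turnaround time
    response = sum(j.first_run - j.arrival for j in jobs) / n      # average response time
    return throughput, turnaround, response

# The FCFS example used later in the lecture (P1 = 24s, P2 = P3 = 3s):
jobs = [Job(0, 0, 24), Job(0, 24, 27), Job(0, 27, 30)]
print(metrics(jobs, makespan=30))   # (0.1, 27.0, 17.0)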

Scheduling policy

The problem is complex because there can be multiple (conflicting) goals:
- Fairness: prevent starvation
- Priority: reflect the relative importance of processes
- Deadlines: must do x by a certain time
- Reactivity: minimize response time
- Efficiency: minimize the overhead of the scheduler itself

There is no universal policy:
- Many goals: we cannot optimize for all of them
- Conflicting goals (e.g., throughput or priority versus fairness)

Agenda
- The problem
- Textbook algorithms
- Multi-level feedback queues
- CFS
- Multiprocessor scheduling

How to pick which process to run?

Why not pick the first runnable process in the process table?
- Expensive
- Weird priorities (do low PIDs get higher priority?)

We need to maintain a set of ready processes. What policy? FIFO? Priority?

First Come, First Served (FCFS)

Description:
- Idea: run jobs in order of arrival
- Implementation: a FIFO queue (simple)

Example: 3 processes. P1 needs 24 sec, P2 and P3 need 3 sec each. P1 arrives just before P2 and P3.

Timeline: P1 [0-24], P2 [24-27], P3 [27-30]

Performance:
- Throughput: 3 jobs / 30 sec = 0.1 jobs/sec
- Turnaround time: P1: 24, P2: 27, P3: 30 (avg = 27)
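As an illustration of the description above, here is a small FCFS sketch (my own, not code from the lecture) that replays the example and reports per-job turnaround times:

def fcfs(jobs):
    """jobs: list of (name, arrival, burst), assumed sorted by arrival time."""
    t = 0
    turnaround = {}
    for name, arrival, burst in jobs:
        t = max(t, arrival)       # the CPU may sit idle until the job arrives
        t += burst                # run the job to completion, in arrival order
        turnaround[name] = t - arrival
    return turnaround

print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
# {'P1': 24, 'P2': 27, 'P3': 30}  -> average turnaround time 27, as above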

Can we do better?

Suppose we schedule P2 and P3 first, and then P1.

Timeline: P2 [0-3], P3 [3-6], P1 [6-30]

Performance:
- Throughput: 3 jobs / 30 sec = 0.1 jobs/sec
- Turnaround time: P1: 30, P2: 3, P3: 6 (avg = 13)

Lessons learned:
- The scheduling algorithm can reduce turnaround time
- Minimizing waiting time can improve turnaround time and response time

Computation and I/O

Most jobs contain both computation and I/O (disk, network): a burst of computation, then a wait on I/O.

To maximize throughput, we must optimize both:
- CPU utilization
- I/O device utilization

Response time is very important for I/O-intensive jobs: the I/O device stays idle until the job gets a small amount of CPU to issue its next I/O request.

Computation and I/O

The idea is to overlap I/O and computation from multiple jobs.

Example: ideal scenario with a disk-bound grep + a CPU-bound matrix multiply.

[Figure: grep alternates short CPU bursts with "wait for disk" periods; the matrix multiply runs on the CPU whenever grep is waiting for the disk, and briefly waits for the CPU otherwise]

With perfect overlapping, the throughput can be almost doubled.

Duration of CPU bursts

In practice, many workloads have mostly short CPU bursts.

What does this mean for FCFS?

Back to FCFS: the convoy effect

Consider our previous example with a disk-bound and a CPU-bound application. What is going to happen with FCFS?

[Figure: grep waits for the disk once, then spends most of its time waiting for the CPU behind the matrix multiply; the disk sits idle in the meantime]

Imagine now there are several I/O-bound jobs and one CPU-bound job...

Back to FCFS: the convoy effect

Definition: a number of relatively short potential consumers of a resource get queued behind a heavyweight resource consumer.

Consequences:
- CPU-bound jobs hold the CPU until exit or I/O (but I/O is rare for CPU-bound threads)
- Long periods with the CPU held and no I/O request issued
- Poor I/O device utilization

Simple hack: run the process whose I/O just completed.
New problems? What if, after the I/O, it has a long CPU burst?

Shortest Job First (SJF)

Idea: schedule the job whose next CPU burst is the shortest.

Two versions:
- Non-preemptive: once the CPU is given to a process, it cannot be preempted until it completes its CPU burst
- Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt (known as Shortest-Remaining-Time-First, or SRTF)

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes: moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process.

Examples

Process | Arrival Time | Burst Time
P1      | 0            | 7
P2      | 2            | 4
P3      | 4            | 1
P4      | 5            | 4

Draw the execution timeline and compute the average turnaround time for the FCFS, SJF, and SRTF scheduling policies.

Non-preemptive (SJF): P1 [0-7], P3 [7-8], P2 [8-12], P4 [12-16]
Preemptive (SRTF): P1 [0-2], P2 [2-4], P3 [4-5], P2 [5-7], P4 [7-11], P1 [11-16]

Average turnaround time: FCFS = 8.75; SJF = 8; SRTF = 7
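The SRTF timeline above can be checked with a small step-by-step simulation; the sketch below is my own illustration (job and function names are mine) and reproduces the average turnaround time of 7:

def srtf(jobs):
    """jobs: dict name -> (arrival, burst). Returns the average turnaround time."""
    remaining = {n: b for n, (a, b) in jobs.items()}
    completion = {}
    t = 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:                 # CPU idle until the next arrival
            t += 1
            continue
        n = min(ready, key=lambda x: remaining[x])   # shortest remaining time first
        remaining[n] -= 1                            # run it for one time unit
        t += 1
        if remaining[n] == 0:
            del remaining[n]
            completion[n] = t
    return sum(completion[n] - jobs[n][0] for n in jobs) / len(jobs)

jobs = {"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)}
print(srtf(jobs))   # 7.0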

SJF limitations

- Doesn't always minimize average turnaround time; only minimizes response time
  - Example where it is not optimal: an overall longer job has shorter bursts
- It can lead to unfairness or even starvation
  - A job with very short CPU and I/O bursts will be run very often
  - A job with very long CPU bursts might never get to run
- In practice, we can't predict the future...
  - But we can estimate the length of CPU bursts based on the past
  - Idea: predict future bursts based on past bursts, giving more weight to recent bursts (see textbooks for details, e.g., Silberschatz et al.)
  - Hard to apply to interactive jobs
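The burst-length estimator alluded to above is, in Silberschatz et al., an exponential average of past burst lengths. A minimal sketch (alpha = 0.5 and the sample bursts are just illustrative choices):

# estimate_{n+1} = alpha * last_burst_n + (1 - alpha) * estimate_n
def next_burst_estimate(last_burst, prev_estimate, alpha=0.5):
    return alpha * last_burst + (1 - alpha) * prev_estimate

# Recent bursts weigh more: after bursts 6, 4, 13 with an initial guess of 10
est = 10.0
for burst in (6, 4, 13):
    est = next_burst_estimate(burst, est)
print(est)   # 9.5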

Round Robin (RR) Scheduling

Description:
- Similar to FCFS scheduling, but timer-based preemption is added to switch between processes
- Time slicing: RR runs a job for a time slice (sometimes called a scheduling quantum) and then switches to the next job in the run queue
- If the running process stops running (waits or terminates) before the end of the time slice, the scheduling decision is taken immediately (and the next time slice is measured from this point in time)

Example timeline: P1, P2, P3, P1, P2, P1

Round Robin (RR) Scheduling

Solution to fairness and starvation:
- Implement the ready list as a FIFO queue
- At the end of the time slice, put the running process back at the end of the queue
- Most systems implement some flavor of this

Advantages:
- Fair allocation of the CPU across jobs
- Low variation in waiting time, even when job lengths vary
- Good for responsiveness if there is a small number of jobs (and the time quantum is small)

RR Scheduling: Drawbacks

RR performs poorly with respect to turnaround time (especially if the time quantum is small).

Example: consider 2 jobs of length 100 with a time quantum of 1.

Timeline: P1 [0-1], P2 [1-2], P1 [2-3], P2 [3-4], ..., P1 [198-199], P2 [199-200]

Even if context switches were free:
- Average turnaround time with RR: 199.5
- Average turnaround time with FCFS: 150
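The 199.5 figure can be verified with a small round-robin simulation (my own sketch, not code from the lecture):

from collections import deque

def rr(bursts, quantum):
    """bursts: dict name -> burst length, all jobs arriving at t = 0."""
    queue = deque(bursts)             # ready list as a FIFO queue
    remaining = dict(bursts)
    completion = {}
    t = 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            completion[name] = t
        else:
            queue.append(name)        # back to the end of the queue
    return sum(completion.values()) / len(completion)

print(rr({"P1": 100, "P2": 100}, quantum=1))     # 199.5
print(rr({"P1": 100, "P2": 100}, quantum=100))   # 150.0 (degenerates to FCFS)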

Time quantum

How to pick a time quantum?
- It should be much larger than the context switch cost (we want to amortize the context switch cost)
- The majority of bursts should be shorter than the quantum
- But not so large that the system reverts to FCFS
- The shorter the quantum, the better for response time

Typical values: 1-100 ms (often 10 ms)

Priority scheduling

Principle:
- Associate a numeric priority with each process (e.g., a smaller number means a higher priority, as in Unix)
- Give the CPU to the process with the highest priority (can be done preemptively or non-preemptively)

Note that SJF is priority scheduling where the priority is the predicted length of the next CPU burst.

Problem of starvation: low-priority processes may never execute.
Solution: aging, i.e., increase the priority of a process as it waits.
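As an illustration of aging, here is a toy sketch (entirely my own; the aging rate and priority values are arbitrary) where a long-waiting low-priority process eventually beats a freshly arrived high-priority one:

def pick_next(ready, now):
    """ready: list of dicts with 'base_prio' (lower = higher priority) and
    'ready_since'. Effective priority improves by 1 per 10 time units waited."""
    def effective(p):
        aging_bonus = (now - p["ready_since"]) // 10
        return p["base_prio"] - aging_bonus
    return min(ready, key=effective)

ready = [
    {"name": "starved_low", "base_prio": 20, "ready_since": 0},    # has waited 200 time units
    {"name": "fresh_high",  "base_prio": 5,  "ready_since": 195},  # just arrived
]
print(pick_next(ready, now=200)["name"])   # 'starved_low' (effective priority 0 vs 5)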

Agenda
- The problem
- Textbook algorithms
- Multi-level feedback queues
- CFS
- Multiprocessor scheduling

Multi-level feedback queues (MLFQ) scheduling

To be read: Operating Systems: Three Easy Pieces, chapter 8

Goals:
- Optimize turnaround time (like SJF, but without a priori knowledge of the next CPU burst length)
- Make the system feel responsive to interactive users (as RR does)

Basic principles:
- A set of queues with different priorities
- At any moment, a ready job is in at most one queue
- Basic scheduling rules:
  - Rule 1: if priority(A) > priority(B), then A runs (B doesn't)
  - Rule 2: if priority(A) == priority(B), RR is applied between A and B

MLFQ scheduling

[Figure: priority queues numbered 0 to n, queue 0 at the top (highest priority); each queue holds its ready jobs in FIFO order]

Problem? Starvation: only the processes with the highest priority run.
How to change priorities over time?

MLFQ scheduling: managing priorities (first try)

Additional rules:
- Rule 3: when a job enters the system, it is placed at the highest priority (the topmost queue). Everybody gets a chance to be considered a high-priority job (first assume all jobs are short-running).
- Rule 4a: if a job uses up an entire time slice while running, its priority is reduced (i.e., it moves down one queue). The priority of CPU-intensive jobs decreases rapidly (this tries to approximate SJF).
- Rule 4b: if a job gives up the CPU before the time slice is up, it stays at the same priority level. Short CPU bursts are typical of interactive jobs, so keep them at high priority for responsiveness; more generally, this optimizes the overlap between I/O and computation.

MLFQ scheduling: managing priorities (second try)

Weaknesses of the current solution:
- Risk of starvation for CPU-bound jobs if there are too many I/O-bound jobs
- A user can trick the system: issue a garbage I/O just before the end of the time slice to keep a high priority
- What if a program changes its behavior over time?

Priority boost:
- Rule 5: after some time period S, move all the jobs in the system to the topmost queue.
- Avoids starvation
- Deals with the case of an application changing from CPU-bound to I/O-bound

MLFQ scheduling: managing priorities (third try)

Better accounting: we replace rules 4a and 4b by the following single rule:
- Rule 4: once a job uses up its time slice allotment at a given level (regardless of how many times it has given up the CPU), its priority is reduced (i.e., it moves down one queue).

The scheduler keeps track of how much CPU time each job uses, so it becomes impossible to use a gaming strategy to keep a high priority.

MLFQ scheduling: configuration

Several parameters of MLFQ can be tuned; there is no single good configuration.
- How many queues? Ex: 60 queues
- How long should the time slice be in each queue? Some systems use small time slices for high-priority queues and big time slices for low-priority queues.
- How often should the priority boost be run? Ex: every 1 second
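Putting rules 1-5 together, here is a compact sketch of the MLFQ bookkeeping (my own illustration; the queue count, quantum, and boost period are arbitrary, and a real scheduler would not re-queue a job that just blocked on I/O):

from collections import deque

class MLFQ:
    def __init__(self, levels=3, quantum=10, boost_period=1000):
        self.queues = [deque() for _ in range(levels)]   # queue 0 = highest priority
        self.quantum = quantum
        self.boost_period = boost_period
        self.level = {}        # current queue of each job
        self.used = {}         # CPU time used by each job at its current level
        self.clock = 0
        self.last_boost = 0

    def add(self, job):
        # Rule 3: a new job enters at the highest priority
        self.level[job], self.used[job] = 0, 0
        self.queues[0].append(job)

    def pick(self):
        # Rules 1 and 2: serve the highest non-empty queue, round-robin inside it
        for q in self.queues:
            if q:
                return q.popleft()
        return None            # nothing runnable: idle

    def charge(self, job, ran):
        # Rule 4: account all CPU time used at this level (gaming-proof); once the
        # allotment is exhausted, move the job down one queue
        self.clock += ran
        self.used[job] += ran
        if self.used[job] >= self.quantum and self.level[job] < len(self.queues) - 1:
            self.level[job] += 1
            self.used[job] = 0
        self.queues[self.level[job]].append(job)   # back to the tail of its queue
        if self.clock - self.last_boost >= self.boost_period:
            self.boost()

    def boost(self):
        # Rule 5: periodically move every job back to the topmost queue
        self.last_boost = self.clock
        for lvl in range(1, len(self.queues)):
            while self.queues[lvl]:
                job = self.queues[lvl].popleft()
                self.level[job], self.used[job] = 0, 0
                self.queues[0].append(job)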

Agenda
- The problem
- Textbook algorithms
- Multi-level feedback queues
- CFS
- Multiprocessor scheduling

The Completely Fair Scheduler

https://www.kernel.org/doc/documentation/scheduler/sched-design-cfs.txt

Default Linux scheduler since version 2.6.23 (author: Ingo Molnar).

Prior state:
- Linux was using an MLFQ algorithm (the O(1) scheduler)
- Note that Windows (at least up to Windows 7) also uses an MLFQ algorithm
- Complex management of priorities and I/O-bound tasks

Goals of CFS:
- Promote fairness + deal with malicious users
- "CFS basically models an ideal, precise multi-tasking CPU on real hardware"

The Completely Fair Scheduler

Basic idea: keep track of how unfairly the system has been treating a task relative to the others.

- Each task has a vruntime value that increases when it runs:
  - It increases by the amount of time the task has run
  - To account for priorities, this increase is weighted by a priority factor
- The next task to run is the one with the lowest vruntime
- Ready tasks are sorted by vruntime (using a red-black tree data structure)
- When a new task is created, its vruntime is set to the minimum existing vruntime
- When a task i wakes up, its vruntime is set as follows:

  vruntime_i = max(vruntime_i, vruntime_min - C)
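A simplified user-level sketch of this bookkeeping (my own illustration; the real kernel uses nanosecond accounting, the nice-to-weight table, and a red-black tree, none of which are reproduced here, and the weights and the constant C below are illustrative):

import heapq

class CFSSketch:
    NICE0_WEIGHT = 1024                  # reference weight for a default-priority task

    def __init__(self):
        self.tree = []                   # (vruntime, name) min-heap standing in for the red-black tree
        self.vruntime = {}
        self.weight = {}

    def min_vruntime(self):
        return self.tree[0][0] if self.tree else 0.0

    def add_new_task(self, name, weight=NICE0_WEIGHT):
        # A newly created task starts at the minimum existing vruntime
        self.weight[name] = weight
        self.vruntime[name] = self.min_vruntime()
        heapq.heappush(self.tree, (self.vruntime[name], name))

    def wake_up(self, name, C=6.0):
        # vruntime_i = max(vruntime_i, vruntime_min - C): a sleeper (assumed not to
        # be in the tree while sleeping) gets a small bonus but no unbounded credit
        self.vruntime[name] = max(self.vruntime[name], self.min_vruntime() - C)
        heapq.heappush(self.tree, (self.vruntime[name], name))

    def pick_and_run(self, delta):
        # Run the task with the lowest vruntime for `delta` ms, increase its
        # vruntime weighted by its priority factor, and put it back
        _, name = heapq.heappop(self.tree)
        self.vruntime[name] += delta * self.NICE0_WEIGHT / self.weight[name]
        heapq.heappush(self.tree, (self.vruntime[name], name))
        return name

sched = CFSSketch()
sched.add_new_task("editor")                # weight 1024
sched.add_new_task("batch", weight=512)     # lower priority: its vruntime grows twice as fast
print([sched.pick_and_run(10) for _ in range(6)])
# ['batch', 'editor', 'editor', 'batch', 'editor', 'editor'] -> editor gets ~2/3 of the CPU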

Agenda
- The problem
- Textbook algorithms
- Multi-level feedback queues
- CFS
- Multiprocessor scheduling

Multiprocessor scheduling

Why can't we simply reuse what we have just seen? The problem is more complex:
- We need to decide which process to run on which CPU
- Migrating processes from CPU to CPU is very costly: it generates a lot of cache misses

Multiprocessor scheduling

Affinity scheduling:
- Typically one scheduler per CPU
- Risk of load imbalance
- Do a cost-benefit analysis when deciding to migrate a process

[Figure: three CPUs scheduling P1, P2, P3 — without affinity the processes keep moving between CPU 1, CPU 2, and CPU 3; with affinity each process keeps running on the same CPU]
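A toy sketch of per-CPU run queues with a crude migration test (my own illustration; the fixed threshold stands in for the cost-benefit analysis mentioned above):

from collections import deque

class AffinityScheduler:
    def __init__(self, ncpus, imbalance_threshold=2):
        self.queues = [deque() for _ in range(ncpus)]   # one run queue per CPU
        self.threshold = imbalance_threshold            # crude stand-in for a cost-benefit test

    def add(self, proc, cpu):
        self.queues[cpu].append(proc)                   # keep the process on its preferred CPU

    def pick(self, cpu):
        if self.queues[cpu]:
            return self.queues[cpu].popleft()           # affinity: prefer the local queue
        # Local queue empty: consider stealing from the busiest CPU, but only if the
        # imbalance justifies paying the migration (cache refill) cost
        busiest = max(range(len(self.queues)), key=lambda c: len(self.queues[c]))
        if len(self.queues[busiest]) >= self.threshold:
            return self.queues[busiest].popleft()
        return None                                     # stay idle rather than migrate

sched = AffinityScheduler(ncpus=2)
for p in ("P1", "P2", "P3"):
    sched.add(p, cpu=0)
print(sched.pick(1))   # 'P1' stolen by CPU 1 (CPU 0 has 3 >= 2 queued processes)
print(sched.pick(0))   # 'P2' from the local queue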

References for this lecture

Operating Systems: Three Easy Pieces by R. Arpaci-Dusseau and A. Arpaci-Dusseau:
- Chapter 7: CPU scheduling
- Chapter 8: Multi-level feedback
- Chapter 10: Multi-CPU scheduling

Operating System Concepts by A. Silberschatz et al.:
- Chapter 5: CPU scheduling