Scheduling. Monday, November 22, 2004


Scheduling Page 1 Scheduling Monday, November 22, 2004 11:22 AM
The scheduling problem (Chapter 9):
Decide which processes are allowed to run when.
Optimize throughput, response time, etc.
Subject to constraints including context-switch time, swap time, etc.

What's a schedule? Tuesday, November 17, 2009 1:57 PM
What's a schedule?
Input: some processes contending for resources.
Output: a "schedule" of which process does what when.
A "scheduler" is anything that produces a schedule.
Scheduling Page 2

Scheduling Page 3 Kinds of scheduling Tuesday, November 17, 2009 1:17 PM
Some kinds of scheduling:
Short term: which process, already in memory, gets to execute next?
Long term: which commands or requests are "admitted" for later processing? A.k.a. "admission control".
I/O: which pending I/O request should be handled first? (When writing disk blocks, which block should go first?)

Today: short-term Monday, November 6, 2017 2:30 PM
Run queue: the set of processes that are "ready to run".
The scheduling problem: give each process in the run queue a time to execute that is
  prioritized according to the importance of the process, and
  fair to other processes.
Scheduling Page 4

Scheduling Page 5 Basics of queueing models Monday, November 6, 2017 1:52 PM
A model of queueing:
λ: arrival rate (arrivals per second)
μ: service rate (jobs processed per second)
1/λ: seconds between arrivals
1/μ: seconds to complete a job
If the arrival rate equals the departure rate, we say the system is in steady state.

Scheduling Page 6 A queueing model of scheduling Thursday, November 03, 2011 3:48 PM A queueing model of scheduling: Processes that aren't done get queued again. Processes that exit aren't queued again.

Little's law Tuesday, November 17, 2009 4:38 PM
The most fundamental and useful result: Little's law.
Assume we have a steady-state system.
Let L be the average number of requests pending inside the system. L = "load"
Let λ be the average arrival (and exit) rate for requests.
Let W be the average time-in-system for a request (i.e., entrance to exit). W = "wait"
Then L = λW.
No distributional assumptions on arrivals or service.
Really easy to understand, but exceedingly hard to prove!
Scheduling Page 7

Bar example Tuesday, November 16, 2010 5:13 PM Scheduling Page 8

Using Little's law Tuesday, November 17, 2009 4:45 PM
Using Little's law
If you know any two of average time-in-system, average requests-in-system, and average arrival rate, you can compute the third.
Example: a web service receives about 100 requests per minute, and you observe that it takes 1/2 second to service a request. How many requests are pending at a time?
λ = 100/60 requests/second
W = 1/2 second
L = λW = 5/6 requests in the system, on average.
Scheduling Page 9

Another example Tuesday, November 17, 2009 4:50 PM
Another example of Little's law
Round-robin scheduling takes as input a sequence of runnable processes, processes them, and puts them back into the queue (a closed loop).
If there are 6 runnable processes and a time slice of 1/100 second, what is the time in system for a process to get one slice?
λ = 100 arrivals/second
L = 6 slice requests in the system
L = λW, so W = L/λ = 6/100 of a second between slices.
A trivial result, but in general Little's law is powerful.
Scheduling Page 10

Another example Tuesday, November 17, 2009 5:07 PM
Suppose there are three processes in the run queue with priorities 1, 2, and 3, and each process's priority equals the number of slices it gets in each sequence of 6 slices. Each slice takes 100 ms. What is the average rate at which processes are processed?
L = 3 processes in the system at a time
W = time for a total of 6 slices = 600 ms
λ = L/W = 3/(600/1000) = 5 processes per second.
Scheduling Page 11

Little's "Laws" Thursday, November 18, 2010 12:02 PM
There are a number of laws related to the main Little's law: http://en.wikipedia.org/wiki/little's_law
Assume that the system is in steady state.
Let λ = average arrival rate = average departure rate.
Let W = average time in system ("wait").
Let L = average number of jobs ("load").
Then L = λW.
But the system has two parts:
  a processing unit, and
  a queue.
Looking only at the queue:
L - 1 = average number of jobs waiting for processing.
(L - 1)/λ = average time in the queue.
Scheduling Page 12

Let μ = average service rate (requests per second).
Then 1/μ = average service time (seconds per request).
Then, assuming serial processing:
W - (1/μ) = average time spent in the queue before processing.
W - (1/μ) = (L - 1)/λ
In general, any time you can put a box around a subsystem and the box is in balance, Little's law applies.
Mnemonics: W = wait, L = load, μ = service rate, λ = arrival rate.
Scheduling Page 13

Computational physics Tuesday, November 17, 2009 1:37 PM
Computational physics
To understand scheduling, it's helpful to start with a physical metaphor.
work = CPU cycles expended on a goal. Analogous to "distance traveled".
throughput = work / wallclock time = work per unit time (work cycles/second).
efficiency = work / total cycles = % of time used wisely.
processor speed = total cycles / wallclock time.
overhead = 1 - work/total cycles = % of time "wasted": used for purposes other than accomplishing computational goals.
efficiency + overhead = 1
throughput = efficiency × processor speed
Throughput and response time are independent variables.
Scheduling Page 14

Scheduling Page 15 This is not a COMP15 queue Thursday, November 03, 2011 4:07 PM
The run queue is not a COMP15 queue
In scheduling theory, a queue is any mechanism that takes in jobs and releases one at a time, according to some "queueing discipline". Some queueing disciplines:
First-in, first-out (FIFO): released in arrival order.
Prioritized: chosen according to a univariate priority.
Strategized: chosen according to multivariate rubrics.
Randomized: chosen at random.

Inside the Google search engine Thursday, November 03, 2011 4:39 PM Scheduling Page 16

Fairness Tuesday, November 17, 2009 1:48 PM
Fairness
For N processes, how equally is work distributed?
Only meaningful for processes at the same priority.
Some authors define unfairness as the difference between the maximum and minimum "work" runtimes allocated at the same priority level.
  0 unfairness: everything is equal.
  High unfairness: things are unequal.
The opposite of fairness is starvation: one or more processes don't get to run at all.
Scheduling Page 17

Scheduling Page 18 Goals of scheduling Tuesday, November 17, 2009 2:10 PM Goals of scheduling Minimize unfairness and latency. Maximize throughput and efficiency. Except that these are conflicting goals!

Common to all short-term strategies Tuesday, November 17, 2009 3:37 PM Common to all short-term strategies A system clock that interrupts process execution at predetermined intervals and invokes the scheduler. Whenever a process blocks, control is returned to the scheduler even if its scheduling interval is not done. The scheduler only acts on processes in the run queue, i.e., processes ready to execute. When a process becomes runnable, it is not executed immediately; it is returned to the run queue for eventual execution. Scheduling Page 19

The basic conflict Tuesday, November 17, 2009 2:14 PM The basic conflict in scheduling maximize efficiency => minimize scheduler overhead and context switches. reduce unfairness and latency => make context switches more frequent! The fairest scheduler in the world has large overhead, while the most efficient one is very unfair, i.e., one process runs until it must sleep. Scheduling Page 20

Scheduling Page 21 Scheduling is an active research topic Tuesday, November 06, 2012 11:55 AM
Scheduling is an active research topic
Satisfy particular goals... better, stronger, faster...
Two Linux schedulers of note:
"The O(1) scheduler" (< 2009)
"The Completely Fair Scheduler" (> 2009)

The O(1) Linux Scheduler Tuesday, November 06, 2012 11:59 AM
The O(1) Linux scheduler (pre-2009)
http://joshaas.net/linux/linux_cpu_scheduler.pdf
Epoch: a scheduling event in which every process in the run queue gets a time slice.
Priority array: an array of linked lists of tasks, where each linked list corresponds to one priority.
Preconditions: two priority arrays:
  waiting[]: processes to be scheduled in this epoch.
  done[]: processes that have been scheduled in this epoch.
The algorithm:
  Repeat forever:
    While waiting[] is non-empty:
      Choose a process from the highest-priority non-empty list in waiting[].
      Schedule it for a time slice with a length determined by its priority.
      Move it to the done[] array.
    End while
    Swap the roles of waiting[] and done[] and start over.
  End repeat
Scheduling Page 22

Why is this O(1)?
  O(1) to choose a non-empty list.
  O(1) to unlink an element from the head.
  O(1) to relink it elsewhere.
Scheduling Page 23

Scheduling Page 24 Scheduling priority Tuesday, November 06, 2012 12:21 PM
Scheduling priority
In all Linux scheduling, each process has a static priority set by its "nice" value, which ranges from -20 (highest priority) to +19 (lowest priority).
In O(1) scheduling, there is also a dynamic priority within ±5 of a process's static priority.
In O(1) scheduling, a process's priority determines the length of its time slice:
#define BASE_TIMESLICE(priority) (MIN_TIMESLICE + \
    ((MAX_TIMESLICE - MIN_TIMESLICE) * \
    (MAX_PRIO-1 - (priority)) / (MAX_USER_PRIO-1)))
In other words, the slice length is linearly proportional to priority.
Static priority is set via the nice(1) command and the nice(2) system call.
Dynamic priority is:
  increased for I/O-bound processes that block waiting for I/O;
  decreased for CPU-bound processes that compute without blocking.
Scheduling Page 25
This is a dirty hack to address the intrinsic unfairness of O(1) scheduling.

From whence comes unfairness? Monday, November 6, 2017 5:18 PM Scheduling Page 26

Scheduling Page 27 Why O(1) scheduling is unfair Wednesday, November 4, 2015 10:47 AM
Why O(1) scheduling is unfair
Processes that block a lot get much less CPU time.
When they unblock, they are given only the slice they would have deserved had they never blocked.
There is no "catch-up" for CPU time when a process unblocks.
So an I/O-intensive process gets, on average, much less CPU time than a compute-bound process.

Scheduling Page 28 ... because it's my ball ... Tuesday, November 06, 2012 12:15 PM
... because it's my ball ...
O(1) scheduling:
  minimizes overhead,
  lacks fairness.
How could we compensate for processes that wait a lot? Give them higher priority, according to a process-lifecycle definition of fairness?

Scheduling Page 29 The completely fair scheduler Tuesday, November 06, 2012 11:57 AM
The Completely Fair Scheduler (Linux > 2009)
http://www.ibm.com/developerworks/library/l-completely-fair-scheduler/index.html
https://en.wikipedia.org/wiki/Completely_Fair_Scheduler
Track the total runtime allotted to each process.
Store each priority level's run queue as a priority queue in which the priority is the time spent running so far.
At each point, the process that has had the least runtime so far is scheduled.
A process that hasn't run lately can "catch up" by consuming several contiguous slices.
Implementation:
  A runqueue for each process priority.
  Each runqueue is a red-black tree whose key is the time spent running so far.
  For each epoch:
    For each runqueue, highest to lowest priority:
      Repeat until the runqueue's time allotment ends:
        Schedule the leftmost process (least key) for a slice.
        Remove it from the tree.
        Update its time spent computing.
        Reinsert it into the tree.
      End repeat
    End for
Scheduling Page 30
The basic idea can be applied to:
  fairness among processes,
  fairness among process groups,
  fairness among users.

Scheduling Page 31 End of lecture on 11/6/2017 Monday, November 6, 2017 5:27 PM

Scheduling Page 32 O(1) versus Completely Fair Scheduling (CFS) Tuesday, November 06, 2012 5:03 PM
O(1): simple, but not fair.
CFS: fair, but not simple:
  If n is the number of processes in all run queues, unfairness is bounded by O(n).
  The runtime of the scheduler is bounded by O(log n) per time slice, O(n log n) per epoch.
A few other details:
  Fairness is assured over a small window of epochs.
  That window rolls over as time advances.

Short-term strategies Tuesday, November 17, 2009 1:20 PM
Other short-term scheduling strategies:
Round-robin: give a slice of time to each runnable process.
Prioritized round-robin: give longer or more frequent slices to higher-priority processes.
Run-until-blocked: a process in memory has priority until it must sleep.
Pre-emptive: higher-priority processes exclude lower-priority ones until finished; then lower-priority processes can continue.
Real-time: schedule according to the behavior of external devices rather than an internal clock.
Scheduling Page 33

Scheduling Page 34 Long-term scheduling Monday, November 22, 2004 11:22 AM
Long-term scheduling
Also known as "admission control".
A big research topic in grid/cloud/autonomic computing.
While a short-term scheduler decides what to do with active processes, a long-term scheduler decides when a job will become a process.
Questions asked by a long-term scheduler:
  How many concurrent processes can the system handle?
  How much memory is needed for each?
  How important is response time versus throughput versus efficiency?
  How important is it to eventually admit a job?

An apparent paradox Tuesday, November 17, 2009 6:07 PM
An apparent paradox:
Marketing data suggests that if you make a user wait more than 2-5 seconds for a Travelocity search, the sale is lost.
For optimal business, it is better to drop the late request and process the not-yet-late ones.
This is an admission-control policy!
Scheduling Page 35

Long-term scheduling factors Monday, November 22, 2004 11:22 AM
Long-term scheduling factors
Deadlines: how long a wait is acceptable?
  Example: weather forecasting must be finished before the 7 pm news.
  Example: rendering of today's animation must be complete by tomorrow at 8 am.
Throughput: how much of the CPU can one use via latency hiding?
  Example: how fast can one finish n predictions of geological substructure, where n is large?
  Example: how fast can we render a scene in a Pixar movie?
  Example: how fast can we complete both accounts payable and accounts receivable for the night?
Scheduling Page 36

Admission control Monday, November 22, 2004 11:22 AM
Typical admission-control algorithm:
  Predict resource utilization.
  Predict overlap and latency hiding.
  Admit a process only if it is within bounds.
  Otherwise, defer the process until there are enough resources to assure completion, or delete it if its deadline is past.
Application: grid computing
  Process: a parallel program that runs on a grid.
  Resource utilization: number of nodes required, amount of memory on each node (and I/O bandwidth to each node).
  Resources must be known in advance.
Application: cloud computing
  Process: services one request.
  Resource utilization: sum of the current active processes plus the new one.
  Resource needs are known in advance.
  Capacity: a prediction of how many processes one can handle.
  Admission: only admit jobs you think you can handle.
  Elasticity: add processing power as the number of concurrent requests increases.
Scheduling Page 37

Scheduling Page 38 Theory vs. practice Monday, November 22, 2004 12:59 PM
Tension between "theory" and "practice"
"Theory": things we can understand and analyze.
  Simple scheduling algorithms.
  Models of task arrival, completion, and resource utilization.
"Practice": what works.
  Much more complex mechanisms than can be analyzed.
  Much more complex inputs than can be modeled.
  "Dirty hacks that work."
Example: admission-control algorithms.
  Theory predicts what happens when all instances of a problem are autonomous.
  In practice, instances aren't autonomous!