PROCESS SCHEDULING II CS124 Operating Systems Fall 2017-2018, Lecture 13

2 Real-Time Systems
It is increasingly common to have systems with real-time scheduling requirements.
Real-time systems are driven by specific events:
  Often a periodic hardware timer interrupt
  Can also be other events, e.g. detecting a wheel slipping, an optical sensor triggering, or a proximity sensor reaching a threshold
Event latency is the amount of time between an event occurring and when it is actually serviced.
Usually, real-time systems must keep event latency below a required maximum threshold.
  e.g. an antilock braking system has 3-5 ms to respond to wheel slip
The real-time system must try to meet its deadlines regardless of system load.
  Of course, this may not always be possible.

3 Real-Time Systems (2)
Hard real-time systems require tasks to be serviced before their deadlines; otherwise the system has failed.
  e.g. robotic assembly lines, antilock braking systems
Soft real-time systems do not guarantee tasks will be serviced before their deadlines.
  Typically they only guarantee that real-time tasks will be higher priority than other, non-real-time tasks.
  e.g. media players
Within the operating system, two latencies affect the event latency of the system's response:
  Interrupt latency is the time between an interrupt occurring and the interrupt service routine beginning to execute.
  Dispatch latency is the time the scheduler's dispatcher takes to switch from one process to another.

4 Interrupt Latency
Interrupt latency in context:
  (Figure: timeline from the interrupt occurring, through the OS interrupt handler and the context-switch away from the running task, to the interrupt service routine; the interrupt latency spans from the interrupt to the start of the ISR.)
When an interrupt occurs:
  The CPU must complete the current instruction.
    (If interrupts are turned off, it must complete the current critical section.)
  The CPU dispatches to the operating system's interrupt handler.
  The OS interrupt handler saves the interrupted process's context, and invokes the actual interrupt service routine.
All of this can take some time.

5 Interrupt Latency (2)
Interrupt latency can be dramatically increased by kernel code that disables interrupt handling!
  Frequently necessary to avoid synchronization issues
Real-time systems must disable interrupts for as short a time as possible, to minimize interrupt latency.
Allowing kernel preemption also helps reduce both the length and variance of interrupt latency:
  Allow the kernel to context-switch away from a user process at any time, even when the process is executing kernel code.

6 Dispatch Latency
If the interrupt signals an event for a real-time process, that process can now execute.
Need to get the real-time process onto the CPU as fast as possible, i.e. minimize dispatch latency.
  Implication 1: real-time processes must be the highest-priority processes in the system.
  Implication 2: the OS scheduler must support preemption of lower-priority processes by higher-priority processes.
(Figure: timeline from the interrupt, through the OS interrupt handler, context-switch and interrupt service routine (interrupt latency), then the dispatch to the real-time process (dispatch latency), until the real-time process is running; the whole interval is the event latency.)

7 Dispatch Latency (2)
The scheduler's dispatcher must be as efficient as possible.
Also, it's possible that a lower-priority process currently holds other resources that the real-time process needs.
  In this case, resource conflicts must be resolved: i.e. cause the lower-priority processes to release these resources so the real-time process can use them.
(Figure: the same timeline as before, with dispatch latency now including a conflict-resolution phase before the dispatch to the real-time process.)

8 Real-Time Process Scheduling
Real-time scheduling algorithms often make assumptions about real-time processes.
Real-time processes are periodic: they require the CPU at constant intervals.
  Must execute on a regular period p
  Each period, execution takes a fixed time t to complete (t < p)
Given these values (and the event latency), we can determine a deadline d by which the process must receive the CPU.
If a real-time process doesn't receive the CPU within the deadline, it will not complete on time.
(Figure: two consecutive periods of length p, each containing an execution time t that must finish by the deadline d.)
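To make this task model concrete, here is a minimal C sketch of a periodic task; the struct and field names are illustrative, not taken from any particular OS:

```c
/* Minimal model of a periodic real-time task, matching the assumptions
 * above.  All times are in clock ticks. */
struct rt_task {
    const char *name;
    unsigned period;    /* p: the task becomes runnable every p ticks */
    unsigned burst;     /* t: CPU time needed each period (t < p)     */
    unsigned deadline;  /* d: relative deadline; commonly d == period */
    int priority;       /* filled in by the scheduling policy         */
};
```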

9 Real-Time Process Scheduling (2)
Some OSes require real-time processes to state their scheduling requirements to the OS scheduler.
The OS can then make a decision about whether it can schedule the real-time process successfully.
  Called an admission-control algorithm
  If the OS cannot schedule the process based on its requirements, the scheduling request is rejected.
  Otherwise, the OS admits the process, guaranteeing that it can satisfy the process's scheduling requirements.
(Primarily a feature of hard real-time operating systems)

10 Rate-Monotonic Scheduling
Real-time processes can be scheduled with the rate-monotonic scheduling algorithm.
Tasks are assigned a priority inversely based on their period:
  The shorter the period, the higher the priority!
  Higher-priority tasks preempt lower-priority ones.
This algorithm is optimal in terms of static priorities:
  If a set of real-time processes can't be scheduled by this algorithm, no algorithm that assigns static priorities can schedule them!
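A minimal sketch of rate-monotonic priority assignment in C, reusing the illustrative rt_task struct defined after slide 8 (here 0 is taken to be the highest priority):

```c
#include <stdlib.h>

/* Sort tasks by increasing period and assign static priorities:
 * the shortest period gets priority 0 (highest). */
static int by_period(const void *a, const void *b)
{
    const struct rt_task *x = a, *y = b;
    return (x->period > y->period) - (x->period < y->period);
}

void assign_rm_priorities(struct rt_task *tasks, size_t n)
{
    qsort(tasks, n, sizeof tasks[0], by_period);
    for (size_t i = 0; i < n; i++)
        tasks[i].priority = (int)i;   /* tasks[0] has the shortest period */
}
```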

11 Rate-Monotonic Scheduling (2)
Lower-priority processes may be preempted by higher-priority ones.
As with fixed-priority preemptive scheduling, priority inversion can become an issue (and the solutions are the same).
Is a set of real-time processes actually schedulable?
  Processes must execute on a specific period p.
  Processes complete a CPU burst of length t during each period.
Example: two real-time processes
  P1 has a period of 50 clocks, CPU burst of 20 clocks
  P2 has a period of 100 clocks, CPU burst of 35 clocks
(Figure: timeline from 0 to 200 clocks marking the required completions: P1 by 50, P1 and P2 by 100, P1 by 150, P1 and P2 by 200.)

12 Rate-Monotonic Scheduling (2)
Example: two real-time processes
  P1 has a period of 50 clocks, CPU burst of 20 clocks
  P2 has a period of 100 clocks, CPU burst of 35 clocks
P1 and P2 can begin executing at the same time.
P1 has the higher priority, so it takes the CPU first.
P1 completes its processing, and then P2 starts.
Part way through P2's CPU burst, P1 must execute again:
  It preempts P2, and completes.
P2 regains the CPU and completes its processing.
(Figure: Gantt chart over 0-200 clocks showing this schedule: P1, then P2, then P1 preempting, then P2 finishing, with the same pattern repeating in the second 100 clocks; all required completions are met.)

13 Rate-Monotonic Scheduling (3)
With these processes, we clearly have some time left over:
  P1 has a period of 50 clocks, CPU burst of 20 clocks
  P2 has a period of 100 clocks, CPU burst of 35 clocks
CPU utilization of P1 = time / period = 20 / 50 = 0.4
CPU utilization of P2 = 35 / 100 = 0.35
Total CPU utilization = 0.75
Can we use CPU utilization to tell if a set of real-time processes can be scheduled?
(Figure: the same Gantt chart, now showing the idle time at the end of each 100-clock interval.)
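The utilization computation is easy to express in code. Here is a small self-contained C sketch using the example's numbers (a minimal task struct is re-declared so the snippet compiles on its own):

```c
#include <stdio.h>

struct task { unsigned period, burst; };

/* Total CPU utilization of a periodic task set: the sum of
 * burst / period over all tasks. */
double total_utilization(const struct task *tasks, size_t n)
{
    double u = 0.0;
    for (size_t i = 0; i < n; i++)
        u += (double)tasks[i].burst / tasks[i].period;
    return u;
}

int main(void)
{
    struct task set[] = { { 50, 20 }, { 100, 35 } };
    printf("U = %.2f\n", total_utilization(set, 2));   /* prints U = 0.75 */
    return 0;
}
```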

14 Rate-Monotonic Scheduling (4)
Another example:
  P1 has a period of 50 clocks, CPU burst of 25 clocks
  P2 has a period of 80 clocks, CPU burst of 35 clocks
CPU utilization of P1 = time / period = 25 / 50 = 0.5
CPU utilization of P2 = time / period = 35 / 80 = ~0.44
Total CPU utilization = ~0.94
But can we actually schedule these processes with rate-monotonic scheduling?
(Figure: timeline from 0 to 200 clocks marking the required completions: P1 by 50, P2 by 80, P1 by 100, P1 by 150, P2 by 160, P1 by 200.)

15 Rate-Monotonic Scheduling (5)
Again, simulate rate-monotonic scheduling:
  P1 has higher priority, so it executes first and completes.
  P2 begins to execute, but is preempted by P1 before it completes.
  When P2 regains the CPU, it misses its deadline by 5 clocks.
Even though total CPU utilization is < 100%, it is still not good enough!
(Figure: Gantt chart showing P1 running 0-25, P2 running 25-50, P1 preempting and running 50-75, and P2 only finishing at 85, missing its deadline at 80 by 5 clocks.)

16 Rate-Monotonic Scheduling (6)
For scheduling N real-time processes, CPU utilization can be no more than N(2^(1/N) - 1).
  e.g. for two processes, CPU utilization must be at most 0.828
  As N goes to infinity, the maximum CPU utilization approaches ln 2, about 0.69.
For details on the above constraint, see "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment" by Liu and Layland (1973).
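As an illustration, this small C program computes the Liu-Layland bound and uses it as a sufficient (not necessary) admission test; the function names are my own, and the program needs to be linked against the math library (-lm):

```c
#include <math.h>
#include <stdio.h>

/* Liu & Layland (1973) utilization bound for rate-monotonic scheduling
 * of n periodic tasks: U <= n * (2^(1/n) - 1).  If a task set's total
 * utilization is at or below this bound, RM is guaranteed to schedule it;
 * task sets above the bound may or may not be schedulable (check by
 * simulation, as on the previous slides). */
double rm_bound(unsigned n)
{
    return n * (pow(2.0, 1.0 / n) - 1.0);
}

int rm_admissible(double total_utilization, unsigned n)
{
    return total_utilization <= rm_bound(n);
}

int main(void)
{
    printf("bound(2) = %.3f\n", rm_bound(2));                 /* 0.828 */
    printf("U=0.75 admitted? %d\n", rm_admissible(0.75, 2));  /* 1     */
    printf("U=0.94 admitted? %d\n", rm_admissible(0.94, 2));  /* 0     */
    return 0;
}
```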

17 Earliest Deadline First Scheduling
The earliest deadline first (EDF) scheduling algorithm dynamically assigns priorities to real-time processes:
  The earlier a process's deadline, the higher its priority!
  Processes must state their deadlines to the scheduler.
Our previous example:
  P1 has a period of 50 clocks, CPU burst of 25 clocks
  P2 has a period of 80 clocks, CPU burst of 35 clocks
(Figure: the same 0-200 clock timeline of required completions as on the previous slides.)

18 Earliest Deadline First Scheduling (2)
With earliest deadline first scheduling:
  P1 has the earliest deadline at 50, so it runs first and completes.
  P2 begins running.
  This time, when P1's next deadline at 100 comes up, P2's deadline at 80 is still earlier, so P2 runs to completion.
Both processes complete in time for their first deadlines.
(Figure: Gantt chart showing P1 running 0-25 and P2 running 25-60, finishing well before its deadline at 80.)

19 Earliest Deadline First Scheduling (3)
P1 starts running again for its second deadline at 100, and it completes before that deadline.
P2 begins running for its second deadline at 160, but is preempted by P1 due to P1's 3rd deadline at 150.
P1 completes in time for its third deadline.
P2 resumes execution, and also completes in time for its 2nd deadline.
Both processes complete in time for their deadlines.
(Figure: Gantt chart over 0-200 clocks showing this interleaving, with every required completion met.)

20 Earliest Deadline First Scheduling (4)
Earliest deadline first scheduling has fewer requirements than rate-monotonic scheduling:
  It doesn't actually require periodic processes, or constant CPU burst times!
  It only requires processes to state their deadlines.
EDF scheduling is theoretically optimal:
  If a set of processes has at most 100% CPU utilization, EDF can schedule that set of processes such that every process meets its deadlines.
Unfortunately, event latency imposes an overhead, which prevents EDF scheduling from achieving 100% CPU utilization in practice.
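The core of EDF is simply "run the runnable process with the soonest absolute deadline." A minimal C sketch of that selection step, using an illustrative structure and a linear scan (a real scheduler would keep a priority queue instead):

```c
#include <stddef.h>

struct edf_proc {
    unsigned long deadline;   /* absolute deadline, in clock ticks */
    int runnable;             /* nonzero if ready to run           */
};

/* Pick the runnable process with the earliest deadline, or NULL if
 * nothing is runnable. */
struct edf_proc *edf_pick_next(struct edf_proc *procs, size_t n)
{
    struct edf_proc *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (!procs[i].runnable)
            continue;
        if (best == NULL || procs[i].deadline < best->deadline)
            best = &procs[i];
    }
    return best;
}
```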

21 Linux 2.4 Scheduler
Linux has used a wide range of scheduling algorithms.
Linux 2.4 scheduler: time is divided into epochs.
At the start of each epoch, the scheduler assigns a priority to every process based on its behavior:
  Real-time processes have an absolute priority assigned to them, and are highest priority.
  Interactive processes have a dynamic priority assigned to them based on their behavior in the previous epoch.
  Batch processes are given the lowest priority.
Each process's priority is used to compute a time quantum:
  Different processes can have different-sized quanta.
  (Higher-priority processes generally get larger timeslices.)
When a process has completely used up its quantum, it is preempted and another process runs.

22 Linux 2.4 Scheduler (2)
When the scheduler is invoked, it iterates over all processes:
  It computes a "goodness" for each, based on priority and other considerations.
  e.g. processes can receive a bonus if they last ran on the same CPU that the scheduler is running on (encourages CPU affinity)
The best process is given the CPU!
Higher-priority processes preempt lower-priority ones:
  If a higher-priority process becomes runnable, it takes the CPU from a lower-priority process that currently holds the CPU.
The current epoch ends when all runnable processes have consumed their entire time quantum.
A process can be scheduled on the CPU multiple times within an epoch:
  e.g. it yields or blocks before its quantum is consumed, and then becomes runnable again before the epoch completes
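To illustrate the idea, here is a simplified sketch of a 2.4-style goodness calculation; this is not the actual kernel goodness() function, and the struct, field names, and bonus value are invented for illustration:

```c
/* Sketch: combine the process's remaining quantum with its priority,
 * and add a small bonus if it last ran on the CPU the scheduler is
 * currently running on. */
struct proc24 {
    int counter;    /* remaining time quantum, in ticks        */
    int priority;   /* static priority derived from nice value */
    int last_cpu;   /* CPU this process last ran on            */
};

#define AFFINITY_BONUS 1    /* illustrative value */

int goodness(const struct proc24 *p, int this_cpu)
{
    if (p->counter == 0)
        return 0;                        /* quantum used up: not chosen */
    int g = p->counter + p->priority;    /* more quantum left and higher
                                            priority => better goodness */
    if (p->last_cpu == this_cpu)
        g += AFFINITY_BONUS;             /* encourage CPU affinity */
    return g;
}
```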

23 Linux 2.4 Scheduler (3)
At the start of the next epoch, the scheduler iterates through all processes again, computing a new priority for each one.
Several O(N) computations in the scheduler made it scale terribly to large numbers of processes:
  Assigning a dynamic priority to each process at the start of each epoch
  Choosing a process from the run queue
The Linux 2.6 scheduler was written to eliminate as many inefficiencies of the 2.4 scheduler as possible:
  It captured the basic principles of the 2.4 scheduler, but with a far faster implementation.

24 Linux 2.6 O(1) Scheduler
The Linux O(1) scheduler still includes the notion of epochs, but only informally.
Processes are maintained in a priority array structure:
  Each priority level has a queue of processes.
  A bitmap records which priority levels have runnable processes.
Finding the highest-priority process to run is a constant-time operation:
  Find the index of the lowest 1-bit in the bitmap.
  Use that index to access the priority array.
(Figure: a bitmap with one bit per priority level, shown as 0 0 0 1 1 0 1 0, alongside the per-level queues of the priority array; the first set bit on the highest-priority end identifies the queue to run from.)
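A minimal sketch of that constant-time lookup in C, using a 64-level bitmap and a GCC/Clang builtin; the real kernel uses 140 priority levels and its own bit-search helpers, so the structure below is purely illustrative:

```c
#include <stdint.h>

struct task_list;                 /* per-priority queue; assumed type    */

struct prio_array64 {
    uint64_t bitmap;              /* bit i set => queue[i] is non-empty  */
    struct task_list *queue[64];  /* one queue per priority level,
                                     index 0 = highest priority          */
};

/* Return the index of the highest-priority non-empty queue,
 * or -1 if no process is runnable.  Constant time. */
int highest_runnable_prio(const struct prio_array64 *a)
{
    if (a->bitmap == 0)
        return -1;
    return __builtin_ctzll(a->bitmap);   /* index of lowest set bit */
}
```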

25 Linux 2.6 O(1) Scheduler (2)
The Linux O(1) scheduler maintains two priority arrays:
  The active array contains processes with remaining time.
  The expired array holds processes that have used up their quantum.
When an active process uses its entire quantum, it is moved to the expired array.
  When it is moved, a new priority is given to the process.
When the active priority array is empty, the epoch is over!
  The O(1) scheduler switches the active and expired pointers and starts over again!
(Figure: the active and expired priority arrays, each ordered from highest to lowest priority.)
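The end-of-epoch step is just a pointer swap. A sketch in C, reusing the illustrative prio_array64 type from the previous snippet:

```c
struct runqueue {
    struct prio_array64 *active;    /* processes with quantum remaining   */
    struct prio_array64 *expired;   /* processes that used their quantum  */
    struct prio_array64 arrays[2];  /* backing storage for the two arrays */
};

/* When the active array empties, the epoch ends: the expired array
 * (whose processes already received fresh priorities as they were moved)
 * simply becomes the new active array. */
void end_of_epoch(struct runqueue *rq)
{
    struct prio_array64 *tmp = rq->active;
    rq->active  = rq->expired;
    rq->expired = tmp;
}
```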

26 Linux 2.6 O(1) Scheduler (3)
The O(1) scheduler is very cleverly written to be efficient:
  Same basic mechanism as the 2.4 scheduler, but much faster.
  Used until Linux kernel 2.6.23.
Unfortunately, the O(1) scheduler implementation was very complicated and difficult to maintain.
In version 2.6.23, Linux switched to the Completely Fair Scheduler (CFS).

27 Linux Completely Fair Scheduler
Linux Completely Fair Scheduler (CFS):
  Instead of maintaining processes in various queues, the CFS simply ensures that each process gets its fair share of the CPU.
  What constitutes a "fair share" is affected by the process's priority; e.g. high-priority processes get a larger share, etc.
The scheduler maintains a virtual run time for each process:
  It records how long each process has run on the CPU.
  This virtual clock is inversely scaled by the process's priority: the clock runs slower for high-priority processes, faster for low-priority ones.
All ready processes are maintained in a red-black tree, ordered by increasing virtual run time:
  O(log N) time to insert a process into the tree.
  The leftmost process has run for the shortest amount of time, and therefore has the greatest need for the CPU.
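A sketch of this virtual-runtime accounting in C; this is not the kernel's actual code, and the struct, field names, and BASE_WEIGHT value are illustrative:

```c
#include <stdint.h>

#define BASE_WEIGHT 1024   /* weight of a default-priority process */

struct cfs_proc {
    uint64_t vruntime;     /* virtual run time, in nanoseconds       */
    unsigned weight;       /* larger weight = higher priority = bigger
                              CPU share; derived from the nice value */
};

/* Charge delta_ns of real CPU time to the process: its virtual clock
 * advances more slowly the higher its weight, so high-priority processes
 * stay toward the left of the red-black tree and receive more CPU. */
void account_runtime(struct cfs_proc *p, uint64_t delta_ns)
{
    p->vruntime += delta_ns * BASE_WEIGHT / p->weight;
}
```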

28 Linux Completely Fair Scheduler (2)
To facilitate rapid identification of the leftmost process, a separate variable records this process:
  O(1) lookup for the next process to run.
The CFS scheduler chooses a time quantum based on:
  The targeted latency of the system: a time interval in which every ready process should receive the CPU at least once.
  The total number of processes in the system: the targeted latency has minimum and default values, but can be dynamically increased if the system is under heavy load.
Using these values and a process's virtual run time, the scheduler determines when to preempt each process.
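One plausible way to turn a target latency into per-process time slices, shown as a sketch under assumed constants (these names and values are illustrative, not the kernel's tunables):

```c
#include <stdint.h>

#define TARGET_LATENCY_NS  20000000ULL  /* 20 ms: every ready task should
                                           run at least once within this  */
#define MIN_GRANULARITY_NS  2000000ULL  /*  2 ms: lower bound on a slice;
                                           effectively stretches the target
                                           latency under heavy load       */

/* Divide the target latency among runnable processes in proportion to
 * their weights, never going below the minimum granularity. */
uint64_t timeslice_ns(unsigned weight, unsigned total_weight)
{
    uint64_t slice = TARGET_LATENCY_NS * weight / total_weight;
    return slice < MIN_GRANULARITY_NS ? MIN_GRANULARITY_NS : slice;
}
```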

29 Linux Completely Fair Scheduler (3)
General behaviors:
  Processes that block or yield frequently will have lower virtual run times than processes that don't.
  When these processes become ready to execute, they will receive the CPU very quickly.
No idea how long Linux will stick with the Completely Fair Scheduler!
  Who knows what scheduling mechanism they might devise next.
It is very interesting to study the Linux schedulers: they are very different from the most widely used approaches to process scheduling.

30 Other Scheduling Algorithms
The Linux Completely Fair Scheduler is similar to another fascinating algorithm called stride scheduling.
Unfortunately, there is not enough time to discuss this algorithm.
See: "Lottery and Stride Scheduling: Flexible Proportional-Share Resource Management" by Waldspurger (1995).

31 Next Time Implementation of user processes and system calls