Microkernel/OS and Real-Time Scheduling

Chapter 12: Microkernel/OS and Real-Time Scheduling
Hongwei Zhang, http://www.cs.wayne.edu/~hzhang/
Acknowledgment: this lecture is prepared in part based on slides by Lee, Sangiovanni-Vincentelli, and Seshia.

Outline Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Outline Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Responsibilities of a Microkernel (a small, custom OS)
- Scheduling of threads or processes
- Creation and termination of threads
- Timing of thread activations
- Synchronization: semaphores and locks
- Input and output: interrupt handling

A Few More Advanced Functions of an OS (not discussed here)
- Networking: TCP/IP stack
- Memory management: separate stacks, segmentation, allocation and deallocation
- File system: persistent storage
- Security: user vs. kernel space, identity management

Outline of a Microkernel Scheduler (1)
Main scheduler thread (task):
- set up periodic timer interrupts;
- create default thread data structures;
- dispatch a thread (procedure call);
- execute the main thread (idle or power save, for example).
Thread data structure (a possible C layout is sketched below):
- copy of all state (machine registers)
- address at which to resume executing the thread
- status of the thread (e.g., blocked on a mutex)
- priority, WCET (worst-case execution time), and other info to assist the scheduler
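The following is a minimal sketch of such a thread data structure (thread control block). The type and field names, and the use of a fixed register array, are assumptions for illustration, not any particular kernel's layout.

#include <stdint.h>

/* Saved machine state; the exact contents are architecture specific. */
typedef struct {
    uint32_t regs[16];   /* general-purpose registers */
    uint32_t pc;         /* address at which to resume executing the thread */
    uint32_t sp;         /* saved stack pointer */
} context_t;

typedef enum { READY, RUNNING, BLOCKED_ON_MUTEX, BLOCKED_ON_SEMAPHORE, SLEEPING } thread_status_t;

typedef struct thread {
    context_t context;        /* copy of all state (machine registers) */
    thread_status_t status;   /* e.g., blocked on a mutex */
    int priority;             /* used by the scheduling policy */
    uint32_t wcet;            /* worst-case execution time, to assist the scheduler */
    struct thread *next;      /* link in a ready or wait queue */
} thread_t;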

Outline of a Microkernel Scheduler (2)
Timer interrupt service routine: dispatch a thread.
Dispatching a thread (see the sketch below):
- disable interrupts;
- determine which thread should execute (scheduling);
- if it is the same one, enable interrupts and return;
- save state (registers) into the current thread's data structure;
- save the return address from the stack for the current thread;
- copy the new thread's state into the machine registers;
- replace the program counter on the stack for the new thread;
- enable interrupts;
- return.
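A sketch of the dispatch logic, building on the thread_t structure above. The primitives disable_interrupts, enable_interrupts, save_context, restore_context, and highest_priority_enabled are assumed to be provided by architecture-specific code (typically assembly); they are placeholders, not a standard API, and a real context switch manipulates the stack rather than simply returning.

/* Hypothetical architecture-specific primitives (assumed, not standard). */
extern void disable_interrupts(void);
extern void enable_interrupts(void);
extern void save_context(context_t *ctx);           /* save registers, PC, SP */
extern void restore_context(const context_t *ctx);  /* load them back */
extern thread_t *highest_priority_enabled(void);    /* the scheduling policy */

thread_t *current_thread;   /* the thread that was executing when dispatch was entered */

void dispatch(void) {
    disable_interrupts();
    thread_t *next = highest_priority_enabled();  /* determine which thread should execute */
    if (next == current_thread) {                 /* same one: nothing to do */
        enable_interrupts();
        return;
    }
    save_context(&current_thread->context);  /* save state into the current thread's data structure */
    current_thread->status = READY;
    next->status = RUNNING;
    current_thread = next;
    restore_context(&next->context);  /* copy the new thread's state into the machine registers */
    enable_interrupts();              /* returning now resumes the new thread */
}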

The Focus in this Lecture: How to decide which thread to schedule?
Considerations:
- Preemptive vs. non-preemptive scheduling
- Periodic vs. aperiodic tasks
- Fixed priority vs. dynamic priority
- Scheduling anomalies: priority inversion, multiprocessor scheduling

When can a new thread be dispatched?
Under non-preemptive scheduling: when the current thread completes.
Under preemptive scheduling:
- upon a timer interrupt
- upon an I/O interrupt (possibly)
- when a new thread is created, or one completes
- when the current thread blocks on or releases a mutex
- when the current thread blocks on a semaphore
- when a semaphore state is changed
- when the current thread makes any OS call (file system access, network access, ...)

Preemptive Scheduling
Assumptions:
1. All threads have priorities, either statically assigned (constant for the duration of the thread) or dynamically assigned (can vary).
2. The kernel keeps track of which threads are enabled (able to execute, e.g., not blocked waiting for a semaphore, a mutex, or a timer to expire).
Preemptive scheduling:
- At any instant, the enabled thread with the highest priority is executing.
- Whenever any thread changes priority or enabled status, the kernel can dispatch a new thread.

Outline Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Rate Monotonic Scheduling
Assume n tasks invoked periodically with:
- periods T_1, ..., T_n (which impose the real-time constraints)
- worst-case execution times (WCET) C_1, ..., C_n
- no mutexes, semaphores, or blocking I/O
- no precedence constraints
- fixed priorities
- preemptive scheduling
Rate Monotonic Scheduling (RMS): priorities are ordered by period (the task with the smallest period gets the highest priority).

Rate Monotonic Scheduling
Under the same assumptions (periodic tasks with periods T_1, ..., T_n and WCETs C_1, ..., C_n; no mutexes, semaphores, or blocking I/O; no precedence constraints; fixed priorities; preemptive scheduling):
Theorem: If any fixed-priority assignment yields a feasible schedule, then RMS (smallest period gets the highest priority) also yields a feasible schedule. That is, RMS is optimal in the sense of feasibility among fixed-priority schedulers.
Liu and Layland, "Scheduling algorithms for multiprogramming in a hard-real-time environment," Journal of the ACM, 20(1), 1973.

Liu and Layland, JACM, Jan. 1973

Feasibility for RMS Feasibility is defined for RMS to mean that every task executes to completion once within its designated period.

Utilization of RMS
If the total utilization U = C_1/T_1 + ... + C_n/T_n is no greater than the bound u(n) = n(2^(1/n) - 1), then the RMS schedule is feasible.
For n = 2, u(2) is approximately 0.828; as n grows, u(n) decreases toward ln 2, approximately 0.693.
Because priorities are fixed, RMS cannot in general achieve full utilization of the CPU.
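A small helper, not from the original slides, that checks this Liu-Layland sufficient condition for a given task set (the function name and array-based interface are assumptions for illustration):

#include <math.h>
#include <stdbool.h>

/* Sufficient (but not necessary) test: if total utilization is at most
   n * (2^(1/n) - 1), then the RMS schedule is feasible. */
bool rms_utilization_test(const double C[], const double T[], int n) {
    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];                         /* total utilization */
    double bound = n * (pow(2.0, 1.0 / n) - 1.0); /* Liu-Layland bound */
    return U <= bound;
}

For example, two tasks with C = {1, 2} and T = {4, 6} have U of about 0.583, below the n = 2 bound of about 0.828, so they pass. Because the test is only sufficient, a task set that fails it may still be schedulable under RMS.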

Outline Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Deadline Driven Scheduling: Jackson's Algorithm, EDD (1955)
Given n independent one-time tasks with deadlines d_1, ..., d_n, schedule them to minimize the maximum lateness, defined as
L_max = max over 1 <= i <= n of (f_i - d_i),
where f_i is the finishing time of task i. Note that L_max is non-positive iff all deadlines are met.
Earliest Due Date (EDD) algorithm: execute the tasks in order of non-decreasing deadlines.
Note that this does not require preemption.
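A sketch of EDD and of computing the resulting maximum lateness, assuming n >= 1 tasks described only by an execution time C and a deadline d (the task_t type and function names are illustrative):

#include <stdlib.h>

typedef struct { double C; double d; } task_t;

static int by_deadline(const void *a, const void *b) {
    double da = ((const task_t *)a)->d, db = ((const task_t *)b)->d;
    return (da > db) - (da < db);
}

/* Execute tasks in order of non-decreasing deadline and return
   L_max = max_i (f_i - d_i); non-positive iff all deadlines are met. */
double edd_max_lateness(task_t tasks[], int n) {
    qsort(tasks, n, sizeof(task_t), by_deadline);  /* EDD order */
    double t = 0.0, lmax = -1.0e300;
    for (int i = 0; i < n; i++) {
        t += tasks[i].C;                 /* finishing time f_i */
        double lateness = t - tasks[i].d;
        if (lateness > lmax) lmax = lateness;
    }
    return lmax;
}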

Theorem: EDD is Optimal in the Sense of Minimizing Maximum Lateness
To prove this, use an interchange argument. Given a schedule S that is not EDD, there must be tasks a and b where a immediately precedes b in the schedule but d_a > d_b. (Why?) One can show that interchanging a and b does not increase the maximum lateness. Repeating such interchanges transforms S into the EDD schedule without increasing the maximum lateness; thus no non-EDD schedule achieves a smaller maximum lateness than EDD, so the EDD schedule must be optimal.

Deadline Driven Scheduling: Horn's Algorithm, EDF (1974)
Extends EDD by allowing tasks to arrive (become ready) at any time.
Earliest Deadline First (EDF): given a set of n independent tasks with arbitrary arrival times, any algorithm that at any instant executes the task with the earliest absolute deadline among all arrived tasks is optimal w.r.t. minimizing the maximum lateness.
The proof uses a similar interchange argument.
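A minimal sketch of the EDF selection rule for tasks with arrival times; the job_t type and field names are assumptions. At any instant t it returns the arrived, unfinished job with the earliest absolute deadline (a scheduler would call this whenever a dispatching decision is needed):

typedef struct {
    double r;          /* arrival (release) time */
    double C;          /* execution time */
    double d;          /* absolute deadline */
    double remaining;  /* remaining execution time */
} job_t;

/* Returns the index of the job to run at time t, or -1 if none has arrived. */
int edf_pick(const job_t jobs[], int n, double t) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (jobs[i].remaining > 0 && jobs[i].r <= t &&
            (best < 0 || jobs[i].d < jobs[best].d))
            best = i;   /* earliest absolute deadline among arrived, unfinished jobs */
    }
    return best;
}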

Using EDF for Periodic Tasks
The EDF algorithm can be applied to periodic tasks as well as aperiodic tasks.
- Simplest use: the deadline is the end of the period (a.k.a. an implicit deadline).
- Alternative use: separately specify the deadline (relative to the period start time) and the period.

RMS vs. EDF? Which one is better? What are the pros and cons of each?

Comparison of EDF and RMS
Favoring RMS:
- Scheduling decisions are simpler: RMS uses fixed priorities, whereas EDF requires dynamic priorities (an EDF scheduler must maintain a list of ready tasks sorted by deadline).
Favoring EDF:
- Since EDF is optimal w.r.t. minimizing the maximum lateness, it is also optimal w.r.t. feasibility; RMS is optimal only w.r.t. feasibility.
- For infeasible task sets, RMS completely blocks lower-priority tasks, resulting in unbounded maximum lateness.
- EDF can achieve full CPU utilization, which RMS in general cannot.
- EDF results in fewer preemptions in practice, and hence less context-switching overhead.
- Deadlines can differ from the periods.

Precedence Constraints
A directed acyclic graph (DAG) shows precedences, which indicate which tasks must complete before other tasks start.
Example DAG over tasks 1-6: task 1 must complete before tasks 2 and 3 can be started; task 2 before tasks 4 and 5; task 3 before task 6.

Example: EDF Schedule
Tasks 1-6 from the precedence DAG above, each with execution time C_i = 1 and with deadlines d_1 = 2, d_2 = 5, d_3 = 4, d_4 = 3, d_5 = 5, d_6 = 6.
Is the EDF schedule feasible?

EDF is not optimal under precedence constraints
(Same tasks: C_i = 1 for all i; d_1 = 2, d_2 = 5, d_3 = 4, d_4 = 3, d_5 = 5, d_6 = 6.)
The EDF schedule chooses task 3 at time 1 because it has an earlier deadline than task 2. This choice results in task 4 missing its deadline. Is there a feasible schedule?

Latest Deadline First (LDF) (Lawler, 1973)
The LDF scheduling strategy builds a schedule backwards: given a DAG, choose the leaf node (a task none of whose successors remains unscheduled) with the latest deadline to be scheduled last, remove it, and work backwards, repeating until all tasks are placed.

LDF is optimal under precedence constraints
(Same tasks: C_i = 1 for all i; d_1 = 2, d_2 = 5, d_3 = 4, d_4 = 3, d_5 = 5, d_6 = 6.)
The resulting LDF schedule, 1, 2, 4, 3, 5, 6, respects all precedences and meets all deadlines. It also minimizes the maximum lateness.

Latest Deadline First (LDF) (Lawler, 1973)
LDF is optimal in the sense that it minimizes the maximum lateness. It does not require preemption. (We'll see that EDF can be made to work with preemption.) However, it requires that all tasks be available and their precedences known before any task is executed. A sketch of the backward construction follows.
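A sketch of LDF over a precedence graph, not from the original slides; the flat prec array representation, the LDF_MAX_TASKS limit, and the function name are assumptions. At each step it picks, among the tasks whose successors have all been placed, the one with the latest deadline, and schedules it in the last open slot:

#define LDF_MAX_TASKS 64

/* prec[i*n + j] != 0 means task i must complete before task j starts.
   d[i] is the deadline of task i; order_out receives the execution order.
   Assumes the graph is acyclic and n <= LDF_MAX_TASKS. */
void ldf_schedule(int n, const int *prec, const double d[], int order_out[]) {
    int scheduled[LDF_MAX_TASKS] = {0};
    for (int slot = n - 1; slot >= 0; slot--) {
        int pick = -1;
        for (int i = 0; i < n; i++) {
            if (scheduled[i]) continue;
            int all_succ_placed = 1;              /* are all successors already placed? */
            for (int j = 0; j < n; j++)
                if (prec[i*n + j] && !scheduled[j]) all_succ_placed = 0;
            if (all_succ_placed && (pick < 0 || d[i] > d[pick]))
                pick = i;                         /* latest deadline among the candidates */
        }
        scheduled[pick] = 1;
        order_out[slot] = pick;                   /* place it in the last open slot */
    }
}

On the six-task example this selects tasks in the backward order 6, 5, 3, 4, 2, 1, giving the execution order 1, 2, 4, 3, 5, 6, which meets every deadline.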

EDF*: EDF with Precedences
With a preemptive scheduler, EDF can be modified to account for precedences and to allow tasks to arrive at arbitrary times: simply adjust the deadlines and arrival times according to the precedences.
(Same tasks: C_i = 1 for all i; d_1 = 2, d_2 = 5, d_3 = 4, d_4 = 3, d_5 = 5, d_6 = 6.)
Recall that for these tasks, plain EDF yields a schedule in which task 4 misses its deadline.

EDF*: Modifying release times
Given n tasks with precedences and release times r_i, if task i immediately precedes task j, then modify the release times as follows:
r'_j = max(r_j, r'_i + C_i)
For the example, assuming r_i = 0 for all i: r'_1 = 0, r'_2 = 1, r'_3 = 1, r'_4 = 2, r'_5 = 2, r'_6 = 2.

EDF*: Modifying deadlines
Given n tasks with precedences and deadlines d_i, if task i immediately precedes task j, then modify the deadlines as follows:
d'_i = min(d_i, d'_j - C_j)
For the example (with the revised release times r'_1 = 0, r'_2 = 1, r'_3 = 1, r'_4 = 2, r'_5 = 2, r'_6 = 2): d'_1 = 1, d'_2 = 2, d'_3 = 4, d'_4 = 3, d'_5 = 5, d'_6 = 6.
Using the revised release times and deadlines, EDF yields a schedule that is optimal and meets all deadlines.
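A sketch of both adjustments, not from the original slides. It assumes the tasks are indexed in a topological order (every predecessor has a smaller index than its successors), so the release-time pass can run forward and the deadline pass backward; the flat prec representation and the function name are illustrative.

/* prec[i*n + j] != 0 means task i immediately precedes task j.
   r[] and d[] are modified in place to r'[] and d'[]. */
void edf_star_adjust(int n, const int *prec, const double C[],
                     double r[], double d[]) {
    /* Forward pass: r'_j = max(r_j, r'_i + C_i) over all predecessors i. */
    for (int j = 0; j < n; j++)
        for (int i = 0; i < n; i++)
            if (prec[i*n + j] && r[i] + C[i] > r[j])
                r[j] = r[i] + C[i];

    /* Backward pass: d'_i = min(d_i, d'_j - C_j) over all successors j. */
    for (int i = n - 1; i >= 0; i--)
        for (int j = 0; j < n; j++)
            if (prec[i*n + j] && d[j] - C[j] < d[i])
                d[i] = d[j] - C[j];
}

On the six-task example this reproduces the values above: r' = (0, 1, 1, 2, 2, 2) and d' = (1, 2, 4, 3, 5, 6).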

Optimality EDF* is optimal in the sense of minimizing the maximum lateness.

Outline Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Scheduling can interact with other constraints/requirements in non-intuitive ways
- Mutual exclusion: priority inversion, priority inheritance, priority ceiling
- Multiprocessor scheduling: Richard's anomalies

Accounting for Mutual Exclusion When threads access shared resources, they need to use mutexes to ensure data integrity. Mutexes can also complicate scheduling.

Recall the mutual exclusion mechanism in pthreads

#include <pthread.h>
...
pthread_mutex_t lock;

void* addlistener(notify listener) {
    pthread_mutex_lock(&lock);
    ...
    pthread_mutex_unlock(&lock);
}

void* update(int newvalue) {
    pthread_mutex_lock(&lock);
    value = newvalue;
    elementtype* element = head;
    while (element != 0) {
        (*(element->listener))(newvalue);
        element = element->next;
    }
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_mutex_init(&lock, NULL);
    ...
}

Whenever a data structure is shared across threads, access to the data structure must usually be atomic. This is enforced using mutexes, or mutual exclusion locks. The code executed while holding a lock is called a critical section.

Priority Inversion: A Hazard with Mutexes Task 1 has highest priority, task 3 lowest. Task 3 acquires a lock on a shared object, entering a critical section. It gets preempted by task 1, which then tries to acquire the lock and blocks. Task 2 preempts task 3 at time 4, keeping the higher priority task 1 blocked for an unbounded amount of time. In effect, the priorities of tasks 1 and 2 get inverted, since task 2 can keep task 1 waiting arbitrarily long.

Mars Pathfinder
The Mars Pathfinder spacecraft landed on Mars on July 4, 1997. A few days into the mission, Pathfinder began sporadically missing deadlines, causing total system resets, each with loss of data. The problem was diagnosed on the ground as priority inversion: a low-priority meteorological task was holding a lock needed by a high-priority task, which stayed blocked while medium-priority tasks executed.
Source: RISKS-19.49, on the comp.programming.threads newsgroup, December 7, 1997, by Mike Jones (mbj@microsoft.com).

Priority Inheritance Protocol (PIP) (Sha, Rajkumar, Lehoczky, 1990) Task 1 has highest priority, task 3 lowest. Task 3 acquires a lock on a shared object, entering a critical section. It gets preempted by task 1, which then tries to acquire the lock and blocks. Task 3 inherits the priority of task 1, preventing preemption by task 2.
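POSIX exposes priority inheritance as an optional mutex attribute, PTHREAD_PRIO_INHERIT. As a sketch (assuming the platform supports this optional feature), the lock from the earlier example could be created as follows; the helper name is illustrative:

#include <pthread.h>

pthread_mutex_t lock;

int init_priority_inheritance_lock(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* A thread holding this mutex inherits the priority of the
       highest-priority thread blocked on it. */
    int rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(&lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;   /* nonzero if priority inheritance is unavailable */
}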

Deadlock
The lower-priority task starts first and acquires lock a, then gets preempted by the higher-priority task, which acquires lock b and then blocks trying to acquire lock a. The lower-priority task then blocks trying to acquire lock b, and no further progress is possible.

#include <pthread.h>
...
pthread_mutex_t lock_a, lock_b;

void* thread_1_function(void* arg) {
    pthread_mutex_lock(&lock_b);
    ...
    pthread_mutex_lock(&lock_a);
    ...
    pthread_mutex_unlock(&lock_a);
    ...
    pthread_mutex_unlock(&lock_b);
    ...
}

void* thread_2_function(void* arg) {
    pthread_mutex_lock(&lock_a);
    ...
    pthread_mutex_lock(&lock_b);
    ...
    pthread_mutex_unlock(&lock_b);
    ...
    pthread_mutex_unlock(&lock_a);
    ...
}

Priority Ceiling Protocol (PCP) (Sha, Rajkumar, Lehoczky, 1990)
Every lock or semaphore is assigned a priority ceiling equal to the priority of the highest-priority task that can lock it. (Can one automatically compute the priority ceiling?)
A task T can acquire a lock only if T's priority is strictly higher than the priority ceilings of all locks currently held by other tasks.
- Intuition: T will not later try to acquire the locks held by those other tasks.
- Locks that are not held by any task do not affect T.
This prevents deadlocks.
There are extensions supporting dynamic priorities and dynamic creation of locks (the stack resource policy).

Priority Ceiling Protocol
In this version, locks a and b have priority ceilings equal to the priority of task 1. At time 3, task 1 attempts to lock b, but it cannot, because task 2 currently holds lock a, whose priority ceiling equals the priority of task 1.

#include <pthread.h>
...
pthread_mutex_t lock_a, lock_b;

void* thread_1_function(void* arg) {
    pthread_mutex_lock(&lock_b);
    ...
    pthread_mutex_lock(&lock_a);
    ...
    pthread_mutex_unlock(&lock_a);
    ...
    pthread_mutex_unlock(&lock_b);
    ...
}

void* thread_2_function(void* arg) {
    pthread_mutex_lock(&lock_a);
    ...
    pthread_mutex_lock(&lock_b);
    ...
    pthread_mutex_unlock(&lock_b);
    ...
    pthread_mutex_unlock(&lock_a);
    ...
}
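POSIX also offers a priority-ceiling mutex attribute, PTHREAD_PRIO_PROTECT, together with pthread_mutexattr_setprioceiling. Note that this implements the immediate priority ceiling variant (the holder's priority is raised to the ceiling as soon as it acquires the lock), which differs in detail from the original PCP but likewise prevents the deadlock above. A sketch, assuming platform support; the helper name is illustrative:

#include <pthread.h>

pthread_mutex_t lock_a, lock_b;

/* Initialize a mutex whose priority ceiling equals the priority of the
   highest-priority task that can ever lock it (the priority of task 1 here). */
int init_ceiling_lock(pthread_mutex_t *m, int ceiling) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    int rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    if (rc == 0)
        rc = pthread_mutexattr_setprioceiling(&attr, ceiling);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}

/* e.g., init_ceiling_lock(&lock_a, priority_of_task_1);
         init_ceiling_lock(&lock_b, priority_of_task_1); */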

Brittleness In general, all thread scheduling algorithms are brittle: Small changes can have big, unexpected consequences. E.g., for multiprocessor (or multicore) schedules: Theorem (Richard Graham, 1976): If a task set with fixed priorities, execution times, and precedence constraints is scheduled according to priorities on a fixed number of processors, then increasing the number of processors, reducing execution times, or weakening precedence constraints can increase the schedule length.

Brittleness (contd.) Timing behavior under all known task scheduling strategies is brittle. Small changes can have big (and unexpected) consequences. Unfortunately, since execution times are so hard to predict, such brittleness can result in unexpected system failures.

Summary Microkernel/OS Rate Monotonic Scheduling Earliest Deadline First Scheduling Anomalies

Assignment Exercise #12: Chapter 12: 1, 2, 4, 6