Extending RTAI Linux with Fixed-Priority Scheduling with Deferred Preemption

Extending RTAI Linux with Fixed-Priority Scheduling with Deferred Preemption
Mark Bergsma, Mike Holenderski, Reinder J. Bril, Johan J. Lukkien
System Architecture and Networking, Department of Mathematics and Computer Science, Eindhoven University of Technology
This work was conducted within the ITEA CANTATA project at TU/e.

Outline
- Context
- Problem description
- Solution direction: FPDS + reservations
- Extending RTAI Linux with FPDS
- Measurements
- Evaluation and conclusions

Context
Surveillance multimedia streaming application:
- Computation- and data-intensive tasks
- Dependent tasks (producer/consumer relation)
- Networked system: several cameras + server
- Cost-constrained system
Camera platform:
- Processor + memory (cache, local, global) + bus + network
Application: two main tasks
- Video task (processor + memory + bus)
- Network task (processor + memory + bus + network)

We conducted this work in the context of the CANTATA project, which aims to demonstrate a surveillance multimedia streaming application. The system comprises several cameras and a server, communicating over a network. We can identify two main tasks in the application: the video processing task, invoked periodically upon video frame arrival, and the video transmission task, invoked by the video processing task.

Application example
[Figure: video task and network task, with buffers holding raw frames, encoded frames and network packets]

Here we have a picture of the application, where the video and network tasks communicate via buffers in memory.

Platform example
[Figure: platform with processor, co-processor and memory, connected via a bus with a DMA controller; the video task and network task run on the processor]

The tasks execute on a platform consisting of a processor, a co-processor and memory, connected via a bus with a DMA controller, which the tasks use for moving data between the memory and the processors. The video task and the network task execute on the main processor. They move data from main memory to IRAM and LM (local memory) and back, using DMA channels.


Problem illustration (FPPS)
[Figure: timeline of the network task and the video task under FPPS]

Our industrial partner started with FPPS; however, high overheads due to the frequent preemptions led to deadline misses, prompting a switch to FPNS.

Problem illustration (FPNS)
[Figure: timeline of the network task and the video task under FPNS]

This became problematic with fluctuating network bandwidth, e.g. due to congestion. Since the network task is released after the non-preemptive video processing task has finished, it may be assigned the processor when the network is not available, and therefore it may not make optimal use of the processor.

Problem
The video task is greedy and non-preemptive + network availability fluctuates (e.g. due to congestion), so the network task cannot make optimal use of the processor.

We can summarize the problem as: the combination of a greedy, non-preemptive video task and fluctuating network availability leads to the network task not being able to make optimal use of the processor.

Goal
Optimize network usage while guaranteeing processor time to the video processing task:
- by allowing limited preemptions of the video processing task, while guaranteeing its effectiveness

Solution direction
Network task: increase the frequency of processor availability
- Resolved by means of FPDS: introduce preemption points at appropriate places, minimizing context-switch overhead
Video task: still guarantee sufficient processing time
- Resolved by means of reservations: bound the interference of the network task

We want to match the bursty availability of the network with the resource provisioning to the network task. The network task is wrapped in a reservation driven by a deferrable or sporadic server (bandwidth preserving). In this presentation we focus on FPDS, without going into detail about reservations.

Solution illustration
[Figure: timeline of the network task and the video task under FPDS with reservations]

If the combination of FPDS and reservations is applied properly, everything works out.

Outline
- Context
- Problem description
- Solution direction: FPDS + reservations
- Extending RTAI Linux with FPDS
- Measurements
- Evaluation and conclusions

RTAI Linux
Real-time extension of Linux:
- Supports FPPS
- Primitives have a short and bounded latency
Basis for the ITEA2/CANTATA framework, meant to be used by our industrial partner.
This work focuses on an efficient implementation of FPDS in RTAI Linux.

RTAI Architecture
RTAI is a hypervisor between the hardware and Linux.
RTAI scheduler:
- FPPS, with co-operative scheduling for tasks of equal priority
- The Linux scheduler is treated as a low-priority soft task
- Support for periodic and one-shot tasks, task suspension, timed sleep
- Priority inheritance for shared resources

RTAI intercepts all interrupts and dispatches them to the appropriate subsystems, in particular the timer interrupts, allowing RTAI to schedule real-time tasks and provide timeliness guarantees. The hard real-time tasks are scheduled directly by RTAI, while the Linux scheduler, which is itself treated as a low-priority soft task, takes care of scheduling the soft tasks.

RTAI Task
Fixed priority, 16 bit, with 0 the highest priority.
Periodic tasks are implemented as a loop:
while (true) { ... rt_task_wait_period(); }
Task Control Block (TCB):
- Contains all administrative task info, including the task state field
- Resides in kernel space
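For concreteness, a minimal sketch of such a periodic task body follows. Only rt_task_wait_period() is taken from the slide; the header, the task body signature and the per-period work are illustrative assumptions, and the task is assumed to have been created and made periodic elsewhere with the usual RTAI primitives.

/* Minimal sketch of a periodic RTAI task body (kernel-space style).
 * Only rt_task_wait_period() appears on the slide; the rest is illustrative. */
#include <rtai_sched.h>

static void video_task_body(long arg)
{
    (void)arg;                    /* unused in this sketch */
    while (1) {
        /* per-period work goes here, e.g. encoding one video frame */
        rt_task_wait_period();    /* suspend until the next release */
    }
}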

RTAI Scheduler
Ready queue: a priority queue sorted by task priority.
Periodic tasks waiting for their next release reside in the waiting queue (called timed tasks): a priority queue sorted by release time.
Scheduler invocation, rt_schedule():
1. Update the current time
2. Move tasks from the waiting queue to the ready queue
3. Select the highest priority task from the ready queue
4. Context switch to the selected task
Implemented by two similar functions:
- rt_timer_handler(), triggered periodically (by the timer)
- rt_schedule(), event triggered

The waiting queue can be an ordered list or a red-black tree. Event-triggered invocations may be synchronous with the execution of tasks (task suspension, timed sleep) or due to other interrupts.
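A compact, self-contained C sketch of these four steps is given below; the sorted singly-linked-list queues and all names are simplified stand-ins, not the actual RTAI implementation.

/* Simplified stand-in for the rt_schedule() steps listed above. */
typedef struct task {
    int          priority;      /* RTAI convention: 0 is the highest priority */
    long long    release_time;  /* absolute time of the next release          */
    struct task *next;
} task_t;

static task_t *ready_queue;     /* sorted by priority     */
static task_t *waiting_queue;   /* sorted by release time */
static task_t *current_task;    /* task currently running */

static void push_ready(task_t *t)   /* insert, keeping the queue sorted */
{
    task_t **p = &ready_queue;
    while (*p && (*p)->priority <= t->priority)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
}

static void rt_schedule_sketch(long long now)   /* step 1: 'now' is given */
{
    /* step 2: move tasks whose release time has passed to the ready queue */
    while (waiting_queue && waiting_queue->release_time <= now) {
        task_t *t = waiting_queue;
        waiting_queue = t->next;
        push_ready(t);
    }
    /* steps 3 and 4: dispatch the head of the ready queue if it has a
     * higher priority than the currently running task */
    if (ready_queue &&
        (!current_task || ready_queue->priority < current_task->priority)) {
        task_t *next = ready_queue;
        ready_queue  = next->next;
        if (current_task)
            push_ready(current_task);   /* the preempted task stays ready */
        current_task = next;            /* context switch happens here    */
    }
}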

Requirements for the extension
Compatible:
- Conservative, with no effect on existing functionality
Efficient:
- Low overhead of the scheduler and preemption points
Maintainable:
- Easily integrated with future releases of RTAI

Compatible: we want to preserve the existing functionality. Efficient: low overhead, both processor and memory. Maintainable: changes to the RTAI code should be minimal.

Extend the Task Control Block (TCB)
preemptible (Boolean):
- true: the task is preemptive
- false: the task is non-preemptive between preemption points
RT_FPDS_YIELDING (Boolean):
- A flag in the task state
- Indicates to the scheduler that the non-preemptive task is at a preemption point
Invariant: RT_FPDS_YIELDING ⇒ ¬preemptible

Our first step is to extend the TCB with two Boolean flags.
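The TCB extension can be pictured as below; the struct layout and the bit value chosen for RT_FPDS_YIELDING are illustrative only, since the real flag lives inside RTAI's existing task state field.

/* Illustrative view of the two TCB additions; not RTAI's actual layout. */
#define RT_FPDS_YIELDING 0x8000       /* flag inside the task state field */

struct rt_task_sketch {
    /* ... existing RTAI TCB fields (priority, stack, resources, ...) ... */
    int state;        /* task state; may carry the RT_FPDS_YIELDING bit   */
    int preemptible;  /* 0: non-preemptive between preemption points      */
};

/* Invariant: (state & RT_FPDS_YIELDING) implies preemptible == 0 */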

Additional primitive and scheduler modification
rt_fpds_yield():
1. RT_FPDS_YIELDING := true
2. rt_schedule()
3. RT_FPDS_YIELDING := false
rt_schedule():
1. Update the current time
2. Move tasks from the waiting queue to the ready queue
3. if (preemptible ∨ RT_FPDS_YIELDING)
4.   Select the highest priority task from the ready queue
5.   Context switch to the selected task

We introduce an additional primitive, rt_fpds_yield(), which wraps the call to the scheduler between setting and clearing the RT_FPDS_YIELDING flag. The added if statement in rt_schedule() makes sure that a preempting task is switched in only if the currently running task is at a preemption point, or if it is preemptive. Both preemptible and RT_FPDS_YIELDING pertain to the current task.
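A sketch of the two pieces in C is shown below, reusing the illustrative struct rt_task_sketch and RT_FPDS_YIELDING definitions from the previous sketch; the body of rt_schedule() is reduced to comments except for the added guard.

/* Sketch of the yield primitive and the added guard in rt_schedule(). */
static struct rt_task_sketch *rt_current;    /* currently running task */

static void rt_schedule(void)
{
    /* ... update current time, move released tasks to the ready queue ... */
    if (rt_current->preemptible || (rt_current->state & RT_FPDS_YIELDING)) {
        /* unchanged FPPS path: select the highest priority ready task
         * and context switch to it */
    }
}

void rt_fpds_yield(void)
{
    rt_current->state |= RT_FPDS_YIELDING;    /* 1. now at a preemption point */
    rt_schedule();                            /* 2. let a pending higher
                                                    priority task run         */
    rt_current->state &= ~RT_FPDS_YIELDING;   /* 3. back to non-preemptive    */
}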

Optimize the preemption point
Avoid calling the scheduler when there are no new pending tasks.
Extend the Task Control Block:
- should_yield (Boolean)
The should_yield flag is set to indicate that the task has to call rt_fpds_yield() at its next preemption point.
Add a primitive, rt_fpds_pp():
1. if (rt_should_yield())
2.   rt_fpds_yield()
First step: rt_should_yield() is a system call.

Our second step is to optimize the preemption point, avoiding a call to the scheduler when there are no pending higher priority tasks that could preempt the current task. should_yield is set by the scheduler and indicates that the task should yield at its next preemption point. Initially, should_yield was a flag in the TCB, requiring a system call to inspect it.
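In C, this first version of the preemption point looks roughly as follows; rt_should_yield() is, at this stage, a system call that reads the should_yield flag from the kernel-resident TCB, and the names follow the slide.

/* First version of the preemption point: only enter the scheduler when the
 * should_yield flag is set. */
void rt_fpds_yield(void);     /* primitive introduced on the previous slide */
int  rt_should_yield(void);   /* system call: returns the should_yield flag */

void rt_fpds_pp(void)
{
    if (rt_should_yield())    /* any higher priority task pending?   */
        rt_fpds_yield();      /* yes: yield at this preemption point */
}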

Optimize the preemption point (continued)
rt_schedule():
1. Update the current time
2. Move tasks from the waiting queue to the ready queue
3. if (higher priority task ready ∧ ¬preemptible)
4.   should_yield := true
5. if (preemptible ∨ RT_FPDS_YIELDING)
6.   Select the highest priority task from the ready queue
7.   Context switch to the selected task

Now the scheduler also needs to set the should_yield flag when a higher priority task is ready and the current task is not preemptible.
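Continuing the same sketch, the scheduler now also raises should_yield. This builds on the earlier illustrative definitions (rt_current, RT_FPDS_YIELDING); should_yield and priority are assumed to be extra fields of the sketched TCB, and highest_ready_priority() is a stub standing in for peeking at the head of the ready queue (0 is the highest priority).

/* Sketch of the should_yield logic added to the scheduler. */
static int highest_ready_priority(void)
{
    return 0;   /* placeholder: priority of the ready-queue head */
}

static void rt_schedule_with_should_yield(void)
{
    /* ... update current time, move released tasks to the ready queue ... */
    int hp_task_ready = highest_ready_priority() < rt_current->priority;

    if (hp_task_ready && !rt_current->preemptible)
        rt_current->should_yield = 1;   /* yield at the next preemption point */

    if (rt_current->preemptible || (rt_current->state & RT_FPDS_YIELDING)) {
        /* select the highest priority ready task and context switch to it */
    }
}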

Experimental results
Synthetic task set: two tasks.
Measured the response time of the low priority task under RTAI and under RTAI+FPDS.
Tasks implemented in user space:
- They benefit from address space protection
- A preemption point therefore includes a system call overhead

The higher priority task was there to create work for the scheduler.

Scheduler overhead
No measurable scheduler overhead:
- Limited to a single if statement over three variables, evaluating to false

We ran the same task set under stock RTAI (indicated by red crosses) and under the modified RTAI (indicated by green crosses covering the red ones). Two tasks, h and l, with T_h = T_sched, so that a new task arrives at every scheduler invocation. The overhead is lost in the noise of the measurements.

Preemption point overhead
Preemption point overhead: 440 us
- Consistent with the 434 us system call overhead in RTAI

To measure the overhead of the preemption point, we compared the response time of the lower priority task with and without preemption points, running under the modified RTAI. The 434 us system call overhead suggests that the preemption point overhead can be reduced if the system call can be avoided. Two tasks, l and h:
l under FPDS: repeat K/M times { repeat M times { counter := counter + 1 }; rt_fpds_pp(); }
l under FPPS: repeat K/M times { repeat M times { counter := counter + 1 } }
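Rendered in C, the FPDS variant of the synthetic lower priority task l would look roughly like this; K and M are experiment parameters, and the FPPS variant is identical except that it omits the rt_fpds_pp() call.

/* Synthetic lower priority task: K counter increments, split into K/M
 * chunks of M, with a preemption point after each chunk (FPDS variant). */
void rt_fpds_pp(void);        /* the preemption point primitive */

static volatile long counter;

static void low_prio_task_fpds(long K, long M)
{
    for (long c = 0; c < K / M; c++) {
        for (long i = 0; i < M; i++)
            counter = counter + 1;
        rt_fpds_pp();         /* preemption point after M increments */
    }
}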

Avoid the system call overhead
Place the should_yield flag in user space.
Similar to LITMUS^RT, a soft real-time extension of the Linux kernel [Dr. James H. Anderson & students].
Measurements showed that the preemption point overhead was reduced to a few cycles.

If the should_yield flag is stored in kernel space, then checking it requires a system call. The system call can be avoided by storing the flag in the user space of the current task (similar to LITMUS^RT). We have implemented should_yield in user space.
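A sketch of what the optimized, user-space preemption point might look like follows; how the shared flag is set up (e.g. kernel memory mapped into the task's address space) is not detailed on the slide, so the pointer below is purely a placeholder.

/* User-space preemption point without a system call in the common case.
 * The flag is assumed to live in memory shared between the scheduler and
 * the task; establishing that mapping is not shown here. */
extern volatile int *fpds_should_yield;  /* written by the scheduler,
                                            read here without a system call */
void rt_fpds_yield(void);                /* enters the kernel only when needed */

static inline void rt_fpds_pp_user(void)
{
    if (*fpds_should_yield)   /* plain memory read instead of a system call */
        rt_fpds_yield();
}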

Evaluation and conclusions
Compatible:
- Inserting preemption points and setting a task to non-preemptive (between the preemption points) are optional
Efficient:
- Low processor overhead of the scheduler and preemption points
- Low memory overhead (two additional integers in the TCB)
- Note: blocking due to non-preemptive subtasks is intrinsic to FPDS (taken care of in the schedulability analysis)
Maintainable:
- 106 lines added/modified

Future Work
Monitoring:
- Longest subtask will not interfere with higher priority tasks
- Critical path through the task graph will not interfere with lower priority tasks
Reservations

Questions?

Elaboration on the optional preemption points: initially, optional preemption points were meant to allow tasks to adapt their control path. In systems which incur large context-switch overheads, such as the original platform of our industrial partner, where a preemption would invalidate the instruction pipeline and ongoing DMA transfers, one may want to refrain from starting a computation which would be aborted if the task is preempted, and later restarted. The spare capacity can then be used for other, more useful computations.