RT and Embedded Systems, Lecture 11: Real-Time Operating Systems


RT and Embedded Systems, Lecture 11: Real-Time Operating Systems. Wrocław 2013

Plan Introduction Tasks Queues Interrupts Resources Memory management Multiprocessor operation

Introduction

What's an Operating System? Provides an environment for executing programs: a process abstraction for multitasking/concurrency, scheduling, a hardware abstraction layer (device drivers), filesystems, communication. Can be real-time.

Classic OS vs RTOS. Desktop OS: the OS is in control at all times and runs the applications; the OS runs in a different address space. RTOS: the OS and the embedded software are integrated; the embedded software starts and activates the OS, and both run in the same address space (so the RTOS is less protected). An RTOS includes only the service routines needed by the embedded application. RTOS vendors/products: FreeRTOS, VxWorks, VRTX, Nucleus, LynxOS, uC/OS. Desirable RTOS properties: low memory use, a clean application programming interface, debugging tools, support for a variety of microprocessors, already-debugged network drivers.

Most Real-Time Systems are embedded An embedded system is a computer built into a system but not seen by users as being a computer Examples FAX machines Copiers Printers Scanners Routers Robots

Characteristics of Real-Time Operating Systems Deterministic Operations are performed at fixed, predetermined times or within predetermined time intervals Concerned with how long the operating system delays before acknowledging an interrupt

Characteristics of Real-Time Operating Systems Responsiveness How long, after acknowledgment, it takes the operating system to service the interrupt. This includes both the time to begin executing the interrupt handler and the time to perform it.

Characteristics of Real-Time Operating Systems User control The user specifies task priorities, paging behavior (which processes must always reside in main memory), the disk scheduling algorithms to use, and the rights of processes.

Characteristics of Real-Time Operating Systems Reliability Degradation of performance may have catastrophic consequences. The system attempts either to correct the problem or to minimize its effects while continuing to run; the most critical, highest-priority tasks still execute.

Characteristics of Real-Time Operating Systems Fail-soft operation ability of a system to fail in such a way as to preserve as much capability and data as possible; the RTOS tries to correct the problem or minimize its effects while continuing to run; RTOS is stable, i.e. it will meet the deadlines of its most critical, highest-priority tasks, even if some less critical task deadlines are not always met.

Features of Real-Time Operating Systems Fast context switch Small size Ability to respond to external interrupts quickly Multitasking with interprocess communication tools such as semaphores, signals, and events

Features of Real-Time Operating Systems Use of special sequential files that can accumulate data at a fast rate Preemptive scheduling based on priority Minimization of intervals during which interrupts are disabled Delay tasks for fixed amount of time Special alarms and timeouts

RTOS: is it necessary? Not always. The simplest approach is a cyclic executive:
loop
    do part of task 1
    do part of task 2
    do part of task 3
end loop
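The loop above can be sketched in C. This is a minimal illustration, not a real system: the task bodies are hypothetical placeholders that bump counters so the cycle's effect is observable, and a real executive would run `run_cycle()` forever.

```c
#include <assert.h>

/* Hypothetical task bodies: each does one slice of its work per cycle.
 * The counters stand in for real work so the behavior is observable. */
static int work_done[3];

static void task1(void) { work_done[0]++; }
static void task2(void) { work_done[1]++; }
static void task3(void) { work_done[2]++; }

/* One pass of the cyclic executive: call each task in a fixed order.
 * A real system would wrap this in an endless loop:
 *     for (;;) { run_cycle(); }                                      */
void run_cycle(void)
{
    task1();
    task2();
    task3();
}
```

Every task gets exactly one slice per cycle, which is why the approach is so predictable and why scheduling is entirely the programmer's responsibility.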

Cyclic Executive Plus Interrupts Works fine for many signal processing applications. Insanely cheap, predictable interrupt handling: when an interrupt occurs, execute a single user-specified instruction, which typically copies peripheral data into a circular buffer. No context switch, no environment save, no delay.
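The circular buffer mentioned above can be sketched as follows. The names (`buf_put`, `buf_get`) and the single-producer assumption are illustrative; with one ISR producer and one main-loop consumer, no locking is needed.

```c
#include <assert.h>
#include <stdint.h>

#define BUF_SIZE 8  /* power of two, so indices wrap with a cheap mask */

static volatile uint8_t buf[BUF_SIZE];
static volatile unsigned head, tail;

/* Called from the interrupt handler: copy one byte of peripheral data. */
void buf_put(uint8_t byte)
{
    buf[head & (BUF_SIZE - 1)] = byte;
    head++;
}

/* Called from the main loop: returns -1 when the buffer is empty. */
int buf_get(void)
{
    if (tail == head)
        return -1;
    return buf[tail++ & (BUF_SIZE - 1)];
}
```

This is the whole interrupt path: no context switch and no environment save, just one store and an index increment.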

Drawbacks of CE + Interrupts The main loop still runs in lockstep. The programmer is responsible for scheduling. The schedule is static. Sporadic events are handled slowly.

Cooperative Multitasking A cheap alternative Non-preemptive Processes responsible for relinquishing control Examples: Original Windows A process had to periodically call get_next_event() to let other processes proceed Drawbacks: Programmer had to ensure this was called frequently An errant program would lock up the whole system Alternative: preemptive multitasking

Preemptive Multitasking Tasks are swapped either voluntarily or automatically by the RTOS. Tasks are normally selected based upon priority.

Preemptive Multitasking - example

RTOS KERNEL

Tasks

Tasks A task is a simple subroutine. The ES application makes calls to RTOS functions to start tasks, passing the start address, stack pointer, etc. of each task to the OS. Task states: Running; Ready (possibly: suspended, pended); Blocked (possibly: waiting, dormant, delayed); [Exit]. The scheduler moves tasks between the Running and Ready states. Blocking is self-blocking: a task blocks itself and is unblocked via another task's or an interrupt's signaling (when the blocking factor is removed/satisfied). When a task with a higher priority than the running task is unblocked, the scheduler switches context immediately (in all preemptive schedulers).

Tasks Most tasks are blocked or ready most of the time because generally only one task can run at a time per CPU The number of items in the ready queue can vary greatly, depending on the number of tasks the system needs to perform and the type of scheduler that the system uses

Tasks and Data Each task has its own context, which is not shared: private registers, stack, etc. In addition, several tasks can share common data (via global data declarations; using extern in one task to refer to shared data declared in another task). Shared data causes the shared-data problem, which is addressed with locking or with reentrant functions.

Reentrant functions Reentrancy A function that works correctly regardless of the number of tasks that call it between interrupts. Characteristics of reentrant functions: it only accesses shared variables in an atomic way, or when the variable is on the caller's stack; a reentrant function calls only reentrant functions; a reentrant function uses system hardware (a shared resource) atomically.
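The difference can be shown with a small, hypothetical formatting helper: the first version keeps its result in a static buffer shared by every caller, so two tasks calling it between interrupts can corrupt each other's result; the second keeps all state on the caller's stack and is reentrant.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* NOT reentrant: the static buffer is shared by all callers. */
const char *format_temp_bad(int celsius)
{
    static char buf[16];
    snprintf(buf, sizeof buf, "%d C", celsius);
    return buf;   /* a second caller overwrites this before it is read */
}

/* Reentrant: all state lives in the caller-supplied buffer,
 * so any number of tasks may call it concurrently. */
void format_temp(int celsius, char *out, size_t len)
{
    snprintf(out, len, "%d C", celsius);
}
```

The reentrant version follows the rules above: it touches no shared variable and writes only through the pointer the caller provides.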

Semaphores and Shared Data Semaphore a variable/lock/flag used to control access to shared resource (to avoid shared-data problems in RTOS) Protection at the start is via primitive function, called take, indexed by the semaphore Protection at the end is via a primitive function, called release, also indexed similarly Simple semaphores Binary semaphores are often adequate for shared data problems in RTOS

Semaphores and Shared Data Semaphore Problems The initial values of semaphores, when not set properly or at the wrong place. The symmetry of takes and releases: each take must have a corresponding release somewhere in the ES application. Taking the wrong semaphore unintentionally (an issue with multiple semaphores). Holding a semaphore for too long, which can cause waiting tasks' deadlines to be missed.

Semaphores and Shared Data Variants: Binary semaphores: a single resource, one-at-a-time, alternating use. Counting semaphores: multiple instances of a resource; the integer semaphore value is incremented/decremented. Mutex: protects shared data while dealing with the priority inversion problem. Summary: protecting shared data in an RTOS: disabling/enabling interrupts (works for task code and interrupt routines; fastest); taking/releasing semaphores (cannot be used in interrupt routines; slower; affects the response times of the tasks that need the semaphore); disabling task switches (no effect on interrupt routines; holds up all other tasks' responses).

Task Synchronization Events Messages Semaphores Mutexes

Semaphores

Semaphores Invented by Edsger Dijkstra in the mid-1960s Offered by most multitasking kernels Used for: Mutual exclusion Signaling the occurrence of an event Synchronizing activities among tasks

Semaphores (cont.) A semaphore is a key that the code acquires in order to continue execution If the key is already in use, the requesting task is suspended until the key is released There are two types Binary semaphores 0 or 1 Counting semaphores >= 0

Semaphore Operations Initialize (or create) Value must be provided Waiting list is initially empty Wait (or pend) Used for acquiring the semaphore If the semaphore is available (the semaphore value is positive), the value is decremented, and the task is not blocked Otherwise, the task is blocked and placed in the waiting list Most kernels allow you to specify a timeout If the timeout occurs, the task will be unblocked and an error code will be returned to the task

Semaphore Operations (cont.) Signal (or post) Used for releasing the semaphore If no task is waiting, the semaphore value is incremented Otherwise, make one of the waiting tasks ready to run but the value is not incremented Which waiting task to receive the key? Highest-priority waiting task First waiting task
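The pend/post logic described above can be sketched as a simplified model. This is not a real kernel implementation: the names are illustrative, and a real kernel would keep an ordered waiting list and actually block and unblock tasks; here a counter merely records how many callers would block.

```c
#include <assert.h>

typedef struct {
    int value;    /* the semaphore count */
    int waiting;  /* how many callers would be blocked */
} sem_model_t;

void sem_init(sem_model_t *s, int initial)
{
    s->value = initial;
    s->waiting = 0;
}

/* Wait (pend): take the semaphore if available, otherwise "block".
 * Returns 1 if acquired, 0 if the caller would be blocked. */
int sem_wait_model(sem_model_t *s)
{
    if (s->value > 0) {
        s->value--;
        return 1;
    }
    s->waiting++;
    return 0;
}

/* Signal (post): hand the key to a waiter if any, else increment.
 * Note the value is NOT incremented when a waiter is released. */
void sem_signal_model(sem_model_t *s)
{
    if (s->waiting > 0)
        s->waiting--;
    else
        s->value++;
}
```

Initializing with a value of 1 gives a binary semaphore; a larger initial value gives a counting semaphore, e.g. one count per buffer in a pool.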

Sharing I/O Devices

Encapsulating a Semaphore

Applications of Counting Semaphores A counting semaphore is used when a resource can be used by more than one task at the same time Example: Managing a buffer pool of 10 buffers

Tasks scheduling

Scheduling Algorithms in RTOS What is it? Multitasking requires that all tasks be scheduled on the CPU according to some predetermined scheme. Types of scheduling: cooperative, preemptive, round-robin, deadline monotonic, least slack time, etc. Issues: task deadlines (missed deadlines may have severe consequences), context switch time.

Scheduling Algorithms in RTOS Off-line scheduling algorithms The algorithm is executed on the entire task set before actual task activation. The schedule generated in this way is stored in a table and later executed by a dispatcher. On-line scheduling algorithms The scheduling decisions are taken at runtime every time a new task enters the system or when a running task terminates.

Priority inversion Priority inversion: a low-priority process keeps a high-priority process from running. Improper use of system resources can cause scheduling problems: a low-priority process grabs an I/O device; a high-priority process needs the I/O device, but can't get it until the low-priority process is done. Can cause deadlock.

Solving priority inversion Give priorities to system resources. Have a process inherit the priority of a resource it requests: the low-priority process inherits the device's priority if that priority is higher.
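Basic priority inheritance can be sketched as follows. The structures and names are illustrative, not a real kernel API; the sketch assumes a single resource, one holder, and that a larger number means a higher priority.

```c
#include <assert.h>
#include <stddef.h>

typedef struct {
    int base_prio;  /* the task's assigned priority        */
    int cur_prio;   /* effective priority, possibly boosted */
} task_t;

typedef struct {
    task_t *holder; /* NULL when the resource is free */
} resource_t;

/* A task requests the resource: take it if free, otherwise boost
 * the current holder to the requester's priority. */
void resource_request(resource_t *r, task_t *t)
{
    if (r->holder == NULL)
        r->holder = t;
    else if (t->cur_prio > r->holder->cur_prio)
        r->holder->cur_prio = t->cur_prio;   /* inheritance */
}

/* The holder releases the resource and drops back to base priority. */
void resource_release(resource_t *r)
{
    r->holder->cur_prio = r->holder->base_prio;
    r->holder = NULL;
}
```

The boost is what prevents a medium-priority task from preempting the low-priority holder while the high-priority task is stuck waiting.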

Context-switching time Non-zero context switch time can push the limits of a tight schedule. Its effects are hard to calculate; they depend on the order of context switches. In practice, OS context switch overhead is small.

Off-line scheduling algorithms The main advantage of this approach is that the run-time overhead is low and does not depend on the complexity of the scheduling algorithm used to build the schedule. However, the system is quite inflexible to environmental changes.

On-line scheduling algorithms With on-line algorithms, each task is assigned a priority, according to one of its temporal parameters. These priorities can be either fixed priorities or dynamic priorities fixed priorities: based on fixed parameters and assigned to the tasks before their activation dynamic priorities: based on dynamic parameters that may change during system evolution When task activations are not known, an on-line guarantee test has to be done every time a new task enters the system.

Scheduling Algorithms in RTOS Off-line scheduling algorithms: Clock Driven Scheduling Weighted Round Robin Scheduling On-line scheduling algorithms: Static: Rate monotonic Inverse deadline (deadline monotonic) Dynamic: Earliest deadline first Least laxity first

Scheduling Algorithms in RTOS Clock Driven Simplest All parameters about jobs (execution time/deadline) known in advance. Schedule can be computed offline or at some regular time instances. Minimal runtime overhead. Not suitable for many applications.

Scheduling Algorithms in RTOS Weighted Round Robin Jobs are scheduled in FIFO manner. The time quantum given to a job is proportional to its weight. Example use: QoS guarantees in high-speed switching networks. Not suitable for precedence-constrained jobs: if Job A can run only after Job B, there is no point in giving a time quantum to Job A before Job B has finished.

Rate Monotonic Scheduling For a set of periodic tasks, assigning the priorities according to the rate monotonic (RM) algorithm means that tasks with shorter periods (higher request rates) get higher priorities. It is an optimal, preemptive, static-priority scheduling algorithm used in real-time operating systems: if a task set cannot be scheduled using the RM algorithm, it cannot be scheduled using any other static-priority algorithm.

Rate Monotonic Scheduling The inputs to the algorithm are processes (tasks, threads) with: No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or a semaphore) Deterministic deadlines exactly equal to periods Static priorities (whenever a processor is free or a new task period begins, the task with the highest static priority is selected to preempt all other tasks) Static priorities assigned according to the rate monotonic principle (tasks with shorter periods/deadlines are given higher priorities)

RMS The RM algorithm assigns priorities proportional to the frequencies of tasks. RM can schedule a set of tasks to meet all deadlines if the total utilization is below the bound n(2^(1/n) - 1), which approaches ln 2, about 69.3%, as n grows. The RM algorithm provides no support for dynamically changing task periods and/or priorities, or for tasks that may experience priority inversion.
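The Liu and Layland utilization test just stated can be sketched as a small check. Note the test is sufficient, not necessary: failing it does not prove the set unschedulable. The bisection helper is only there to compute the n-th root of 2 without pulling in the math library.

```c
#include <assert.h>

/* Compute 2^(1/n) by bisection (avoids a libm dependency). */
static double nth_root_of_2(int n)
{
    double lo = 1.0, hi = 2.0;
    for (int it = 0; it < 60; it++) {
        double mid = 0.5 * (lo + hi), p = 1.0;
        for (int i = 0; i < n; i++)
            p *= mid;
        if (p < 2.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

/* Sufficient RM schedulability test: U = sum(C_i / T_i) <= n(2^(1/n) - 1).
 * exec[i] is the worst-case execution time C_i, period[i] is T_i. */
int rm_schedulable(const double *exec, const double *period, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];
    return u <= n * (nth_root_of_2(n) - 1.0);
}
```

For two tasks the bound is about 0.828, and it shrinks toward 0.693 as more tasks are added, which is exactly the "cannot fully utilize the CPU" limitation summarized below.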

RMS - example

RMS Summary One major limitation of fixed-priority scheduling and RMS is that it is not always possible to fully utilize the CPU.

Inverse deadline algorithm Inverse deadline allows a weakening of the condition requiring equality between periods and deadlines in static-priority schemes. The inverse deadline algorithm (IDA) assigns priorities to tasks according to their deadlines: the task with the shortest relative deadline is assigned the highest priority.

IDA - example

Earliest deadline first algorithm The EDF algorithm assigns priorities to tasks according to their absolute deadlines: the task with the earliest deadline is executed at the highest priority. The algorithm is optimal with respect to feasibility: if there exists a feasible schedule for a task set, then EDF is able to find it.

Earliest deadline first algorithm EDF does not make any assumption about the periodicity of the tasks; hence it can be used for scheduling periodic as well as aperiodic tasks.

EDF implementation On each timer interrupt: compute time to deadline; choose process closest to deadline. Generally considered too expensive to use in practice.
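The per-tick selection described above can be sketched as a linear scan; the task representation (parallel arrays of absolute deadlines and ready flags) is illustrative.

```c
#include <assert.h>

/* EDF dispatcher sketch: on each timer interrupt, pick the ready task
 * with the earliest absolute deadline.
 * Returns the task index, or -1 if no task is ready. */
int edf_pick(const long *deadline, const int *ready, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (!ready[i])
            continue;
        if (best < 0 || deadline[i] < deadline[best])
            best = i;
    }
    return best;
}
```

The scan itself is O(n) per tick; the expense referred to above comes from doing this (plus the resulting context switches) on every timer interrupt rather than only on task arrivals and completions.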

EDF - example

Least Laxity first algorithm The LLF algorithm assigns priority to tasks according to their relative laxity. The task with the smallest laxity will be executed at the highest priority. LLF is optimal and the schedulability of a set of tasks can be guaranteed using the EDF schedulability test.

Least Laxity first algorithm While a task executes, its laxity remains constant; the laxity of the other ready tasks decreases. When the laxity of the tasks is computed only at arrival times, the LLF schedule is equivalent to EDF. If the laxity is computed at every time t, more context switching will be necessary.
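The laxity computation and the resulting pick can be sketched as follows (parallel arrays are again an illustrative task representation): laxity is the absolute deadline minus the current time minus the remaining execution time.

```c
#include <assert.h>

/* Laxity = absolute deadline - current time - remaining execution. */
long laxity(long deadline, long now, long remaining)
{
    return deadline - now - remaining;
}

/* LLF dispatcher sketch: run the unfinished task with the smallest
 * laxity. Returns the task index, or -1 if all tasks are finished. */
int llf_pick(const long *deadline, const long *remaining, long now, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (remaining[i] <= 0)
            continue;   /* finished tasks are skipped */
        if (best < 0 ||
            laxity(deadline[i], now, remaining[i]) <
            laxity(deadline[best], now, remaining[best]))
            best = i;
    }
    return best;
}
```

Because `now` appears in every laxity, recomputing at each tick can flip the ordering between two close tasks back and forth, which is the source of the extra context switches mentioned above.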

LLF example 1

LLF example 2

Hybrid task scheduling Some real-time applications may require aperiodic tasks. Hybrid task sets contain both types of tasks. Periodic tasks usually have hard timing constraints and are scheduled with one of the four basic algorithms. Aperiodic tasks have either soft or hard timing constraints.

Hybrid task scheduling The main objective of the system is to guarantee the schedulability of all the periodic tasks. If the aperiodic tasks have soft real timing constraints, the system aims to provide good average response times. If the aperiodic tasks have hard timing constraints, the system aims to maximize the guarantee ratio of these aperiodic tasks.

Scheduling of soft aperiodic tasks Three main types: Background scheduling Task server Slack stealing

Background scheduling Aperiodic tasks are scheduled in the background when there are no periodic tasks ready to execute. Aperiodic tasks are queued according to first-come-first-serve strategy. The major advantage of background scheduling is its simplicity. Its major drawback is that, for high loads due to periodic tasks, response time of aperiodic requests can be high.

Background scheduling - example

Task Servers A server is a periodic task whose purpose is to serve aperiodic requests. A server is characterized by a period and a computation time called the server capacity. The server is scheduled with the algorithm used for the periodic tasks and, once it is active, it serves the aperiodic requests within the limit of its capacity. The ordering of aperiodic requests does not depend on the scheduling algorithm used for periodic tasks.

Types of Task Servers Polling server: the simplest server; it serves pending aperiodic requests at regular intervals equal to its period. Deferrable server, priority exchange server, sporadic server: more sophisticated; better aperiodic responsiveness.

Polling server The polling server becomes active at regular intervals equal to its period. It serves pending aperiodic requests within the limit of its capacity. If no aperiodic requests are pending, the polling server suspends itself until the beginning of its next period, and the time originally reserved for aperiodic requests is used by periodic tasks.

Polling Server - example

Deferrable server The deferrable server is an extension of the polling server which improves the response time of aperiodic requests. The deferrable server looks like the polling server, with some differences: the deferrable server preserves its capacity if no aperiodic requests are pending at the beginning of its period. Thus an aperiodic request that enters the system just after the server suspends itself can be executed immediately.

Sporadic server Like the deferrable server, the sporadic server preserves its capacity until an aperiodic request occurs. It differs in the way it replenishes this capacity. It does not recover its capacity to its full value at the beginning of each new period, but only after it has been consumed by aperiodic task executions.

Sporadic Server - example

Slack stealing and joint scheduling These two techniques are quite similar and both use the laxity of the periodic tasks to schedule aperiodic tasks. Slack stealing The tasks are scheduled with RMA Joint scheduling The tasks are scheduled with EDF

Slack stealing and joint scheduling Unlike the server techniques, they do not require the use of a periodic task for aperiodic task service. Each time an aperiodic request enters the system, time for servicing it is made available by stealing processing time from the periodic tasks without causing any deadlines to be missed. The laxity of the periodic tasks is used to schedule aperiodic requests as soon as possible.

Slack Stealing - example

Scheduling of hard aperiodic tasks The hard aperiodic task can be mapped onto a periodic task and scheduled with the periodic task set not always usable! Two main types: Background scheduling Joint scheduling of aperiodic and periodic tasks

Background scheduling The principle of this technique is to schedule aperiodic tasks in the background, when there are no periodic tasks ready to execute, according to EDF. The aperiodic requests have hard timing constraints; as they are accepted, they are queued in strictly increasing order of deadlines.

Joint scheduling of aperiodic and periodic tasks Each time a new aperiodic task enters the system, a new EDF schedule is built from a task set composed of the periodic requests, the previously accepted requests, and the new request. If this schedule meets all the deadlines, then the new request is accepted.
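A minimal sketch of such an acceptance test, under simplifying assumptions: the candidate set is already sorted by increasing absolute deadline (EDF order), preemption overhead is ignored, and `exec` holds remaining execution times.

```c
#include <assert.h>

/* Check that, starting at time `now`, running the tasks in EDF order
 * meets every deadline. Returns 1 to accept the new request, 0 to
 * reject it. exec[i] and deadline[i] describe task i, sorted by
 * increasing absolute deadline. */
int edf_accept(const long *exec, const long *deadline, int n, long now)
{
    long t = now;
    for (int i = 0; i < n; i++) {
        t += exec[i];            /* finish time of task i under EDF */
        if (t > deadline[i])
            return 0;            /* some deadline would be missed */
    }
    return 1;
}
```

A real admission controller would also account for future periodic releases within the horizon; this sketch only checks the cumulative demand of the jobs currently in the set.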

Example

Message passing

Message passing Tasks must be able to communicate with one another to coordinate their activities or to share data. Tasks can use shared data and semaphores for task communication. There are several other methods that most RTOSs offer: queues, mailboxes, and pipes.

What is a queue Queues are the primary form of intertask communications Can be used to send messages between tasks, and between interrupts and tasks In most cases they are used as thread safe FIFO (First In First Out) buffers with new data being sent to the back of the queue, although data can also be sent to the front

What is a queue Messages are sent through queues by copy, not by reference!!!

Queue - remarks Most RTOSs require queues to be initialized before use. There can be many queues. If code tries to write to a full queue, the RTOS either returns an error or blocks the task until some other task reads data from the queue and thereby creates some space.
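The two points stressed above, messages travel by copy and a full queue is an error (or a blocking point), can be sketched with a fixed-size queue. The names and sizes are illustrative, and a real RTOS queue would also handle blocking with timeouts and be safe against concurrent access.

```c
#include <assert.h>
#include <string.h>

#define Q_LEN    4   /* queue depth, in messages   */
#define MSG_SIZE 8   /* fixed size of each message */

typedef struct {
    char slot[Q_LEN][MSG_SIZE];
    int head, tail, count;
} queue_t;

/* Send by COPY: the message body is copied into the queue's storage.
 * Returns -1 when full (an RTOS could block the caller here instead). */
int q_send(queue_t *q, const void *msg)
{
    if (q->count == Q_LEN)
        return -1;
    memcpy(q->slot[q->head], msg, MSG_SIZE);
    q->head = (q->head + 1) % Q_LEN;
    q->count++;
    return 0;
}

/* Receive by copy into the caller's buffer; -1 when empty. */
int q_receive(queue_t *q, void *msg)
{
    if (q->count == 0)
        return -1;
    memcpy(msg, q->slot[q->tail], MSG_SIZE);
    q->tail = (q->tail + 1) % Q_LEN;
    q->count--;
    return 0;
}
```

Because the sender's buffer is copied, the sender may reuse it immediately; passing a pointer instead of the data would reintroduce the shared-data problem listed later among the queue pitfalls.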

What is a mailbox Mailboxes are like queues. The typical RTOS has functions to create, write to, and read from mailboxes, to check whether a mailbox contains any messages, and to destroy the mailbox if it is no longer needed.

What is a mailbox Some RTOSs allow a certain number of messages in each mailbox (chosen during compile time), Others allow only one message in a mailbox at a time. In certain RTOS, mailbox messages can be prioritized.

What is a pipe Pipes are also like queues. The RTOS can create, write, read, and so on. Pipes in some RTOSs are entirely byte-oriented. Some RTOSs use the standard C library functions fread and fwrite to read from and write to pipes.

Queue, mailbox, pipe - problems #1: Queues, mailboxes, and pipes make it easy to share data among tasks, but they also make it easy to insert bugs #2: Most RTOSs do not restrict which tasks can read from or write to any given queue, mailbox, or pipe #3: The RTOS cannot ensure that data written onto a queue, mailbox, or pipe will be properly interpreted by the task that reads it

Queue, mailbox, pipe - problems #4: Running out of space in queues, mailboxes, or pipes is usually a disaster for embedded software #5: Passing pointers from one task to another through a queue, mailbox, or pipe is one of several ways to create shared data inadvertently

Interrupts

Interrupts Embedded applications running on top of RTOSes require Interrupt Service Routines (ISRs) to handle interrupts generated by external events. Since application code execution is interrupted (delayed) during the execution of an ISR, the amount of code in the ISR should be minimized.

RTOS Interrupt Architectures Base problem: supporting asynchronous access to internal RTOS data structures by interrupt routines and RTOS services Modifications to the same structure can be catastrophic Two solutions: Unified Interrupt Architecture Segmented Interrupt Architecture

Unified Interrupt Architecture Interrupts are locked out while an ISR or system service is modifying critical data structures inside the RTOS

Segmented Interrupt Architecture A less popular approach is to not allow any asynchronous access to critical data structures by ISRs or other service calls. Service-call access to critical data structures from an ISR is deferred to a secondary routine, ISR2. ISR2 is executed along with application threads under scheduler control.

Interrupts - nesting Nesting means that when a higher-priority interrupt occurs, control passes to its ISR, and on return from the higher-priority ISR the lower-priority ISR resumes executing. Each ISR that admits a higher-priority interrupt sends the ISM to the RTOS. Nested ISR calls share a common stack, similar to nested function calls.

Resources

Resource Allocation in RTOS The issues discussed for task scheduling apply here as well. Resources can be allocated using weighted round-robin or priority-based schemes. Some resources are non-preemptible, for example semaphores. Priority inversion can occur if priority scheduling is used.

Memory management

Memory management Two types of memory management in RTOSs: The first type is used to provide tasks with temporary data space The second type of memory management is used to dynamically swap code in and out of main memory

Memory management First type: The system s free memory is divided into fixed sized memory blocks, which can be requested by tasks When a task finishes using a memory block it must return it to the pool Access to pools can be prioritized
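The fixed-size block pool just described can be sketched as follows: free blocks are kept on a singly linked free list threaded through the blocks themselves, so allocation and release are O(1) and cause no fragmentation. The sizes and names are illustrative, and a real RTOS pool would guard these operations against concurrent access.

```c
#include <assert.h>
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 4

/* Each free block stores the link to the next free block in its own
 * storage; the union guarantees pointer alignment. */
typedef union block {
    union block *next;
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

void pool_init(void)
{
    free_list = NULL;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        pool[i].next = free_list;   /* push block onto the free list */
        free_list = &pool[i];
    }
}

/* O(1) allocation: pop the head. Returns NULL when the pool is empty. */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b)
        free_list = b->next;
    return b;
}

/* O(1) release: the task returns the block to the pool. */
void pool_free(void *p)
{
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

The trade-off is the usual one for RTOS memory management: requests larger than `BLOCK_SIZE` cannot be served, but allocation time is constant and predictable.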

Memory management Second type, techniques used: The memory swapping method keeps the OS and one task in memory at the same time; when another task needs to run, it replaces the first task in main memory, after the first task and its context have been saved to secondary memory. In the overlays method, the code is partitioned into smaller pieces which are swapped from disk to memory; in this way, programs larger than the available memory can be executed.

Memory management Second type, techniques used: In the MFT method (multiprogramming with a fixed number of tasks), a fixed number of equal-sized code parts are in memory at the same time; as needed, these parts are overlaid from disk. The MVT method (multiprogramming with a variable number of tasks) is similar to MFT, except that in MVT the size of a partition depends on the needs of the program. Demand paging systems have fixed-size pages that are given to programs as they are needed.

Multiprocessor operation

Multiprocessor operation Most RTOSs that are multiprocessor-capable use a separate instance of the kernel on each processor. The multiprocessor ability comes from the kernels' ability to send and receive information between processors.

Multiprocessor operation In many RTOSs that support multiprocessing, there is no difference between the single-processor case and the multiprocessor case from the task's point of view. The RTOS uses a table in each local kernel that contains the location of each task in the system. When one task sends a message to another task, the local kernel looks up the location of the destination task and routes the message appropriately. From the task's point of view, all tasks appear to run on the same processor.

RTOS Design hints

Basic Design using RTOS Large number of tasks - pros: better control of the priorities and thus of the relative response times, better modularity, cleaner code, more effective encapsulation of data, better hardware sharing, simpler tasks

Basic Design using RTOS Large number of tasks - cons: more data sharing, hence more semaphores and more time spent handling them, more bugs, more time spent on message passing between tasks

Basic Design using RTOS Use as few tasks as possible; add more tasks to your design only for a clear reason. Write short ISRs. Avoid creating and destroying tasks while the system is running, because it is time-consuming and it may be difficult to destroy a task without leaving something behind; it may be better to create all the tasks at system startup and leave them running.

RTOS market view

RTOS - Market view RTOSes on the market can be categorized into: RTOSes for small-footprint, mobile and connected devices (FreeRTOS, µC/OS); RTOSes for complex, hard real-time applications; general-purpose RTOSes in the embedded industry (VxWorks); RTOSes for the Java platform; object-oriented RTOSes

Thank you for your attention
