EE4144: Basic Concepts of Real-Time Operating Systems


EE4144, Fall 2014

Real-Time Operating System (RTOS)

A Real-Time Operating System (RTOS) is a layer of software that provides various general-purpose functionalities which an application-specific layer of software above it can use. One primary functionality provided by an RTOS is multithreading, i.e., running multiple tasks/threads concurrently: while only one thread can physically be running at a time (unless the processor has multiple processing cores), the RTOS makes it appear that multiple threads are running at the same time by switching between them.

An RTOS is often used to meet application-specific real-time requirements of software. Note that real-time does not necessarily mean fast, but only fast enough for the particular application. In embedded systems, predictability/consistency is often more important than raw speed: the time to respond to an event (e.g., to an interrupt) is the latency, while the variability in that response time is the jitter.

Commonly used real-time operating systems in embedded applications include FreeRTOS, embOS, Keil RTX, etc. Keil RTX is included in the Keil IDE installation. In embedded applications where more memory (e.g., at least a few MB) is available, RTLinux (a real-time Linux-based RTOS) is also commonly used.

Concurrent Tasks/Threads

Concurrent threads (also called tasks in an embedded RTOS context) are very useful in applications where an embedded microcontroller must do multiple things in parallel, some of which involve waiting for an external device/resource. For example, if the microcontroller needs to read a sensor periodically and must wait for the sensor to send the data (using polling instead of interrupts), that waiting time can be used to run other code if a separate task can run during it.

Each task is essentially a function (e.g., a C/C++ function func), typically containing a while (1) loop. Giving a task a chance to run for some time means that the function corresponding to that task continues to run for some amount of time; later, when the task is given its next chance to run, the processor continues from the spot in the function where it had stopped (i.e., from the program counter value at the time that task was put into the background and another task started running).

If the RTOS is given a set of functions (as function pointers), it can switch between them either based on elapsed time (each task gets some time to run and then a different task gets a chance to run) or by letting the functions themselves decide when to go into the background (i.e., give another task a chance to run). These two methods are called preemptive multithreading and non-preemptive (cooperative) multithreading, respectively.
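As a concrete illustration, here is a minimal sketch of two such task functions, written against the CMSIS-RTOS v1 API that Keil RTX implements (discussed later in these notes). The names sensorTask, blinkTask, readSensor, and processData are illustrative, not part of any library.

    #include "cmsis_os.h"   /* CMSIS-RTOS v1 API, as implemented by Keil RTX */

    /* Hypothetical helpers standing in for application code. */
    extern int  readSensor(void);
    extern void processData(int sample);

    /* Each task is an ordinary C function with an endless loop;
       the RTOS switches the processor between such functions. */
    void sensorTask(void const *argument) {
        while (1) {
            int sample = readSensor();   /* may involve waiting on the sensor */
            processData(sample);
            osDelay(100);                /* sleep 100 ms; other tasks run meanwhile */
        }
    }

    void blinkTask(void const *argument) {
        while (1) {
            /* toggle an LED, update a display, etc. */
            osDelay(500);
        }
    }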

Preemptive and Non-Preemptive Multithreading

Preemptive multithreading: the RTOS gives each task a certain amount of time to run and then puts that task into the background while another task is given a chance to run. Many RTOS implementations provide functions to set task priorities, which are then used to determine how often or for how long a task gets to run.

Non-preemptive (cooperative) multithreading: a task calls a function (often called yield or sleep) to tell the RTOS that it is ready at that point to go to sleep (i.e., to go to the background and wait for its next chance to continue running) and let another task run. This is called cooperative multithreading because the tasks cooperate to share the processor.
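A minimal sketch of the cooperative style, again assuming the CMSIS-RTOS v1 API: the task does a bounded amount of work and then calls osThreadYield (or sleeps with osDelay) to give other tasks a chance to run. The task name loggerTask is illustrative.

    #include "cmsis_os.h"

    /* Cooperative style: the task explicitly offers the processor back
       to the scheduler with osThreadYield(). */
    void loggerTask(void const *argument) {
        while (1) {
            /* ... do a small, bounded amount of work ... */
            osThreadYield();   /* let another ready task of equal priority run */
        }
    }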

Thread Switching and Latency

When switching between tasks, the RTOS saves the context of the task so that the same context can be restored when the task next gets a chance to run. As part of this context saving, the contents of the processor registers are saved into memory. The processor registers include the program counter (so that the task knows the exact spot in the function being executed when control was transferred to another task and can continue executing from that spot), the condition flags (in the application program status register), the stack pointer, the general-purpose registers, etc.

To let multiple concurrently running tasks use stacks, an RTOS usually provides separate per-task stacks (so that each task has its own region of memory that it uses as its stack); when switching between tasks, the stack pointer is updated to point to the top of the corresponding task's stack.

Thread switching latency: the time required to switch between tasks is called the thread switching latency or the context switching latency.

Interrupt latency: the time elapsed between when an external event occurs (e.g., a byte arriving on a serial port) and when the interrupt service routine is called. In real-time applications, it is important to keep this time delay predictable (e.g., with a guaranteed maximum delay). For this reason, it is good to keep interrupt service routines short (e.g., just set a flag or increment a variable in the interrupt service routine, and then check the value of this flag/variable in a separately running task).
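The "keep the ISR short" advice might look like the following sketch. The handler name UART_IRQHandler and the task name uartTask are placeholders; the actual interrupt vector name depends on the particular device.

    #include "cmsis_os.h"
    #include <stdint.h>

    /* Flag shared between the ISR and a task; volatile so the compiler
       re-reads it on every check. */
    static volatile uint8_t byteReceived = 0;

    /* Keep the ISR short: just record that the event happened. */
    void UART_IRQHandler(void) {
        byteReceived = 1;
    }

    /* The lengthy processing happens in a normal task, not in the ISR. */
    void uartTask(void const *argument) {
        while (1) {
            if (byteReceived) {
                byteReceived = 0;
                /* ... process the received byte here ... */
            }
            osDelay(1);
        }
    }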

Mutexes and Semaphores

If multiple concurrently running tasks (i.e., threads) need to access a variable shared between them, some mechanism must be used to coordinate access to this shared variable so that tasks do not overwrite each other's updates or read inconsistent shared data (e.g., if two tasks both access a struct and one task has updated only half the data members of the struct when the RTOS switches control to the second task, which then reads the struct and gets inconsistent data).

Using a plain flag (e.g., having one task set a flag dataUpdated to 0 before updating a shared variable and set it to 1 after updating, with the other task checking the flag before accessing the shared variable) does not work, because in preemptive multithreading the RTOS can switch between tasks at any time (each task gets some amount of time to run and then the RTOS automatically switches to another task). In particular, the RTOS can switch tasks between the time the flag is checked and the time it is updated: with code such as if (dataUpdated == 0) { dataUpdated = 1; ... }, the task switch can occur between the comparison (dataUpdated == 0) and the assignment dataUpdated = 1;.

Hence, to properly coordinate access to a shared variable (or any shared resource), a mechanism provided by the RTOS itself must be used, so that the RTOS can ensure that the mechanism's functions are atomic, i.e., an unpredictable task switch does not occur while one of these functions is being executed. Semaphores and mutexes are general mechanisms for this purpose provided by most RTOS implementations.
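For reference, a minimal sketch of the broken flag-based pattern described above (not something to use in real code); the names dataUpdated and writerTask are illustrative.

    #include "cmsis_os.h"

    /* Broken coordination with a plain flag: do NOT use this pattern.
       Under preemptive multithreading, a task switch can occur between
       the comparison and the assignment, so two tasks can both "pass"
       the check and access the shared data at the same time. */
    static volatile int dataUpdated = 0;

    void writerTask(void const *argument) {
        while (1) {
            if (dataUpdated == 0) {   /* a task switch can happen right here */
                dataUpdated = 1;
                /* ... update the shared data ... */
                dataUpdated = 0;
            }
            osDelay(10);
        }
    }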

Mutexes and Semaphores

A mutex (mutual exclusion) is like a lock that can be acquired or released. For example, if we have two tasks that need to access a shared variable (e.g., one task writing to it and another task reading from it) and we want to ensure that the two tasks do not access it at the same time, each task can do the following:

Acquire the mutex
Access the shared variable (read or write)
Release the mutex

If one task has acquired (locked) the mutex and not yet released (unlocked) it, and a second task wants to acquire it, the second task is automatically placed in a wait queue by the RTOS, so that it can continue only after the first task has released the mutex.

A mutex can be used to coordinate access to a shared variable or, in general, any shared resource (e.g., if two tasks are printing to a screen and we want to ensure that the characters within strings printed by the two tasks do not appear intermingled on the screen). In RTOS implementations, the function to acquire a mutex is also sometimes called wait (e.g., osMutexWait) or lock.
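A minimal sketch of this acquire/access/release pattern using the CMSIS-RTOS v1 mutex functions that Keil RTX provides; the names sharedCounter, counterMutex, initSync, and updateTask are illustrative.

    #include "cmsis_os.h"

    static int sharedCounter;         /* shared variable protected by the mutex */

    osMutexDef(counterMutex);         /* CMSIS-RTOS v1 mutex definition */
    static osMutexId counterMutexId;

    void initSync(void) {             /* called once at startup, e.g. from main() */
        counterMutexId = osMutexCreate(osMutex(counterMutex));
    }

    void updateTask(void const *argument) {
        while (1) {
            osMutexWait(counterMutexId, osWaitForever);  /* acquire (blocks if held) */
            sharedCounter++;                             /* access the shared variable */
            osMutexRelease(counterMutexId);              /* release */
            osDelay(10);
        }
    }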

Mutexes and Semaphores

A semaphore is like a counter that can count up to some arbitrary number (usually related to a number of available resources). A mutex is essentially a binary semaphore (i.e., it only counts between 0 and 1: locked and unlocked).

For example, suppose we are using a software library to write files to an SD card, but the library can only handle a specific number (e.g., 4) of files at the same time. We can then use a semaphore that is acquired (i.e., the semaphore counter is decremented) by any task that wants to write a file and released (i.e., the counter is incremented) after the file has been written. If the maximum number of files is already open, a task that wants to write another file will automatically be made to wait by the RTOS when it tries to acquire the semaphore.

A semaphore has an associated counter C that is decremented when the semaphore is acquired and incremented when it is released; i.e., the counter keeps track of how many of a set of resources are still available. When a task tries to acquire a semaphore whose counter would become negative if decremented by 1, the task is automatically placed on a wait queue (the semaphore wait queue). When a task releases a semaphore and some task is on the semaphore wait queue, that waiting task is given a chance to run.

In RTOS implementations, the function to acquire a semaphore is also sometimes called wait (e.g., osSemaphoreWait) or lock, and the function to release a semaphore is also sometimes called signal. From an application programmer's perspective, we just need to ensure that we lock the corresponding mutex/semaphore before accessing a shared variable and unlock it afterwards.
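A minimal sketch of the SD-card example using the CMSIS-RTOS v1 counting-semaphore functions; the semaphore is created with a count of 4, and the names fileSlots, initFileSlots, writeLog, and writeFileToSD are illustrative stand-ins for application and library code.

    #include "cmsis_os.h"

    /* Hypothetical library function that writes one file to the SD card. */
    extern void writeFileToSD(const char *name);

    osSemaphoreDef(fileSlots);
    static osSemaphoreId fileSlotsId;

    void initFileSlots(void) {
        /* Counting semaphore initialized to 4: at most four files open at once. */
        fileSlotsId = osSemaphoreCreate(osSemaphore(fileSlots), 4);
    }

    void writeLog(const char *name) {
        osSemaphoreWait(fileSlotsId, osWaitForever);  /* take one slot; wait if all 4 in use */
        writeFileToSD(name);
        osSemaphoreRelease(fileSlotsId);              /* give the slot back */
    }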

Fixed-Size Memory Pool

When variables are dynamically allocated (e.g., using malloc or new) and deallocated (e.g., using free or delete), heap fragmentation can occur over time: free memory is available, but not in adequately sized contiguous regions. Fixed-size memory pools are a technique to avoid this heap fragmentation.

In a fixed-size memory pool, memory chunks of the same fixed size are allocated and deallocated: when creating a memory pool, the size (in bytes) of each item is specified, and every allocation/deallocation from the memory pool is then of this fixed size. Hence, any free memory within the memory pool is always in chunks of this fixed size, which prevents fragmentation. For example, if the item type is a struct (or a class) containing two 32-bit integers and two 32-bit floats, the size of each item is 16 bytes (this can be computed at compile time using the sizeof operator). Since all allocations and deallocations are of the same fixed size (e.g., 16 bytes), there will be no fragmentation, because any free chunk is always large enough to hold one item.

A memory pool is basically initialized as a memory region (essentially equivalent to an array) of some size, and the memory pool manager then allocates fixed-size chunks from it whenever requested by the application. Since a fixed-size memory pool is useful in many embedded applications, a memory pool mechanism is often provided by a real-time operating system as part of its functionality.
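A minimal sketch of a fixed-size memory pool using the CMSIS-RTOS v1 pool functions; the 16-byte Sample struct matches the example above, while the pool size of 8 blocks and the names samplePool, initPool, and producerTask are arbitrary illustrative choices.

    #include "cmsis_os.h"
    #include <stdint.h>

    /* 16-byte item: two 32-bit integers and two 32-bit floats. */
    typedef struct {
        int32_t a, b;
        float   x, y;
    } Sample;

    osPoolDef(samplePool, 8, Sample);   /* pool of 8 fixed-size Sample blocks */
    static osPoolId samplePoolId;

    void initPool(void) {
        samplePoolId = osPoolCreate(osPool(samplePool));
    }

    void producerTask(void const *argument) {
        while (1) {
            Sample *s = (Sample *) osPoolAlloc(samplePoolId);  /* fixed-size allocation */
            if (s != NULL) {
                s->a = 1; s->b = 2; s->x = 3.0f; s->y = 4.0f;
                /* ... use the block, then return it to the pool: */
                osPoolFree(samplePoolId, s);
            }
            osDelay(100);
        }
    }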

Keil RTX

The Keil RTX CMSIS-RTOS implementation (http://www.keil.com/pack/doc/cmsis_rtx/index.html) is included in the Keil IDE installation. To enable this RTOS in a Keil project, go to the menu item Project -> Manage -> Run-Time Environment and select CMSIS -> RTOS(API) -> Keil RTX. Clicking on the description link next to the Keil RTX item opens the documentation for Keil RTX in a browser window; this is a file in the Keil installation (e.g., C:\Keil_v5\ARM\Pack\ARM\CMSIS\4.1.0\CMSIS_RTX\Doc\index.html).

This RTOS provides functions for multithreading (e.g., osThreadCreate, osThreadYield, osThreadSetPriority, etc.), mutexes and semaphores (e.g., osMutexCreate, osMutexWait, osMutexRelease, osSemaphoreCreate, osSemaphoreWait, osSemaphoreRelease, etc.), timers (e.g., osTimerCreate, osTimerStart, osTimerStop, etc.), memory pools (e.g., osPoolCreate, osPoolAlloc, osPoolFree, etc.), etc. These functions are described under Function Overview (linked from the RTX documentation page mentioned above).
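For example, creating and starting the two tasks sketched earlier with this API might look like the following; the task names, priorities, and the osKernelInitialize/osKernelStart startup sequence follow the CMSIS-RTOS v1 documentation, and a final argument of 0 in osThreadDef selects the default stack size.

    #include "cmsis_os.h"

    void sensorTask(void const *argument);   /* task functions defined elsewhere */
    void blinkTask(void const *argument);

    /* osThreadDef(name, priority, instances, stack size) */
    osThreadDef(sensorTask, osPriorityNormal, 1, 0);
    osThreadDef(blinkTask,  osPriorityLow,    1, 0);

    int main(void) {
        osKernelInitialize();                          /* set up the RTOS kernel */
        osThreadCreate(osThread(sensorTask), NULL);    /* create the tasks */
        osThreadCreate(osThread(blinkTask),  NULL);
        osKernelStart();                               /* hand control to the scheduler */
        for (;;) { }                                   /* never reached */
    }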