EE458 - Embedded Systems Lecture 8 Semaphores

Outline
Introduction to Semaphores
Binary and Counting Semaphores
Mutexes
Typical Applications
RTEMS Semaphores
References: RTC Chapter 6; CUG Chapter 9

Introduction
A semaphore is a kernel object that one or more tasks can acquire or release for the purpose of synchronization or mutual exclusion. Mutual exclusion is a provision by which only one task at a time can access a shared resource (port, memory block, etc.).

Introduction
Think of a semaphore as a key. Your task can make a request for the key. If it is available, your task can check out the key and proceed. If it is not available (another task has it checked out), your task will block until the key becomes available (the other task checks it back in).

Introduction
Acquiring the semaphore is analogous to checking out the key. Releasing the semaphore is analogous to checking the key back in. Multiple semaphores can be used if desired. (Keys to different doors.) There may be multiple tasks waiting (blocked) on a semaphore. When the semaphore is released, either the oldest task on the queue or the highest priority task is given the semaphore. (RTEMS supports both methods.)

Binary Semaphores
An integer variable is used to implement a semaphore. We have been discussing binary semaphores, in which a value of 0 means that the semaphore is unavailable and a value of 1 means the semaphore is available. Semaphores are global resources: any task can release a semaphore, even if it was acquired by another task. This is useful for task synchronization. (This is analogous to a friend returning the key for you.)

Counting Semaphores
A counting semaphore uses a count that allows it to be acquired and released multiple times (up to some maximum count). (This is analogous to having multiple copies of a key to a single door available for check out.) When the semaphore is acquired, the count is decremented. When the count reaches 0, any task trying to acquire the semaphore will block. As the semaphore is released, the count is incremented.
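
As a rough sketch of the bookkeeping only (a real kernel performs these updates atomically and blocks/unblocks tasks rather than returning a failure code), the count behavior can be pictured like this; the struct and function names are purely illustrative, not an actual RTEMS or POSIX API:

#include <stdbool.h>

/* Illustration of the count bookkeeping only; no atomicity, no task queues. */
struct counting_sem {
    int count;       /* number of currently available "keys"            */
    int max_count;   /* upper limit on count; 1 for a binary semaphore  */
};

/* Acquire: take a key if one is available. */
static bool sem_acquire(struct counting_sem *s)
{
    if (s->count > 0) {
        s->count--;      /* one fewer key available            */
        return true;
    }
    return false;        /* count is 0: a real task would block here */
}

/* Release: return a key, never exceeding the maximum. */
static void sem_release(struct counting_sem *s)
{
    if (s->count < s->max_count) {
        s->count++;      /* one more key available */
    }
}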

Additional Semaphore Info
Semaphores can be created in the available (value > 0) or unavailable (value = 0) state. Proper initialization is very important!!! You must ensure that every semaphore acquisition is paired with a release. Most OSes optionally allow a call that acquires a semaphore to time out after a number of ticks if the semaphore is not available. (RTEMS does.)
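
For example, using the RTEMS directives rtems_semaphore_obtain() and rtems_semaphore_release() (the RTEMS counterparts of acquire and release, not listed on these slides), a task might wait at most 100 clock ticks for a semaphore; this is only a sketch, and sem_id is assumed to come from an earlier rtems_semaphore_create() call:

#include <rtems.h>

void wait_for_semaphore(rtems_id sem_id)
{
    /* Block for at most 100 ticks waiting for the semaphore. */
    rtems_status_code sc = rtems_semaphore_obtain(sem_id, RTEMS_WAIT, 100);

    if (sc == RTEMS_TIMEOUT) {
        /* The semaphore was not released within 100 ticks. */
    } else if (sc == RTEMS_SUCCESSFUL) {
        /* We hold the semaphore; every acquisition is paired with a release. */
        rtems_semaphore_release(sem_id);
    }
}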

Priority Inversion
Priority inversion can occur when HI and LO priority tasks share a semaphore. Assume HI is blocked and LO acquires the semaphore. HI unblocks and tries to acquire the semaphore. Since LO holds it, HI blocks, allowing LO to run. (This is all OK so far.) But now MED unblocks and preempts LO. MED is not using the semaphore at all, yet HI effectively ends up waiting for MED to complete so that LO can run again and release the semaphore. A high priority task is delayed by an unrelated medium priority task: that is the inversion.

Mutexes
Mutual exclusion semaphores (mutexes) are similar to binary semaphores except that they provide ownership and priority inversion avoidance. (The terms lock and unlock are often used with mutexes instead of acquire and release.) When a task locks a mutex, only that task can release it. This is ownership. (In contrast, a semaphore can be released by any task.) RTEMS does not support ownership.

Mutexes
Two common methods are used for avoiding priority inversion with mutexes. The priority inheritance protocol raises the priority of the lower priority task to that of the higher priority task when the higher priority task requests the mutex; the priority drops back when the mutex is released. With the priority ceiling protocol, a task acquiring the mutex has its priority raised to a preset ceiling level, at least as high as the priority of the highest priority task that may lock that mutex. RTEMS supports both methods.

Typical Directives
Create, delete: The same calls may be used to create both binary and counting semaphores. Separate routines are usually provided for mutexes. (RTEMS uses the same calls for semaphores and mutexes.)
Acquire, release: Routines may be named take and give, pend and post, or p and v. (Lock and unlock are common for mutexes.) The acquire call may optionally take a timeout argument.

Typical Applications
Here are some of the applications for which semaphores are used:
wait and signal synchronization
credit tracking synchronization
single shared resource access synchronization
multiple shared resource access synchronization

Wait and Signal Synch
A binary semaphore is created with an initial value of 0 (unavailable). Task HI runs and blocks on acquire. This allows LO to run. LO completes its work and releases the semaphore. HI immediately preempts LO and continues running until it tries to acquire the semaphore again. Because HI's earlier acquire left the semaphore back at 0, HI blocks again, allowing LO to run until LO releases the semaphore again.

Wait and Signal Synch
This is known as a unilateral rendezvous. A bilateral rendezvous can be accomplished using two semaphores. This method can also be used to synchronize task activity with an interrupt service routine (ISR). The ISR should only release the semaphore, never acquire it. An ISR should never block!
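
A minimal sketch of this ISR-to-task pattern, assuming a simple binary semaphore sync_sem that was created unavailable (initial count 0) and that handle_device_interrupt() is called from the real interrupt handler; the names are hypothetical:

#include <rtems.h>

rtems_id sync_sem;                /* simple binary semaphore, created with count 0 */

/* Called from the interrupt handler: signal only, never block. */
void handle_device_interrupt(void)
{
    rtems_semaphore_release(sync_sem);
}

/* Task side of the unilateral rendezvous. */
rtems_task event_task(rtems_task_argument arg)
{
    (void) arg;
    for (;;) {
        /* Block here until the ISR signals that an event occurred. */
        rtems_semaphore_obtain(sync_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
        /* ...process the event here... */
    }
}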

Credit Tracking Synch
An ISR (or a HI task) can release a counting semaphore to indicate the number of occurrences of an event. A task (or another, LO task) can acquire the semaphore and process the event. This is useful if the ISR (or HI task) releases the semaphore in bursts. There must be sufficient catch-up time between bursts for the other task to process the events.
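
A sketch of the consumer side under the same assumptions as above, except that event_sem is a counting semaphore created with an initial count of 0; each ISR release adds one "credit" and each obtain consumes one, so events that arrive in a burst are not lost (the names are illustrative):

#include <rtems.h>

rtems_id event_sem;               /* counting semaphore, initial count 0 */

rtems_task event_consumer(rtems_task_argument arg)
{
    (void) arg;
    for (;;) {
        /* One obtain per event: the count records how many
           releases (events) have not yet been processed. */
        rtems_semaphore_obtain(event_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
        /* ...process one event here... */
    }
}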

Single Shared Resource Access
Single shared resource access is one of the more common uses of semaphores. We have a resource (printer, serial port, area of memory) that should only be accessed by a single task at a time. Use of the resource is bracketed by calls to acquire and release a semaphore (usually either a binary semaphore or a mutex).

Single Shared Resource Access
You should take care to protect ALL global resources that are shared by tasks:

static int counter;

void foo()
{
    counter++;
}

Is there a problem with the above code if only one task calls foo()? What if two separate tasks call foo()? Would the problem exist if counter could be incremented in a single machine language instruction?
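
A single caller is fine, but two tasks can interleave the read-modify-write of counter and lose an update (unless the increment really is a single uninterruptible instruction). A sketch of the protected version, assuming a binary semaphore or mutex counter_sem created with an initial count of 1 using the RTEMS directives described later:

#include <rtems.h>

static int counter;
rtems_id counter_sem;             /* binary semaphore or mutex, initial count 1 */

void foo(void)
{
    /* Critical section: bracket every access to the shared data. */
    rtems_semaphore_obtain(counter_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
    counter++;
    rtems_semaphore_release(counter_sem);
}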

Single Shared Resource Access
Areas of code that manipulate shared data are known as critical sections, and they must be protected. In the example on the previous slide the code might run properly 99.999% of the time without being protected; bugs like this can be nearly impossible to find. Although semaphores are often used to protect critical sections, other methods may be available: disabling the scheduler, or disabling interrupts.

Single Shared Resource Access
If two or more resources are being protected by semaphores, you must take care to prevent deadlock (aka deadly embrace). Task 1 has exclusive access to resource R1, while Task 2 has exclusive access to resource R2. If T1 now needs access to R2 and T2 needs access to R1, deadlock occurs: each task waits forever for a resource the other will never release. To prevent this: (1) acquire all needed resources before using any of them, (2) always acquire resources in the same order, and (3) release resources in the reverse order of acquisition.
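
A sketch of rule (2), assuming two semaphores r1_sem and r2_sem (each created with count 1) guarding R1 and R2; every task takes them in the same fixed order and releases in reverse, so the circular wait above cannot arise (the names are illustrative):

#include <rtems.h>

rtems_id r1_sem;   /* guards R1, initial count 1 */
rtems_id r2_sem;   /* guards R2, initial count 1 */

/* Every task that needs both resources uses this same order. */
void use_r1_and_r2(void)
{
    rtems_semaphore_obtain(r1_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);  /* always R1 first */
    rtems_semaphore_obtain(r2_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);  /* then R2 */

    /* ...use both resources... */

    rtems_semaphore_release(r2_sem);   /* release in reverse order */
    rtems_semaphore_release(r1_sem);
}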

Multiple Shared Resource Access
A counting semaphore can be used to protect multiple equivalent shared resources. For example, a memory manager might have 10 blocks of memory available. A counting semaphore would be created and initialized to 10. Up to 10 tasks could simultaneously acquire the semaphore and each use one of the memory blocks. An 11th task would block. Tasks release the semaphore to indicate that they are done with the memory block.
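
A sketch of that pattern, assuming a counting semaphore pool_sem created with an initial count of 10; the block bookkeeping itself is only hinted at in comments, since the semaphore merely counts availability:

#include <rtems.h>
#include <stddef.h>

rtems_id pool_sem;   /* counting semaphore, initial count 10 */

void *alloc_block(void)
{
    /* Blocks only when all 10 blocks are in use (count == 0). */
    rtems_semaphore_obtain(pool_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
    return NULL;     /* placeholder: take one block from the pool here */
}

void free_block(void *block)
{
    (void) block;    /* placeholder: return the block to the pool here */
    /* One more block is available again. */
    rtems_semaphore_release(pool_sem);
}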

RTEMS Semaphores
The directive rtems_semaphore_create() is used to create binary and counting semaphores. Attribute sets are passed as an argument to create the different types. RTEMS defines both binary and simple binary semaphores. A simple binary semaphore does not allow nested access and can be deleted when locked. Simple binary semaphores must be used for task synchronization!

RTEMS Semaphores
The full set of semaphore attributes includes:
RTEMS_FIFO - tasks wait in FIFO order (default)
RTEMS_PRIORITY - tasks wait in priority order
RTEMS_BINARY_SEMAPHORE - count restricted to 0 and 1
RTEMS_COUNTING_SEMAPHORE - any count value (default)
RTEMS_SIMPLE_BINARY_SEMAPHORE - no nested access
RTEMS_NO_INHERIT_PRIORITY - (default)
RTEMS_INHERIT_PRIORITY - use priority inheritance
RTEMS_PRIORITY_CEILING - use priority ceiling
RTEMS_NO_PRIORITY_CEILING - (default)
RTEMS_LOCAL - local semaphore (default)
RTEMS_GLOBAL - global semaphore

RTEMS Semaphores
Here is the prototype for the create routine:

rtems_status_code rtems_semaphore_create(
    rtems_name           name,
    rtems_unsigned32     count,
    rtems_attribute      attribute_set,
    rtems_task_priority  priority_ceiling,
    rtems_id            *id
);

count is the initial value. priority_ceiling is only used when the priority ceiling attribute is used. id is a return value and is used in other directives.
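
For instance, the counting semaphore for the 10-block memory pool example might be created like this (a sketch; the object name 'POOL' and the variable names are arbitrary):

#include <rtems.h>

rtems_id pool_sem;

void create_pool_semaphore(void)
{
    rtems_status_code sc = rtems_semaphore_create(
        rtems_build_name('P', 'O', 'O', 'L'),  /* 4-character object name     */
        10,                                    /* initial count: 10 blocks    */
        RTEMS_DEFAULT_ATTRIBUTES,              /* FIFO, counting, no protocol */
        0,                                     /* priority_ceiling unused     */
        &pool_sem);                            /* returned semaphore id       */

    if (sc != RTEMS_SUCCESSFUL) {
        /* handle the error (e.g. too many semaphores configured) */
    }
}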

RTEMS Semaphores
Attribute values are ORed together to obtain a desired attribute set. For all default values use RTEMS_DEFAULT_ATTRIBUTES. The priority inheritance and priority ceiling attributes are only supported when the priority queueing attribute (RTEMS_PRIORITY), not the default FIFO queueing, is also specified.
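
As a final sketch, a mutex-style binary semaphore with priority inheritance (such as the counter_sem used in the earlier counter example) could be created with an ORed attribute set; note that RTEMS_PRIORITY is included, as required above. The object name 'MUTX' and variable names are arbitrary:

#include <rtems.h>

rtems_id counter_sem;

void create_counter_mutex(void)
{
    rtems_status_code sc = rtems_semaphore_create(
        rtems_build_name('M', 'U', 'T', 'X'),
        1,                                  /* initially available (unlocked)   */
        RTEMS_PRIORITY                      /* priority-ordered wait queue      */
          | RTEMS_BINARY_SEMAPHORE          /* count restricted to 0 and 1      */
          | RTEMS_INHERIT_PRIORITY,         /* avoid priority inversion         */
        0,                                  /* ceiling unused with inheritance  */
        &counter_sem);

    if (sc != RTEMS_SUCCESSFUL) {
        /* handle the error */
    }
}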