Real-Time Systems
Lecture #4
Professor Jan Jonsson
Department of Computer Science and Engineering
Chalmers University of Technology

Real-Time Systems: Specification, Resource management, Mutual exclusion, Implementation, Verification

Resource management
Resource management is a general problem that exists at several levels in a real-time system.
Shared resources internal to the run-time system:
- CPU time
- Memory pool (for dynamic allocation of memory)
- Data structures (queues, tables, buffers, ...)
- I/O device access (ports, status registers, ...)
Shared resources specific to the application program:
- Data structures (buffers, state variables, ...)
- Displays (to avoid garbled text when multiple tasks write to them)
- Entities in the application environment (seats in a cinema or an aircraft, a car parking facility, etc.)

Resource management
Classification of resources:
Exclusive access: there must be only one user at a time.
- Examples: manipulation of data structures or I/O device registers
- Exclusiveness is guaranteed through mutual exclusion
- Program code that is executed while mutual exclusion applies is called a critical region
Shared access: there can be multiple users at a time.
- Example: a car parking facility
- The program code that makes sure that the number of users stays within acceptable limits is itself a critical region

Resource management
Operations for resource management:
- acquire: request access to a resource
- release: release a previously acquired resource
The acquire operation can be either blocking or non-blocking:
- Blocking: the task that calls acquire is blocked until the requested resource becomes available.
- Non-blocking: acquire returns a status code that indicates whether access to the resource was granted or not (see the C sketch below).
The acquire operation can be generalized so that the calling task can provide a priority. In case of simultaneous requests, the task with the highest priority is then granted access to the resource first.
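As a concrete illustration (not part of the original slides), the C sketch below shows how the two flavours of acquire map onto POSIX mutex operations, the kind of OS-level mechanism the lecture mentions later for C/C++ programs. The helper names acquire_blocking, acquire_nonblocking and release are chosen here purely for illustration.

#include <pthread.h>

static pthread_mutex_t resource = PTHREAD_MUTEX_INITIALIZER;

/* Blocking acquire: the calling task is suspended until the resource is free. */
void acquire_blocking(void)
{
    pthread_mutex_lock(&resource);
}

/* Non-blocking acquire: returns 1 if access was granted, 0 if the resource is busy. */
int acquire_nonblocking(void)
{
    return pthread_mutex_trylock(&resource) == 0;
}

void release(void)
{
    pthread_mutex_unlock(&resource);
}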

Resource management
Problems with resource management:
Deadlock: tasks block each other and none of them can use the resource.
- Deadlock can only occur if tasks require access to more than one resource at the same time.
- Deadlock can be avoided by following certain guidelines.
Starvation: some task is blocked because resources are always assigned to other (higher-priority) tasks.
- Starvation can occur in most resource management scenarios.
- Starvation can be avoided by granting access to resources in FIFO order.
In general, deadlock and starvation are problems that must be solved by the program designer!

Deadlock
Example: assume that two tasks, A and B, want to use two different resources at the same time.

R1, R2 : Shared_Resource;

task A;
task body A is
begin
   R1.Acquire;   -- a task switch from A to B after this line causes deadlock
   R2.Acquire;
   -- program code using both resources
   R2.Release;
   R1.Release;
end A;

task B;
task body B is
begin
   R2.Acquire;
   R1.Acquire;
   -- program code using both resources
   R1.Release;
   R2.Release;
end B;

Deadlock
Conditions for deadlock to occur:
1. Mutual exclusion: only one task at a time can use a resource.
2. Hold and wait: there must be tasks that hold one resource while requesting access to another resource.
3. No preemption: a resource can only be released by the task holding it.
4. Circular wait: there must exist a cyclic chain of tasks such that each task holds a resource that is requested by the next task in the chain.

Deadlock
Guidelines for avoiding deadlock:
1. Tasks should, if possible, only use one resource at a time.
2. If (1) is not possible, all tasks should request resources in the same order.
3. If (1) and (2) are not possible, special precautions should be taken to avoid deadlock. For example, resources could be requested using non-blocking calls (see the sketch below).
The TinyTimber kernel can detect deadlock situations when a synchronous call is made. In such situations SYNC() returns a value of -1.
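A sketch of guideline 3, written here in C with POSIX mutexes rather than the slides' Ada notation: the second resource is requested with a non-blocking call, and the first one is released again if the request fails, so no task ever blocks while holding a resource. The names r1, r2, acquire_both and release_both are illustrative only.

#include <pthread.h>
#include <sched.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both resources without ever blocking while already holding one. */
void acquire_both(void)
{
    for (;;) {
        pthread_mutex_lock(&r1);               /* blocking request for the first resource  */
        if (pthread_mutex_trylock(&r2) == 0)   /* non-blocking request for the second one  */
            return;                            /* success: both resources are now held     */
        pthread_mutex_unlock(&r1);             /* back off: release r1, then try again     */
        sched_yield();                         /* give other tasks a chance to run         */
    }
}

void release_both(void)
{
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
}

Note that repeated retries can in principle lead to livelock or starvation, which is why the simpler cure of guideline 2 (a fixed global acquisition order) is usually preferred when it is applicable.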

Mutual exclusion
Program constructs that provide mutual exclusion:
- Ada 95 uses protected objects.
- Older languages (e.g. Modula-1, Concurrent Pascal) use monitors.
- Java uses synchronized methods, a simplified version of monitors.
When programming in languages (e.g. C and C++) that do not provide the constructs mentioned above, mechanisms provided by the real-time kernel or operating system must be used:
- POSIX offers semaphores and methods with mutual exclusion (mutex locks).
- The TinyTimber kernel offers methods with mutual exclusion.
To guarantee mutual exclusion in the implementation of these constructs themselves, machine-level methods must be used.
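For the C/C++ case mentioned above, a minimal sketch (not from the slides) of what "using a mechanism provided by the operating system" looks like with a POSIX mutex; the worker function and shared_counter variable are invented for the example.

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;         /* shared data structure */

void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);         /* enter critical region */
    shared_counter = shared_counter + 1;   /* code using the shared resource */
    pthread_mutex_unlock(&lock);       /* leave critical region */
    return NULL;
}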

Protected objects
A protected object is a construct offered by Ada 95.
A protected object offers operations with mutual exclusion for data being shared by multiple tasks.
A protected operation can be an entry, a procedure or a function; the latter is a read-only operation.
Protected entries are guarded by a Boolean expression called a barrier. The barrier must evaluate to true for the entry body to be executed. If the barrier evaluates to false, the calling task is blocked until the barrier condition changes.

Protected objects
Implementing an exclusive resource in Ada 95:

protected type Exclusive_Resource is
   entry Acquire;
   procedure Release;
private
   Busy : Boolean := False;
end Exclusive_Resource;

protected body Exclusive_Resource is
   entry Acquire when not Busy is
   begin
      Busy := True;
   end Acquire;

   procedure Release is
   begin
      Busy := False;
   end Release;
end Exclusive_Resource;

Protected objects
R : Exclusive_Resource;   -- instance of the resource

task A;                   -- tasks using the resource
task B;

task body A is
begin
   R.Acquire;
   -- critical region with code using the resource
   R.Release;
end A;

task body B is
begin
   R.Acquire;
   -- critical region with code using the resource
   R.Release;
end B;

Monitors
A monitor is a construct offered by some (older) languages, e.g., Modula-1, Concurrent Pascal and Mesa.
A monitor encapsulates data structures that are shared among multiple tasks and provides procedures to be called when a task needs to access the data structures.
Monitor procedures are executed under mutual exclusion.
Synchronization of tasks is done with a mechanism called condition variables.

Monitors
Monitors vs. protected objects:
Monitors are similar to protected objects in the sense that both are objects that guarantee mutual exclusion during calls to procedures manipulating shared data.
The difference between monitors and protected objects lies in the way they handle synchronization:
- Protected objects use entries with barriers (automatic wake-up)
- Monitors use condition variables (manual wake-up)
Java offers a monitor-like construct:
- Java's synchronized methods correspond to monitor procedures.
- However, Java has no mechanism that directly corresponds to condition variables; a thread that is woken up must check for itself whether the resource is available.

Monitors
Operations on condition variables:
- wait(cond_var): the calling task is blocked and inserted into a FIFO queue associated with cond_var.
- send(cond_var): wake up the first task in the queue associated with cond_var; no effect if the queue is empty.
Properties:
1. After a call to wait, the monitor is released (i.e., other tasks may execute the monitor procedures).
2. A call to send must be the last statement in a monitor procedure.

Monitors
Implementing an exclusive resource with a monitor:

monitor body Exclusive_Resource is   -- NOT Ada 95
   Busy : Boolean := false;
   CR   : condition_variable;

   procedure Acquire is
   begin
      if Busy then
         Wait(CR);
      end if;
      Busy := true;
   end Acquire;

   procedure Release is
   begin
      Busy := false;
      Send(CR);
   end Release;
end Exclusive_Resource;
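For comparison (not from the slides), the same exclusive resource can be written in C with a POSIX mutex and condition variable; the names m, cr and busy are chosen for this example. Unlike the monitor's send, a POSIX condition variable has "manual wake-up" semantics, so the waiter re-checks the busy flag in a loop, which is the same discipline the previous slide describes for Java threads.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cr = PTHREAD_COND_INITIALIZER;
static bool busy = false;

void acquire(void)
{
    pthread_mutex_lock(&m);          /* enter the "monitor" */
    while (busy)                     /* manual re-check after every wake-up */
        pthread_cond_wait(&cr, &m);  /* releases m while blocked */
    busy = true;
    pthread_mutex_unlock(&m);        /* leave the "monitor" */
}

void release(void)
{
    pthread_mutex_lock(&m);
    busy = false;
    pthread_cond_signal(&cr);        /* wake up one waiting task, if any */
    pthread_mutex_unlock(&m);
}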

Monitors
R : Exclusive_Resource;   -- instance of the resource

task A;                   -- tasks using the resource
task B;

task body A is
begin
   R.Acquire;
   -- critical region with code using the resource
   R.Release;
end A;

task body B is
begin
   R.Acquire;
   -- critical region with code using the resource
   R.Release;
end B;

Semaphores
A semaphore is a passive synchronization primitive that is used for protecting shared and exclusive resources.
Synchronization is done using two operations, wait and signal. These operations are atomic (indivisible) and are themselves critical regions with mutual exclusion.
Semaphores are often used in run-time systems to implement more advanced mechanisms, e.g., protected objects or monitors.

Semaphores
A semaphore s is an integer variable that can only take non-negative values (value domain 0, 1, 2, ...).
Atomic operations on semaphores:
- Init(s, n): assign s an initial value n
- Wait(s): if s > 0 then s := s - 1; else block the calling task
- Signal(s): if any task that has called Wait(s) is blocked, then allow one such task to execute; else s := s + 1

Semaphores
Implementing semaphores in Ada 95:

protected type Semaphore (Initial : Natural := 0) is
   entry Wait;
   procedure Signal;
private
   Value : Natural := Initial;
end Semaphore;

protected body Semaphore is
   entry Wait when Value > 0 is
   begin
      Value := Value - 1;
   end Wait;

   procedure Signal is
   begin
      Value := Value + 1;
   end Signal;
end Semaphore;

Semaphores
R : Semaphore(1);   -- semaphore initialized to 1 (exclusive resource)

task A;             -- tasks using the resource
task B;

task body A is
begin
   R.Wait;
   -- critical region with code using the resource
   R.Signal;
end A;

task body B is
begin
   R.Wait;
   -- critical region with code using the resource
   R.Signal;
end B;
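The POSIX semaphores mentioned earlier offer the same Init/Wait/Signal operations; a minimal C sketch (not from the slides), with the variable name s and the single critical region invented for illustration:

#include <semaphore.h>

static sem_t s;

int main(void)
{
    sem_init(&s, 0, 1);   /* Init(s, 1): value 1 protects an exclusive resource */

    sem_wait(&s);         /* Wait(s): decrements s, or blocks while s = 0 */
    /* critical region with code using the resource */
    sem_post(&s);         /* Signal(s): wakes one blocked task, or increments s */

    sem_destroy(&s);
    return 0;
}

Initializing the semaphore with a value N > 1 gives a counting semaphore that admits up to N simultaneous users, i.e. the shared-access resources (such as the car parking facility) from the classification slide.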

Machine-level mutual exclusion
At some point, all techniques mentioned so far must themselves be implemented using mutual exclusion. For this purpose, two techniques are offered at the lowest (machine) level:
Disabling the processor's interrupt service mechanism:
- Should involve any interrupt that may lead to a task switch
- Only works for single-processor systems
Atomic processor instructions:
- For example: the test-and-set instruction
- Variables can be tested and updated in one atomic operation
- Necessary for systems with two or more processors

Disabling processor interrupts
In single-processor systems, mutual exclusion can be guaranteed by disabling the processor's interrupt service mechanism ("interrupt masking") while the critical region is executed.
This way, unwanted task switches in the critical region (caused by, e.g., timer interrupts) are avoided. However, all other tasks are unable to execute during this time. Therefore, critical regions should only contain instructions that really require mutual exclusion (e.g., code that handles the wait and signal operations for semaphores).
Note: this method is typically not used in multi-processor systems, since interrupts are local to each processor.

Disabling processor interrupts
task A;
task B;

task body A is
begin
   Disable_Interrupts;   -- turn off interrupt handling
   -- critical region
   Enable_Interrupts;    -- A leaves critical region
   -- remaining program code
end A;

task body B is
begin
   Disable_Interrupts;   -- turn off interrupt handling
   -- critical region
   Enable_Interrupts;    -- B leaves critical region
   -- remaining program code
end B;

Atomic processor instruction
In multi-processor systems with shared memory, a test-and-set instruction is used for handling critical regions.
A test-and-set instruction is a processor instruction that reads from and writes to a variable in one atomic operation. The functionality of the test-and-set instruction can be illustrated by the following Ada procedure:

procedure testandset(lock, previous : in out Boolean) is
begin
   previous := lock;   -- lock is read and its value saved
   lock := true;       -- lock is set to true
end testandset;

The combined read and write of lock must be atomic. In a multiprocessor system, this is guaranteed by locking (disabling access to) the memory bus during the entire operation.

Atomic processor instruction
lock : Boolean := false;   -- shared flag

task A;
task B;

task body A is
   previous : Boolean;
begin
   loop
      testandset(lock, previous);
      exit when not previous;   -- A waits (spins) while the critical region is busy
   end loop;
   -- critical region
   lock := false;              -- A leaves critical region
   -- remaining program code
end A;

task body B is
   previous : Boolean;
begin
   loop
      testandset(lock, previous);
      exit when not previous;   -- B waits (spins) while the critical region is busy
   end loop;
   -- critical region
   lock := false;              -- B leaves critical region
   -- remaining program code
end B;
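On current hardware, the same spin-lock pattern is usually written using the atomic operations exposed by the programming language rather than hand-written test-and-set loops. A sketch in C11 (not from the slides), where atomic_flag_test_and_set plays the role of testandset and the names lock_flag, spin_acquire and spin_release are chosen for illustration:

#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* shared flag, initially clear */

void spin_acquire(void)
{
    /* atomic test-and-set: returns the previous value of the flag */
    while (atomic_flag_test_and_set(&lock_flag))
        ;   /* busy-wait while the critical region is occupied */
}

void spin_release(void)
{
    atomic_flag_clear(&lock_flag);   /* leave the critical region */
}

In a real-time setting one would normally bound or avoid such busy-waiting, e.g. by combining the atomic instruction with a kernel-level blocking mechanism, since a spinning task keeps its processor fully occupied.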