Global shared variables. Message passing paradigm. Communication Ports. Port characteristics. Sending a message 07/11/2018


Global shared variables

In most RT applications, tasks exchange data through global shared variables.

Advantages:
- High efficiency
- Low run-time overhead
- Schedulability analysis is available

Disadvantages:
- Data must be accessed in mutual exclusion: non-preemptive regions; semaphores with priority inheritance or priority ceiling.
- Not good for modular design: local details are exposed to other tasks; a change in a task affects other tasks.

Message passing paradigm

Another approach is to exchange data through a message passing paradigm:
- Every task operates on a private memory space;
- Data are exchanged by messages through a channel.

[Figure: tasks τ1 and τ2 exchanging messages through a channel.]

Communication Ports

Many operating systems provide the channel abstraction through the port construct. A task can use a port to exchange messages by means of two primitives:
- send: sends a message to a port;
- receive: receives a message from a port.

Message: a set of data having a predefined format.
Channel: a logical link by which two tasks can communicate.

Some operating systems allow the user to define different types of ports, with peculiar semantics.

Port characteristics

Ports may differ in:
- the number of tasks allowed to send messages;
- the number of tasks allowed to receive messages;
- the policy used to insert and extract messages;
- the behavior used to manage exceptions (sending to a full port or receiving from an empty port).

Before being used, a port has to be created, and then destroyed when it is not needed any more. Port attributes need to be defined at creation time.

Sending a message

A message sent to a port is inserted into an internal buffer, whose size must be defined at creation time.

[Figure: task τ1 inserting messages A, B, C into the port buffer read by τ2.]

If a message is sent when the port buffer is full, an exception policy has to be selected. Typically:
- the sender is blocked until the receiver reads a message (synchronous behavior);
- the new message is lost;
- an error is returned by the send.
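To make the port abstraction concrete, here is a minimal sketch in C of a fixed-size port whose send returns an error when the buffer is full (one of the exception policies listed above). All names (msg_t, port_t, port_send, PORT_FULL) are illustrative assumptions, not the API of any particular operating system, and mutual exclusion on the internal buffer is omitted.

#define PORT_SIZE 8          /* buffer size, fixed at creation time      */
#define PORT_OK    0
#define PORT_FULL -1         /* chosen policy: return an error when full */

typedef struct { char data[64]; } msg_t;   /* predefined message format */

typedef struct {
    msg_t buf[PORT_SIZE];      /* internal message buffer             */
    int   head, tail, count;   /* FIFO insertion/extraction policy    */
} port_t;

/* Send: insert the message at the tail of the port buffer.
   If the port is full, this sketch returns an error instead of
   blocking the sender or dropping the message. */
int port_send(port_t *p, const msg_t *m)
{
    if (p->count == PORT_SIZE)
        return PORT_FULL;
    p->buf[p->tail] = *m;
    p->tail = (p->tail + 1) % PORT_SIZE;
    p->count++;
    return PORT_OK;
}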

Receiving a message

When receiving from a port, the message at the head of the buffer is extracted (consumed).

[Figure: τ2 extracting messages from the head of the port buffer filled by τ1.]

When receiving from a port with an empty buffer, an exception policy has to be selected. Typically:
- the receiver is blocked until a new message is sent (synchronous behavior);
- an error is returned by the receive.

Using a port

port p;

Task A (the owner):
  p = port_create();
  send(p, mes);
  ...
  port_destroy(p);

Task B:
  port_connect(p);
  receive(p, mes);
  ...
  port_disconnect(p);

NOTE: Task A is the owner and must start first.

Periodic task communication: problem (T1 < T2)

[Figure: τ1 with period T1 sending messages to τ2 with period T2 through a port.]

If T1 < T2, τ1 produces more messages than τ2 can read. When the buffer becomes full, τ1 must proceed at the same rate as τ2.

Periodic task communication: problem (T1 > T2)

If T1 > T2, τ2 reads more messages than τ1 can produce. When the buffer becomes empty, τ2 must proceed at the same rate as τ1.

Periodic task communication: exchanged messages

Thus, if T1 ≠ T2, after a certain time both tasks will proceed at the rate of the slowest task. To keep their own rates, tasks using synchronous ports should have the same period.

[Figure: number of exchanged messages and number of messages in the buffer as functions of time t, with rates f1 = 1/T1 and f2 = 1/T2.]

How long can two tasks with different periods run at their own rate before they synchronize on a full (or empty) buffer?
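As a concrete illustration of the rate mismatch, the short C sketch below counts the messages queued in a synchronous port over time. The periods (T1 = 10 ms, T2 = 25 ms), the buffer size (N = 4 messages) and the simplification that a blocked send is simply retried at the next period are all assumptions made for this example.

#include <stdio.h>

int main(void)
{
    const int T1 = 10, T2 = 25;   /* assumed example periods in ms (T1 < T2) */
    const int N  = 4;             /* assumed port buffer size in messages    */
    int in_buffer = 0;            /* messages currently queued in the port   */

    for (int t = 0; t <= 200; t++) {
        /* one send per producer period, skipped (i.e. blocked) when full */
        if (t % T1 == 0 && in_buffer < N) in_buffer++;
        /* one receive per consumer period, when a message is available   */
        if (t % T2 == 0 && in_buffer > 0) in_buffer--;
        if (t % 10 == 0)
            printf("t = %3d ms   messages in buffer: %d\n", t, in_buffer);
    }
    return 0;
}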

Buffer saturation

If T1 < T2, the buffer saturates when:

  ⌈t/T1⌉ − ⌈t/T2⌉ > N

Hence, the tasks proceed at their proper rate while:

  ⌈t/T1⌉ − ⌈t/T2⌉ ≤ t/T1 − t/T2 + 1 ≤ N

That is, while:

  t ≤ (N − 1) T1 T2 / (T2 − T1)

STICK Ports

They are ports with state-message semantics:
- the most recent message is always available for reading;
- a new message overrides the previous one;
- a message is not consumed by the receiver.

NOTE: since only the most recent message is of interest, there is no need to maintain a queue of past messages. A task never blocks on a full or empty buffer.

STICK Ports: examples

Example (T1 < T2, with T2 = 2T1): τ1 writes the sequence a b c d e f g into the STICK port; τ2 reads a c e g.
Example (T1 > T2, with T1 = 2T2): τ1 writes a b c d e; τ2 reads a a b b c c d d e e.

Blocking on STICK ports

Although a task cannot block on a full or empty buffer, it can block for mutual exclusion: a semaphore is needed to protect the internal buffer from simultaneous accesses. Long messages may cause long blocking delays on such a semaphore. Long waiting times due to long messages can be avoided through a buffer replication mechanism.

Dual buffering

Dual buffering is often used to transfer large data (e.g., images) from an input peripheral device to a task. If the writer task produces a new message while the reader R is reading, the new message is written into a second buffer.

[Figure: the writer filling a second buffer with msg2 while the reader R is still reading msg1.]
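As a worked check of the bound, take the same assumed values used in the earlier sketch, T1 = 10 ms, T2 = 25 ms and N = 4 buffers:

  t ≤ (N − 1) · T1 · T2 / (T2 − T1) = 3 · (10 · 25) / 15 = 50 ms

so the two tasks are guaranteed to proceed at their own rates for at least 50 ms. The bound is conservative (it replaces ⌈t/T1⌉ − ⌈t/T2⌉ with t/T1 − t/T2 + 1), so actual saturation can occur later: in the simple simulation sketched above, the first blocked send happens at t = 70 ms.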

Dual buffering

Once written, the new message becomes available to the next reader.

Cyclic Asynchronous Buffers (CABs)

A CAB generalizes dual buffering to N readers. It is a mechanism for exchanging messages among periodic tasks with different rates. Memory conflicts are avoided by replicating the internal buffers. It uses a state-message semantics:
- at any time, the most recent message is always available for reading;
- a new message overrides the previous one;
- messages are not consumed by reading.

[Figure: a writer performing successive write operations (messages M1, M2, M3) into the CAB while readers perform read operations.]

Accessing a CAB

Data is accessed through a memory pointer; hence, a reader is not forced to copy the message into its own memory space. More tasks can concurrently read the same message. At any time, a pointer (mrb) points to the most recent buffer, i.e., the buffer most recently used for writing a new message.

[Figure: CAB buffers and the mrb pointer; situation at time t1.]
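A minimal sketch of the dual-buffering idea in C follows. The names (frame_t, write_frame, read_frame, most_recent) are illustrative assumptions; a real implementation would protect the pointer update against concurrent access, and the CAB described above additionally tracks readers with a use counter per buffer, which is why it needs N + 1 buffers for N tasks.

typedef struct { unsigned char pixels[64 * 64]; } frame_t;   /* a large message, e.g. an image */

static frame_t  buffers[2];                  /* the two replicated buffers               */
static frame_t *most_recent = &buffers[0];   /* buffer holding the last complete message */

/* Writer: fill the buffer that is NOT currently published, then swap.
   The reader keeps seeing a consistent message while the writer works. */
void write_frame(const frame_t *src)
{
    frame_t *spare = (most_recent == &buffers[0]) ? &buffers[1] : &buffers[0];
    *spare = *src;          /* copy the new message into the spare buffer */
    most_recent = spare;    /* publish it with a single pointer update    */
}

/* Reader: access the most recent message through a pointer, without copying it. */
const frame_t *read_frame(void)
{
    return most_recent;
}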

[Figure: CAB buffers and the mrb pointer; situation at time t2, after further messages (up to M6) have been written.]

Dimensioning a CAB

If a CAB is used by N tasks, to avoid blocking it must have at least N + 1 buffers. The (N + 1)-th buffer is needed to keep the most recent message available in the case all the other buffers are in use.

Inconsistency with N buffers

Assume all N buffers are in use and the writer overwrites the most recent message with a new one. If, while the writer is writing, a reader finishes and requests a new message, it finds the buffer inconsistent.

Writing Protocol

To write a message into a CAB, a task must:
1. ask the CAB for a pointer to a free buffer;
2. copy the message into the buffer using the pointer;
3. release the pointer to the CAB, to make the message accessible to the next reader.

Reading Protocol

To read a message from a CAB, a task must:
1. get the pointer to the most recent message in the CAB;
2. process the message through the pointer;
3. release the pointer, to allow the CAB to recycle the buffer if it is no longer used.

CAB Primitives

cab_create(cab_name, buf_size, max_buf);
  Creates a CAB with max_buf buffers of buf_size bytes each. It returns a global identifier.

cab_delete(cab_id);
  Deletes the specified CAB.

For writing:
  cab_reserve(cab_id, pointer)   provides a pointer for writing into a free buffer;
  cab_putmes(cab_id, pointer)    releases the pointer after a write operation.

For reading:
  cab_getmes(cab_id, pointer)    provides a pointer to the most recent message;
  cab_unget(cab_id, pointer)     releases the pointer after a read operation.

Writing in a CAB:

  cab_reserve(cab_id, p);
  <copy the message into *p>
  cab_putmes(cab_id, p);

Reading from a CAB:

  cab_getmes(cab_id, p);
  <process the message through *p>
  cab_unget(cab_id, p);

Implementing CABs: data structure

[Figure: the CAB control block contains free (head of the free-buffer list), mrb (most recent buffer), max_buf and dim_buf; each buffer has a next pointer, a use counter and the data area; empty buffers are chained through next, with the list terminated by NULL.]
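Putting the protocols and the primitives together, a fast periodic writer (e.g., an acquisition task) and a slower periodic reader could be structured as sketched below. Only the cab_* primitive names come from the slides; their exact C signatures, the sample_t message type and the helpers read_sensor(), use_sample() and wait_for_next_period() are assumptions made for this sketch.

typedef struct { int value; } sample_t;          /* assumed message format */

/* Assumed C signatures for the primitives listed above. */
extern int  cab_id;                              /* identifier returned by cab_create() */
extern void cab_reserve(int cab, sample_t **p);
extern void cab_putmes(int cab, sample_t *p);
extern void cab_getmes(int cab, sample_t **p);
extern void cab_unget(int cab, sample_t *p);

/* Hypothetical helpers. */
extern sample_t read_sensor(void);
extern void use_sample(const sample_t *s);
extern void wait_for_next_period(void);

void writer_task(void)                 /* writing protocol, steps 1-3 */
{
    sample_t *p;
    for (;;) {
        cab_reserve(cab_id, &p);       /* 1. get a free buffer              */
        *p = read_sensor();            /* 2. copy the message into it       */
        cab_putmes(cab_id, p);         /* 3. make it the most recent one    */
        wait_for_next_period();
    }
}

void reader_task(void)                 /* reading protocol, steps 1-3 */
{
    sample_t *p;
    for (;;) {
        cab_getmes(cab_id, &p);        /* 1. get the most recent message    */
        use_sample(p);                 /* 2. process it through the pointer */
        cab_unget(cab_id, p);          /* 3. release the buffer             */
        wait_for_next_period();
    }
}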

cab_reserve()

Provides the pointer to the first free buffer and updates the pointer to the free list.

void cab_reserve(cab *c, buffer **p)
{
    *p = c->free;           /* take the first free buffer */
    c->free = (*p)->next;   /* update the free list       */
}

cab_putmes()

Updates the pointer to the most recent buffer (mrb) so that it points to the last message. If no one is using the previous mrb, that buffer is recycled into the free list.

void cab_putmes(cab *c, buffer *p)
{
    if (c->mrb->use == 0) {        /* previous mrb not accessed by any reader */
        c->mrb->next = c->free;    /* put it back into the free list          */
        c->free = c->mrb;
    }
    c->mrb = p;                    /* the new message becomes the mrb */
}

cab_getmes()

Provides the pointer to the most recent buffer and increments its use counter.

void cab_getmes(cab *c, buffer **p)
{
    *p = c->mrb;        /* most recent message */
    (*p)->use++;        /* one more reader     */
}

cab_unget()

Decrements the use counter and recycles the buffer, but only if no one is using it and it is not the most recent buffer.

void cab_unget(cab *c, buffer *p)
{
    p->use--;                              /* one reader less                       */
    if ((p->use == 0) && (p != c->mrb)) {  /* unused and no longer the most recent: */
        p->next = c->free;                 /* recycle the buffer into the free list */
        c->free = p;
    }
}
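To tie the four primitives above to the data-structure figure, here is a minimal sketch of the cab and buffer types together with a cab_create-style initialization. The field names (free, mrb, use, next, max_buf, dim_buf) follow the figure, but the memory layout, the use of malloc, the pointer return value (the slide's cab_create takes a name and returns a global identifier instead) and the absence of the protecting semaphore are simplifying assumptions of this sketch.

#include <stdlib.h>

typedef struct buffer {
    struct buffer *next;   /* link in the free list                         */
    int            use;    /* number of tasks currently reading this buffer */
    void          *data;   /* message area of dim_buf bytes                 */
} buffer;

typedef struct {
    buffer *free;      /* head of the free-buffer list              */
    buffer *mrb;       /* most recent buffer (last message written) */
    int     max_buf;   /* number of buffers (N + 1 for N tasks)     */
    int     dim_buf;   /* size of each message in bytes             */
    buffer *pool;      /* storage for the buffers themselves        */
} cab;

/* Allocate a CAB with max_buf buffers of dim_buf bytes (max_buf >= 2),
   link them into the free list, and use the first buffer as an (empty)
   initial mrb so that cab_putmes() always finds a valid most recent buffer. */
cab *cab_create(int dim_buf, int max_buf)
{
    cab *c  = malloc(sizeof(cab));
    c->pool = calloc(max_buf, sizeof(buffer));
    c->max_buf = max_buf;
    c->dim_buf = dim_buf;
    for (int i = 0; i < max_buf; i++) {
        c->pool[i].data = malloc(dim_buf);
        c->pool[i].use  = 0;
        c->pool[i].next = (i + 1 < max_buf) ? &c->pool[i + 1] : NULL;
    }
    c->mrb  = &c->pool[0];    /* initial, empty most recent buffer */
    c->free = &c->pool[1];    /* the remaining buffers are free    */
    return c;
}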